Say that you need to perform a task, and that there are two algorithms, A and B, for performing it. Let A finish the task in time TA and B finish it in time TB. If we can show that TA is less than TB, then algorithm A is indeed faster than algorithm B.

We test both algorithms on an input of size n0, and we see that **TA(n0) < TB(n0)**. That means algorithm A is indeed faster than algorithm B for input size n0. But we won't be using the same input size n0 all the time; the input size varies with the application. If we could show that **TA(n) < TB(n) for all values of n > 0**, we could conclude that algorithm A is faster than algorithm B irrespective of the input size. In practice, though, we have no idea how big or small *n* will be, and one function is rarely less than or equal to another over the entire range of input sizes. To solve this, we use **asymptotic analysis**.

Asymptotic analysis removes dependencies on hardware, OS, compiler and other such factors, and gives us a relation based purely on the input size n. There are several asymptotic notations: big O, Theta, big Omega, little o and little omega. Each of these expresses a relation between the running time of an algorithm (for time complexity) and its input size n. All other factors are ignored.

Big O notation is used to create an upper bound for an algorithm. It means that whatever happens, the algorithm shall not take more time than what's indicated by the big O notation, so big O notation is used mainly for *worst case analysis*. We strip off low-order terms and constants to get a relation purely in terms of the input size n.
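To make the stripping of low-order terms and constants concrete, here is a small sketch. The step-count function `steps` below is a made-up example, not from any particular algorithm; it illustrates that 3n² + 5n + 2 is O(n²) because a single constant multiple of n² eventually dominates it.

```python
# A hypothetical step-count function with low-order terms and constants.
def steps(n):
    return 3 * n * n + 5 * n + 2   # 3n^2 + 5n + 2

# Big O claim: steps(n) is O(n^2), i.e. steps(n) <= c * n^2 for all n >= n0.
# Here c = 4 and n0 = 6 work, since 3n^2 + 5n + 2 <= 4n^2 once n^2 >= 5n + 2.
c, n0 = 4, 6
assert all(steps(n) <= c * n * n for n in range(n0, 1000))
```

The low-order term 5n and the constants 3 and 2 disappear from the final answer: only the dominant n² matters.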

Say that you need to search for a telephone number in a listing of telephone numbers. What would you do? You would start at the top and go down to the bottom, comparing each telephone number with the number you need to find. This is what we call a **sequential search** or **linear search**. If you add 1 more telephone number to the listing, you need to check at most 1 more number (*assuming that the number you need has not yet been found*). So for *n* telephone numbers, you need at most *n* comparisons to find the number. We call this linear time complexity, or ***O*(n)**.
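A linear search over the telephone listing can be sketched as follows (the phonebook data here is made up for illustration):

```python
def linear_search(numbers, target):
    """Scan the list front to back; at most n comparisons -> O(n)."""
    for i, number in enumerate(numbers):
        if number == target:
            return i        # found: return its position
    return -1               # not found after checking all n entries

# Hypothetical listing of telephone numbers.
phonebook = ["555-0101", "555-0142", "555-0199"]
assert linear_search(phonebook, "555-0142") == 1
assert linear_search(phonebook, "555-0000") == -1
```

In the worst case (the target is last, or absent) the loop touches every one of the n entries, which is exactly the O(n) bound described above.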

Suppose you need to search for a word in a dictionary, say "*Good*". Will you start at A and do a linear search through every single word in the dictionary until you find it? No, you won't, because a linear search is too time consuming here. So what do we do instead? We go to the middle of the dictionary. The word we need must lie in one of the two halves. We move to that half and keep halving the dictionary, until we find our word. Since the search space is cut in half at each step, the number of steps grows only logarithmically with the size of the dictionary. This is logarithmic complexity, or ***O*(log n)**. In asymptotic analysis, we do not worry about the base of the logarithm. The reason is that converting a logarithm from one base to another only multiplies it by a constant, and since we ignore constants in asymptotic analysis, the base has no importance.
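The halving strategy above is a binary search. A minimal sketch, assuming the words are already sorted (the tiny dictionary here is hypothetical):

```python
def binary_search(words, target):
    """Repeatedly halve a sorted list; about log2(n) steps -> O(log n)."""
    lo, hi = 0, len(words) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if words[mid] == target:
            return mid          # found the word
        elif words[mid] < target:
            lo = mid + 1        # target lies in the upper half
        else:
            hi = mid - 1        # target lies in the lower half
    return -1                   # word is not in the list

# Hypothetical sorted dictionary.
dictionary = ["apple", "good", "house", "zebra"]
assert binary_search(dictionary, "good") == 1
assert binary_search(dictionary, "cat") == -1
```

Doubling the dictionary's size adds only one extra halving step, which is why the growth is logarithmic rather than linear.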

Suppose you need to find the element of an array at index 5 (*that is, the value of a[5]*). You do not have to scan through a[0] to a[4] to determine the value at a[5]; a simple lookup at a[5] gets you the answer. Whatever value is stored there, you need just one lookup. The time for the operation does not depend on the size of the array; it needs only constant time. This is what we call constant complexity, or ***O*(1)**.
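In code, the constant-time lookup looks like this (the list contents are arbitrary sample values):

```python
# Hypothetical array; indexing jumps straight to the element's location,
# so the cost is the same whether the list has 6 elements or 6 million: O(1).
a = [10, 20, 30, 40, 50, 60]
value = a[5]
assert value == 60
```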

When we say that the running times of 2 algorithms are asymptotically the same, it doesn't mean that their running times are equal. For example, **100n lg n** and **2n lg n** are asymptotically the same (*because the constants are ignored*), though their values are not.
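A quick check illustrates this: the two functions below always differ by the same constant factor of 50, no matter how large n gets, which is exactly why asymptotic analysis treats them as the same.

```python
import math

# 100 n lg n and 2 n lg n differ only by a constant factor of 50,
# so both belong to the same asymptotic class, even though their
# actual values are far apart.
def f(n): return 100 * n * math.log2(n)
def g(n): return 2 * n * math.log2(n)

for n in (8, 1024, 10**6):
    assert math.isclose(f(n) / g(n), 50)   # the ratio never grows with n
```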