How to find the time complexity?

How do I find the time complexity of any given algorithm?

  • Answer:

    Consider an algorithm with the following code:

        char k = 'y'; // this will be executed 1 time
        int a = 0;    // this will be executed 1 time

    All declaration and assignment statements will be executed once. Now take a loop like the one below:

        for (int i = 0; i < N; i++) {
            System.out.println("Hello World!");
        }

    • int i = 0; will be executed only once; the time is actually charged to the assignment i = 0, not to the declaration.

    • i < N will be executed N + 1 times (once per iteration, plus the final test that fails).

    • i++ will be executed N times.

    So the number of operations required by this loop is {1 + (N + 1) + N} = 2N + 2.
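    As a quick check (this sketch is mine, not part of the original answer; the names OpCount, n and ops are made up for illustration), the same breakdown can be tallied at run time:

        // Sketch: tally the operations of the loop above for a sample N.
        // Expected total: 1 + (N + 1) + N = 2N + 2.
        public class OpCount {
            public static void main(String[] args) {
                int n = 10;  // sample input size, chosen arbitrarily
                long ops = 0;
                ops++;       // the assignment i = 0 runs once
                int i = 0;
                while (true) {
                    ops++;   // the test i < N runs N + 1 times
                    if (!(i < n)) break;
                    System.out.println("Hello World!");
                    ops++;   // the increment i++ runs N times
                    i++;
                }
                System.out.println("ops = " + ops + ", 2N + 2 = " + (2 * n + 2));
            }
        }

    For n = 10 this prints ops = 22, matching 2N + 2.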

Karthik CR at Quora


Other answers

There can be no short answer to this question. Firstly, one needs to choose the appropriate notion of time complexity. There are at least three notions, namely best-case, average-case and worst-case time complexity. In all three cases, the time complexity of an algorithm [math]A[/math] is a function [math]t_A : \mathbb{N} \rightarrow \mathbb{N}[/math]. For worst-case time complexity, which is the one most often used, we have [math]t_A(n) = k[/math] if for every input of size [math]n[/math] the algorithm [math]A[/math] will use at most [math]k[/math] steps. The two notions of input size and step must be chosen appropriately. Finally, one will need to make a choice between exact and asymptotic time complexity.

Once all of this has been done, the actual analysis of the algorithm depends on how the algorithm is expressed. In the analysis of recursive algorithms, one usually needs to write down a recurrence relation for [math]t_A[/math] and then solve or estimate it, depending on whether you are interested in exact or asymptotic time complexity. In the case of iterative algorithms, it often helps to write down the number of steps consumed by each statement and then to note that sequential composition leads to addition of complexities, whereas loop constructs lead to a multiplication by the number of iterations of the loop.

But again: there is no short answer to this question, any more than there are concise how-to answers to questions about, say, finding the value of a definite integral.
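To make the recursive case concrete, here is a worked example of my own (merge sort; the answer above does not single out a particular algorithm). On an input of size [math]n[/math], merge sort makes two recursive calls on halves and spends [math]cn[/math] steps merging, giving the recurrence [math]t_A(n) = 2\,t_A(n/2) + cn[/math] with [math]t_A(1) = c[/math]. Unfolding it, each of the [math]\log_2 n[/math] levels of the recursion performs [math]cn[/math] steps in total, so [math]t_A(n) = cn \log_2 n + cn[/math], which is [math]O(n \log n)[/math] if one is after the asymptotic time complexity.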

Hans Hyttel

Hi, regarding the time complexity of algorithms, here's a course whose materials you can use:

https://www.coursera.org/course/algo
https://www.coursera.org/course/algo2

And more resources on the same topic:

http://discrete.gr/complexity/
http://www.studytonight.com/data-structures/time-complexity-of-algorithms

For example, assume we simplify 2N + 2 machine instructions and describe the algorithm as just O(N). Why do we drop the two 2's? Because we want to see the performance of the algorithm as N grows. Consider the terms 2N and 2: what is their relative influence as N grows? Suppose N is a million. Then the first term is 2 million and the second is just 2. This is why we drop all but the largest term for large N, going from 2N + 2 to 2N. But we're interested in performance only up to constant factors, which means we don't really care whether there is some constant multiple of difference in performance when N is large; the unit of 2N is not well defined in the first place anyway. So we can multiply or divide by a constant factor to get the simplest expression, and 2N becomes just N.

This is what I know, but here's what I've found while reading a page of this excellent site (before it's gone): http://www.daniweb.com/software-development/computer-science/threads/13488/time-complexity-of-algorithm

The most common metric for calculating time complexity is Big O notation. This removes all constant factors so that the running time can be estimated in relation to N as N approaches infinity. In general you can think of it like this:

    statement;

is constant. The running time of the statement will not change in relation to N.

    for ( i = 0; i < N; i++ )
        statement;

is linear. The running time of the loop is directly proportional to N. When N doubles, so does the running time.

    for ( i = 0; i < N; i++ ) {
        for ( j = 0; j < N; j++ )
            statement;
    }

is quadratic. The running time of the two loops is proportional to the square of N. When N doubles, the running time increases by a factor of four.

    while ( low <= high ) {
        mid = ( low + high ) / 2;
        if ( target < list[mid] )
            high = mid - 1;
        else if ( target > list[mid] )
            low = mid + 1;
        else
            break;
    }

is logarithmic. The running time of the algorithm is proportional to the number of times N can be divided by 2, because the algorithm divides the working area in half with each iteration.

    void quicksort ( int list[], int left, int right ) {
        if ( left < right ) { /* base case added; without it the recursion never stops */
            int pivot = partition ( list, left, right );
            quicksort ( list, left, pivot - 1 );
            quicksort ( list, pivot + 1, right );
        }
    }

is N * log ( N ). The running time consists of N loops (iterative or recursive) that are logarithmic, so the algorithm is a combination of linear and logarithmic.

In general, doing something with every item in one dimension is linear, doing something with every item in two dimensions is quadratic, and dividing the working area in half is logarithmic. There are other Big O measures such as cubic, exponential, and square root, but they're not nearly as common. Big O notation is written as O(f(N)), where f(N) is the measure. The quicksort algorithm would be described as O(N * log(N)).

Note that none of this has taken into account best-, average-, and worst-case measures; each would have its own Big O notation. Also note that this is a VERY simplistic explanation. Big O is the most common notation, but it's also more complex than I've shown. There are also other notations such as big omega, little o, and big theta. You probably won't encounter them outside of an algorithm analysis course. ;)
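The quicksort fragment above calls a partition function that it never defines. Purely as an illustration, here is one self-contained, runnable way to complete it in Java; the Lomuto partition scheme and the class name QuickSortDemo are my choices, not something from the quoted thread:

    import java.util.Arrays;

    public class QuickSortDemo {
        // Sorts list[left..right] in place. On average the recursion is
        // O(log N) levels deep and each level does O(N) partitioning
        // work in total, which is where O(N * log N) comes from.
        static void quicksort(int[] list, int left, int right) {
            if (left < right) {
                int pivot = partition(list, left, right);
                quicksort(list, left, pivot - 1);
                quicksort(list, pivot + 1, right);
            }
        }

        // Lomuto partition: takes list[right] as the pivot value and
        // returns the pivot's final index.
        static int partition(int[] list, int left, int right) {
            int pivotValue = list[right];
            int store = left;
            for (int i = left; i < right; i++) {
                if (list[i] < pivotValue) {
                    int tmp = list[i]; list[i] = list[store]; list[store] = tmp;
                    store++;
                }
            }
            int tmp = list[store]; list[store] = list[right]; list[right] = tmp;
            return store;
        }

        public static void main(String[] args) {
            int[] a = { 5, 2, 9, 1, 7 };
            quicksort(a, 0, a.length - 1);
            System.out.println(Arrays.toString(a)); // prints [1, 2, 5, 7, 9]
        }
    }

Note that with this pivot choice an already-sorted input degrades to the O(N^2) worst case; the average case remains O(N * log N).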

Iniyavel Sugumar

There is no general rule for finding time complexity. In simple examples, with loops that have bounds in terms of the input parameters, it is a matter of careful counting. In the case of dynamic data structures you need to reason more carefully. For instance, many graph algorithms have queues where nodes are added or taken away. I'd say: study some of those algorithms to see how their complexity is calculated.
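As one concrete case (breadth-first search is my example; the answer above names no specific algorithm), BFS keeps exactly such a queue, and the counting argument goes: every node is enqueued and dequeued at most once, and every adjacency entry is scanned at most once, so the total work is O(V + E) for V nodes and E edges. A minimal sketch:

    import java.util.ArrayDeque;
    import java.util.List;
    import java.util.Queue;

    public class BfsDemo {
        // graph.get(u) lists the neighbours of node u (adjacency list).
        static void bfs(List<List<Integer>> graph, int start) {
            boolean[] visited = new boolean[graph.size()];
            Queue<Integer> queue = new ArrayDeque<>();
            visited[start] = true;
            queue.add(start);                // each node enters the queue at most once
            while (!queue.isEmpty()) {
                int u = queue.remove();
                System.out.println("visiting " + u);
                for (int v : graph.get(u)) { // each adjacency entry is scanned once
                    if (!visited[v]) {
                        visited[v] = true;
                        queue.add(v);
                    }
                }
            }
        }

        public static void main(String[] args) {
            // Undirected graph with edges 0-1, 0-2, 1-3.
            List<List<Integer>> graph = List.of(
                    List.of(1, 2), List.of(0, 3), List.of(0), List.of(1));
            bfs(graph, 0);
        }
    }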

Victor Eijkhout
