In my previous post I discussed the different asymptotic notations and their significance. This post discusses how to calculate the running time of an algorithm and represent it using Big O notation.
Prerequisites:
Basic knowledge of asymptotic bounding and asymptotic notations is required.
Refer to the links below for some basic knowledge of asymptotic bounding and asymptotic notations.
Explanation:
The amount of time and space a piece of code takes to run is very important. The amount of time an algorithm takes to complete is known as the running time of your code, or of the algorithm.
The running time of an algorithm depends on many factors, some are listed below-
- How fast your computer is, and whether it has a single processor or multiple processors.
- The read/write speed of the memory.
- Whether other applications are running at the same time as your algorithm.
- Whether your computer runs a 32-bit or a 64-bit OS.
- The input to your algorithm.
The time complexity of an algorithm is a measure of how the time taken by the algorithm grows as the size of the input increases.
The input to the algorithm is the most important factor affecting its running time, and it is the factor we will consider when calculating time complexities.
The time complexity of an algorithm can be represented by Big O notation, which is one of the asymptotic notations.
There are other asymptotic notations as well, such as theta and omega.
How to calculate running time/time complexity of an algorithm:
1. Consider the below program to calculate the square of an integer.
public int square(int a) {
    return a * a;
}
Suppose it takes 1 unit of time to perform an arithmetic operation and 1 unit of time to return the square. And suppose the input passed to the function is square(2). The time required by the above function would be 2 units.
If the size of the input passed to the function square is increased, e.g. square(50), the time required is still 2 units: one unit for performing the square and the other for returning the square, 2500.
If the size of the input is increased by a large amount, e.g. square(1000), the time required is still 2 units: one unit for performing the square and the other for returning the square, 1000000.
The above function is said to have constant time complexity, since the time required to perform the square remains constant, i.e. 2 units, and can be represented by T = k, where k is some constant. Constant time complexity is represented by O(1).
Below is the graph which represents constant time complexity of the function square:

2. Consider the below program to calculate the sum of all the elements of an array.
public int sum(int arr[], int n) {
    int total = 0;               // 1 unit of time
    for(int i=0; i<n; i++) {     // 2*(n+1) units of time
        total = total + arr[i];  // 2*n units of time
    }
    return total;                // 1 unit of time
}
Suppose it takes 1 unit of time for the assignment total = 0; 2 units of time for the for loop check, which executes (n+1) times; 2 units of time to perform the addition and assign the value to total, which executes n times; and 1 unit of time to return the total, which executes once.
Hence the total time taken for the above code to run = 1 + 2*(n+1) + 2n + 1 = 4n + 4.
The time taken by the function to execute is T(n)=4n+4 which can be expressed as a linear function as T=an+b where n = size of the input and a,b are constants.
We have seen in the asymptotic bounding post that the constants can be ignored and we can consider the term which changes as the input value varies, in this case it is ‘n’.
Here ‘b’ is a constant which can be ignored, and ‘an’ is the term which grows as the size of the input ‘n’ increases; since ‘a’ is a coefficient, it remains constant. Also, we analyze time complexity for large input values, so as ‘n’ grows toward infinity the constants become insignificant. Hence the time taken by the function can be represented as T = O(n).
Ignoring the constants, if the size of the input array ‘n’ is 5, the time taken is proportional to 5 units.
If the size of the input array is 10, the time taken is proportional to 10 units.
If the size of the input array is 100, the time taken is proportional to 100 units.
The graph for the above scenario will look like below,

As we can see in the above graph, the time taken by the function to calculate the sum of all the numbers in the array increases linearly as size of the input increases. Hence this function is said to have a linear time complexity represented by O(n), where n = size of the array.
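As a quick sanity check on the arithmetic, the unit counts can be tallied directly. The sketch below is not part of the original post (the class and method names are made up for illustration); it applies the same cost model as above to the sum function and reproduces T(n) = 4n + 4.

```java
public class OperationCounter {
    // Tallies the "units of time" for the sum function under the cost
    // model used above: 1 for the assignment, 2 per loop check over
    // (n+1) checks, 2 per loop body over n iterations, 1 for the return.
    static long countUnits(int n) {
        long units = 0;
        units += 1;             // int total = 0
        units += 2L * (n + 1);  // for loop check, (n+1) times
        units += 2L * n;        // addition + assignment, n times
        units += 1;             // return total
        return units;           // equals 4n + 4
    }

    public static void main(String[] args) {
        for (int n : new int[] {5, 10, 100}) {
            System.out.println("n=" + n + " -> " + countUnits(n) + " units");
        }
    }
}
```

For n = 5 this prints 24 units (4*5 + 4), confirming that only the 4n term matters as n grows.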
3. Consider the below function to calculate the sum of the elements of an n*n matrix.
public int calculateSum(int matrix[][], int n) {
    int total = 0;                     // 1 unit of time
    for(int i=0; i<n; i++) {           // 2*(n+1) units of time
        for(int j=0; j<n; j++) {       // 2*(n+1) units per outer iteration
            total = total + matrix[i][j];
        }
    }
    return total;                      // 1 unit of time
}
Suppose it takes 1 unit of time to initialize total = 0. The outer for loop takes 2 units of time per check and executes (n+1) times, so it takes 2*(n+1) = 2n+2 units. For every outer loop iteration, the inner for loop also runs 2*(n+1) times, which is why we consider (2n+2)*(2n+2) below. And suppose it takes 1 unit of time to return the total sum.
Hence the total time taken = 1 + (2n+2)*(2n+2)+1 = 2+4n^2+8n+4 = 4n^2+8n+6.
In the above quadratic expression T = 4n^2+8n+6 we can ignore the constants and consider only the fastest growing term, n^2. The time complexity of the above function can therefore be represented by O(n^2).
Ignoring the constants, if the size ‘n’ of the input matrix is 2, the time taken is proportional to 2*2 = 4 units.
If ‘n’ is 3, the time taken is proportional to 3*3 = 9 units.
Below graph represents the quadratic time complexity,

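The algebra above can also be checked mechanically. The small sketch below is not from the original post (names are illustrative); it evaluates the unit count 1 + (2n+2)*(2n+2) + 1 and confirms it matches the simplified form 4n^2 + 8n + 6.

```java
public class MatrixSumUnits {
    // Units of time for calculateSum under the cost model used above:
    // 1 (total = 0) + (2n+2)*(2n+2) for the nested loops + 1 (return).
    static long countUnits(int n) {
        return 1 + (2L * n + 2) * (2L * n + 2) + 1;
    }

    // The simplified closed form derived in the text.
    static long simplified(int n) {
        return 4L * n * n + 8L * n + 6;
    }

    public static void main(String[] args) {
        for (int n : new int[] {2, 3, 10}) {
            System.out.println("n=" + n + " -> " + countUnits(n) + " units");
        }
    }
}
```

For n = 2 both forms give 38 units, and for n = 3 both give 66 units; the n^2 term quickly dominates.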
4. Consider the below code containing a simple for loop:
for(int i=1; i<n; i=i*2) {
    // some statement
}
Let us consider the various values i takes as it is repeatedly multiplied by 2 in the for loop:

| i |
|---|
| 1 |
| 1*2 = 2 |
| 2*2 = 2^2 |
| 2^2*2 = 2^3 |
| 2^3*2 = 2^4 |
| ... |
| 2^k |
The for loop stops when i >= n, i.e. when 2^k >= n; at the boundary, 2^k = n.
If we apply log to both sides we get log(2^k) = log n, i.e. k*log 2 = log n.
Hence we get k = log n (log base 2).
The time complexity of the above loop is O(log n).
Below graph represents logarithmic time complexity for various values of n,

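Binary search is a classic algorithm with exactly this behavior: it halves the search range on every iteration, so it performs at most about log2(n) steps, mirroring the doubling loop above. Below is a minimal sketch, not from the original post; the class and method names are illustrative.

```java
public class BinarySearchDemo {
    // Returns the index of target in the sorted array, or -1 if absent.
    // The range [lo, hi] halves each iteration, so the loop runs at
    // most about log2(n) times -- logarithmic time complexity.
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids int overflow
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] data = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
        System.out.println(binarySearch(data, 23)); // index 5
    }
}
```

Doubling the array size adds only one extra iteration, which is why the logarithmic curve in the graph above flattens out as n grows.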
Rules to follow while deriving time complexity:
Time complexity of an algorithm is analyzed for large input size ‘n’.
For example, if we have a function T(n) = 3(n^3) + 2(n^2) + 4n + 1, then its time complexity is considered to be O(n^3), since the other terms 2(n^2) + 4n + 1 become insignificant when ‘n’ becomes large, i.e. when ‘n’ tends to infinity. Note: drop the lower-order terms and the constant multiplier.
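This can be checked numerically. The sketch below is not from the original post (names are illustrative); it computes what fraction of T(n) = 3(n^3) + 2(n^2) + 4n + 1 comes from the n^3 term as n grows.

```java
public class DominantTerm {
    // Fraction of T(n) = 3n^3 + 2n^2 + 4n + 1 contributed by the 3n^3 term.
    static double cubicShare(long n) {
        double cubic = 3.0 * n * n * n;
        double total = cubic + 2.0 * n * n + 4.0 * n + 1;
        return cubic / total;
    }

    public static void main(String[] args) {
        for (long n : new long[] {10, 100, 1000}) {
            System.out.printf("n=%d: n^3 term is %.3f%% of T(n)%n",
                    n, 100 * cubicShare(n));
        }
    }
}
```

Already at n = 1000 the n^3 term accounts for more than 99.9% of the total, which is why the lower-order terms are dropped.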
Time complexity = Sum of the time complexities of all the fragments.
Consider below code,
int a = 1;                 // runs 1 time
for(int i=0; i<n; i++) {   // condition checked (n+1) times
    // some statement
}
The time complexity of the above code is the sum 1 + (n+1) = n + 2. Since we ignore the constants and the constant multiplier, the time complexity becomes O(n).
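The summing rule is easier to see with two loop fragments. The sketch below is not from the original post (names are illustrative): an O(n) loop is followed by an O(n^2) nested loop, the operation counts add up to n + n^2, and the n^2 fragment dominates.

```java
public class FragmentSum {
    // Counts the operations performed by two fragments run one after
    // the other: an O(n) loop, then an O(n^2) nested loop.
    static long work(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) {       // O(n) fragment
            ops++;
        }
        for (int i = 0; i < n; i++) {       // O(n^2) fragment
            for (int j = 0; j < n; j++) {
                ops++;
            }
        }
        return ops; // n + n^2 operations in total
    }

    public static void main(String[] args) {
        System.out.println(work(10)); // 110 = 10 + 100
    }
}
```

For n = 10 the total is 110 operations, of which 100 come from the nested loop; the sum n + n^2 is O(n^2).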
Time complexity of the worst case is considered.
Consider below if-else condition,
if(some condition) {
    for(int i=0; i<n; i++) {       // O(n)
        // some statement
    }
} else {
    for(int i=0; i<n; i++) {
        for(int j=0; j<n; j++) {   // O(n^2)
            // some statement
        }
    }
}
The time complexity of the for loop inside the ‘if’ branch is O(n), and the time complexity of the for loops inside the ‘else’ branch is O(n^2). The total time complexity will be n^2 + n = O(n^2), i.e. we consider only the worst-case complexity, which occurs when control goes into the ‘else’ branch.
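A concrete, runnable version of the if/else above makes the two branch costs visible. This sketch is not from the original post; the boolean condition is a hypothetical stand-in for "some condition".

```java
public class WorstCase {
    // Counts operations on each branch: O(n) when the condition holds,
    // O(n^2) when it does not. Big O describes the worse of the two.
    static long work(boolean condition, int n) {
        long ops = 0;
        if (condition) {
            for (int i = 0; i < n; i++) {       // O(n) branch
                ops++;
            }
        } else {
            for (int i = 0; i < n; i++) {       // O(n^2) branch
                for (int j = 0; j < n; j++) {
                    ops++;
                }
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        System.out.println(work(true, 100));   // 100 operations
        System.out.println(work(false, 100));  // 10000 operations
    }
}
```

Since we cannot know in advance which branch runs, the function as a whole is classified by its worst case, O(n^2).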
I hope this post helped you understand how to calculate the time complexity of a piece of code. Please do like and share this post if you found it helpful and informative. Your feedback is highly appreciated.