Introduction to Time Complexity for Programmers
The efficiency of algorithms is important for all programmers, and especially in competitive programming. It is usually easy to design an algorithm that solves
the problem slowly, but the real challenge is to invent a fast algorithm.
If the algorithm is too slow, it will get only partial points or no points
at all.
The time complexity of an algorithm
estimates how much time the algorithm will use for some input. The goal
is to represent the efficiency as a function whose parameter is the size of the
input. By calculating the time complexity, we can determine whether the
algorithm is fast enough without implementing it.
Calculation rules
The time complexity of an algorithm is
denoted O(…), where the three dots represent some function. Usually, the
variable n denotes the input size. For example, if the
input is an array of numbers, n will be the size of the array,
and if the input is a string, n will be the length of the string.
Loops
A common reason why an algorithm is slow is that it contains many loops
that go through the input. The more nested loops the algorithm contains, the slower it is.
If there are x nested loops, the time
complexity is O(n^x).
For example, the time complexity of the
following code is O(n):
for (int i = 1; i <= n; i++) {
    // code
}
And the time complexity of the following code is O(n^2):
for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j++) {
        // code
    }
}
Order of magnitude
A time complexity does not tell us the exact number of times the
code inside a loop is executed; it only shows the order of magnitude.
In the following examples, the code inside the loop is executed 3n, n+5 and
⌈n/2⌉ times, but the time complexity of each code is O(n).
for (int i = 1; i <= 3*n; i++) {
    // code
}
for (int i = 1; i <= n+5; i++) {
    // code
}
for (int i = 1; i <= n; i += 2) {
    // code
}
As another example, the time complexity of the following code is
O(n^2):
for (int i = 1; i <= n; i++) {
    for (int j = i+1; j <= n; j++) {
        // code
    }
}
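Here the inner loop runs fewer times than n on each round, yet the complexity is still O(n^2). A small sketch (the name countPairs is an assumption, not from the text) counts the exact number of inner-body executions:

```cpp
// Counts how many times the inner body of the loop above executes.
long long countPairs(int n) {
    long long c = 0;
    for (int i = 1; i <= n; i++) {
        for (int j = i+1; j <= n; j++) {
            c++;  // one execution of the inner body
        }
    }
    return c;  // equals n*(n-1)/2, which is still O(n^2)
}
```

The exact count n(n−1)/2 differs from n^2 only by a constant factor, so the order of magnitude is unchanged.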
Phases
If the algorithm consists of consecutive phases, the total time
complexity is the largest time complexity of a single phase. The reason for
this is that the slowest phase is usually the bottleneck of the code.
For example, the following code
consists of three phases with time complexities
O(n), O(n^2) and O(n). Thus, the
total time complexity is O(n^2).
for (int i = 1; i <= n; i++) {
    // code
}
for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j++) {
        // code
    }
}
for (int i = 1; i <= n; i++) {
    // code
}
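To see why the middle phase dominates, one can count the total number of operations across all three phases; this sketch (the name phaseOps is illustrative) mirrors the structure above:

```cpp
// Counts operations across the three consecutive phases above.
long long phaseOps(int n) {
    long long c = 0;
    for (int i = 1; i <= n; i++) c++;          // phase 1: n operations
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++) c++;      // phase 2: n^2 operations
    for (int i = 1; i <= n; i++) c++;          // phase 3: n operations
    return c;  // n^2 + 2n in total; the n^2 phase dominates for large n
}
```

Since n^2 + 2n grows like n^2, only the slowest phase matters for the total complexity.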
Several variables
Sometimes the time complexity depends on several factors. In
this case, the time complexity formula contains several variables.
For example, the time complexity
of the following code is O(nm):
for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= m; j++) {
        // code
    }
}
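A quick way to convince oneself of the nm count is to tally the inner-body executions directly; the helper name countGrid below is an assumption for illustration:

```cpp
// Counts how many times the inner body runs for given n and m.
long long countGrid(int n, int m) {
    long long c = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++)
            c++;  // one execution of the inner body
    return c;     // exactly n*m iterations
}
```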
Recursion
The time complexity of a recursive function depends on the number
of times the function is called and the time complexity of a single call.
The total time complexity is the product of these values.
For example, consider the
following function:
void f(int n) {
    if (n == 1) return;
    f(n-1);
}
The call f(n) causes n function calls, and the time
complexity of each call is O(1).
Thus, the total time complexity
is O(n).
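The claim that f(n) causes n calls can be verified with an instrumented variant that returns the call count; fCalls is a hypothetical name introduced here for illustration:

```cpp
// Returns how many function calls f(n) would cause in total.
long long fCalls(int n) {
    if (n == 1) return 1;        // base case: just this one call
    return 1 + fCalls(n - 1);    // this call plus everything f(n-1) causes
}
```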
As another example, consider the
following function:
void g(int n) {
    if (n == 1) return;
    g(n-1);
    g(n-1);
}
In this case each call generates two other calls, except for n =
1. Let us see what happens when g is called with parameter n. The following
table shows the function calls produced by this single call:

function call    number of calls
g(n)             1
g(n-1)           2
g(n-2)           4
…                …
g(1)             2^(n-1)
Based on this, the time complexity is
1 + 2 + 4 + … + 2^(n-1) =
2^n − 1 = O(2^n)
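The total 2^n − 1 can likewise be checked with an instrumented call counter; gCalls is a hypothetical name used only for this sketch:

```cpp
// Returns how many function calls g(n) would cause in total.
long long gCalls(int n) {
    if (n == 1) return 1;             // base case: a single call
    return 1 + 2 * gCalls(n - 1);     // this call plus two identical subtrees
}
```

For n = 3 this gives 1 + 2 + 4 = 7 calls, matching the table and the formula 2^3 − 1.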