Big O and All You Need To Know

Deepakshi Sood
4 min read · Dec 13, 2020


For all those out there starting with Competitive Coding, there are multiple languages you can choose to code in. Whichever you pick, however, the concept of Scalability is a standard you will always come across: it is the baseline of understanding you need to write efficient and extendable code.

So what is the deal with Scalability and Big O, and what are they anyway?

Scalability refers to the extent to which your code can adapt to a change in the volume of input data. That is, if the size of the input changes, the code should be able to adjust accordingly.

Now comes the thing about Big O. It refers to analyzing an algorithm in terms of its scalability. Putting it in simple terms, Big O is a function that tells how the complexity of your algorithm changes with a change in the volume of input. When we plot a graph of the number of input elements against the number of operations needed to get the output, the curve obtained tells us the Big O of the function. The term Big O (O: order) comes from asymptotic analysis, a method of determining the upper bound for a function. Time Complexity and Space Complexity are its two aspects. Time complexity refers to the amount of time taken by a function (or snippet of code) to run as the input volume varies, while Space complexity refers to the amount of memory occupied by variables for given inputs.

Golden Rules of Big O

Before getting into the different types of Big O notations, we have some ground rules that govern them. These include:

  1. Always consider the Worst-Case Input.
  2. Remove the Constants added or multiplied to the Complexities.
  3. For different sized inputs, use different terms.
  4. Only keep the Dominant Term.
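As a quick illustration of Rule 2, consider a hypothetical helper that walks the same array twice. The operation count is 2n, but the constant factor is dropped and the complexity is still O(n):

```python
def count_operations(array):
    ops = 0
    for _ in array:   # first pass: n operations
        ops += 1
    for _ in array:   # second pass: another n operations
        ops += 1
    return ops        # 2n in total, but Big O drops the constant: O(n)
```

Doubling the input doubles the count; the growth is still linear, which is exactly what Rule 2 preserves while discarding the factor of 2.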

Let’s move on to some common Big O notations of time complexity. While writing code (or a function), we want the lowest time complexity possible so that execution time stays minimal.

O(1) — Constant Time

Consider an example of the following code in Python:

def firstFunction(array):
    # Indexing is a single operation regardless of input size: O(1)
    print(array[1])

The function firstFunction takes an array as an input and prints its second element. This function has a time complexity of Constant Time. This is because no matter the size of the input array, the function will always print the second element in the array. So, when we plot inputs vs no. of operations graph, the line will be horizontal.

Other operations such as append(), pop(), and indexing into an array are also Constant Time.
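In Python terms (the list and variable names here are just for illustration), those constant-time operations look like this:

```python
data = [10, 20, 30]

data.append(40)    # O(1): add to the end of the list
last = data.pop()  # O(1): remove from the end of the list
value = data[1]    # O(1): direct index lookup
```

Each line costs the same no matter how long `data` is.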

O(n) — Linear Time

For the following example:

def secondFunction(array):
    # Worst case: scans every element of the array → O(n)
    for i in range(len(array)):
        if array[i] == 20:
            print(array[i])

The function secondFunction takes an array as input and loops through it to find an element equal to 20, printing it. Here, the size of the array can vary. The loop has to go over the entire array one element at a time and check whether it equals 20. Hence, as the input size increases, so does the number of operations; the worst case occurs when 20 is found at the last index (or not at all). Functions like this are said to run in Linear Time, or O(n).

Operations such as Traversing an array, Linear Search, and Comparisons have Linear Time Complexities.
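Linear Search is the textbook example. Here is a sketch (the function name is mine) that returns the index of the first match, scanning the whole array in the worst case:

```python
def linear_search(array, target):
    # Worst case: target is at the last index or absent,
    # so the loop runs len(array) times → O(n)
    for i in range(len(array)):
        if array[i] == target:
            return i
    return -1
```

An array twice as long takes, at worst, twice as many comparisons.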

O(n²) — Quadratic Time

Consider the following example:

def thirdFunction(array1, array2):
    # Nested loops: every element of array1 is compared
    # with every element of array2
    for i in range(len(array1)):
        for j in range(len(array2)):
            if array1[i] == array2[j]:
                print("Equal")

The thirdFunction takes two arrays as input, loops through both, and prints “Equal” whenever an element of the first array equals an element of the second. For each iteration of the outer loop, the operation runs through the entire inner loop before moving on. So if the size of array1 is a and the size of array2 is b, the total number of iterations is a*b (Rule 3). If both arrays have the same length, say n, the total becomes n². This is called Quadratic Time Complexity. Operations involving nested loops have an O(n²) complexity.

Sorting Techniques such as Bubble, Insertion, and Selection Sort have a Quadratic Time Complexity.
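Bubble Sort is a handy concrete case. A minimal sketch showing the nested loops that give it its O(n²) behaviour:

```python
def bubble_sort(array):
    # The outer loop runs n times; the inner loop compares adjacent
    # pairs, so total comparisons are on the order of n * n → O(n²)
    n = len(array)
    for i in range(n):
        for j in range(n - 1 - i):
            if array[j] > array[j + 1]:
                array[j], array[j + 1] = array[j + 1], array[j]
    return array
```

The inner loop shrinks each pass, but by Rule 4 the dominant term is still n².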

O(n!) — Factorial Time

This is one of the worst time complexities a function can have. Roughly speaking, it means adding another nested loop for every element of the input. It is very uncommon to witness this type of Big O, and if you do, you probably need to optimize the code. One example is solving the Travelling Salesman Problem with a Brute Force Algorithm.
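A minimal sketch of the brute-force Travelling Salesman idea, assuming a small distance matrix (the matrix and function name are illustrative): trying every ordering of n cities means checking n! permutations, hence O(n!).

```python
from itertools import permutations

def shortest_tour(distances):
    # distances[i][j] is the cost of travelling from city i to city j.
    # Every ordering of n cities is tried: n! permutations → O(n!)
    cities = range(len(distances))
    best = None
    for perm in permutations(cities):
        cost = sum(distances[perm[i]][perm[i + 1]]
                   for i in range(len(perm) - 1))
        if best is None or cost < best:
            best = cost
    return best
```

Even 12 cities already mean nearly half a billion permutations, which is why brute force stops scaling almost immediately.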

The world of Big O and Complexity is vast and crucial from a Competitive Programming point of view. Hence, it is worth brushing up on this concept before moving on to the actual programming part.
