Understanding Code Efficiency
Big O notation is a mathematical notation that describes the limiting behavior of a function as its input tends towards infinity. It is the most widely used notation in computer science for defining time and space complexity, and a staple of understanding code efficiency.
In this article I will specifically be going over how to use Big O notation to represent time complexity (space complexity matters too, but it is often less of a practical concern given how much memory modern computers have; I might go more in depth into it in another article).
First of all, what is time complexity? The time complexity of a program is, well, how long the program takes to run as a function of its input. Big O represents time complexity by measuring how many basic operations an algorithm performs in relation to the size of its input. That sounds like a bunch of jargon, so let's look at some examples:
Programs with O(1) time complexity perform a constant number of operations regardless of the input. For example, in the code below (written in Python, but the logic is the same in any language), no matter what values are passed into the function, the calculation a + b happens exactly once.
def add_a_and_b(a, b):
    return a + b
Now, it's important to note that multiplicative constants can be ignored when determining the Big O for an algorithm (I'll explain why later), so whether you run a + b one time, 5 times, or 1,000 times, as long as the count is a constant, your time complexity is still O(1).
def add_a_and_b_five_times(a, b):
    a + b
    a + b
    a + b
    a + b
    a + b

def add_a_and_b_a_thousand_times(a, b):
    for _ in range(1000):  # 1000 is a constant, independent of the input
        a + b
# Both algorithms still have time complexity O(1)
Algorithms with O(n) time complexity perform a number of operations directly proportional to n, the size of the input. For example, the function below loops through an array of numbers and adds 1 to each of them.
def add_one_to_each_number_in_array(arr):
    for i in range(len(arr)):
        arr[i] += 1
And just like the O(1) examples, no matter how many times arr[i] += 1 appears inside the loop, as long as it's a constant number of times per iteration, the algorithm still has O(n) complexity.
def add_one_to_each_number_in_array_five_times(arr):
    # still O(n): a constant five operations per iteration
    for i in range(len(arr)):
        arr[i] += 1
        arr[i] += 1
        arr[i] += 1
        arr[i] += 1
        arr[i] += 1
Now I think you can already guess what O(n²) time complexity is. An algorithm has O(n²) complexity when it performs a number of operations proportional to n². For example, in the function below, there are two for loops, one nested inside the other.
def print_array_pair_sums(arr):
    for i in range(len(arr) - 1):         # O(n)
        for j in range(i + 1, len(arr)):  # O(n)
            print(arr[i] + arr[j])

input = [1, 2, 3, 4, 5]
stdout -> 3, 4, 5, 6, 5, 6, 7, 7, 8, 9
Each individual loop has a time complexity of O(n), because each runs a number of times proportional to n, and n * n = n².
You might notice that the print statement doesn't actually run n² times; for an array of size n it runs n(n − 1) / 2 times, which is roughly n² / 2. But that doesn't matter: as long as the function scales like n² as n gets bigger and bigger, it is considered O(n²).
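To make that concrete, here's a quick illustrative snippet of my own (not part of the original example) that counts how many times the inner statement runs and compares it to n². The ratio settles near 0.5, a constant factor that Big O ignores:

def count_pair_operations(n):
    count = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            count += 1  # stands in for the print statement
    return count

for n in [10, 100, 1000]:
    ops = count_pair_operations(n)
    print(n, ops, ops / n ** 2)  # ratio approaches 0.5 as n grows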
A common but less intuitive complexity that you will run into a lot is logarithmic time complexity, O(log n). If you remember learning about logarithms in math class, the logarithm of a number is how many times the base must be multiplied by itself to reach that number; it's the inverse of an exponent.
For example, log 1000 (base 10) = 3, because 10 * 10 * 10 = 1000. In logarithmic notation, the base number is usually written as a subscript to the “log”, so log₁₀ 1000 = 3, log₅ 25 = 2.
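As a quick sanity check in Python (a small snippet of my own, not from the article):

import math

print(math.log10(1000))  # 3.0, because 10 * 10 * 10 = 1000
print(math.log2(8))      # 3.0, because 2 * 2 * 2 = 8
print(math.log(25, 5))   # 2.0, because 5 * 5 = 25 (two-argument log takes a custom base;
                         # floating-point rounding can make it very slightly off)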
Logarithmic complexities are a lot more desirable than O(n) because as the size of n grows into the millions, billions, and trillions, log(n) barely increases at all.
The table below represents an algorithm with time complexity O(log n), with 2 as the base number. You can see that the algorithm only runs about 40 times when the input size is a trillion!
input size = 2 -> runs 1 time
input size = 4 -> runs 2 times
input size = 8 -> runs 3 times
input size = 16 -> runs 4 times
...
input size = 1,000,000,000,000 -> runs ~40 times
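The classic example of an O(log n) algorithm is binary search, which throws away half of the remaining search space at every step. Here's a minimal sketch of my own (the table above describes this halving pattern, not this specific code):

def binary_search(sorted_arr, target):
    # Returns the index of target in sorted_arr, or -1 if it's absent.
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        elif sorted_arr[mid] < target:
            lo = mid + 1  # discard the lower half
        else:
            hi = mid - 1  # discard the upper half
    return -1

# Each iteration halves the search space, so an array of size n
# takes at most about log2(n) iterations.
print(binary_search([1, 3, 5, 7, 9, 11], 7))  # -> 3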
There are a few other less-common complexities that you'll see sometimes. Ranked from fastest-growing to slowest-growing, the usual lineup is O(n!), O(2ⁿ), O(n²), O(n log n), O(n), O(log n), and O(1). The ones at the start of that list grow extremely rapidly as the input size increases, meaning they will use up lots of program resources; the ones at the end grow gradually, so they are more efficient.
What Big O really cares about is how algorithms scale as the input size tends towards infinity. This is why minor things like multiplicative and additive constants, or factors such as the base number of a logarithmic algorithm, can be thrown out when determining complexity: changing a logarithm's base only multiplies the result by a constant (for example, log₂ n = log₅ n × log₂ 5), and constants don't change how a function scales.
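Here's a tiny snippet of my own to verify that switching the base only rescales everything by a constant:

import math

for n in [100, 10_000, 1_000_000]:
    ratio = math.log2(n) / math.log(n, 5)
    print(n, ratio)  # always ~2.3219, i.e. log2(5) -- a constant factor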
Modern-day computers can do roughly 10,000,000,000 (10 billion) simple operations per second. Because of this, the overall trend of an algorithm's complexity matters much more than any constant. An O(1) algorithm and an "O(1,000,000)" one are basically the same to a computer (both finish in around a millisecond or less), but O(1,000,000) versus O(n) or O(n²) is a very different story.
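To put rough numbers on that, here's a back-of-the-envelope sketch of my own, assuming the ~10 billion operations per second figure above:

OPS_PER_SECOND = 10_000_000_000  # assumed: ~10 billion simple operations per second

n = 1_000_000
for name, ops in [("1,000,000 constant ops", 1_000_000),
                  ("O(n)", n),
                  ("O(n log n)", n * 20),  # log2(1,000,000) is about 20
                  ("O(n^2)", n ** 2)]:
    print(f"{name}: ~{ops / OPS_PER_SECOND:.4f} seconds")

# At n = 1,000,000, the O(n^2) algorithm needs ~100 seconds,
# while everything else finishes in a few milliseconds or less.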
Every decent programmer knows the value of efficient code, and understanding how well your code performs is an important part of that. I hope you got something out of this article, but know that this is only the tip of the iceberg; I've done my best to fit decades' worth of research and innovation into a five-minute read.
If you want some more resources on Big O notation and time/space complexity, I recommend going through a couple of tutorials on YouTube. Most tech/programming videos on the platform are very well made, and they've helped me a lot in my coding career.
In my next article, we’ll look through several algorithms and determine their time and space complexity using Big O notation.
Thanks for reading!