Your Guide For The Big-O Notation In Technical Interviews
I'm revisiting the fundamentals of software engineering! Whether for job hunting or day-to-day work, a deep understanding of them is a must!
So, I haven’t written many blog posts lately.. a lot of personal stuff happened, I took some breaks to recharge, and generally took a step back with the goal of taking a few steps forward now!
That said, a few recent situations led me to reflect on my technical skills - I’ve forgotten some of the fundamentals, which I believe are a must-know in order to be the best engineer possible.. that’s why I’m coming back to the drawing board, starting with the Big-O notation ⏳
I plan on sharing as much as possible here. With the goal of documenting my progress and writing most of what I learn to help me better understand it. Making it public will help me stay accountable as well 🙏
That said, on the topic 👇
The Big-O Notation
After reading the chapter in the `Cracking the Coding Interview` book (link with FREE PDF here / includes exercises & solutions ✌️) and watching a few YouTube videos, I’ve regained most of my knowledge around Big-O and even learned quite a few things along the way 🤘
Why Learn It?
Practically, you need it to understand the performance of different algorithms - whether in technical interviews or at your job. From my own experience and from looking around, most Leetcode-style interviews expect you to know Big-O and to be able to say how your solution ranks by it.
Some quick bullet points on why you need it:
to predict how your code will scale.
to compare multiple solutions to the same problem.
to spot performance bottlenecks early on.
What Is It?
The simplest, yet clearest explanation I found is: Big-O is used to measure the performance of different algorithms as the amount of data (input size) increases.
“How code slows as data grows.”
It’s a way to describe the efficiency of algorithms in terms of time complexity and space complexity. Think of it as a tool to answer the question:
"How will this algorithm perform as the dataset gets really, really big?"
Showcasing The Difference In Time Complexities
The best way to do that, in my eyes, is through an example, so here’s the exercise: add up all the numbers from 1 to n 👀
First Solution
public int addUpLinear(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        sum += i;
    }
    return sum;
}
Here we run a loop n times, until all numbers from 1 to n are added up to give us the final answer. The number of steps grows linearly with n 📈
O(n) time complexity (linear): The more numbers we sum, the longer it takes. If `n` doubles, the time taken also doubles.
Second Solution
public int addUpConstant(int n) {
    return n * (n + 1) / 2;
}
In the second solution we use a math formula (Gauss’s sum: n(n+1)/2) to calculate the total sum, so we get the answer in a single step, without looping through each element.
O(1) time complexity (constant): The time to calculate the result remains the same, even if `n` increases significantly.
Big-O Notations Explained
There are 7 core time complexities we will briefly look at. Of course we can go deeper into each one of them through exercises and more examples, but for the time being - simple is best 👇
O(1) - Constant Time 💚
Best case scenario: No matter how big your input is, the algorithm takes the same time to execute.
Example: Accessing an element in an array by index (`arr[5]`).
O(log n) - Logarithmic Time
Runtime grows much slower than the input: doubling the input adds only one extra step. This is common in binary search algorithms.
Example: Searching for an element in a sorted array by repeatedly dividing the search space in half.
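To make that concrete, here’s a minimal binary search sketch in Java (my own example, assuming a sorted `int[]`):

public int binarySearch(int[] arr, int target) {
    int lo = 0, hi = arr.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;        // middle of the current search space
        if (arr[mid] == target) return mid;  // found it
        if (arr[mid] < target) lo = mid + 1; // target must be in the right half
        else hi = mid - 1;                   // target must be in the left half
    }
    return -1; // not found
}

Each pass through the loop halves the remaining search space, so it takes roughly log₂(n) steps to narrow down to the answer.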
O(n) - Linear Time 💚
The algorithm’s runtime increases proportionally with the size of the input.
Example: A simple loop that prints each item in a list.
The example above gives a great comparison between constant and linear time.
O(n log n) - Log-linear Time
Often seen in efficient sorting algorithms like Merge Sort or Quicksort (on average). It combines the linear and logarithmic growth rates: log n levels of splitting, with O(n) work at each level.
Example: Sorting algorithms that split data and sort it recursively.
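Here’s a minimal merge sort sketch to illustrate (my own example; uses `java.util.Arrays` for the array copies):

import java.util.Arrays;

public int[] mergeSort(int[] arr) {
    if (arr.length <= 1) return arr; // base case: already sorted
    int mid = arr.length / 2;
    int[] left  = mergeSort(Arrays.copyOfRange(arr, 0, mid));       // split left
    int[] right = mergeSort(Arrays.copyOfRange(arr, mid, arr.length)); // split right
    return merge(left, right); // O(n) merge at each of the ~log n levels
}

private int[] merge(int[] a, int[] b) {
    int[] out = new int[a.length + b.length];
    int i = 0, j = 0, k = 0;
    while (i < a.length && j < b.length)            // take the smaller head each time
        out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
    while (i < a.length) out[k++] = a[i++];         // drain any leftovers
    while (j < b.length) out[k++] = b[j++];
    return out;
}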
O(n²) - Quadratic Time
Runtime grows rapidly as input grows: doubling n quadruples the work. Algorithms with nested loops often have this complexity.
Example: A loop inside another loop, such as comparing all elements in a list to one another (common in Bubble Sort).
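A minimal sketch of that nested-loop pattern (a duplicate check, my own example):

public boolean hasDuplicate(int[] arr) {
    for (int i = 0; i < arr.length; i++) {         // n iterations
        for (int j = i + 1; j < arr.length; j++) { // up to n iterations each
            if (arr[i] == arr[j]) return true;     // ~n²/2 comparisons in total
        }
    }
    return false;
}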
O(2ⁿ) - Exponential Time
Often arises when solving problems recursively with multiple branches, like in recursive backtracking problems.
Example: The Fibonacci sequence calculated recursively without memoization.
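Here’s what that looks like in code (a minimal sketch):

public int fib(int n) {
    if (n <= 1) return n;           // base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2); // two branches per call → ~2ⁿ calls
}

Caching already-computed results (memoization) collapses this down to O(n), since each `fib(k)` then only gets computed once.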
O(n!) - Factorial Time
The worst time complexity on this list. It occurs when you must examine every possible permutation of the input. ⬇️
Factorial time complexity grows incredibly fast as the input size increases. This makes algorithms with factorial time complexity practically unusable for large inputs because the time taken to compute grows at a staggering rate.
Example: Generating all permutations of a string of length `n`, or solving the Traveling Salesman Problem using brute force.
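A minimal permutation-printing sketch (my own example):

public void permutations(String prefix, String rest) {
    if (rest.isEmpty()) {           // nothing left to place
        System.out.println(prefix); // one complete permutation
        return;
    }
    for (int i = 0; i < rest.length(); i++) {
        // pick each remaining character as the next one, then recurse on the rest
        permutations(prefix + rest.charAt(i),
                     rest.substring(0, i) + rest.substring(i + 1));
    }
}

Calling `permutations("", "abc")` prints all 3! = 6 orderings; for a 10-character string that’s already 3,628,800 of them.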
Higher Notation → Better Performance? 🎭
Here I wanted to pin a graph that illustrates how, depending on the input size, a notation that looks worse on paper can actually be the better fit in practice.
If the input size is small, even an O(n²) algorithm may run faster than an O(log n) or O(1) one - Big-O hides constant factors, and for tiny inputs those constants can dominate!
⚡ How to Recognize and Optimize Big-O in Your Code
Here are some tips to make Big-O notation more tangible:
Nested loops: Watch out for nested loops, which often lead to O(n²) or worse. Can you reduce the number of iterations?
Recursive algorithms: Ensure that recursion is used wisely. Exponential complexities (O(2ⁿ)) often result from inefficient recursive calls.
Data structures: Choosing the right data structure (like using a hash map for lookups) can turn O(n) operations into O(1) operations - see the sketch below.
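A quick, runnable sketch of that last point (toy data, my own example):

import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class LookupDemo {
    public static void main(String[] args) {
        List<Integer> list = List.of(1, 2, 3, 4, 5);
        Set<Integer> set = new HashSet<>(list);

        System.out.println(list.contains(5)); // O(n): scans element by element
        System.out.println(set.contains(5));  // O(1) on average: hashes straight to the bucket
    }
}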
Some tools to practice 🛠️
Go through the exercise and solution walk-throughs in the book `Cracking The Coding Interview`
Do Leetcode and/or practice on real-world algos 🚀
Hope you found this blog-post helpful! 🤘
I tried to pull in the things I’ve learned about Big-O in the most understandable way. There are a lot more nitty-gritty details on the topic, but what we covered here should be good enough for the theoretical part of it.
🚨 In upcoming blog posts, I will be going over some common and more complicated data structures and their implications in real-world examples 🔜