Big O Insights

Lesson

Imagine you're asked to evaluate how fast an algorithm is. Think for a second about how you might do this. One idea is to just count the total number of simple operations it does (reads, writes, comparisons) given an input of size $n$. For example:

\begin{align*}
\text{Input Size} &= n\\
\text{Number of Operations} &= \tfrac{1}{2}n^2 + 152n + 6000
\end{align*}
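To make that concrete, here is a minimal Python sketch (a made-up toy routine, not anything from this lesson) that tallies the simple operations a quadratic-plus-linear algorithm performs; the count grows roughly like $\tfrac{1}{2}n^2$:

```python
def count_operations(n):
    """Tally the simple operations (comparisons and writes) that a toy
    algorithm performs on an input of size n. Purely illustrative."""
    data = list(range(n))
    ops = 0
    # Quadratic part: compare every pair once -> about n^2 / 2 comparisons.
    for i in range(n):
        for j in range(i + 1, n):
            _ = data[i] > data[j]
            ops += 1
    # Linear part: one write per element -> n more operations.
    for i in range(n):
        data[i] += 1
        ops += 1
    return ops

for n in (10, 100, 1000):
    print(n, count_operations(n))  # dominated by the n^2 / 2 term as n grows
```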

The thing is, it's hard to come up with a formula like the one above, and the coefficients will vary a lot based on what processor you run your program on.

To simplify this, computer scientists only talk about how the algorithm's speed scales when $n$ gets very large. The biggest-order term always kills the other terms when $n$ gets very large (plug in $n = 1{,}000{,}000$). This happens no matter what coefficients you have. You can see that when $n = 1{,}000{,}000$, the $n^2$ term dominates even though it has a much smaller coefficient:
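Plugging $n = 1{,}000{,}000$ into the formula above:

\begin{align*}
\tfrac{1}{2}n^2 &= \tfrac{1}{2}(10^6)^2 = 500{,}000{,}000{,}000\\
152n &= 152{,}000{,}000\\
6000 &= 6{,}000
\end{align*}

The $\tfrac{1}{2}n^2$ term contributes more than 99.9% of the total, so the other terms barely matter.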

So computer scientists only talk about the biggest term, without any coefficients. The above algorithm has $n^2$ as the biggest term, so we say that:

\[
\text{Time Complexity} = O(n^2)
\]

This is called Big O notation. Verbally, you say that the algorithm takes "on the order of $n^2$ operations". In other words, the amount of time the algorithm takes to run scales with the term $n^2$.

Computer scientists describe memory based on how it scales too. If your program needs $200n + 5$ units of storage, then computer scientists say $\text{Space Complexity} = O(n)$.
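As a rough illustration (two hypothetical functions, not from the lesson): the first builds a result list with one entry per input element, so its space scales as $O(n)$; the second keeps only a running total, so its extra space is $O(1)$:

```python
def squares_list(nums):
    """O(n) space: stores one result per input element."""
    result = []
    for x in nums:
        result.append(x * x)
    return result

def squares_sum(nums):
    """O(1) extra space: reuses a single running total."""
    total = 0
    for x in nums:
        total += x * x
    return total
```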

Common Confusion

There's a common confusion here. People intuitively know that something that takes $n$ operations is $O(n)$. But many get confused by the fact that something that takes, say, $2n$ operations is still $O(n)$, or that something that takes 26 operations is $O(1)$.

The answer is that Big O is just a ballpark estimate of how your function grows. Which power of $n$ does it grow with? If it doesn't grow at all, then it grows just like the number 1 grows (not at all), so it's $O(1)$, the 0th power of $n$.

If it grows as some multiple of $n$, it's $O(n)$. If it grows with the square of $n$, it's $O(n^2)$. If it grows with $\log n$, it's $O(\log n)$. And so on. It's just a ballpark estimate; a big mistake is to try to overly formalize it or overcomplicate it. It should be intuitive.
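To keep that ballpark intuition concrete, here are toy Python functions (illustrative sketches, not from the lesson) whose operation counts grow like each of those terms:

```python
def get_first(items):
    """O(1): one operation regardless of the input size."""
    return items[0]

def contains(items, target):
    """O(n): in the worst case, looks at every element once."""
    for x in items:
        if x == target:
            return True
    return False

def has_duplicate(items):
    """O(n^2): compares every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def binary_search(sorted_items, target):
    """O(log n): halves the search range on every step."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```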

Time ≥ Space

If you build a size-10,000 array, you must have done at least 10,000 operations. But if you did 10,000 operations, you might not have used 10,000 slots of memory. Your $\text{Time Complexity}$ will always be greater than or equal to your $\text{Space Complexity}$.

This picture might give you an intuitive sense of this. You can reuse space, but not time. This is ultimately because in our universe, you can go back in space, but you can't go back in time.

It will always be true that $\text{Total Time} \ge \text{Total Space}$, even if you're writing parallel algorithms (which don't show up in interviews), because a parallel algorithm is just made of many single-processor algorithms. This is why you usually assume infinite memory in most problems, and why limited-memory problems like in-place sorting are arguably not very interesting. Time limits space, so time is more interesting.
