Misc Complexity Tricks


Some important things

If $f(x)$ is $O(g(x))$ but not $\Theta(g(x))$, then it is also the case that $k^{f(x)} = O(k^{g(x)})$ for any constant $k > 1$.

If $f(x) = \Omega(g(x))$ and it is not $\Theta(g(x))$, then $k^{f(x)} = \Omega(k^{g(x)})$ (again for $k > 1$). This is because $f(x)$ grows faster than $g(x)$, and not just by a constant factor, which is what can mess things up in the exponent.

And more generally, this holds for any monotonically increasing function composed with $f, g$.

However, the difficulty comes when $f(x) = \Theta(g(x))$. When this is the case, you can’t say anything about the relationship between $k^{f(x)}$ and $k^{g(x)}$. It all depends on the constant factor between $f$ and $g$.
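A quick numeric sketch of why the $\Theta$ case breaks (my own example, with $k = 2$, $f(n) = 2n$, $g(n) = n$):

```python
# f(n) = 2n and g(n) = n are related by Theta (constant factor 2),
# yet 2^f(n) / 2^g(n) = 2^n is unbounded, so 2^f is NOT O(2^g).
def f(n): return 2 * n
def g(n): return n

ratios = [2 ** f(n) / 2 ** g(n) for n in range(1, 11)]
# The ratio doubles at every step instead of settling near a constant.
assert all(b == 2 * a for a, b in zip(ratios, ratios[1:]))
```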

In general, when checking whether something satisfies a certain complexity bound, you should look for counterexamples. For $O(\cdot)$, find cases in which we go above the bound, and for $\Theta$, also find cases in which we go below.

Chaining bounds

Say that you wanted to prove that $f(n) = O(g(n))$. You could show $f(n) \leq cg(n)$ directly, but sometimes that’s hard. Sometimes, you can prove that $f(n) \leq h(n)$, and then prove that $h(n) \leq cg(n)$. Together, these prove that $f(n) \leq cg(n)$.

You can do this same trick with $f(n) = \Omega(g(n))$, but the other way around: show that $f(n) \geq h(n)$ and $h(n) \geq cg(n)$.
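A small worked instance (my example, not from the notes): to show $\log(n!) = O(n\log n)$, chain through the intermediate $h(n) = \log(n^n)$:

```latex
\log(n!) \;\le\; \log(n^n) \;=\; n\log n \;\le\; 1 \cdot n\log n
```

The first inequality holds because $n! \leq n^n$, and the chain gives $f(n) \leq cg(n)$ with $c = 1$, without ever computing $\log(n!)$ in closed form.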

The “distillation” idea

When talking about big $O$, $\Theta$, or $\Omega$ notation, it’s often accurate to just consider the largest term. So $4n^2$ will be equivalent to $n^2$, etc. However, this breaks down in a few key cases.

Variable substitution

Sometimes it’s easier to compare after substituting variables. For example, when proving that $\log^7 n \leq c n^{0.7}$, we take the log of both sides and let $k = \log n$, where it becomes obvious that $7\log k \leq 0.7 k + \log c$.
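A numeric sanity check of the substituted inequality (my sketch; natural log and $c = 1$, so the claim is $7\log k \leq 0.7k$ for large $k$):

```python
import math

# After substituting k = log n, the claim log^7 n <= n^0.7 becomes
# 7*log(k) <= 0.7*k: a line in k eventually dominates a log in k.
fails_early = 7 * math.log(10) > 0.7 * 10  # still above the line at k = 10
holds_eventually = all(7 * math.log(k) <= 0.7 * k for k in range(36, 10000))
assert fails_early and holds_eventually
```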

Ratio test

If you’re not sure which one grows quicker, take the ratio $f / g$ and then take the limit as $n \to \infty$. This resembles the ratio test from sequences, and it does work. If the ratio tends to zero, then we say that $f = o(g)$ (yes, it’s lowercase $o$).
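For instance (my example), checking $\log^2 n$ against $n$ numerically suggests the ratio tends to zero, i.e. $\log^2 n = o(n)$:

```python
import math

# If f/g -> 0 as n -> infinity, then f = o(g).
# Here f(n) = log(n)^2 and g(n) = n.
def ratio(n):
    return math.log(n) ** 2 / n

samples = [ratio(10 ** e) for e in range(1, 8)]
# The samples shrink monotonically toward 0.
assert all(a > b for a, b in zip(samples, samples[1:]))
```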

Floors and ceilings

The best bet with floors and ceilings is to look at the best case and the worst case: when the floor/ceiling actually changes the value, versus when it effectively just passes through whatever is on the inside.
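The sandwich $x - 1 < \lfloor x \rfloor \leq x \leq \lceil x \rceil < x + 1$ captures both cases at once; a quick check:

```python
import math

# Worst case and best case for rounding: the floor/ceiling moves the
# value by strictly less than 1, a lower-order change asymptotically.
for x in [0.1, 2.5, 7.0, 123.456]:
    assert x - 1 < math.floor(x) <= x <= math.ceil(x) < x + 1
```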


In general, $k^n$ and $q^n$ have different asymptotic behavior for $k \neq q$. Formally, this is because if you take the log of both sides in your derivation of $O$, you might get

$$n\log k \leq n \log q + \log c$$

The $c$ is no longer a slope, but rather an intercept. Therefore, if $k \leq q$, then our inequality is satisfied for large $n$. Otherwise, it is contradicted.
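Concretely (my numbers): with $k = 3$ and $q = 2$, the intercept $\log c$ can never rescue the inequality, because the gap $n(\log 3 - \log 2)$ grows without bound:

```python
# The ratio 3^n / 2^n = 1.5^n exceeds any constant c, so 3^n != O(2^n).
ratios = [(3 / 2) ** n for n in range(0, 50, 10)]
assert ratios == sorted(ratios) and ratios[-1] > 10 ** 6
```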

Two identities to remember:

$$\binom{n}{n/2} \geq \binom{n}{k}$$

and that

$$\sum_{i=0}^n\binom{n}{i} = 2^n$$
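Both identities are easy to spot-check with `math.comb` (my sketch, using $n = 10$):

```python
import math

n = 10
# The central binomial coefficient dominates every entry in its row...
central = math.comb(n, n // 2)
assert all(math.comb(n, k) <= central for k in range(n + 1))
# ...and the row sums to 2^n.
assert sum(math.comb(n, k) for k in range(n + 1)) == 2 ** n
```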


We can derive from the ratio test that $\log^b n = o(n^a)$ for any positive constants $b, a$. This means that any polylogarithm grows slower than any polynomial function.

Interesting example: $n\log n$ vs $\sum_i \log i$

We can show that these two are actually related by $\Theta$. First, we show that $\sum_{i=1}^n \log i \leq n\log n$. This is trivial, because $\log i \leq \log n$ for every term. As such, we conclude that $\sum_i \log i = O(n\log n)$.

Next, we unroll the summation and get $\sum_i \log i = \log 1 + \dots + \log(n/2) + \dots + \log(n)$. The last $n/2$ terms are each at least $\log(n/2)$, so $\sum_i \log i \geq (n/2)\log(n/2) = (n / 2)(\log n - \log 2)$. This means that $\sum_i \log i \geq c n \log n$ for a suitable constant $c$ and large $n$, which means that $\sum_i \log i = \Omega(n\log n)$.

Finally, taken together, this means that $\sum_i \log i = \Theta(n \log n)$. This is a really nice trick!
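A numeric check (my sketch): the ratio $\sum_i \log i / (n\log n)$ stays pinned near a constant (in fact it tends to 1), consistent with the $\Theta$ bound:

```python
import math

def ratio(n):
    return sum(math.log(i) for i in range(1, n + 1)) / (n * math.log(n))

samples = [ratio(10 ** e) for e in range(2, 6)]
# The ratio sits strictly between the Omega and O constants.
assert all(0.5 < s < 1.0 for s in samples)
```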


A weak upper bound is $n! \leq n^n$, but this isn’t fun. We can use Stirling’s approximation, which states that

$$n! = \sqrt{2\pi n}\left(\frac{n}{e}\right)^n\left(1 + \Theta\!\left(\frac{1}{n}\right)\right)$$

which means that for large enough $n$, we can disregard the additional $\Theta$ term.

From Stirling’s approximation, we get that $\log(n!) = \Theta(\log(n^n)) = \Theta(n\log n)$, which is very interesting.
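A quick numeric check of this last claim (my sketch, using `math.lgamma`, which computes $\log\Gamma(n+1) = \log(n!)$ without overflow):

```python
import math

# log(n!) / (n log n) tends to 1, matching log(n!) = Theta(n log n).
def ratio(n):
    return math.lgamma(n + 1) / (n * math.log(n))

assert abs(ratio(10 ** 6) - 1.0) < 0.1
```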