Big-Ω (Big-Omega) notation

Sometimes, we want to say that an algorithm takes at least a certain amount of time, without providing an upper bound. We use big-Ω notation; that's the Greek letter "omega."

If a running time is $$\Omega(f(n))$$, then for large enough $$n$$, the running time is at least $$k \cdot f(n)$$ for some positive constant $$k$$. Here's how to think of a running time that is $$\Omega(f(n))$$: once the input size $$n$$ is large enough, the graph of the running time lies on or above $$k \cdot f(n)$$.

We say that the running time is "big-Ω of $$f(n)$$." We use big-Ω notation for asymptotic lower bounds, since it bounds the growth of the running time from below for large enough input sizes.
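As a concrete (hypothetical) example, suppose an algorithm takes $$3n^2 + 10n$$ machine steps on an input of size $$n$$. Its running time is then $$\Omega(n^2)$$: taking $$k = 3$$, we have $$3n^2 + 10n \geq 3n^2$$ for every $$n \geq 1$$. A quick numeric spot-check of that inequality in Python:

```python
def running_time(n):
    # Hypothetical step count for some algorithm: 3n^2 + 10n
    return 3 * n**2 + 10 * n

def f(n):
    return n**2

# With k = 3, running_time(n) >= k * f(n) for all n >= 1,
# which is exactly what Omega(n^2) requires.
k = 3
assert all(running_time(n) >= k * f(n) for n in range(1, 10_000))
```

This checks only finitely many values of $$n$$, of course; the inequality itself holds for all $$n \geq 1$$ because $$10n \geq 0$$.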

Just as $$\Theta(f(n))$$ automatically implies $$O(f(n))$$, it also automatically implies $$\Omega(f(n))$$. So we can say that the worst-case running time of binary search is $$\Omega(\log_2 n)$$.

We can also make correct, but imprecise, statements using big-Ω notation. For example, if you really do have a million dollars in your pocket, you can truthfully say "I have an amount of money in my pocket, and it's at least 10 dollars." That is correct, but certainly not very precise. Similarly, we can correctly but imprecisely say that the worst-case running time of binary search is $$\Omega(1)$$, because we know that it takes at least constant time.
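To see both bounds concretely, here is a sketch of binary search instrumented to count comparisons (the array and step-counting helper are illustrative, not part of the original text). In the worst case, when the target is absent, it makes about $$\log_2 n$$ comparisons; in the best case, when the target sits at the very first midpoint, it makes exactly one:

```python
import math

def binary_search_steps(arr, target):
    """Standard binary search on a sorted list, returning the
    number of loop iterations (comparisons with a midpoint)."""
    lo, hi, steps = 0, len(arr) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return steps
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

n = 1024
arr = list(range(n))

# Worst case (target absent): about log2(n) comparisons -- Omega(log2 n).
worst = binary_search_steps(arr, -1)
assert worst >= math.floor(math.log2(n))

# Best case (target at the first midpoint): a single comparison. The
# running time is therefore at least constant -- Omega(1) is correct,
# but far less precise than Omega(log2 n) for the worst case.
best = binary_search_steps(arr, arr[(n - 1) // 2])
assert best == 1
```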

Of course, when we talk about algorithms, we typically try to describe their running time as precisely as possible. We provide these imprecise statements here to help you better understand big-$$\Omega$$, big-$$O$$, and big-$$\Theta$$.

This content is a collaboration of Dartmouth Computer Science professors Thomas Cormen and Devin Balkcom, plus the Khan Academy computing curriculum team. The content is licensed CC-BY-NC-SA.