Analysis and Big-O notation

In CS 170, we commonly work with two formulations of asymptotic notation, as described in Discussion 1:

Formulation 1: Definition

$f(n) = O(g(n))$ if there exists a $c > 0$ where, after large enough $n$, $f(n) \le c \cdot g(n)$. Asymptotically, $f$ grows at most as much as $g$.
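To make this concrete, here is a small numerical sanity check of Formulation 1, using hypothetical choices $f(n) = 3n^2 + 5n$ and $g(n) = n^2$ with witnesses $c = 4$ and $n_0 = 5$ (since $5n \le n^2$ whenever $n \ge 5$):

```python
# Numerical illustration of Formulation 1 (hypothetical example:
# f(n) = 3n^2 + 5n, g(n) = n^2, witnesses c = 4 and n0 = 5).
def f(n):
    return 3 * n**2 + 5 * n

def g(n):
    return n**2

c, n0 = 4, 5
# f(n) <= c * g(n) for all n >= n0, so f(n) = O(g(n)).
assert all(f(n) <= c * g(n) for n in range(n0, 10**4))
```

Of course, checking finitely many values of $n$ is not a proof; the algebraic fact $5n \le n^2$ for $n \ge 5$ is what actually closes the argument.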

Formulation 2: Convenient condition

If $f, g > 0$ and $\lim_{n \to \infty} \frac{f(n)}{g(n)} < \infty$, then $f(n) = O(g(n))$.
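For the same hypothetical pair $f(n) = 3n^2 + 5n$ and $g(n) = n^2$, the ratio $f(n)/g(n) = 3 + 5/n$ settles toward the finite limit $3$, which is what Formulation 2 asks for:

```python
# The ratio f(n)/g(n) = 3 + 5/n approaches the finite limit 3 as n grows.
def f(n):
    return 3 * n**2 + 5 * n

def g(n):
    return n**2

ratios = [f(n) / g(n) for n in (10, 100, 1000, 10**6)]
# Each successive ratio is closer to 3 than the previous one.
assert all(abs(r - 3) < abs(prev - 3) for prev, r in zip(ratios, ratios[1:]))
assert abs(ratios[-1] - 3) < 1e-3
```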

The discussion asks you to convince yourself that these formulations are correct, and that Formulation 2 is a sufficient (but not necessary) condition for Formulation 1, given that $\lim_{n \to \infty} \frac{f(n)}{g(n)}$ exists. However, this requires a bit of real analysis, which is not a prerequisite of this class. Therefore, I wrote this brief note to provide a bit of mathematical justification for this idea. For a more detailed look, check out the last few pages of this excellent note from COMP 250 at McGill, which I have essentially paraphrased here.

First, the formal definition of a limit.

Limit

A sequence of real numbers $(a_n)$ converges to (or has a limit of) a real number $L$ if, for any $\epsilon > 0$, there exists an $N$ such that for all $n > N$, $|a_n - L| < \epsilon$.
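The definition above can be sketched numerically. Taking the classic example $a_n = 1/n$ with limit $L = 0$, for each $\epsilon$ the threshold $N = \lceil 1/\epsilon \rceil$ works, since $n > N$ implies $1/n < \epsilon$:

```python
import math

# Epsilon-N definition in action for a_n = 1/n, whose limit is L = 0.
def a(n):
    return 1 / n

L = 0
for eps in (0.5, 0.1, 0.001):
    N = math.ceil(1 / eps)  # any n > N gives |1/n - 0| = 1/n < eps
    assert all(abs(a(n) - L) < eps for n in range(N + 1, N + 1000))
```

The key point: $N$ is allowed to depend on $\epsilon$, and smaller $\epsilon$ generally forces a larger $N$.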

We would like to relate this definition of the limit to our previous definition of Big-O. To do this, let us write the definition of Big-O slightly differently, in terms of the ratio of $f(n)$ to $g(n)$.

Big-O (altered definition)

$f(n)$ is $O(g(n))$ if there exists a $c > 0$ such that, after large enough $n$,

$$\frac{f(n)}{g(n)} \le c.$$

Now, assume that $f, g > 0$ and $\lim_{n \to \infty} \frac{f(n)}{g(n)} = L < \infty$. Based on the definition of the limit, we have that, for sufficiently large $n$ and arbitrary $\epsilon > 0$,

$$\left| \frac{f(n)}{g(n)} - L \right| < \epsilon.$$

Does this look similar to our altered Big-O definition? Indeed, take an arbitrary $\epsilon > 0$ and call $L + \epsilon$ our $c$. Then we have

$$\frac{f(n)}{g(n)} < L + \epsilon = c,$$

which tells us (up to the inconsequential difference between $<$ and $\le$) that

$$f(n) = O(g(n)).$$
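The whole argument can be checked numerically on a hypothetical example: with $f(n) = 2n^2 + 7n$ and $g(n) = n^2$, the limit of the ratio is $L = 2$, and picking $\epsilon = 1$ yields the Big-O constant $c = L + \epsilon = 3$:

```python
# Putting the argument together (hypothetical example:
# f(n) = 2n^2 + 7n, g(n) = n^2, so the limit of f/g is L = 2).
def f(n):
    return 2 * n**2 + 7 * n

def g(n):
    return n**2

L, eps = 2, 1
c = L + eps  # the constant extracted from the limit
# For large enough n (here n >= 8, since 7/n < 1 when n > 7),
# f(n)/g(n) < c, i.e. f(n) <= c * g(n): the original Big-O definition.
assert all(f(n) / g(n) < c for n in range(8, 10**4))
```

Notice that the limit handed us a concrete witness $c$, which is exactly why Formulation 2 implies Formulation 1.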

This line of reasoning can be extended to the similar formulations of Big-$\Omega$ and Big-$\Theta$, described in Section 1 of the discussion. I won't go over them here, but you can refer to page 13 of the McGill note if you'd like to see it worked out completely.

Hope this note was helpful!
