Basics & Notation
Welcome to your first week! Before diving into clever algorithms, we need to build a solid foundation. Think of an algorithm as a recipe — a step-by-step set of instructions that tells a computer exactly how to solve a problem. But not all recipes are equally fast: some algorithms can process millions of items in seconds, while others would take years on the same data.
This week you will learn the mathematical language — Big-O ($O$), Big-Omega ($\Omega$), and Big-Theta ($\Theta$) notation — that computer scientists use to measure and compare algorithm speed. By the end of this week, you will be able to look at a simple piece of code and estimate how its running time grows as the input gets larger — a fundamental skill for everything that follows in this course.
What is an Algorithm?
An algorithm is simply a step-by-step set of instructions for solving a specific problem — like a cooking recipe, but for computers. For example, when you search for a word in a dictionary, you probably open the book roughly in the middle, check whether you need to go forward or backward, and repeat until you find it. That process is an algorithm!
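To make that concrete, here is a minimal Python sketch of that dictionary-style lookup, better known as binary search (the function name and the word list are our own illustration, not part of the course text):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2           # open the "book" roughly in the middle
        if sorted_items[mid] == target:
            return mid                 # found the word
        elif sorted_items[mid] < target:
            lo = mid + 1               # go forward (search the right half)
        else:
            hi = mid - 1               # go backward (search the left half)
    return -1                          # target is not in the list

words = ["ant", "bee", "cat", "dog", "elk", "fox", "gnu"]
print(binary_search(words, "dog"))     # 3
print(binary_search(words, "yak"))     # -1
```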
In computer science, a good algorithm must have five key properties:
- **Correctness** — it must give the right answer
- **Finiteness** — it must eventually stop and not run forever
- **Definiteness** — every step must be clear with no room for guessing
- **Input/Output** — it takes data in and produces a result
- **Effectiveness** — each step must be something a computer can actually carry out
Understanding these properties helps us tell the difference between a well-designed algorithm and a vague or broken set of instructions.
- Must terminate (finish) in a finite number of steps for all valid inputs — an algorithm that runs forever is not an algorithm at all
- Each step must be precisely and unambiguously defined — there should be zero confusion about what to do next
- Correctness means producing the right output for every possible valid input, not just some lucky cases
- Efficiency matters: two algorithms can solve the same problem, but one might take seconds while the other takes hours — learning to measure this is the whole point of this course, as the sketch below illustrates
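As a quick illustration of that last point, the sketch below counts the steps taken by a straightforward left-to-right scan versus the binary search idea from earlier, on the same sorted data (the step-counting harness is our own, not from the course text):

```python
def linear_search_steps(items, target):
    """Scan left to right; return the number of comparisons made."""
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            break
    return steps

def binary_search_steps(items, target):
    """Halve the search range each time; return the number of comparisons."""
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1_000_000))                # a sorted list of a million numbers
print(linear_search_steps(data, 999_999))    # ~1,000,000 steps
print(binary_search_steps(data, 999_999))    # ~20 steps
```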
Asymptotic Notation
When we want to know how fast an algorithm runs, we do not measure it in seconds — because different computers have different processor speeds (a gaming PC with a fast CPU might execute 1 billion operations per second, while an older laptop might only handle 100 million, so measuring in seconds would give different results on every machine). Instead, we count the number of steps the algorithm takes and ask: as the amount of data grows, how much longer will this take?
Asymptotic notation gives us a mathematical language to answer that question. Before we look at the formulas, here is what each symbol means:
- $n$ = the size of the input (e.g. the number of items in a list)
- $T(n)$ = your algorithm's step count — the number of operations your algorithm actually performs when given an input of size $n$
- $f(n)$ = the reference function you are comparing against (such as $n$, $n^2$, $\log n$, etc.)
- $c$ = some positive constant multiplier (the exact value does not matter — we just need some $c$ that makes the inequality work)
- $n_0$ = a starting point — the inequality only needs to hold from $n_0$ onward, because we care about large inputs, not tiny ones
With those symbols defined, here are the three main notations:
| Notation | Name | Meaning | Formal Definition |
|---|---|---|---|
| $O(f(n))$ | Big-O | Upper bound (grows no faster than) | $T(n) \le c \cdot f(n)$ for all $n \ge n_0$ |
| $\Omega(f(n))$ | Big-Omega | Lower bound (grows at least as fast as) | $T(n) \ge c \cdot f(n)$ for all $n \ge n_0$ |
| $\Theta(f(n))$ | Big-Theta | Tight bound (grows at exactly the same rate) | $c_1 \cdot f(n) \le T(n) \le c_2 \cdot f(n)$ for all $n \ge n_0$ |
Reading the table in plain English: Big-O says "my algorithm's step count $T(n)$ is at most a constant times $f(n)$ for large inputs" — so if we say an algorithm is $O(n^2)$, we mean its steps grow no faster than $c \cdot n^2$. Big-Omega says it grows at least that fast, and Big-Theta says it grows at exactly that rate.
Important clarification: Big-O, Big-Omega, and Big-Theta describe relationships between mathematical functions — they are NOT the same as best/worst/average case! You can use any of these notations for any case. For example, "Insertion Sort's best-case running time is $\Theta(n)$" and "Insertion Sort's worst-case running time is $\Theta(n^2)$" are both valid. The notation describes the growth rate of a function; the case tells you which function (best, worst, or average) you are analyzing.
For example, if an algorithm is $\Theta(n^2)$, doubling the input size roughly quadruples the running time. These notations let us compare algorithms fairly, regardless of what computer, programming language, or operating system we use.
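To see both of the last two points in action, here is a hedged Python sketch (our own illustration, not from the course text): insertion sort with a comparison counter, run on already-sorted input (best case, roughly $n$ steps), on reversed input (worst case, roughly $n^2/2$ steps), and on a doubled worst-case input to show the roughly 4x jump:

```python
def insertion_sort_steps(items):
    """Sort a copy of items; return the number of comparisons performed."""
    a, steps = list(items), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            steps += 1                 # one comparison of key against a[j]
            if a[j] > key:
                a[j + 1] = a[j]        # shift the larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return steps

n = 1000
print(insertion_sort_steps(range(n)))             # sorted input: ~n steps
print(insertion_sort_steps(range(n, 0, -1)))      # reversed input: ~n^2/2 steps
print(insertion_sort_steps(range(2 * n, 0, -1)))  # doubled input: ~4x the steps
```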
- **Big-O = upper bound.** It says: "$T(n)$ grows no faster than $f(n)$." If $T(n) = O(n^2)$, the step count never exceeds a constant times $n^2$. Note: Big-O is NOT the same as worst case — it is a bound on a function, and you can apply it to best-case, worst-case, or average-case running times.
- **Big-Omega = lower bound.** It says: "$T(n)$ grows at least as fast as $f(n)$." If $T(n) = \Omega(n)$, it always takes at least on the order of $n$ steps. Note: Big-Omega is NOT the same as best case — it simply means the function is bounded below.
- **Big-Theta = tight bound (upper and lower).** It says: "$T(n)$ grows at exactly the same rate as $f(n)$." If $T(n) = \Theta(f(n))$, it always grows at roughly the rate of $f(n)$.
- **Growth hierarchy (slowest → fastest growth):** $O(1) < O(\log n) < O(n) < O(n \log n) < O(n^2) < O(2^n) < O(n!)$. Algorithms on the left scale well to huge inputs; algorithms on the right become impossibly slow very quickly.
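To get a feel for that hierarchy, this small sketch (the choice of sizes and of which functions to tabulate is our own) prints a few of the growth rates side by side:

```python
import math

# Rough operation counts implied by each growth rate, side by side.
for n in (10, 100, 1000):
    row = (n, math.log2(n), n * math.log2(n), n ** 2, len(str(2 ** n)))
    print("n=%4d  log n=%6.1f  n log n=%9.0f  n^2=%9d  2^n has %3d digits" % row)
```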
Analyzing Simple Loops
Now that we know the notation, how do we actually figure out how fast code runs? The key idea is counting how many basic operations (additions, comparisons, assignments) the algorithm performs as a function of the input size $n$.
- A single loop that touches every element once does $n$ operations — that is $O(n)$, or linear time.
- A nested loop (loop inside a loop): the outer loop runs $n$ times, and for each of those, the inner loop also runs $n$ times, giving $n \times n = n^2$ total operations — that is $O(n^2)$, or quadratic time.
When writing Big-O, we always drop constant multipliers ($2n$ becomes $n$) and ignore smaller terms ($n^2 + n$ becomes $n^2$), because for very large inputs only the dominant term matters.
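Here is a minimal sketch of both loop shapes with explicit operation counters (our own illustrative code, not from the course text):

```python
def single_loop_ops(n):
    """One pass over n items: O(n) basic operations."""
    ops = 0
    for _ in range(n):
        ops += 1               # one basic operation per element
    return ops

def nested_loop_ops(n):
    """n passes, each doing n operations: O(n^2) total."""
    ops = 0
    for _ in range(n):         # outer loop runs n times
        for _ in range(n):     # inner loop runs n times per outer pass
            ops += 1
    return ops

print(single_loop_ops(1000))   # 1000       -> linear
print(nested_loop_ops(1000))   # 1,000,000  -> quadratic
```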
- Count how many times each line inside a loop executes — this is the core skill of algorithm analysis
- Nested loops multiply their iteration counts: a loop of $n$ inside a loop of $n$ gives $n^2$ total operations
- Always drop constants and lower-order terms: $3n^2 + 5n + 10$ simplifies to $O(n^2)$ because constants become irrelevant for very large $n$
- The same algorithm can have different speeds depending on the input: best case (luckiest input), worst case (unluckiest input), and average case may all differ
- ⚠ Common mistake: To prove $T(n) = O(f(n))$, you must find specific constants $c$ and $n_0$ that work for ALL $n \ge n_0$ — showing it works for a few examples is NOT a proof. Similarly, to disprove Big-O, use proof by contradiction: assume such $c$ and $n_0$ exist and derive an impossibility. (A worked proof appears after this list.)
- ⚠ Common mistake: Confusing Big-O with Big-Theta. Saying an algorithm is $O(n^2)$ does NOT mean it always takes $n^2$ steps — it means $n^2$ is an upper bound. An algorithm that is $O(n)$ is also technically $O(n^2)$, $O(n^3)$, etc. Use $\Theta$ when you want to state the exact growth rate.