Week 1: Foundations

Basics & Notation

Welcome to your first week! Before diving into clever algorithms, we need to build a solid foundation. Think of an algorithm as a recipe — a step-by-step set of instructions that tells a computer exactly how to solve a problem. But not all recipes are equally fast: some algorithms can process millions of items in seconds, while others would take years on the same data.

This week you will learn the mathematical language — Big-O ($O$), Big-Omega ($\Omega$), and Big-Theta ($\Theta$) notation — that computer scientists use to measure and compare algorithm speed. By the end of this week, you will be able to look at a simple piece of code and estimate how its running time grows as the input gets larger — a fundamental skill for everything that follows in this course.

Learning Objectives

Define what an algorithm is in plain language and list its five essential properties (correctness, finiteness, definiteness, input/output, and effectiveness)
Read and write pseudocode — a simplified, language-independent way of describing an algorithm's steps so anyone can follow along
Use asymptotic notation (Big-O, Big-Omega, Big-Theta) to express how an algorithm's running time changes as it receives more and more data
Compare common growth rates (constant, logarithmic, linear, quadratic, exponential) and understand which are practical and which become impossibly slow for large inputs

Key Concepts

What is an Algorithm?

An algorithm is simply a step-by-step set of instructions for solving a specific problem — like a cooking recipe, but for computers. For example, when you search for a word in a dictionary, you probably open the book roughly in the middle, check whether you need to go forward or backward, and repeat until you find it. That process is an algorithm!
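To make this concrete, here is a minimal Python sketch of that dictionary strategy (an illustration added for this page, with a hypothetical helper name; it assumes the words are stored in a sorted list). The halving idea it implements is known as binary search.

```python
def find_word(sorted_words, target):
    """Dictionary-style lookup: check the middle of the remaining range and
    discard the half that cannot contain the target word."""
    low, high = 0, len(sorted_words) - 1
    while low <= high:
        mid = (low + high) // 2           # open the "book" roughly in the middle
        if sorted_words[mid] == target:
            return mid                    # found it: return its position
        elif sorted_words[mid] < target:
            low = mid + 1                 # target comes later: keep the right half
        else:
            high = mid - 1                # target comes earlier: keep the left half
    return -1                             # the word is not in the list
```

For example, find_word(["apple", "banana", "cherry", "date"], "cherry") returns 2. Because each comparison discards half of the remaining words, the number of steps grows very slowly as the list gets longer.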

In computer science, a good algorithm must have five key properties:

  1. Correctness — it must give the right answer
  2. Finiteness — it must eventually stop and not run forever
  3. Definiteness — every step must be clear with no room for guessing
  4. Input/Output — it takes data in and produces a result
  5. Effectiveness — each step must be something a computer can actually carry out

Understanding these properties helps us tell the difference between a well-designed algorithm and a vague or broken set of instructions.

  • Must terminate (finish) in a finite number of steps for all valid inputs — an algorithm that runs forever is not an algorithm at all
  • Each step must be precisely and unambiguously defined — there should be zero confusion about what to do next
  • Correctness means producing the right output for every possible valid input, not just some lucky cases
  • Efficiency matters: two algorithms can solve the same problem, but one might take seconds while the other takes hours — learning to measure this is the whole point of this course

Asymptotic Notation

When we want to know how fast an algorithm runs, we do not measure it in seconds — because different computers have different processor speeds (a gaming PC with a fast CPU might execute 1 billion operations per second, while an older laptop might only handle 100 million, so measuring in seconds would give different results on every machine). Instead, we count the number of steps the algorithm takes and ask: as the amount of data grows, how much longer will this take?

Asymptotic notation gives us a mathematical language to answer that question. Before we look at the formulas, here is what each symbol means:

  • $n$ = the size of the input (e.g. the number of items in a list)
  • $f(n)$ = your algorithm's step count — the number of operations your algorithm actually performs when given an input of size $n$
  • $g(n)$ = the reference function you are comparing against (such as $n$, $n^2$, $\log n$, etc.)
  • $c$ = some positive constant multiplier (the exact value does not matter — we just need some $c$ that makes the inequality work)
  • $n_0$ = a starting point — the inequality only needs to hold from $n_0$ onward, because we care about large inputs, not tiny ones

With those symbols defined, here are the three main notations:

| Notation | Name | Meaning | Formal Definition |
|---|---|---|---|
| $O(g(n))$ | Big-O | Upper bound (grows no faster than) | $f(n) \leq c \cdot g(n)$ for all $n \geq n_0$ |
| $\Omega(g(n))$ | Big-Omega | Lower bound (grows at least as fast as) | $f(n) \geq c \cdot g(n)$ for all $n \geq n_0$ |
| $\Theta(g(n))$ | Big-Theta | Tight bound (grows at exactly the same rate) | $c_1 \cdot g(n) \leq f(n) \leq c_2 \cdot g(n)$ for all $n \geq n_0$ |

Reading the table in plain English: Big-O says "my algorithm's step count $f(n)$ is at most a constant times $g(n)$ for large inputs" — so if we say an algorithm is $O(n^2)$, we mean its steps grow no faster than $n^2$. Big-Omega says it grows at least that fast, and Big-Theta says it grows at exactly that rate.

Important clarification: Big-O, Big-Omega, and Big-Theta describe relationships between mathematical functions — they are NOT the same as best/worst/average case! You can use any of these notations for any case. For example, "Insertion Sort's best-case running time is $\Theta(n)$" and "Insertion Sort's worst-case running time is $\Theta(n^2)$" are both valid. The notation describes the growth rate of a function; the case tells you which function (best, worst, or average) you are analyzing.

For example, if an algorithm's running time is $\Theta(n^2)$, doubling the input size roughly quadruples the running time. These notations let us compare algorithms fairly, regardless of what computer, programming language, or operating system we use.
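To see the formal definition in action, here is a small worked example (an illustration added for clarity, not a specific algorithm from the course): suppose an algorithm performs $f(n) = 3n^2 + 5n$ steps on an input of size $n$. To show $f(n) = O(n^2)$ we need a constant $c$ and a starting point $n_0$ with $3n^2 + 5n \leq c \cdot n^2$ for all $n \geq n_0$. Because $5n \leq 5n^2$ whenever $n \geq 1$, we get $3n^2 + 5n \leq 8n^2$, so $c = 8$ and $n_0 = 1$ work. Since we also have $f(n) \geq 3n^2$ for all $n \geq 1$, the same function is $\Omega(n^2)$, and the two bounds together give $f(n) = \Theta(n^2)$.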

  • Big-O $O(g(n))$ = upper bound. It says: "$f(n)$ grows no faster than $g(n)$." If $f(n) = O(n^2)$, the step count never exceeds a constant times $n^2$. Note: Big-O is NOT the same as worst case — it is a bound on a function, and you can apply it to best-case, worst-case, or average-case running times.
  • Big-Omega $\Omega(g(n))$ = lower bound. It says: "$f(n)$ grows at least as fast as $g(n)$." If $f(n) = \Omega(n)$, it always takes at least on the order of $n$ steps. Note: Big-Omega is NOT the same as best case — it simply means the function is bounded below.
  • Big-Theta $\Theta(g(n))$ = tight bound (upper and lower). It says: "$f(n)$ grows at exactly the same rate as $g(n)$." If $f(n) = \Theta(n \log n)$, it always grows at roughly the rate $n \log n$.
  • Growth hierarchy (slowest → fastest growth): $1 < \log n < \sqrt{n} < n < n \log n < n^2 < n^3 < 2^n < n!$. Algorithms on the left scale well to huge inputs; algorithms on the right become impossibly slow very quickly (a concrete comparison follows this list).
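To get a feel for these growth rates, here is a rough back-of-the-envelope comparison (an illustration, using the one-billion-operations-per-second figure mentioned above): for an input of $n = 1{,}000{,}000$ items, $\log_2 n$ is about 20 steps, $n$ is a million steps (around a millisecond), $n^2$ is a trillion steps (over 15 minutes), and $2^n$ is a number so large that no computer could ever finish, which is why exponential-time algorithms are only usable for very small inputs.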

Analyzing Simple Loops

Now that we know the notation, how do we actually figure out how fast code runs? The key idea is counting how many basic operations (additions, comparisons, assignments) the algorithm performs as a function of the input size $n$.

  • A single loop that touches every element once does $n$ operations — that is $O(n)$, or linear time.
  • A nested loop (loop inside a loop): the outer loop runs $n$ times, and for each of those, the inner loop also runs $n$ times, giving $n \times n = n^2$ total operations — that is $O(n^2)$, or quadratic time.

When writing Big-O, we always drop constant multipliers ($O(3n)$ becomes $O(n)$) and ignore smaller terms ($O(n^2 + n)$ becomes $O(n^2)$), because for very large inputs only the dominant term matters.
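As a concrete illustration (a minimal sketch with hypothetical function names, not from the course materials), here are two small Python functions whose step counts match the two cases above: the first does a constant amount of work per element, the second does work for every pair of elements.

```python
def sum_of_list(items):
    """Single loop: one pass over the data, about n basic operations -> O(n)."""
    total = 0
    for x in items:      # executes n times
        total += x       # one addition per iteration
    return total


def count_equal_pairs(items):
    """Nested loops: for each of the n elements, scan all n elements -> about n * n operations -> O(n^2)."""
    count = 0
    for x in items:          # outer loop: n iterations
        for y in items:      # inner loop: n iterations per outer iteration
            if x == y:       # one comparison each time the inner body runs
                count += 1
    return count
```

Counting precisely, the second function performs about $n^2$ comparisons plus roughly $n$ steps of managing the outer loop; after dropping the lower-order term and constant factors, the result is $O(n^2)$, matching the rule above.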

  • Count how many times each line inside a loop executes — this is the core skill of algorithm analysis
  • Nested loops multiply their iteration counts: a loop of $n$ inside a loop of $n$ gives $n \times n = O(n^2)$ total operations
  • Always drop constants and lower-order terms: $O(5n + 100)$ simplifies to $O(n)$ because constants become irrelevant for very large $n$
  • The same algorithm can have different speeds depending on the input: best case (luckiest input), worst case (unluckiest input), and average case may all differ
  • ⚠ Common mistake: To prove $f(n) = O(g(n))$, you must find specific constants $c$ and $n_0$ that work for ALL $n \geq n_0$ — showing it works for a few examples is NOT a proof. Similarly, to disprove a Big-O claim, use proof by contradiction: assume such $c, n_0$ exist and derive an impossibility (a worked example follows this list).
  • ⚠ Common mistake: Confusing Big-O with Big-Theta. Saying an algorithm is $O(n^2)$ does NOT mean it always takes $n^2$ steps — it means $n^2$ is an upper bound. An $O(n)$ algorithm is also technically $O(n^2)$, $O(n^3)$, etc. Use $\Theta$ when you want to state the exact growth rate.
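As a quick illustration of that proof-by-contradiction style (an example added for clarity): to show that $n^2$ is not $O(n)$, assume constants $c$ and $n_0$ did exist with $n^2 \leq c \cdot n$ for all $n \geq n_0$. Dividing both sides by $n$ (valid for $n \geq 1$) gives $n \leq c$ for all $n \geq n_0$, which is impossible because $n$ grows without bound while $c$ is a fixed constant. Therefore no such constants exist, and $n^2 \neq O(n)$.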