BME Course

Theory of Algorithms

Like ITC but worse

12 weeks · 41 topics · 27 videos · 12 quizzes

Course Blocks

Midterm Prep

41 real exam questions mapped to topics, with key insights and difficulty ratings

All Weeks

Week 1

Basics & Notation

Welcome to your first week! Before diving into clever algorithms, we need to build a solid foundation. Think of an algorithm as a **recipe** — a step-by-step set of instructions that tells a computer exactly how to solve a problem. But not all recipes are equally fast: some algorithms can process millions of items in seconds, while others would take years on the same data. This week you will learn the mathematical language — **Big-O** ($O$), **Big-Omega** ($\Omega$), and **Big-Theta** ($\Theta$) notation — that computer scientists use to measure and compare algorithm speed. By the end of this week, you will be able to look at a simple piece of code and estimate how its running time grows as the input gets larger — a fundamental skill for everything that follows in this course.
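To make this concrete, here is a small illustrative sketch (an example of ours, not official course material): two Python functions whose running times grow at very different rates.

```python
def contains(items, target):
    """Linear scan: the loop body runs at most len(items) times, so O(n)."""
    for x in items:                     # up to n iterations in the worst case
        if x == target:
            return True
    return False

def has_duplicate(items):
    """Nested loops: about n*(n-1)/2 comparisons, so O(n^2)."""
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):       # every pair (i, j) is checked once
            if items[i] == items[j]:
                return True
    return False
```

Doubling the input roughly doubles the work in `contains` but quadruples it in `has_duplicate`, which is exactly the growth-rate difference that $O(n)$ versus $O(n^2)$ expresses.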

3 topics · 3 videos · 1 quiz
Week 2

Sorting I

**Sorting** means arranging items in order — like alphabetizing a list of names or ordering numbers from smallest to largest. It is one of the most **fundamental operations** in computer science because so many other tasks (searching, finding duplicates, matching data) become much easier once data is sorted. This week you will learn four sorting methods: **Selection Sort** (scan for the smallest, put it first, repeat), **Insertion Sort** (pick each item and slide it into the right place in the already-sorted portion), **Merge Sort** (split the list in half, sort each half, then merge them back together), and **QuickSort** (pick a pivot, partition around it, recurse on both sides). You will also learn the **divide-and-conquer** strategy, how to use the **Master Theorem** to calculate running times of recursive algorithms, a surprising mathematical proof that every comparison-based sorting algorithm needs $\Omega(n \log n)$ comparisons in the worst case, and **Counting Sort** — a non-comparison sort that bypasses this limit.

| Algorithm | Best Case | Average | Worst Case | Stable? | In-place? |
|-----------|-----------|---------|------------|---------|-----------|
| Selection Sort | $O(n^2)$ | $O(n^2)$ | $O(n^2)$ | No | Yes |
| Insertion Sort | $O(n)$ | $O(n^2)$ | $O(n^2)$ | Yes | Yes |
| Merge Sort | $O(n \log n)$ | $O(n \log n)$ | $O(n \log n)$ | Yes | No ($O(n)$ extra) |
| QuickSort | $O(n \log n)$ | $O(n \log n)$ | $O(n^2)$ | No | Yes |
| Counting Sort | $O(n+k)$ | $O(n+k)$ | $O(n+k)$ | Yes | No ($O(n+k)$ extra) |
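For a taste of divide-and-conquer before the lectures, here is a minimal Merge Sort sketch in Python (illustrative only, not the course's reference implementation):

```python
def merge_sort(a):
    """Divide and conquer: T(n) = 2*T(n/2) + O(n), i.e. O(n log n) by the Master Theorem."""
    if len(a) <= 1:                  # base case: 0 or 1 items are already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])       # sort each half recursively
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step: O(n) work per level
        if left[i] <= right[j]:               # "<=" keeps equal items in order (stable)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]      # one side still has leftovers

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```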

6 topics · 3 videos · 1 quiz
Week 3

Searching & Selection

How do you find a specific item in a large collection of data? If the data is sorted, you can use **Binary Search** — a technique that eliminates half of the remaining possibilities with each step, finding any item in a list of a million elements in about 20 comparisons. But what if you do not need a specific value, but rather the **k-th smallest** — for example, the median (middle value) of a dataset? This is called the **selection problem**. This week covers Binary Search in depth, then introduces **QuickSelect** (which cleverly reuses the partitioning trick from QuickSort to find the k-th element in expected $O(n)$ time) and the **Median of Medians** algorithm (a more complex method that guarantees $O(n)$ time even in the worst case). These techniques are foundational tools used throughout computer science and data analysis.
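A minimal Binary Search sketch in Python (illustrative; the function and variable names are our own):

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1. Halves the range each step: O(log n)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1        # target can only be in the right half
        else:
            hi = mid - 1        # target can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```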

3 topics · 2 videos · 1 quiz
Week 4

Binary Heaps & Priority Queues

Imagine you are running a hospital emergency room and patients keep arriving with different urgency levels. You always need to serve the most critical patient first. A **priority queue** solves this problem, and the **binary heap** is the data structure that makes it efficient. A heap is a special kind of binary tree stored cleverly in a simple array, where the most important element (highest priority) is always at the top. This week you will learn how heaps work, how to insert new elements and remove the top element in $O(\log n)$ time, and a surprising result — you can build a heap from scratch in just $O(n)$ time, not $O(n \log n)$ as you might expect. You will also learn **HeapSort**, a sorting algorithm that uses heaps to achieve $O(n \log n)$ performance with no extra memory.
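Here is a toy min-heap sketch in Python (illustrative; it treats the smallest key as the highest priority, and the helper names are our own):

```python
def heap_push(heap, item):
    """Insert in O(log n): append at the end, then bubble up while smaller than the parent."""
    heap.append(item)
    i = len(heap) - 1
    while i > 0 and heap[(i - 1) // 2] > heap[i]:   # parent of index i is (i-1)//2
        heap[i], heap[(i - 1) // 2] = heap[(i - 1) // 2], heap[i]
        i = (i - 1) // 2

def heap_pop(heap):
    """Remove the minimum in O(log n): move the last leaf to the root, then sift it down."""
    top, last = heap[0], heap.pop()
    if heap:
        heap[0] = last
        i = 0
        while True:
            smallest = i
            for child in (2 * i + 1, 2 * i + 2):    # children of index i
                if child < len(heap) and heap[child] < heap[smallest]:
                    smallest = child
            if smallest == i:
                break
            heap[i], heap[smallest] = heap[smallest], heap[i]
            i = smallest
    return top

h = []
for x in [4, 1, 7, 3]:
    heap_push(h, x)
print(heap_pop(h), heap_pop(h))  # 1 3
```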

3 topics · 2 videos · 1 quiz
Week 6

Hash Tables

**Hash tables** are arguably the most important data structure in everyday programming. They let you store and look up data in expected $O(1)$ time — essentially instant, regardless of how much data you have. The idea is simple: use a **hash function** to convert each key into an array index, then store the value at that index. The challenge arises when two different keys hash to the same index (called a **collision**). This week you will learn how hash functions work, two main strategies for handling collisions (**chaining** — storing a linked list at each slot, and **open addressing** — probing for another empty slot), and how the **load factor** (the ratio of stored items to table size) affects performance. Hash tables power dictionaries in Python, objects in JavaScript, and HashMaps in Java.
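A toy chaining hash table might look like this in Python (illustrative sketch; class and method names are our own, and a real table would also resize itself when the load factor grows):

```python
class ChainedHashTable:
    """Toy hash table with chaining: each slot holds a list of (key, value) pairs."""
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _slot(self, key):
        return hash(key) % len(self.buckets)    # hash function -> array index

    def put(self, key, value):
        bucket = self.buckets[self._slot(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                        # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))             # collision? just extend the chain

    def get(self, key):
        for k, v in self.buckets[self._slot(key)]:
            if k == key:
                return v
        raise KeyError(key)

t = ChainedHashTable()
t.put("apple", 3)
t.put("pear", 5)
print(t.get("apple"))  # 3
```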

3 topics · 2 videos · 1 quiz
Week 7

Graph Basics & Traversals

Graphs are one of the most versatile data structures in computer science. A **graph** is simply a collection of **nodes** (also called vertices) connected by **edges** — think of a social network (people are nodes, friendships are edges), a road map (cities are nodes, roads are edges), or the internet (websites are nodes, links are edges). This week introduces how to store graphs in a computer's memory and two fundamental ways to explore them: **Breadth-First Search (BFS)**, which explores level by level like ripples spreading outward from a stone thrown in water, and **Depth-First Search (DFS)**, which dives deep into one path before backtracking. BFS is great for finding shortest paths in unweighted graphs, while DFS is powerful for detecting cycles, classifying edges, and performing **topological sorting** (ordering tasks that have dependencies).
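A minimal BFS sketch on an adjacency-list graph in Python (illustrative; the example graph and names are made up):

```python
from collections import deque

def bfs_distances(graph, start):
    """Breadth-first search: returns the hop distance from start to every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()             # FIFO queue -> explore level by level
        for v in graph[u]:
            if v not in dist:           # first visit is along a shortest path
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs_distances(graph, "A"))  # {'A': 0, 'B': 1, 'C': 1, 'D': 2}
```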

4 topics · 2 videos · 1 quiz
Week 8

Shortest Paths

Last week we found shortest paths in unweighted graphs using BFS. But what if edges have different weights — like roads with different distances or network links with different latencies? This week covers two classic algorithms for weighted shortest paths. **Dijkstra's algorithm** is the go-to choice when all edge weights are non-negative: it greedily processes the closest unvisited vertex and **relaxes** its neighbors, running efficiently with a priority queue. **Bellman-Ford** is slower but more powerful: it handles negative edge weights and can detect **negative weight cycles** (loops where going around actually decreases total distance, making shortest paths undefined). Understanding when to use which algorithm is crucial for real-world applications like GPS navigation, network routing, and logistics optimization.
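A compact Dijkstra sketch in Python using the standard-library `heapq` priority queue (illustrative; the example graph is made up):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for non-negative edge weights."""
    dist = {source: 0}
    pq = [(0, source)]                      # (distance, vertex), smallest first
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):   # stale entry: u was already settled cheaper
            continue
        for v, w in graph[u]:               # relax each outgoing edge (u, v)
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {"A": [("B", 4), ("C", 1)], "B": [("D", 1)], "C": [("B", 2)], "D": []}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```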

2 topics · 2 videos · 1 quiz
Week 9

Minimum Spanning Trees

Imagine you need to connect several cities with roads, and building each road has a different cost. You want to connect all cities using the minimum total road-building cost — this is the **Minimum Spanning Tree (MST)** problem. A spanning tree connects all $V$ vertices using exactly $V-1$ edges (no cycles), and the minimum spanning tree does so with the lowest total edge weight. This week covers two classic algorithms: **Kruskal's algorithm** (sort all edges by cost and greedily add the cheapest ones that do not form a cycle, using a clever **Union-Find** data structure) and **Prim's algorithm** (start from one vertex and keep growing the tree by adding the cheapest edge connecting the tree to a new vertex). You will also learn the **cut property** — the mathematical principle that proves both algorithms are correct.
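A minimal Kruskal sketch in Python with a simple Union-Find (illustrative; the edge list is made up, and a production Union-Find would also use union by rank):

```python
def kruskal(num_vertices, edges):
    """Kruskal's MST: sort edges by weight, keep each edge that joins two different components."""
    parent = list(range(num_vertices))      # Union-Find forest: one tree per component

    def find(x):
        while parent[x] != x:               # follow parents up to the component root
            parent[x] = parent[parent[x]]   # path halving keeps the trees shallow
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):           # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # different components -> adding (u, v) makes no cycle
            parent[ru] = rv                 # union the two components
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (7, 2, 3)]   # (weight, u, v)
print(kruskal(4, edges))  # ([(0, 1, 1), (1, 2, 2), (2, 3, 7)], 10)
```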

4 topics · 2 videos · 1 quiz
Week 10

Dynamic Programming

**Dynamic Programming (DP)** is one of the most powerful problem-solving techniques in computer science, and also one of the trickiest to learn. The core idea is surprisingly simple: if a big problem can be broken into smaller subproblems, and those subproblems overlap (meaning the same subproblem appears over and over), then instead of solving each one from scratch every time, solve it once and save the result for later. This avoids a massive amount of redundant work. For example, computing the 50th Fibonacci number naively requires billions of recursive calls, but with DP it takes just 50 steps. This week covers the two approaches to DP — **memoization** (top-down: start with the big problem, recurse down, and cache results) and **tabulation** (bottom-up: fill in a table starting from the smallest subproblems). You will apply DP to three classic problems: the **0/1 Knapsack**, the **Longest Common Subsequence**, and **Matrix Chain Multiplication**.
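Both DP styles fit in a few lines of Python for the Fibonacci example above (illustrative sketch; function names are our own):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down (memoization): each fib(k) is computed once and cached, so O(n) calls total."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_table(n):
    """Bottom-up (tabulation): fill the table from the smallest subproblems upward."""
    table = [0, 1] + [0] * max(0, n - 1)
    for k in range(2, n + 1):
        table[k] = table[k - 1] + table[k - 2]   # reuse the two stored results
    return table[n]

print(fib_memo(50), fib_table(50))  # 12586269025 12586269025
```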

4 topics · 3 videos · 1 quiz
Week 11

Greedy & Amortized Analysis

Sometimes the simplest strategy is the best: just make the best choice available right now, at each step, without worrying about future consequences. This is the **greedy** approach, and surprisingly, it produces optimal solutions for many important problems — as long as the problem has the right structure. This week you will learn when greedy works (activity selection, fractional knapsack, Huffman coding) and when it does not (0/1 knapsack — where DP is needed instead). You will also learn **amortized analysis**, a technique for understanding the true cost of operations that occasionally spike in expense. For example, when a dynamic array doubles in size, that one expansion costs $O(n)$, but because it happens so rarely, the average cost per insertion is still $O(1)$. Amortized analysis makes this intuition mathematically rigorous using three methods: **aggregate**, **accounting**, and the **potential method**.
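A minimal greedy activity-selection sketch in Python (illustrative; the activities are (start, finish) pairs of our choosing):

```python
def select_activities(activities):
    """Greedy activity selection: always take the compatible activity that finishes earliest."""
    chosen, free_from = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):  # earliest finish first
        if start >= free_from:          # compatible with everything chosen so far
            chosen.append((start, finish))
            free_from = finish          # the room/resource is busy until this time
    return chosen

activities = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(activities))  # [(1, 4), (5, 7), (8, 11)]
```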

3 topics · 2 videos · 1 quiz
Week 12

Complexity Theory & NP-Completeness

This final week explores one of the deepest questions in all of computer science: are there problems that are easy to check but fundamentally hard to solve? We know many problems (like sorting or shortest paths) can be solved quickly — these belong to class **P** (solvable in polynomial time). But there is a huge collection of problems (like the Traveling Salesman Problem, scheduling, or Sudoku) where we can easily verify a given solution, but nobody has found a fast algorithm to find one. These belong to class **NP** (verifiable in polynomial time). The million-dollar question — literally, it carries a $1,000,000 Clay Millennium Prize — is whether P equals NP. This week you will learn about **NP-completeness**: a special set of problems that are the 'hardest' problems in NP, in the sense that solving any one of them quickly would solve **all** NP problems quickly. Understanding this theory is essential for recognizing when a problem is likely too hard for any efficient algorithm.
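To see why "easy to verify" is meaningful, here is an illustrative Python sketch (our example, not course material) of a polynomial-time verifier for the decision version of the Traveling Salesman Problem:

```python
def verify_tsp_tour(distances, tour, budget):
    """Polynomial-time verifier: checks that a proposed tour visits every city
    exactly once and costs at most `budget`. Easy to check, hard to find."""
    n = len(distances)
    if sorted(tour) != list(range(n)):      # must visit each city exactly once
        return False
    cost = sum(distances[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return cost <= budget                   # O(n) work to verify the certificate

distances = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]     # symmetric distance matrix
print(verify_tsp_tour(distances, [0, 1, 2], 17))  # True: 2 + 6 + 9 = 17
```

Checking the certificate takes $O(n)$ time, yet nobody knows a polynomial-time algorithm that finds such a tour; that asymmetry is the heart of the P versus NP question.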

3 topics · 2 videos · 1 quiz
