Memory Technology
This week covers the technology behind memory systems: SRAM, DRAM, and the principles of the memory hierarchy. We study how SRAM and DRAM cells work, their timing characteristics, and how memory bandwidth and latency affect system performance.
Learning Objectives
Key Concepts
SRAM & DRAM
SRAM (Static RAM) uses 6 transistors per cell, holds data as long as power is on, and is fast but expensive. Used for caches.
DRAM (Dynamic RAM) uses 1 transistor + 1 capacitor per cell, must be refreshed periodically (every ~64ms), is slower but dense and cheap. Used for main memory.
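To get a feel for what "refreshed every ~64 ms" means in practice, here is a minimal sketch that computes the average interval between refresh commands, assuming a device that spreads 8192 row refreshes across the 64 ms window (the 8192 figure is an assumed, typical value, not taken from these notes):

```c
#include <stdio.h>

int main(void) {
    /* Assumed parameters (illustrative, not from the notes): a typical
       DDRx device issues 8192 refresh commands within the 64 ms window. */
    const double refresh_window_ms = 64.0;   /* full-array refresh window  */
    const int    refreshes_per_win = 8192;   /* refresh commands per window */

    /* Average interval between refresh commands. */
    double interval_us = refresh_window_ms * 1000.0 / refreshes_per_win;
    printf("Average refresh interval: %.2f us\n", interval_us);
    return 0;
}
```

With these assumptions a refresh command is due roughly every 7.8 microseconds, during which the affected bank cannot serve normal requests.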
- SRAM: 6T cell, no refresh, access time ~1-2 ns, used for L1/L2/L3 caches
- DRAM: 1T1C cell, requires periodic refresh, access time ~50-70 ns, used for main memory
- DRAM is organized in rows and columns; the row buffer acts as a cache within DRAM
- Row buffer hit (same row open): much faster than a row miss, which needs a precharge plus an activate of the new row (see the timing sketch after this list)
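A minimal sketch of the row-buffer effect, using assumed timing parameters (the tRP/tRCD/tCL values below are illustrative placeholders, not figures from these notes): a hit pays only the column access, while a conflict miss pays precharge + activate + column access.

```c
#include <stdio.h>

/* Illustrative DRAM timing parameters in nanoseconds (assumed values;
   real numbers come from the specific DRAM part's datasheet). */
#define T_RP   14.0   /* precharge: close the currently open row        */
#define T_RCD  14.0   /* activate:  open (load) the requested row       */
#define T_CL   14.0   /* column access: read from the open row buffer   */

/* Latency of one access depending on the row-buffer state. */
double access_latency(int row_hit, int row_open) {
    if (row_hit)               /* requested row already in the row buffer */
        return T_CL;
    if (row_open)              /* a different row is open: precharge first */
        return T_RP + T_RCD + T_CL;
    return T_RCD + T_CL;       /* bank idle: just activate, then read */
}

int main(void) {
    printf("Row-buffer hit            : %.0f ns\n", access_latency(1, 1));
    printf("Row-buffer miss (conflict): %.0f ns\n", access_latency(0, 1));
    printf("Row-buffer miss (closed)  : %.0f ns\n", access_latency(0, 0));
    return 0;
}
```

With these placeholder values, a conflict miss costs about three times as much as a hit, which is why memory controllers try to group requests to the same row.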
Memory Hierarchy & Bandwidth
The memory hierarchy exploits temporal locality (recently accessed data is likely to be accessed again) and spatial locality (nearby data is likely to be accessed soon). Each level provides a tradeoff between speed, capacity, and cost.
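As an illustration of spatial locality, consider this small C example (an assumed sketch, not part of the course material): C stores 2D arrays row-major, so the row-major loop touches consecutive addresses and uses every byte of each fetched cache line, while the column-major loop strides across rows and touches a new line on almost every access.

```c
#include <stdio.h>

#define N 1024

static double a[N][N];   /* C stores this row-major */

/* Row-major traversal: consecutive iterations touch adjacent addresses,
   so each fetched cache line is fully used (good spatial locality). */
double sum_row_major(void) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Column-major traversal: consecutive iterations are N*sizeof(double)
   bytes apart, so nearly every access lands on a different cache line. */
double sum_col_major(void) {
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

int main(void) {
    printf("%f %f\n", sum_row_major(), sum_col_major());
    return 0;
}
```

Both functions compute the same sum, but the row-major version typically runs noticeably faster on real hardware because it exploits spatial locality.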
- Registers → L1 Cache → L2 Cache → L3 Cache → Main Memory → Disk/SSD
- Each level is larger, slower, and cheaper per bit
- Memory bandwidth = Bus width × Clock rate × Transfers per clock (see the worked example after this list)
- DDR (Double Data Rate) transfers data on both rising and falling clock edges
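A worked example of the bandwidth formula above, with assumed parameters (a single 64-bit channel clocked at 1600 MHz with DDR signaling; the numbers are illustrative, not from these notes):

```c
#include <stdio.h>

int main(void) {
    /* Assumed example parameters (illustrative, not from the notes). */
    const double bus_width_bytes   = 8.0;      /* 64-bit data bus          */
    const double clock_rate_hz     = 1600e6;   /* I/O bus clock            */
    const double transfers_per_clk = 2.0;      /* DDR: both clock edges    */

    /* Peak bandwidth = bus width x clock rate x transfers per clock. */
    double bw = bus_width_bytes * clock_rate_hz * transfers_per_clk;
    printf("Peak bandwidth: %.1f GB/s\n", bw / 1e9);
    return 0;
}
```

Under these assumptions the peak is 25.6 GB/s per channel; sustained bandwidth is lower once refresh and row-buffer misses are taken into account.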