Definition
Virtual Memory: Abstraction providing each process with large contiguous address space, even though physical memory may be small and fragmented. Processes can be larger than RAM.
Illusion: Each process has the entire address space (e.g., 4GB)
Reality: A mix of RAM, disk, and swapped pages
Virtual Address Space
Concept
Process sees a large, contiguous address space:
32-bit system: 2^32 = 4GB address space (0 to 4,294,967,295)
64-bit system: 2^64 = 16 Exabytes address space
Process feels like it has 4GB (or 16EB) memory
Actually may have only 512MB RAM!
Isolation
Each process has own virtual address space:
Process A: Virtual address 1000
Process B: Virtual address 1000
These are DIFFERENT physical locations!
Process A: Virtual 1000 → Physical 100000 (Frame 24)
Process B: Virtual 1000 → Physical 200000 (Frame 48)
No interference!
Protection enforced by hardware (MMU)
Physical Address Space
Real RAM available:
Physical RAM: 512MB (addresses 0 to 536,870,911)
Multiple processes share physical RAM:
Process A: Pages 0, 5, 10 in RAM
Process B: Pages 3, 7, 15 in RAM
Process C: Mostly on disk
All fit in 512MB!
Virtual Memory Mechanism
How It Works
Process Program:
int x = 10;
int *ptr = &x; // Gets logical address, e.g., 0x7fff0000
CPU execution:
Memory access to 0x7fff0000
MMU Translation:
Virtual address 0x7fff0000
↓
Is page in memory?
✓ Yes: Physical address = Frame*PageSize + Offset
✗ No: Page Fault!
→ Load from disk
→ Update page table
→ Retry instruction
Process unaware of disk activity!
Key Components
1. Virtual Address Space:
- Per-process logical memory
- Isolated from other processes
- Can exceed physical RAM
2. Physical Address Space:
- Actual RAM installed
- Shared among all processes
- OS manages allocation
3. Memory Management Unit (MMU):
- Hardware translates logical to physical
- Detects page faults
- Maintains TLB cache
4. Page Table:
- Maps logical pages to physical frames
- Indicates if page in RAM or disk
- Tracks protection bits
5. Page Replacement:
- When RAM is full, which page to evict?
- LRU (Least Recently Used) common
- Different algorithms have different performance
Benefits of Virtual Memory
1. Large Address Space
Physical RAM: 512MB
Virtual space: 4GB
Program can allocate 2GB:
int *data = malloc(2UL * 1024 * 1024 * 1024); // 2UL avoids signed int overflow
Actually uses:
512MB at peak
Rest on disk
Works! (slower when accessing disk portions)
2. Memory Abstraction
Program doesn’t know about physical limitations:
/* Program doesn't care about RAM size */
#include <stdlib.h>
int main(void) {
    size_t one_gb = (size_t)1 << 30;        /* 1GB in bytes */
    int *arr = malloc(one_gb);              /* works if virtual memory available */
    if (!arr) return 1;
    for (size_t i = 0; i < one_gb / sizeof(int); i++) {
        arr[i] = (int)i;                    /* paging handled transparently by OS */
    }
    free(arr);
    return 0;
}
3. Process Isolation
Processes protected from each other:
Process A writes garbage to virtual address 0x5000
OS directs to Frame 50
Process B accesses virtual address 0x5000
OS directs to Frame 100 (different!)
No corruption!
Hardware prevents unauthorized access
4. Efficient Resource Sharing
Multiple processes, limited RAM:
10 processes × 512MB each = 5120MB total demand
Physical RAM: 512MB
Virtual memory solution:
Load working set of each process
Swap others to disk
Share read-only code pages
Efficient utilization!
5. Dynamic Memory Allocation
Heap can grow as needed:
int *arr = malloc(1000 * sizeof(int));    // ~4KB allocated
// ... use arr ...
arr = realloc(arr, 2000 * sizeof(int));   // ~8KB allocated
// OS expands mapping, no problem!
6. Simplified Programming
No explicit memory relocation:
Past (without VM):
Code must be position-independent
Know exact memory locations
Complex addressing
Modern (with VM):
Program uses logical addresses
OS handles relocation
Simple!
Virtual Memory Overhead
1. Translation Overhead
Every address access needs translation:
Without VM:
Memory address: Direct access to RAM
With VM:
Memory address → TLB lookup → Physical address → RAM
(or page table if TLB miss)
TLB hit (99%): ~10ns overhead
TLB miss (1%): ~100ns overhead
Average: ~11ns overhead
2. Memory Overhead
Page tables consume memory:
Process with 2GB address space
Page size: 4KB
Pages: 2GB / 4KB = 524,288 pages
Single-level page table:
524,288 entries × 8 bytes = 4MB
100 such processes: 400MB just for page tables!
Multi-level helps but still overhead
3. Disk I/O
Page faults cause disk access:
Page fault: 5-10 milliseconds
Single instruction: ~1 nanosecond
5ms / 1ns = 5 million× slower!
Page fault = catastrophic performance hit
Design to minimize page faults!
Performance Optimization
Temporal Locality
Access same data repeatedly:
long sum = 0;
for (int i = 0; i < 1000000; i++) {
    sum += array[i];   // sum and i are reused on every iteration
}
sum, i, and the loop's code are touched on every iteration
Their pages stay resident: at most one fault each, on first touch
Efficient!
Spatial Locality
Access data near recently accessed:
for (int i = 0; i < 1000; i++) {
    process(array[i]);   // sequential access
}
First access: page fault, load page
array[0..999]: 1000 ints = 4000 bytes, all on one 4KB page
No more faults!
Efficient!
Working Set
Keep actively used pages in memory:
Program memory behavior:
Phase 1: Access pages {0, 1, 2, 3}
Phase 2: Access pages {10, 11, 12, 13}
Phase 3: Access pages {20, 21, 22, 23}
Keep 4 pages in RAM at a time
Phase transitions: 4 page faults
Rest: Hits in memory
Efficient!
Demand Paging
Concept
Load page only when needed (on demand):
Process starts: No pages in memory yet
Step 1: First instruction executed
Page 0 not in memory
Page fault!
Step 2: Load page 0 from disk
Step 3: Execute instruction
Step 4: Continue
Most accesses hit (same page)
Occasional page faults (new pages)
Advantages
✓ No need to load entire program at start
✓ Fast startup (no waiting to load all pages)
✓ Processes can be larger than available RAM
Disadvantages
❌ Unpredictable latency (page faults)
❌ Page replacement decisions needed
❌ Disk I/O bottleneck
❌ Thrashing possible (too much paging)
Pure Demand Paging Example
Program:
int main(void) {
    static int data[1000000];   /* 4MB; static avoids overflowing the stack */
    for (int i = 0; i < 1000000; i++) {
        data[i] = i;
    }
    return 0;
}
Execution:
data[0]: Page fault (load page 0)
data[1-1023]: Same page (hits)
data[1024-2047]: Page fault (load page 1)
data[2048-3071]: Page fault (load page 2)
...
Total: ~977 page faults (1,000,000 ints × 4 bytes / 4KB pages)
Timing: 977 faults × 5ms ≈ 4.9 seconds (slow!)
Without demand paging:
All 4MB preloaded into RAM = no per-access wait
With demand paging:
Pages loaded on first touch = repeated disk I/O waits
Modern Virtual Memory Systems
Windows
Virtual Address Space: 4GB (32-bit), 128TB (64-bit)
Page Size: 4KB
Page Table: Multi-level (hierarchical)
Demand Paging: Yes
Page Replacement: Enhanced LRU
Compression: Windows 10 and later compress pages in memory before swapping
Linux
Virtual Address Space: Per architecture
Page Size: 4KB (or configurable)
Page Table: Multi-level
Demand Paging: Yes
Page Replacement: LRU approximation
Swap: Can disable
macOS
Virtual Address Space: 4GB (32-bit); very large on 64-bit (architecture-dependent)
Page Size: 4KB
Page Table: Multi-level
Demand Paging: Yes
Compression: Compresses inactive pages in memory before resorting to disk swap
Summary
Virtual memory creates the illusion of a large, contiguous address space per process, even with limited RAM. It is enabled by paging/segmentation and OS page replacement. It provides process isolation, simplifies programming, and enables efficient multitasking. Demand paging loads pages only when needed, and the TLB caches translations to minimize overhead (hit rates are typically ~99%). Page faults are slow but rare. The working-set concept explains why this is efficient in practice: programs touch a small set of pages at a time. Modern systems use virtual memory extensively; understanding it is critical for modern OS design.