Definition
A Scheduling Algorithm is a set of rules that determines which process should run on the CPU next and for how long. Different algorithms optimize for different criteria (response time, throughput, fairness, etc.). The choice of algorithm significantly impacts system performance and user experience.
FCFS (First Come First Served)
Also called FIFO (First In First Out) Scheduling.
How It Works
Processes run in the order they arrive. Simple, like a queue at a bank counter.
Ready Queue: [P1] [P2] [P3] [P4]
Execution: P1 runs to completion → P2 runs → P3 runs → P4 runs
Algorithm
1. Process arrives, added to end of ready queue
2. When CPU free, process at front of queue gets CPU
3. Process runs until:
- It terminates, OR
- It blocks for I/O
4. Next process in queue gets CPU
Example
Process | Arrival Time | Burst Time
--------|--------------|----------
P1 | 0ms | 10ms
P2 | 1ms | 8ms
P3 | 2ms | 6ms
FCFS Schedule:
Time:  0         10        18       24
       |---P1----|---P2----|--P3---|
Process | Turnaround | Waiting
--------|------------|--------
P1 | 10ms | 0ms
P2 | 17ms | 9ms (arrived at 1ms, ran at 10ms)
P3 | 22ms | 16ms (arrived at 2ms, ran at 18ms)
Avg | 16.3ms | 8.3ms
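The table above can be reproduced with a short simulation (a minimal sketch; the `fcfs` helper and the process tuples are illustrative, not a real OS API):

```python
# FCFS: serve processes strictly in arrival order, each running to completion.
def fcfs(procs):
    """procs: list of (name, arrival, burst) sorted by arrival time.
    Returns {name: (turnaround, waiting)}."""
    time, stats = 0, {}
    for name, arrival, burst in procs:
        time = max(time, arrival)        # CPU idles until the process arrives
        time += burst                    # non-preemptive: run to completion
        turnaround = time - arrival
        stats[name] = (turnaround, turnaround - burst)
    return stats

# The example above: P1..P3 with arrivals 0, 1, 2ms and bursts 10, 8, 6ms.
print(fcfs([("P1", 0, 10), ("P2", 1, 8), ("P3", 2, 6)]))
# → {'P1': (10, 0), 'P2': (17, 9), 'P3': (22, 16)}
```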
Advantages
- Simple: Easy to implement
- Predictable: Same sequence every time
- No Starvation: Every process eventually runs
- Low Overhead: No complex scheduling logic
Disadvantages
- Convoy Effect: One long process blocks others
- Poor Response Time: Interactive users may wait behind long-running jobs
- Order-Sensitive: Waiting time depends heavily on arrival order
- Low CPU Utilization: If processes do I/O, CPU sits idle
Convoy Effect (Major Problem!)
Scenario: Long process blocks everyone
P1: CPU burst 50ms
P2: CPU burst 5ms
P3: CPU burst 3ms
P4: CPU burst 2ms
FCFS Schedule:
Time: 0                50    55   58  60
      |------P1-------|--P2-|-P3-|-P4-|
Assuming all four arrive at time 0:
P2 waits 50ms before running its 5ms burst! (Bad!)
P3 waits 55ms
P4 waits 58ms
Average waiting time = (0 + 50 + 55 + 58) / 4 = 40.75ms
Running shortest-first instead (P4, P3, P2, P1) gives (10 + 5 + 2 + 0) / 4 = 4.25ms!
Best For
- Batch systems (non-interactive)
- Simple systems with known process order
Not Suitable For
- Interactive systems (user gets no response until long process done)
- Real-time systems (can’t meet deadlines)
SJF (Shortest Job First)
Also called Shortest Process Next (SPN).
How It Works
Schedule process with smallest CPU burst next.
Ready Queue: [P1:10ms] [P2:8ms] [P3:6ms] [P4:5ms]
After SJF sorting: [P4:5ms] [P3:6ms] [P2:8ms] [P1:10ms]
Execution: P4 (5ms) → P3 (6ms) → P2 (8ms) → P1 (10ms)
Two Variants
1. Non-Preemptive SJF: Once process starts, runs to completion
2. Preemptive SJF (SRTF - Shortest Remaining Time First): If new process arrives with burst shorter than remaining time of current process, switch
Example (Non-Preemptive)
Process | Arrival | Burst
--------|---------|------
P1 | 0 | 8ms
P2 | 1 | 4ms
P3 | 2 | 2ms
P4 | 3 | 1ms
SJF Schedule (at time 0, P1 is the only ready process, so it runs first):
Time: 0        8  9    11       15
      |---P1---|P4|-P3-|---P2---|
At time 8, the shortest ready burst is P4 (1ms), then P3 (2ms), then P2 (4ms)
Turnaround Times:
P1: 8 - 0 = 8ms
P4: 9 - 3 = 6ms
P3: 11 - 2 = 9ms
P2: 15 - 1 = 14ms
Average: 9.25ms (better than FCFS!)
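A minimal non-preemptive SJF simulation (illustrative names; whenever the CPU frees up, it picks the shortest *ready* burst, so after P1 the order is P4, P3, P2):

```python
# Non-preemptive SJF: when the CPU is free, run the shortest ready burst.
def sjf(procs):
    """procs: list of (name, arrival, burst). Returns {name: completion_time}."""
    remaining = sorted(procs, key=lambda p: p[1])   # by arrival time
    time, done = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                               # CPU idle until next arrival
            time = min(p[1] for p in remaining)
            continue
        proc = min(ready, key=lambda p: p[2])       # shortest burst wins
        remaining.remove(proc)
        time += proc[2]                             # run to completion
        done[proc[0]] = time
    return done

print(sjf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2), ("P4", 3, 1)]))
# → {'P1': 8, 'P4': 9, 'P3': 11, 'P2': 15}
```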
Example (Preemptive - SRTF)
Process | Arrival | Burst
--------|---------|------
P1 | 0 | 8ms
P2 | 1 | 4ms
P3 | 2 | 2ms
P4 | 3 | 1ms
Time 0ms: P1 starts (burst 8ms)
Time 1ms: P2 arrives (burst 4ms, remaining of P1 = 7ms > 4ms) → switch to P2
Time 2ms: P3 arrives (burst 2ms, remaining of P2 = 3ms > 2ms) → switch to P3
Time 3ms: P4 arrives (burst 1ms, remaining of P3 = 1ms = 1ms) → no switch
Time 4ms: P3 done, P4 runs (remaining 1ms)
Time 5ms: P4 done, P2 resumes (remaining 3ms)
Time 8ms: P2 done, P1 resumes (remaining 7ms)
Time 15ms: P1 done
Much better response times!
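The trace above can be checked with a millisecond-step simulation (a sketch; ties deliberately do not preempt the running process, matching the time-3ms step):

```python
# SRTF: every millisecond, run the process with the least remaining time.
def srtf(procs):
    """procs: list of (name, arrival, burst). Returns {name: completion_time}."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    time, current, done = 0, None, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        best = min(ready, key=lambda n: remaining[n])
        # Switch only on a strictly shorter remaining time: ties don't preempt.
        if current not in ready or remaining[best] < remaining[current]:
            current = best
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            done[current] = time
            del remaining[current]
            current = None
    return done

print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2), ("P4", 3, 1)]))
# → {'P3': 4, 'P4': 5, 'P2': 8, 'P1': 15}
```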
Advantages
- Minimal Waiting Time: Running short jobs first minimizes time spent queued
- Good Throughput: Short jobs finish quickly
- Optimal for Avg Waiting: Provably minimizes average waiting time when all jobs are available at once
Disadvantages
- Requires Knowledge: Must know burst time in advance (don’t always know!)
- Starvation Possible: Long jobs can wait indefinitely if short jobs keep arriving
- Harder to Preempt Correctly: SRTF additionally requires knowing remaining burst times
- Unfair: Favors short jobs
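In practice, the "requires knowledge" problem is usually handled by estimating the next burst from past ones. A common textbook technique is an exponential average (alpha = 0.5 and the 10ms initial guess below are assumed tuning values):

```python
# Estimate the next CPU burst as an exponential average of past bursts:
#   tau_next = alpha * last_actual + (1 - alpha) * tau_current
def predict_bursts(actual_bursts, alpha=0.5, initial_guess=10.0):
    """Returns the prediction made before each burst in actual_bursts."""
    tau, predictions = initial_guess, []
    for t in actual_bursts:
        predictions.append(tau)              # prediction made *before* the burst
        tau = alpha * t + (1 - alpha) * tau  # fold the observed burst into history
    return predictions

print(predict_bursts([6, 4, 6, 4]))  # → [10.0, 8.0, 6.0, 6.0]
```

With alpha = 0.5, recent behavior and history are weighted equally; a larger alpha reacts faster to changes, a smaller one smooths out spikes.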
Starvation Example
Time 0: P1 (100ms) arrives
Time 1: P2 (1ms) arrives → switch to P2
Time 2: P3 (1ms) arrives → P2 finishes, switch to P3
Time 3: P4 (1ms) arrives → P3 finishes, switch to P4
Time 4: P5 (1ms) arrives → P4 finishes, switch to P5
...
P1 starves! (keeps getting pushed back by short jobs)
Best For
- Batch systems where burst times known
- Minimizing average waiting time
Not Suitable For
- Interactive systems (don’t know burst times)
- Real-time (can’t guarantee deadlines)
Priority Scheduling
How It Works
Each process assigned priority. CPU goes to highest priority process.
Priority Queue:
[P1: Priority 3]
[P2: Priority 1] ← Highest priority
[P3: Priority 2]
Schedule order: P2 → P3 → P1
Priority Schemes
1. Static Priorities: Set once, never change
Process P1: priority = 3 (always)
Process P2: priority = 1 (always highest)
2. Dynamic Priorities: Can change during execution
Process P1: starts at priority 3
decreases as it uses CPU (gets lower priority)
increases as it waits (to prevent starvation)
Examples
Process | Priority | Burst
--------|----------|------
P1 | 3 (low) | 8ms
P2 | 1 (high) | 4ms
P3 | 2 (med) | 2ms
Priority Schedule (lower number = higher priority):
Time: 0      4    6          14
      |--P2--|-P3-|----P1----|
Order: priority 1 (highest), then 2, then 3 (lowest)
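A non-preemptive version is naturally expressed with a min-heap, using the lower-number-is-higher-priority convention from the priority queue above (a sketch; `priority_schedule` is an illustrative name):

```python
import heapq

# Non-preemptive priority scheduling: pop the smallest priority number first.
def priority_schedule(procs):
    """procs: list of (priority, name, burst). Returns [(name, finish_time)]."""
    heap = list(procs)
    heapq.heapify(heap)               # min-heap keyed on the priority number
    time, order = 0, []
    while heap:
        _, name, burst = heapq.heappop(heap)
        time += burst                 # run the highest-priority process fully
        order.append((name, time))
    return order

print(priority_schedule([(3, "P1", 8), (1, "P2", 4), (2, "P3", 2)]))
# → [('P2', 4), ('P3', 6), ('P1', 14)]
```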
Advantages
- Flexible: Can handle different process importance
- Real-Time Capable: High priority = guaranteed response
- Fair to Important: Critical processes get CPU fast
Disadvantages
- Starvation: Low priority processes can wait forever
- Complex: Must manage priority assignments
- Priority Inversion: A low-priority task holding a resource can block a high-priority task that needs it
Starvation Prevention
Use Aging: Gradually improve a process's priority the longer it waits
P1 (priority 10, low): improves by 1 level per second of waiting
P2 (priority 1, just arrived): stays at 1
After 9 seconds, P1 reaches priority 1 and finally gets the CPU. Starvation prevented.
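Aging can be sketched in a few lines, assuming the lower-number-is-higher-priority convention, so aging subtracts from the priority value (the step size and floor of 1 are assumed parameters):

```python
# Aging sketch: each scheduling tick, every waiting process's priority
# number drops toward 1 (lower number = higher priority), so a long
# waiter eventually reaches the top level.
def pick_with_aging(waiting, aging_step=1):
    """waiting: {name: priority_number}. Ages every entry in place, then
    returns the name with the best (lowest) priority number."""
    for name in waiting:
        waiting[name] = max(1, waiting[name] - aging_step)
    return min(waiting, key=waiting.get)

queue = {"P1": 10, "P2": 1}      # P1 starts far behind P2
for _ in range(9):
    pick_with_aging(queue)       # P2 wins each early tick...
print(queue)                     # → {'P1': 1, 'P2': 1}: P1 has caught up
```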
Best For
- Real-time systems
- Systems with different process importance
- Critical services
Round Robin (RR)
How It Works
Each process gets fixed time slice (time quantum) then goes to back of queue.
Time Quantum = 5ms
Queue: [P1] [P2] [P3] [P4]
↓
P1 runs 5ms → [P2] [P3] [P4] [P1]
↓
P2 runs 5ms → [P3] [P4] [P1] [P2]
... continue until all done
Example
Process | Burst
--------|------
P1 | 10ms
P2 | 5ms
P3 | 7ms
Time Quantum = 3ms
Timeline:
Time 0-3: P1 runs (3ms), remaining 7ms, goes to back
Time 3-6: P2 runs (3ms), remaining 2ms, goes to back
Time 6-9: P3 runs (3ms), remaining 4ms, goes to back
Time 9-12: P1 runs (3ms), remaining 4ms, goes to back
Time 12-14: P2 runs (2ms, done)
Time 14-17: P3 runs (3ms), remaining 1ms, goes to back
Time 17-20: P1 runs (3ms), remaining 1ms, goes to back
Time 20-21: P3 runs (1ms, done)
Time 21-22: P1 runs (1ms, done)
All done at 22ms
Average turnaround (all processes arriving at time 0): (22 + 14 + 21) / 3 = 19ms
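The timeline above can be reproduced with a deque-based simulation (a sketch assuming all processes arrive at time 0):

```python
from collections import deque

# Round Robin: each process runs for at most one quantum, then rejoins
# the back of the queue until its burst is exhausted.
def round_robin(procs, quantum):
    """procs: list of (name, burst), all arriving at t=0.
    Returns {name: completion_time}."""
    queue = deque(procs)
    time, done = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)              # one slice, or less if finishing
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # back of the queue
        else:
            done[name] = time
    return done

print(round_robin([("P1", 10), ("P2", 5), ("P3", 7)], quantum=3))
# → {'P2': 14, 'P3': 21, 'P1': 22}
```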
Time Quantum Effects
Too Small (1ms):
- Pro: Fair, responsive
- Con: Excessive context switches, overhead
Too Large (100ms+):
- Pro: Less switching overhead
- Con: Feels like FCFS, poor response time
Optimal (10-50ms):
- Balance of fairness and overhead
Advantages
- Fair: All processes get equal CPU time
- Good Response Time: No process waits too long
- No Starvation: Every process guaranteed to run
- Good for Interactive: Responsive to user input
- Preemptive: Handles I/O blocking well
Disadvantages
- Context Switching Overhead: More than FCFS
- Not Optimal for Any Single Criterion: Compromise algorithm
- Cache Misses: Frequent switching causes cache misses
Best For
- Interactive systems (most common choice)
- Time-sharing systems
- General-purpose OS
Multilevel Feedback Queue (MLFQ)
How It Works
Multiple priority queues with different scheduling characteristics.
Priority: Highest ──────────────────► Lowest
Queue 0: [P8] [P9] (Real-time, priority)
↓ RR, quantum=10ms
Queue 1: [P4] [P5] (Interactive)
↓ RR, quantum=20ms
Queue 2: [P1] [P2] [P3] (Batch)
↓ FCFS
Rules:
- If Queue 0 has job, run it
- Only if Queue 0 empty, check Queue 1
- Only if Queue 0,1 empty, check Queue 2
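A stripped-down MLFQ can be sketched as follows (assumed parameters: three queues, demotion on using a full quantum, and a very large bottom-level quantum to approximate FCFS; new arrivals and promotion by aging are omitted for brevity):

```python
from collections import deque

# MLFQ sketch: always serve the highest non-empty queue; a process that
# consumes its full quantum is demoted one level (CPU-bound behavior).
def mlfq(procs, quanta=(10, 20, 10**9)):
    """procs: list of (name, burst); all start in the top queue.
    Returns [(name, completion_time)]."""
    queues = [deque(procs), deque(), deque()]
    time, done = 0, []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, remaining = queues[level].popleft()
        run = min(quanta[level], remaining)
        time += run
        if remaining > run:        # used the full quantum → demote if possible
            queues[min(level + 1, 2)].append((name, remaining - run))
        else:
            done.append((name, time))
    return done

# A short interactive-style burst finishes in the top queue; the CPU hog
# sinks to the bottom queue and finishes last.
print(mlfq([("interactive", 5), ("cpu_hog", 50)]))
# → [('interactive', 5), ('cpu_hog', 55)]
```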
Dynamic Priority Adjustment
Promotion (move up):
- Process waits long → increase priority
Demotion (move down):
- Process uses lots of CPU → decrease priority
Example
P1: I/O-bound (keyboard input)
↓ Promoted to high queue (interactive)
↓ Gets quick response time
P2: CPU-bound (calculation)
↓ Demoted to low queue (batch)
↓ Doesn't block interactive users
Advantages
- Adapts to Process Type: Automatically identifies I/O-bound vs CPU-bound
- Responsive Interactive: Interactive processes get priority
- Good Throughput: CPU-bound processes still run
- Fair: Prevents starvation with aging
- Optimal Balance: Considers multiple criteria
Disadvantages
- Complex: Harder to implement
- Overhead: More scheduling decisions
- Parameter Tuning: Many parameters to optimize
Best For
- Modern general-purpose OS (Linux, Windows, macOS)
- Systems needing to serve multiple process types
- Interactive systems with background jobs
Comparison of Algorithms
| Algorithm | CPU Use | Throughput | Turnaround | Response | Fair | Simple | Starvation |
|---|---|---|---|---|---|---|---|
| FCFS | High | OK | Poor | Very Poor | No | Yes | No |
| SJF | High | Excellent | Excellent | Poor | No | Medium | Yes |
| Priority | High | OK | OK | OK | No | Medium | Yes |
| RR | Medium | Good | Good | Excellent | Yes | Medium | No |
| MLFQ | Medium | Excellent | Excellent | Excellent | Yes | No | No |
Choosing the Right Algorithm
For Batch Systems: SJF or FCFS
For Interactive Systems: Round Robin or MLFQ
For Real-Time Systems: Priority-based or MLFQ
For General Purpose: MLFQ (Windows, Linux, macOS use variants)
Real-World Examples
Windows Scheduling
- Uses priority-based MLFQ
- 32 priority levels (0-31)
- Round-robin within same priority
- Aging prevents starvation
Linux Scheduling
- CFS (Completely Fair Scheduler): Ensures all processes get fair CPU time
- Uses a red-black tree ordered by each process's virtual runtime
- Fair by default
macOS Scheduling
- Uses MLFQ with multiple priority bands
- Real-time, normal, batch classes
- Adapts to process behavior
Summary
Different scheduling algorithms optimize for different criteria. FCFS is simple but unfair. SJF is optimal for waiting time but causes starvation. Priority scheduling handles importance but needs starvation prevention. Round Robin is fair and responsive. MLFQ combines benefits of multiple approaches and is used in modern OS. Choice of algorithm depends on system type and priorities.