Definition
Process synchronization is the coordination of multiple processes/threads to ensure they execute in the proper order and access shared resources correctly. Since processes run concurrently (interleaved on one CPU or in parallel on several), they often need to access shared data (files, memory, variables, devices). Without synchronization, concurrent access can produce incorrect results. Synchronization ensures data consistency and prevents race conditions.
Need for Synchronization
Why Synchronize?
When multiple processes/threads access shared resources, problems occur:
Race Condition: Outcome depends on timing/order of execution
Shared Variable: count = 0
Process 1: read count → 0
Process 2: read count → 0
Process 1: add 1, write count = 1
Process 2: add 1, write count = 1
Expected result: 2
Actual result: 1 (lost one increment!)
Why? Both processes read the old value before either writes the new one.
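The lost increment above disappears once the read-modify-write is made atomic. A minimal Python sketch of the fixed version, using `threading.Lock` (the function name `increment` and the iteration counts are illustrative):

```python
import threading

count = 0
lock = threading.Lock()

def increment(n):
    global count
    for _ in range(n):
        with lock:       # entry section: acquire the lock
            count += 1   # critical section: read-modify-write
                         # exit section: 'with' releases the lock

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)  # 200000 — no increments are lost
```

Without the lock, the two threads' read-modify-write sequences could interleave exactly as in the trace above and lose updates.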
Real-World Analogy
Bank Account Example:
Initial balance: $1000
You (Process 1):
1. Read balance: $1000
2. Withdraw $100
3. Write new balance: $900
Spouse (Process 2):
1. Read balance: $1000
2. Withdraw $200
3. Write new balance: $800
What happened:
- Your $100 withdrawal was lost!
- The balance should be $700, but it ended up at $800
Critical Section
Definition
A critical section (also called a critical region) is the part of the code where a process accesses shared resources, and where the order of execution matters.
while (true) {
    // Entry section (acquire lock)
    // CRITICAL SECTION - access shared resource
    count = count + 1;
    // Exit section (release lock)
    // Remainder section
}
Rules for Critical Sections
- Mutual Exclusion: Only one process may be in the critical section at a time
- Progress: If no process is in the critical section, the choice of which waiting process enters next cannot be postponed indefinitely
- Bounded Waiting: There is a bound on how many times other processes may enter the critical section before a waiting process gets its turn
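A classic two-process software solution that satisfies all three rules is Peterson's algorithm. The sketch below runs correctly under CPython's GIL, which makes memory operations sequentially consistent for this demo; on real hardware it would need atomic instructions or memory barriers (the iteration count and `sys.setswitchinterval` tweak are only there to keep the spinning demo fast):

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # shorter GIL time slices so the spinning demo finishes quickly

# Peterson's algorithm for two threads (ids 0 and 1)
flag = [False, False]  # flag[i] = True means thread i wants to enter
turn = 0               # whose turn it is to yield
count = 0              # shared variable protected by the critical section

def worker(me):
    global turn, count
    other = 1 - me
    for _ in range(2000):
        # Entry section
        flag[me] = True
        turn = other                         # politely let the other go first
        while flag[other] and turn == other:
            pass                             # busy-wait (spin)
        # Critical section
        count += 1
        # Exit section
        flag[me] = False

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(count)  # 4000 — mutual exclusion held, so no increment was lost
```

Setting `turn = other` before spinning is what guarantees progress and bounded waiting: a thread never blocks itself out when the other thread is uninterested.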
Goals of Synchronization
- Ensure Correctness: Data remains consistent
- Prevent Deadlock: Processes not stuck waiting forever
- Prevent Starvation: Every process eventually gets access
- Maximize Concurrency: Allow as much parallelism as possible
Synchronization Tools
1. Locks (Mutex - Mutual Exclusion Lock)
Simplest synchronization primitive
Lock lock;
lock.acquire();
// Critical section (only one thread here)
count = count + 1;
lock.release();
How it Works:
- acquire(): Wait until the lock is available, then take it
- release(): Release the lock and wake up a waiting process
Problem: A naive implementation busy-waits (spins) until the lock is free, wasting CPU cycles
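The busy-waiting problem can be made concrete with a toy spinlock. In the sketch below (the class name `SpinLock` is illustrative), a `threading.Lock` is used only as an atomic test-and-set flag; the spin loop is exactly the wasted work that a blocking mutex avoids by putting the waiter to sleep:

```python
import threading

class SpinLock:
    """Toy spinlock: busy-waits (spins) instead of sleeping."""

    def __init__(self):
        # An ordinary lock used purely as an atomic test-and-set flag
        self._flag = threading.Lock()

    def acquire(self):
        # Spin: repeatedly try to grab the flag, burning CPU until it is free
        while not self._flag.acquire(blocking=False):
            pass

    def release(self):
        # Release the flag; one spinning thread will succeed on its next try
        self._flag.release()
```

A blocking mutex (like `threading.Lock` used normally) instead deschedules the waiting thread, trading a context switch for zero wasted cycles; spinning only pays off when critical sections are very short.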
2. Semaphores
More flexible than locks
- Binary semaphore: equivalent to a lock
- Counting semaphore: tracks multiple instances of a resource
Covered in detail in separate topic.
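As a quick preview, a counting semaphore caps how many threads hold a resource at once. A minimal sketch with `threading.Semaphore` (the names `use_resource`, `active`, and `peak`, and the thread counts, are illustrative; the extra lock just protects the bookkeeping counters):

```python
import threading
import time

pool = threading.Semaphore(3)   # at most 3 threads may hold the resource
guard = threading.Lock()        # protects the bookkeeping counters below
active = 0
peak = 0

def use_resource():
    global active, peak
    with pool:                  # wait/"down": blocks while all 3 slots are taken
        with guard:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)        # simulate work while holding the resource
        with guard:
            active -= 1         # leaving 'with pool' signals/"up"s the semaphore

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3 — the semaphore caps concurrent holders
```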
3. Condition Variables
Allow process to wait for specific condition
Condition cond;
Lock lock;

// Thread 1: Wait for condition
lock.acquire();
while (not ready) {
    cond.wait(lock);  // Release lock, sleep
}
// Now ready!
lock.release();

// Thread 2: Signal condition
lock.acquire();
ready = true;
cond.signal();  // Wake up waiting thread
lock.release();
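The same wait/signal pattern, runnable in Python with `threading.Condition`, which bundles the lock and the condition variable together (the names `ready` and `result` are illustrative):

```python
import threading

cond = threading.Condition()  # a lock plus wait/notify in one object
ready = False
result = []

def waiter():
    with cond:                 # acquire the underlying lock
        while not ready:       # loop guards against spurious wakeups
            cond.wait()        # atomically releases lock, sleeps, reacquires
        result.append("done")  # now ready!

def signaler():
    global ready
    with cond:
        ready = True
        cond.notify()          # wake up the waiting thread

t1 = threading.Thread(target=waiter)
t2 = threading.Thread(target=signaler)
t1.start(); t2.start()
t1.join(); t2.join()
print(result)  # ['done']
```

Note the `while` (not `if`) around `wait()`: the condition must be rechecked after waking, because another thread may have changed it again.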
4. Monitors
Language-level synchronization (Java, C#)
Automatically handles locks and condition variables
class BankAccount {
    private int balance;

    synchronized void withdraw(int amount) { // Automatic lock!
        balance -= amount;
    }
}
Problems Caused by Lack of Synchronization
1. Race Condition
Multiple processes read/modify shared data, result unpredictable
2. Data Inconsistency
Shared data in wrong state
3. Deadlock
Processes waiting for each other indefinitely (covered in detail later)
4. Starvation
Some process never gets CPU/resource
5. Lost Updates
One process’s update overwrites another’s
Types of Synchronization
1. Mutual Exclusion (Mutex)
Ensure only one process accesses resource
2. Ordering
Ensure processes execute in specific order
Process A must finish before Process B starts
3. Condition Synchronization
Wait for specific condition/event
Process B waits for signal from Process A
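Both ordering ("A before B") and waiting for an event are commonly enforced with a one-shot signal such as `threading.Event`. A minimal sketch (the names `process_a`, `process_b`, and `a_done` are illustrative):

```python
import threading

a_done = threading.Event()  # starts unset; set() flips it permanently
order = []

def process_a():
    order.append("A")
    a_done.set()            # signal: A has finished

def process_b():
    a_done.wait()           # block until A signals
    order.append("B")

tb = threading.Thread(target=process_b)
ta = threading.Thread(target=process_a)
tb.start()                  # start B first on purpose
ta.start()
tb.join(); ta.join()
print(order)  # ['A', 'B'] — B always runs after A, regardless of start order
```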
Process States with Synchronization
Running
↓ (blocks waiting for lock)
Waiting (for synchronization)
↓ (lock acquired)
Ready
↓ (scheduled)
Running
Granularity of Locking
Coarse-Grained Locking
One lock for entire data structure
Pros: Simple, no deadlock
Cons: Low concurrency (only one process can access at a time)
Fine-Grained Locking
Multiple locks for different parts
Pros: High concurrency
Cons: Complex, risk of deadlock
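A common fine-grained design is lock striping: one lock per bucket of a hash table, so threads touching different buckets proceed in parallel. A minimal sketch (the class name `StripedMap` and the bucket count are illustrative):

```python
import threading

class StripedMap:
    """Fine-grained locking sketch: one lock per bucket instead of
    one lock for the whole table."""

    def __init__(self, n_buckets=8):
        self.buckets = [dict() for _ in range(n_buckets)]
        self.locks = [threading.Lock() for _ in range(n_buckets)]

    def _index(self, key):
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        i = self._index(key)
        with self.locks[i]:   # lock only this bucket, not the whole map
            self.buckets[i][key] = value

    def get(self, key, default=None):
        i = self._index(key)
        with self.locks[i]:
            return self.buckets[i].get(key, default)
```

The deadlock risk appears as soon as an operation must hold two bucket locks at once (e.g. moving a key between buckets); such operations must acquire the locks in a fixed global order.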
Summary
Process synchronization ensures correct concurrent execution by controlling access to shared resources. Without synchronization, race conditions cause data corruption. Synchronization tools (locks, semaphores, monitors) enforce mutual exclusion and ordering. Key goal: Maximize concurrency while ensuring correctness. Careful synchronization design is critical for multi-threaded/multi-process systems.