What is Complexity Theory?
Complexity Theory studies how much time and space (memory) are needed to solve problems.
Simple Analogy
Computability: “Can we solve this problem?”
Complexity: “How efficiently can we solve it?”
Think of it like:
- Computability: “Can I walk to the moon?” (No!)
- Complexity: “How long to walk to the store?” (Feasible vs infeasible)
Key Questions
- How much time does an algorithm take?
- How much space (memory) does it use?
- Can we do better?
- What are the limits?
Why Complexity Matters
Decidable but Infeasible
A problem may be decidable but still take too long to solve!
Example: Checking all possible chess games
- Decidable: Yes ✓
- Time: > lifetime of universe ✗
Complexity: Distinguishes practical from impractical
Real-World Impact
Fast algorithms:
- Google search (billions of pages, milliseconds)
- GPS routing (millions of roads, seconds)
- Cryptography (secure in reasonable time)
Slow algorithms:
- Password cracking (exponential time)
- Factoring large numbers (RSA security)
- Protein folding (NP-hard)
Computability vs Complexity
Comparison
| Aspect | Computability | Complexity |
|---|---|---|
| Question | “Can we solve it?” | “How efficiently?” |
| Answer | Yes or No | Time/Space bound |
| Focus | Decidable vs Undecidable | Feasible vs Infeasible |
| Example Yes | Sorting | O(n log n) |
| Example No | Halting problem | Exponential time |
| Tools | Turing machines | Big-O notation |
Relationship
All Problems
├── Undecidable (HALT, ATM, etc.)
│   └── Can’t solve at all
└── Decidable
    ├── Infeasible (exponential or worse)
    │   └── Can solve, but takes too long
    └── Feasible (polynomial time)
        └── Practical algorithms
Measuring Complexity
Time Complexity
Definition: Number of steps a TM takes as a function of input size
Notation: T(n) where n = input length
Example: Sorting n numbers
- Bubble sort: O(n²)
- Merge sort: O(n log n)
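A minimal Python sketch of this contrast (function names are my own): both sorts are instrumented with a comparison counter and run on the same worst-case (reversed) input, where bubble sort always makes n(n−1)/2 comparisons.

```python
# Count comparisons: bubble sort O(n^2) vs merge sort O(n log n).

def bubble_sort(a):
    a = list(a)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

def merge_sort(a):
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = merge_sort(a[:mid])
    right, cr = merge_sort(a[mid:])
    merged, comparisons = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, comparisons

data = list(range(16, 0, -1))   # reversed input: worst case for bubble sort
_, bub = bubble_sort(data)      # n(n-1)/2 = 120 comparisons
_, mer = merge_sort(data)       # far fewer
print(bub, mer)
```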
Space Complexity
Definition: Number of tape cells a TM uses as a function of input size
Notation: S(n) where n = input length
Example:
- In-place sorting: O(1) extra space
- Merge sort: O(n) extra space
Asymptotic Notation
Big-O Notation O(f(n))
Definition: T(n) = O(f(n)) if T(n) ≤ c·f(n) for some constant c > 0 and all sufficiently large n
Meaning: “At most” or “upper bound”
Example:
- T(n) = 3n² + 5n + 10
- T(n) = O(n²)
Big-Ω Notation Ω(f(n))
Definition: T(n) = Ω(f(n)) if T(n) ≥ c·f(n) for some constant c > 0 and all sufficiently large n
Meaning: “At least” or “lower bound”
Example:
- T(n) = n² + 100
- T(n) = Ω(n²)
Big-Θ Notation Θ(f(n))
Definition: T(n) = Θ(f(n)) if T(n) = O(f(n)) and T(n) = Ω(f(n))
Meaning: “Exactly” or “tight bound”
Example:
- T(n) = 2n² + 3n
- T(n) = Θ(n²)
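These definitions can be checked numerically by exhibiting concrete witnesses. A sketch for the Big-O example above, with assumed witnesses c = 4 and n₀ = 7 (any c > 3 works for some n₀):

```python
# Verify T(n) = 3n^2 + 5n + 10 = O(n^2) with witnesses c = 4, n0 = 7,
# and the matching lower bound T(n) >= 3n^2, giving T(n) = Θ(n^2).

def T(n):
    return 3 * n**2 + 5 * n + 10

c, n0 = 4, 7
assert all(T(n) <= c * n**2 for n in range(n0, 10_000))  # upper bound
assert all(T(n) >= 3 * n**2 for n in range(1, 10_000))   # lower bound
print("witnesses verified")
```

Note that n₀ = 7 is tight here: T(6) = 148 > 4·36 = 144, so the inequality only kicks in "for large n", exactly as the definition says.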
Common Growth Rates
From slowest-growing (fastest to run) to fastest-growing:
O(1): Constant (array access)
O(log n): Logarithmic (binary search)
O(n): Linear (linear search)
O(n log n): Linearithmic (merge sort)
O(n²): Quadratic (bubble sort)
O(n³): Cubic (matrix multiplication)
O(2ⁿ): Exponential (subsets)
O(n!): Factorial (permutations)
Growth Comparison
For n = 1000:
| Function | Value | Feasible? |
|---|---|---|
| log n | 10 | ✓ |
| n | 1,000 | ✓ |
| n log n | 10,000 | ✓ |
| n² | 1,000,000 | ✓ (borderline) |
| n³ | 1,000,000,000 | ✗ |
| 2ⁿ | ~10³⁰¹ | ✗ (impossible!) |
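The table's values (rounded) can be reproduced directly, which makes the exponential blow-up vivid: 2¹⁰⁰⁰ has 302 decimal digits.

```python
# Reproduce the growth-comparison values for n = 1000.
import math

n = 1000
print(round(math.log2(n)))       # ~10
print(n)                         # 1,000
print(round(n * math.log2(n)))   # ~9,966 (table rounds to 10,000)
print(n**2)                      # 1,000,000
print(n**3)                      # 1,000,000,000
print(len(str(2**n)))            # 302 digits, i.e. ~10^301
```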
Time Complexity Classes
DTIME(f(n))
Definition: Languages decidable by deterministic TM in time O(f(n))
DTIME(n²): Languages decidable in O(n²) time
Example:
- Sorting ∈ DTIME(n log n)
- Matrix multiply ∈ DTIME(n³)
NTIME(f(n))
Definition: Languages decidable by non-deterministic TM in time O(f(n))
NTIME(n²): Languages decidable by NTM in O(n²) time
Note: Non-determinism may speed things up!
Relationship
DTIME(f(n)) ⊆ NTIME(f(n))
A DTM can simulate an NTM, but possibly with exponential slowdown
Space Complexity Classes
DSPACE(f(n))
Definition: Languages decidable by deterministic TM using O(f(n)) space
DSPACE(n): Linear space (context-sensitive languages)
Example: {aⁿbⁿcⁿ} ∈ DSPACE(n)
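A simple decider for this language, sketched in Python (the function name is my own). A TM can decide it in linear space; in fact the counting idea below needs only O(log n) bits:

```python
# Decider for {a^n b^n c^n}: the word must be n a's, then n b's, then n c's.

def in_anbncn(w):
    n = len(w) // 3
    return len(w) % 3 == 0 and w == "a" * n + "b" * n + "c" * n

print(in_anbncn("aabbcc"))   # True
print(in_anbncn("aabbc"))    # False (length not divisible by 3)
print(in_anbncn("abcabc"))   # False (right letters, wrong order)
```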
NSPACE(f(n))
Definition: Languages decidable by non-deterministic TM using O(f(n)) space
NSPACE(n): Non-deterministic linear space
Savitch’s Theorem
Theorem: NSPACE(f(n)) ⊆ DSPACE(f(n)²) (for f(n) ≥ log n)
Meaning: Can simulate NTM using square space
Contrast with time: Time simulation exponential, space only quadratic!
The Complexity Zoo
Major Classes
L ⊆ NL ⊆ P ⊆ NP ⊆ PSPACE ⊆ EXPTIME
L: Logarithmic space
NL: Non-deterministic log space
P: Polynomial time
NP: Non-deterministic polynomial time
PSPACE: Polynomial space
EXPTIME: Exponential time
Relationships Known
P ⊆ PSPACE: Polynomial time uses polynomial space ✓
NP ⊆ PSPACE: NP problems use polynomial space ✓
P ⊊ EXPTIME: Strict (time hierarchy) ✓
L ⊊ PSPACE: Strict (space hierarchy) ✓
Relationships Unknown
P vs NP: Open! Million-dollar problem
NP vs PSPACE: Open!
L vs NL: Open!
NL vs P: Open!
Polynomial Time (P)
P = ⋃ₖ DTIME(nᵏ)
Definition: Languages decidable in polynomial time
Examples:
- Sorting
- Graph connectivity
- Shortest path
- Primality testing
- Linear programming
Philosophy: P = “efficiently solvable”
Why Polynomial?
Closed under composition: If f(n) and g(n) polynomial, so is f(g(n))
Model-independent: Same class for TM, RAM, etc. (within polynomial)
Caveat: n¹⁰⁰⁰ time would still be infeasible, but the philosophy holds for natural problems
Non-Deterministic Polynomial Time (NP)
NP = ⋃ₖ NTIME(nᵏ)
Definition: Languages decidable by NTM in polynomial time
Alternative: Languages with polynomial-time verifiable solutions
Examples:
- SAT (Boolean satisfiability)
- Hamiltonian path
- Graph coloring
- Subset sum
- Traveling salesman
Verifier Definition
L ∈ NP if there exists polynomial-time verifier V such that:
w ∈ L ⟺ ∃ certificate c such that V(w, c) accepts
Certificate: “Proof” that w ∈ L
Verifier: Checks certificate in polynomial time
Example:
- Problem: Is graph 3-colorable?
- Certificate: Actual 3-coloring
- Verifier: Check adjacent vertices different colors (polynomial time)
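The verifier just described is short enough to write out (graph encoding and names are my own): it checks every edge once, so it runs in time linear in the number of edges, clearly polynomial.

```python
# Polynomial-time verifier for 3-colorability.
# edges: list of (u, v) pairs; coloring: the certificate, vertex -> color.

def verify_3coloring(edges, coloring):
    return (all(c in (0, 1, 2) for c in coloring.values())
            and all(coloring[u] != coloring[v] for u, v in edges))

# A 4-cycle is 2-colorable, hence 3-colorable:
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(verify_3coloring(edges, {0: 0, 1: 1, 2: 0, 3: 1}))  # True
print(verify_3coloring(edges, {0: 0, 1: 0, 2: 1, 3: 1}))  # False: edge 0-1
```

Note the asymmetry that defines NP: checking the certificate is easy, but *finding* a valid 3-coloring may require trying exponentially many colorings.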
The P vs NP Question
P vs NP: Is P = NP?
The Question
Does: “Easy to verify” = “Easy to solve”?
P: Problems easy to solve
NP: Problems with easy-to-verify solutions
P ⊆ NP: Known ✓
NP ⊆ P: Unknown! Most believe no.
Implications if P = NP
Revolutionary!
- Many hard problems become easy
- Cryptography breaks down
- Optimization trivial
- Theorem proving automated
- Scientific revolution
Most believe: P ≠ NP (but unproved!)
Implications if P ≠ NP
Expected result:
- Some problems inherently hard
- Cryptography secure
- Optimization remains difficult
- Current understanding correct
But: Still unproved!
NP-Complete Problems
Definition: Problem A is NP-complete if:
- A ∈ NP
- Every problem in NP reduces to A in polynomial time
Hardest problems in NP!
If any NP-complete problem is in P, then P = NP
Examples
- SAT: Boolean satisfiability
- 3-SAT: 3-CNF satisfiability
- Hamiltonian Path: Visit each vertex once
- TSP: Traveling salesman (decision version)
- Graph Coloring: Color with k colors
- Clique: Find clique of size k
- Vertex Cover: Cover edges with k vertices
- Subset Sum: Subset summing to target
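For a feel of why these are believed hard, here is the obvious algorithm for Subset Sum, a sketch with names of my own: it tries all 2ⁿ subsets, so its running time is exponential in n.

```python
# Brute-force Subset Sum: enumerate all 2^n subsets.
from itertools import combinations

def subset_sum_bruteforce(nums, target):
    for r in range(len(nums) + 1):           # subset sizes 0..n
        for combo in combinations(nums, r):  # 2^n subsets in total
            if sum(combo) == target:
                return combo                 # found a witness
    return None                              # no subset works

print(subset_sum_bruteforce([3, 34, 4, 12, 5, 2], 9))   # (4, 5)
print(subset_sum_bruteforce([1, 2], 5))                 # None
```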
Cook-Levin Theorem
Theorem: SAT is NP-complete
First NP-complete problem! (Cook 1971; independently Levin 1973)
Proof idea: Encode NTM computation as Boolean formula
Polynomial-Time Reductions
A ≤ₚ B: A reduces to B in polynomial time
If:
- B ∈ P and A ≤ₚ B
- Then A ∈ P
Use: Prove problems NP-complete
Example: 3-SAT ≤ₚ Clique
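A sketch of the classic SAT-to-Clique reduction (encoding and names are my own): one vertex per literal occurrence, with edges between literals from *different* clauses that are not complementary. The formula is satisfiable iff the graph has a clique of size m, the number of clauses. The reduction itself is polynomial; the clique search below is brute force, purely for checking small instances.

```python
# 3-SAT <=_p Clique: build the graph, then brute-force a size-m clique.
from itertools import product

def reduce_sat_to_clique(clauses):
    """clauses: list of clauses; each literal is (variable, polarity)."""
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = {
        (u, v)
        for u in vertices for v in vertices
        if u[0] != v[0]                                      # different clauses
        and not (u[1][0] == v[1][0] and u[1][1] != v[1][1])  # not x vs NOT x
    }
    return vertices, edges

def has_clique_of_size_m(clauses):
    _, edges = reduce_sat_to_clique(clauses)
    m = len(clauses)
    # Pick one literal per clause and check pairwise compatibility.
    for picks in product(*[[(i, lit) for lit in clauses[i]] for i in range(m)]):
        if all((u, v) in edges for u in picks for v in picks if u != v):
            return True
    return False

sat = [[(0, True), (1, True)], [(0, False), (1, True)]]  # satisfiable
unsat = [[(0, True)], [(0, False)]]                      # x AND NOT x
print(has_clique_of_size_m(sat), has_clique_of_size_m(unsat))  # True False
```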
PSPACE
PSPACE = ⋃ₖ DSPACE(nᵏ)
Definition: Languages decidable using polynomial space
Examples:
- Quantified Boolean formulas (TQBF)
- Game playing (generalized chess, Go)
- Planning problems
Known: P ⊆ NP ⊆ PSPACE
Unknown: Whether inclusions strict
PSPACE-Complete
TQBF: Quantified Boolean formulas
Example:
∀x ∃y ∃z: (x ∨ y) ∧ (¬x ∨ z) ∧ (¬y ∨ ¬z)
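The example formula can be evaluated by brute force over all 2³ assignments, with the quantifiers mapping directly onto `all`/`any`. (Brute force takes exponential time in the number of variables; a recursive evaluator needs only polynomial *space*, which is the point of TQBF.)

```python
# Evaluate the example QBF: forall x, exists y, exists z: phi(x, y, z).

def phi(x, y, z):
    return (x or y) and ((not x) or z) and ((not y) or (not z))

result = all(any(any(phi(x, y, z) for z in (False, True))
                 for y in (False, True))
             for x in (False, True))
print(result)  # True: the quantified formula holds
```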
Hardest problems in PSPACE
Hierarchy Theorems
Time Hierarchy Theorem
Theorem: If f(n) log f(n) = o(g(n)) (g time-constructible), then DTIME(f(n)) ⊊ DTIME(g(n))
Meaning: More time allows solving more problems
Example: P ⊊ EXPTIME
Space Hierarchy Theorem
Theorem: If f(n) = o(g(n)) (g space-constructible), then DSPACE(f(n)) ⊊ DSPACE(g(n))
Meaning: More space allows solving more problems
Example: L ⊊ PSPACE
Proof Idea
Diagonalization:
- Enumerate all TMs using f(n) resources
- Construct TM using g(n) resources
- Differs from all f(n) TMs
- Can’t be in DTIME(f(n))
Practical vs Theoretical Complexity
Theoretical O(n²) May Be Fast
Example: Insertion sort
- O(n²) worst case
- O(n) on nearly sorted data
- Small constants
- Cache-friendly
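This best/worst-case gap is easy to measure. A sketch with a comparison counter (names are my own): on already-sorted input insertion sort does n−1 comparisons, on reversed input it does n(n−1)/2.

```python
# Insertion sort instrumented with a comparison counter.

def insertion_sort(a):
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break              # element already in place
    return a, comparisons

n = 100
_, best = insertion_sort(list(range(n)))          # already sorted: 99
_, worst = insertion_sort(list(range(n, 0, -1)))  # reversed: 4950
print(best, worst)                                # linear vs quadratic
```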
Theoretical O(n) May Be Slow
Example: Some graph algorithms
- O(n) time
- Huge constants
- Complex implementation
- Poor cache behavior
Constants Matter
100n vs n²:
- For n < 100: n² faster
- For n > 100: 100n faster
But asymptotically: O(n) beats O(n²)
Complexity in Practice
Algorithm Design
Goal: Find efficient algorithms
Techniques:
- Divide and conquer (O(n log n))
- Dynamic programming (avoid exponential)
- Greedy algorithms (often polynomial)
- Approximation (NP-hard → polynomial approx)
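As an example of dynamic programming avoiding exponential blow-up, here is a sketch for Subset Sum (names are my own): tracking the set of reachable sums gives O(n·target) time, pseudo-polynomial rather than truly polynomial, but a huge practical win over trying all 2ⁿ subsets.

```python
# Dynamic programming for Subset Sum: O(n * target) instead of O(2^n).

def subset_sum_dp(nums, target):
    reachable = {0}                # sums achievable with some subset so far
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(subset_sum_dp([3, 34, 4, 12, 5, 2], 9))   # True (4 + 5)
print(subset_sum_dp([3, 34, 4, 12, 5, 2], 30))  # False
```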
Problem Classification
In P: Solve directly
NP-complete: Use approximation or heuristics
PSPACE-complete: Very hard, special techniques
Undecidable: Can’t solve in general
Important Distinctions
Decidable vs Tractable
Decidable: Algorithm exists (may be slow)
Tractable: Efficient algorithm exists (polynomial)
Example:
- Hamiltonian path: Decidable ✓, Tractable ✗ (NP-complete)
Worst-Case vs Average-Case
Worst-case: Maximum time over all inputs of size n
Average-case: Expected time over typical inputs
Example: Quicksort
- Worst-case: O(n²)
- Average-case: O(n log n)
Time vs Space
Time: Number of steps
Space: Amount of memory
Trade-off: Often can trade time for space or vice versa
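A classic instance of trading space for time, sketched in Python: naive Fibonacci recomputes subproblems in exponential time, while spending O(n) space on a memo table drops the time to O(n).

```python
# Space-for-time trade-off: naive vs memoized Fibonacci.
from functools import lru_cache

calls = 0

def fib_naive(n):
    global calls
    calls += 1                    # count every recursive call
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)          # O(n) extra space for the memo table
def fib_memo(n):
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(20), calls)       # 6765 via 21891 calls
print(fib_memo(20))               # 6765 via only 21 distinct subproblems
```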
Open Problems
Major Open Questions
- P vs NP: Million-dollar prize!
- NP vs PSPACE: NP ⊊ PSPACE?
- L vs NL: Log space hierarchy
- NL vs P: Non-determinism in space
- NC vs P: Parallel complexity
Why Hard to Prove?
Barriers:
- Relativization (oracle separations)
- Natural proofs (circuit lower bounds)
- Algebraization (algebrized separations)
Need: New techniques!
Important Exam Points
- ✓ Complexity: Studies time and space efficiency
- ✓ P: Polynomial time (efficient)
- ✓ NP: Non-deterministic polynomial time (verifiable)
- ✓ NP-complete: Hardest problems in NP
- ✓ P vs NP: Open problem (million-dollar!)
- ✓ Big-O: Upper bound on growth rate
- ✓ PSPACE: Polynomial space
- ✓ Hierarchy theorems: More resources → more problems solvable
- ✓ Reduction: A ≤ₚ B proves relative hardness
- ✓ Practical vs theoretical: Constants and lower-order terms matter
Common Mistakes to Avoid
✗ Mistake 1: Confusing decidable with tractable
✓ Correct: Decidable may still be exponential time
✗ Mistake 2: Thinking P = polynomial, NP = exponential
✓ Correct: NP = non-deterministic polynomial (may be in P!)
✗ Mistake 3: NP means “not polynomial”
✓ Correct: NP = “Non-deterministic Polynomial”
✗ Mistake 4: All NP problems exponential
✓ Correct: Many NP problems in P (e.g., sorting)
✗ Mistake 5: P vs NP is proved
✓ Correct: Still open! (50+ years)
✗ Mistake 6: Ignoring constants in practice
✓ Correct: O(n) with huge constant may be slower than O(n²)
Practice Questions
Q1: What does complexity theory study?
Answer: Time and space efficiency of algorithms.
Q2: What is P?
Answer: Languages decidable in polynomial time.
Q3: What is NP?
Answer: Languages decidable by NTM in polynomial time (or verifiable in polynomial time).
Q4: What is P vs NP question?
Answer: Is P = NP? (Does easy to verify = easy to solve?)
Q5: What is NP-complete?
Answer: Hardest problems in NP (if one in P, all are).
Q6: Give example of NP-complete problem.
Answer: SAT, Hamiltonian path, graph coloring, TSP.
Q7: What is Big-O notation?
Answer: Upper bound on growth rate (f(n) = O(g(n))).
Q8: Is sorting in P?
Answer: Yes! O(n log n) algorithms exist.
Q9: What is PSPACE?
Answer: Languages decidable using polynomial space.
Q10: What do the hierarchy theorems say?
Answer: More resources allow solving more problems (strict).
Summary
- Complexity theory: Studies time and space efficiency
- Time complexity: Number of steps as function of input size
- Space complexity: Memory used as function of input size
- Big-O notation: Asymptotic upper bound O(f(n))
- P: Polynomial time (efficient, tractable)
- NP: Non-deterministic polynomial time (verifiable)
- P vs NP: Open problem (P = NP or P ≠ NP?)
- NP-complete: Hardest problems in NP
- PSPACE: Polynomial space
- Hierarchy theorems: More resources → more power
- Reductions: Prove relative hardness (A ≤ₚ B)
- Key insight: Distinguishes practical from impractical
Complexity theory bridges theory and practice in computation!