What Is an Approximation Algorithm?
An approximation algorithm is an algorithm designed to find near-optimal solutions to optimization problems, used when computing the exact optimum is too slow to be practical (as is the case for NP-hard problems).
If exact solutions are a luxury, approximation algorithms are your smart, budget-friendly substitute.
These algorithms:
- Guarantee a solution within a known factor of the optimal
- Run in polynomial time
- Are used when efficiency is more important than perfection
Why Approximate?
Some problems (like Traveling Salesman, Vertex Cover, Knapsack) are NP-hard, meaning:
- We can’t solve them exactly in polynomial time unless P = NP
- But we still want solutions that are good enough, and fast
Approximation algorithms bridge that gap — giving us solutions with provable quality bounds, fast.
Key Terminology
| Term | Description |
|---|---|
| Optimization Problem | Goal is to minimize or maximize something (e.g., cost, distance) |
| Approximation Ratio | How far the algorithm’s solution can be from the optimal |
| Performance Guarantee | Upper bound on how bad the algorithm can be |
| Polynomial Time | Efficiency requirement for approximation algorithms |
| Inapproximability | Proof that no good approximation exists (under P ≠ NP) |
Types of Approximation Guarantees
Let A(x) be the value of the solution returned by the algorithm on input x, and OPT(x) the value of an optimal solution.
For minimization problems:
Approximation ratio r ≥ 1 such that:
A(x) ≤ r × OPT(x)
For maximization problems:
Approximation ratio r ≤ 1 such that:
A(x) ≥ r × OPT(x)
The closer r is to 1, the better the algorithm.
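For example, if a 2-approximation algorithm for a minimization problem runs on an input whose optimal cost is 50, it is guaranteed to return a solution of cost at most 2 × 50 = 100; it may do better, but never worse.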
Example: Vertex Cover Problem
Problem: Given a graph, find the smallest set of vertices such that every edge has at least one endpoint in the set.
This problem is NP-hard, but we have a 2-approximation algorithm:
Greedy Algorithm:
- Start with an empty set `C`
- While there are uncovered edges:
  - Pick an arbitrary edge `(u, v)`
  - Add both `u` and `v` to `C`
  - Remove all edges covered by `u` or `v`
- Return `C`
This algorithm returns a cover at most 2× the optimal size: the picked edges form a matching, and any cover (including the optimal one) must contain at least one endpoint of each matched edge, so `C` is at most twice as large as the optimum.
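As a concrete illustration, here is a minimal Python sketch of this greedy (the function name and edge-list representation are our own choices, not a standard API):

```python
def vertex_cover_2approx(edges):
    """Matching-based greedy: a 2-approximation for Vertex Cover.

    edges: iterable of (u, v) pairs describing an undirected graph.
    """
    cover = set()
    for u, v in edges:
        # Edges already covered by an earlier pick are skipped, which
        # plays the role of "remove all edges covered by u or v".
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# On the path 1-2-3-4 the optimal cover is {2, 3} (size 2);
# the greedy may return {1, 2, 3, 4} (size 4), exactly the 2x bound.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))  # {1, 2, 3, 4}
```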
Example: Knapsack Problem (Fractional)
Problem: Maximize total value of items packed into a knapsack of limited weight.
Fractional version (can take fractions of items) has an exact greedy solution:
- Sort items by value-to-weight ratio
- Take items greedily until the knapsack is full
Time Complexity: O(n log n)
Result: Optimal
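A short Python sketch of this greedy (the `(value, weight)` tuple format is just one convenient representation):

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs; returns the optimal total value."""
    # Sort by value-to-weight ratio, most valuable per unit of weight first
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)      # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total

# Classic example: capacity 50 -> take items 1 and 2 whole, 2/3 of item 3
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```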
0/1 Knapsack, on the other hand (no fractions allowed), is NP-hard (its decision version is NP-complete), but can be approximated within a factor of (1 – ε) of the optimum using an FPTAS (Fully Polynomial-Time Approximation Scheme).
Approximation Schemes
| Type | Description |
|---|---|
| PTAS (Polynomial-Time Approximation Scheme) | For any ε > 0, returns a solution within (1 + ε) × OPT (minimization) or (1 – ε) × OPT (maximization), in polynomial time for each fixed ε |
| FPTAS (Fully Polynomial-Time Approximation Scheme) | PTAS where runtime is polynomial in both input size and 1/ε |
Example: FPTAS for 0/1 Knapsack
Idea:
- Scale down the item values using a factor depending on ε
- Use dynamic programming on scaled values
- Get a near-optimal solution in time polynomial in n and 1/ε (the classic scheme runs in O(n³ / ε)), as sketched below
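A compact Python sketch of this scheme (the function name and argument layout are ours; the rounding follows the standard value-scaling construction):

```python
def knapsack_fptas(values, weights, capacity, eps):
    """Return an approximate 0/1 knapsack value >= (1 - eps) * OPT."""
    n = len(values)
    vmax = max(v for v, w in zip(values, weights) if w <= capacity)
    K = eps * vmax / n                      # scaling factor
    scaled = [int(v // K) for v in values]  # rounded-down item values
    total = sum(scaled)
    INF = float("inf")
    # dp[p] = minimum weight needed to reach scaled value exactly p
    dp = [0.0] + [INF] * total
    for sv, w in zip(scaled, weights):
        for p in range(total, sv - 1, -1):
            dp[p] = min(dp[p], dp[p - sv] + w)
    best = max(p for p in range(total + 1) if dp[p] <= capacity)
    # best * K underestimates the selected set's true value, which in
    # turn is at least (1 - eps) times the optimum
    return best * K

# Values/weights with capacity 50: the exact optimum is 220
print(knapsack_fptas([60, 100, 120], [10, 20, 30], 50, eps=0.1))  # 220.0
```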
Approximation for TSP (Traveling Salesman Problem)
TSP: Find the shortest tour that visits every city exactly once and returns to the starting city.
| Variant | Approximation Possible? | Notes |
|---|---|---|
| General TSP | ❌ No constant-factor approximation unless P = NP | Distances need not satisfy the triangle inequality |
| Metric TSP (triangle inequality holds) | ✅ Yes | 1.5-approximation via Christofides algorithm |
| Euclidean TSP | ✅ Yes | A PTAS exists (Arora; Mitchell) |
Christofides Algorithm (Metric TSP):
- Build MST (minimum spanning tree)
- Add a minimum-weight perfect matching on the MST's odd-degree vertices
- Combine into Eulerian graph, shortcut to Hamiltonian cycle
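A compact sketch of these three steps, assuming networkx is available and that `nx.min_weight_matching` returns a minimum-weight perfect matching on a complete subgraph with an even number of nodes (true in recent networkx releases). The input must be a complete graph whose edge weights satisfy the triangle inequality:

```python
import networkx as nx

def christofides_tour(G, weight="weight"):
    # Step 1: minimum spanning tree of the complete metric graph
    T = nx.minimum_spanning_tree(G, weight=weight)
    # Step 2: min-weight perfect matching on the MST's odd-degree vertices
    # (there is always an even number of them, so a perfect matching exists)
    odd = [v for v, deg in T.degree() if deg % 2 == 1]
    M = nx.min_weight_matching(G.subgraph(odd), weight=weight)
    # Step 3: MST + matching has all-even degrees, hence an Eulerian circuit;
    # shortcutting repeated vertices yields a Hamiltonian cycle, and the
    # triangle inequality guarantees shortcuts never increase the length
    H = nx.MultiGraph(T)
    H.add_edges_from(M)
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(H):
        if u not in seen:
            seen.add(u)
            tour.append(u)
    tour.append(tour[0])  # close the cycle
    return tour
```

Recent networkx versions also ship a ready-made implementation under `nx.approximation.christofides`, which is the safer choice in practice.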
Approximation vs Heuristics
| Approximation Algorithms | Heuristics |
|---|---|
| ✅ Theoretical guarantee | ❌ No guarantees |
| ✅ Predictable behavior | ❌ May perform poorly in edge cases |
| ✅ Provable bounds | ❌ Empirical only |
| ❌ Sometimes slower | ✅ Often faster |
Heuristics are great in practice (e.g., simulated annealing, genetic algorithms), but approximation algorithms are better understood theoretically.
Inapproximability Results
Some problems are provably hard to approximate: beating certain ratios in polynomial time would itself imply P = NP.
| Problem | Known lower bound |
|---|---|
| Set Cover | No (1 – o(1)) ln n approximation possible |
| Max-3-SAT | No (7/8 + ε)-approximation for any ε > 0 (a random assignment already satisfies 7/8 of the clauses in expectation) |
| General TSP | No constant-factor approximation unless P = NP |
These results come from complexity theory and reductions that show approximating well would imply P = NP.
Applications of Approximation Algorithms
| Domain | Example |
|---|---|
| Networking | Routing with minimal delay (e.g., Steiner trees) |
| Scheduling | Allocating jobs to processors |
| Data Mining | Clustering large datasets (e.g., k-means) |
| Bioinformatics | Sequence alignment, gene matching |
| Supply Chains | Vehicle routing, inventory optimization |
| Machine Learning | Fast inference on large models |
Best Practices
- ✅ Know when exact algorithms are impractical
- ✅ Choose algorithms with tightest bounds
- ✅ Evaluate input sizes — approximation often shines at scale
- ✅ Use heuristics if approximation is too slow
- ✅ Be aware of theoretical limitations (inapproximability)
Summary
- Approximation algorithms provide fast, near-optimal solutions for hard problems.
- They are essential for solving NP-hard optimization problems in real-world settings.
- They offer provable performance guarantees, unlike heuristics.
- Different schemes (like PTAS, FPTAS) allow control over trade-off between accuracy and runtime.
- Some problems can’t be approximated well — unless P = NP.
“Approximation is not defeat — it’s a powerful strategy when perfection is computational suicide.”
Related Keywords
- Optimization
- NP-hard
- Polynomial-Time
- Approximation Ratio
- PTAS
- FPTAS
- Greedy Algorithm
- Vertex Cover
- Knapsack Problem
- TSP
- Christofides Algorithm
- Dynamic Programming
- Heuristics
- Inapproximability
- Performance Guarantee
- Complexity Classes
- Reduction
- Suboptimal Solutions
- Metaheuristics
- Resource-Constrained Optimization