What Is an Approximation Algorithm?

An approximation algorithm is an algorithm designed to find near-optimal solutions to optimization problems when computing an exact solution is impractically slow (e.g., for NP-hard problems).

If exact solutions are a luxury, approximation algorithms are your smart, budget-friendly substitute.

These algorithms:

  • Guarantee a solution within a known factor of the optimal
  • Run in polynomial time
  • Are used when efficiency is more important than perfection

Why Approximate?

Some problems (like Traveling Salesman, Vertex Cover, Knapsack) are NP-hard, meaning:

  • We can’t solve them exactly in polynomial time unless P = NP
  • But we still want solutions that are good enough, and fast

Approximation algorithms bridge that gap — giving us solutions with provable quality bounds, fast.

Key Terminology

  • Optimization Problem: The goal is to minimize or maximize something (e.g., cost, distance)
  • Approximation Ratio: How far the algorithm’s solution can be from the optimal
  • Performance Guarantee: An upper bound on how bad the algorithm’s solution can be
  • Polynomial Time: The efficiency requirement for approximation algorithms
  • Inapproximability: A proof that no good approximation exists (under P ≠ NP)

Types of Approximation Guarantees

Let A(x) be the solution returned by the algorithm, and OPT(x) the optimal solution.

For minimization problems:

Approximation ratio r ≥ 1 such that:

A(x) ≤ r × OPT(x)

For maximization problems:

Approximation ratio r ≤ 1 such that:

A(x) ≥ r × OPT(x)

The closer r is to 1, the better the algorithm.

Example: Vertex Cover Problem

Problem: Given a graph, find the smallest set of vertices such that every edge has at least one endpoint in the set.

This problem is NP-hard, but we have a 2-approximation algorithm:

Greedy Algorithm:

  1. Start with an empty set C.
  2. While there are uncovered edges:
    • Pick an arbitrary edge (u, v)
    • Add both u and v to C
    • Remove all edges covered by u or v
  3. Return C

This algorithm returns a cover at most 2× the optimal size: the edges it picks form a matching, and any cover (including the optimal one) must contain at least one endpoint of each matched edge.
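The steps above can be sketched in Python (the edge-list input format is an illustrative assumption):

```python
def vertex_cover_2approx(edges):
    """2-approximation for Vertex Cover via a maximal matching.

    edges: iterable of (u, v) pairs. Returns a set of vertices
    covering every edge; its size is at most twice the optimum.
    """
    cover = set()
    for u, v in edges:
        # Skip edges already covered by a previously chosen vertex.
        if u in cover or v in cover:
            continue
        # Edge (u, v) is uncovered: add BOTH endpoints.
        cover.add(u)
        cover.add(v)
    return cover

# Path graph 1-2-3-4: the optimal cover is {2, 3}, of size 2.
edges = [(1, 2), (2, 3), (3, 4)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)
assert len(cover) <= 2 * 2  # at most twice the optimum
```

On the path graph above the algorithm may return a cover of size 4, which still respects the factor-2 guarantee.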

Example: Knapsack Problem (Fractional)

Problem: Maximize total value of items packed into a knapsack of limited weight.

Fractional version (can take fractions of items) has an exact greedy solution:

  1. Sort items by value-to-weight ratio
  2. Take items greedily until the knapsack is full

Time Complexity: O(n log n)
Result: Optimal
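The greedy procedure above can be sketched as follows (items are assumed to be (value, weight) pairs with positive weights):

```python
def fractional_knapsack(items, capacity):
    """Exact greedy solution for the fractional knapsack problem.

    items: list of (value, weight) pairs with positive weights.
    capacity: maximum total weight. Returns the maximum value.
    """
    # Sort by value-to-weight ratio, best first: O(n log n).
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)       # take a fraction if needed
        total += value * (take / weight)
        capacity -= take
    return total

# Capacity 50: take all of (60, 10) and (100, 20), then 20/30 of (120, 30).
assert fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50) == 240.0
```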

0/1 Knapsack, on the other hand (fractions not allowed), is NP-hard, but can be approximated to within a (1 – ε) factor of the optimum using an FPTAS (Fully Polynomial-Time Approximation Scheme).

Approximation Schemes

  • PTAS (Polynomial-Time Approximation Scheme): For any ε > 0, gives a solution within (1 + ε) × OPT in polynomial time for fixed ε
  • FPTAS (Fully Polynomial-Time Approximation Scheme): A PTAS whose runtime is polynomial in both the input size and 1/ε

Example: FPTAS for 0/1 Knapsack

Idea:

  • Scale down the item values using a factor depending on ε
  • Use dynamic programming on scaled values
  • Get a solution of value at least (1 – ε) × OPT in O(n³ / ε) time
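A minimal sketch of this scaling idea (the DP is indexed by scaled value and minimizes weight; the concrete function name and signature are illustrative):

```python
def knapsack_fptas(values, weights, capacity, eps):
    """FPTAS for 0/1 knapsack: the returned value is >= (1 - eps) * OPT.

    Scale values down by mu = eps * max(values) / n, then run the
    classic DP over scaled value totals, minimizing total weight.
    Runtime is O(n * V) with V <= n^2 / eps, i.e. O(n^3 / eps).
    """
    n = len(values)
    mu = eps * max(values) / n               # scaling factor
    scaled = [int(v / mu) for v in values]   # rounded-down values

    V = sum(scaled)
    INF = float("inf")
    dp = [0.0] + [INF] * V   # dp[p] = min weight reaching scaled value p
    tv = [0] * (V + 1)       # true value of the item set behind dp[p]
    for i in range(n):
        # Iterate p downward so each item is used at most once (0/1).
        for p in range(V, scaled[i] - 1, -1):
            w = dp[p - scaled[i]] + weights[i]
            if w < dp[p]:
                dp[p] = w
                tv[p] = tv[p - scaled[i]] + values[i]

    # Best true value over all scaled totals that fit in the knapsack.
    return max(tv[p] for p in range(V + 1) if dp[p] <= capacity)

# Here OPT = 220 (items 2 and 3); the FPTAS guarantees >= 0.9 * 220.
assert knapsack_fptas([60, 100, 120], [10, 20, 30], 50, 0.1) == 220
```

Rounding values down loses at most mu per item, so at most n × mu = ε × max(values) ≤ ε × OPT in total, which is where the (1 – ε) guarantee comes from.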

Approximation for TSP (Traveling Salesman Problem)

TSP: Find the shortest tour that visits every city exactly once and returns to the start.

  • General TSP: ❌ no constant-factor approximation unless P = NP
  • Metric TSP (triangle inequality holds): ✅ 1.5-approximation via the Christofides algorithm
  • Euclidean TSP: ✅ a PTAS exists

Christofides Algorithm (Metric TSP):

  1. Build MST (minimum spanning tree)
  2. Add shortest matching on odd-degree vertices
  3. Combine into an Eulerian multigraph, find an Euler tour, and shortcut repeated vertices into a Hamiltonian cycle
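Christofides' matching step requires a min-weight perfect matching solver, which is beyond a short sketch. As a runnable illustration of the same MST-based idea, here is the simpler "double-tree" method: it skips the matching, effectively doubling every MST edge, and yields a 2-approximation instead of 1.5 (input is assumed to be a list of 2D points):

```python
import heapq
from math import dist

def double_tree_tsp(points):
    """2-approximation for metric TSP via MST doubling ("double tree").

    Build an MST with Prim's algorithm, then take a preorder walk of
    the tree; by the triangle inequality, shortcutting the doubled
    tree costs at most 2x the optimal tour.
    """
    n = len(points)
    in_tree = [False] * n
    adj = [[] for _ in range(n)]          # MST as a rooted tree
    heap = [(0.0, 0, 0)]                  # (edge weight, vertex, parent)
    while heap:
        _, u, par = heapq.heappop(heap)
        if in_tree[u]:
            continue                      # stale heap entry
        in_tree[u] = True
        if u != par:
            adj[par].append(u)
        for v in range(n):
            if not in_tree[v]:
                heapq.heappush(heap, (dist(points[u], points[v]), v, u))

    # Preorder walk of the MST = Euler tour of the doubled tree,
    # with repeated vertices shortcut away.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(adj[u]))
    return tour

def tour_length(points, tour):
    return sum(dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# Unit square: the optimal tour has length 4, so the result is <= 8.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
t = double_tree_tsp(pts)
assert sorted(t) == [0, 1, 2, 3]
assert tour_length(pts, t) <= 8.0
```

Christofides improves on this by replacing the edge doubling with a minimum-weight perfect matching on the odd-degree MST vertices, which is what brings the factor down from 2 to 1.5.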

Approximation vs Heuristics

  • Approximation algorithms: ✅ theoretical guarantees, ✅ predictable behavior, ✅ provable bounds; ❌ sometimes slower
  • Heuristics: ✅ often faster; ❌ no guarantees, ❌ may perform poorly in edge cases, ❌ only empirical evidence

Heuristics are great in practice (e.g., simulated annealing, genetic algorithms), but approximation algorithms are better understood theoretically.

Inapproximability Results

For some problems, approximating beyond a certain ratio (or, for general TSP, within any constant factor at all) is provably impossible unless P = NP.

Known lower bounds:

  • Set Cover: no (1 – o(1)) ln n approximation possible
  • Max-3-SAT: no algorithm can guarantee more than a 7/8 fraction of satisfied clauses (no (7/8 + ε)-approximation)
  • General TSP: no constant-factor approximation unless P = NP

These results come from complexity theory and reductions that show approximating well would imply P = NP.

Applications of Approximation Algorithms

  • Networking: routing with minimal delay (e.g., Steiner trees)
  • Scheduling: allocating jobs to processors
  • Data Mining: clustering large datasets (e.g., k-means)
  • Bioinformatics: sequence alignment, gene matching
  • Supply Chains: vehicle routing, inventory optimization
  • Machine Learning: fast inference on large models

Best Practices

  1. ✅ Know when exact algorithms are impractical
  2. ✅ Choose algorithms with tightest bounds
  3. ✅ Evaluate input sizes — approximation often shines at scale
  4. ✅ Use heuristics if approximation is too slow
  5. ✅ Be aware of theoretical limitations (inapproximability)

Summary

  • Approximation algorithms provide fast, near-optimal solutions for hard problems.
  • They are essential for solving NP-hard optimization problems in real-world settings.
  • They offer provable performance guarantees, unlike heuristics.
  • Different schemes (like PTAS, FPTAS) allow control over trade-off between accuracy and runtime.
  • Some problems can’t be approximated well — unless P = NP.

“Approximation is not defeat — it’s a powerful strategy when perfection is computational suicide.”

Related Keywords

  • Optimization
  • NP-hard
  • Polynomial-Time
  • Approximation Ratio
  • PTAS
  • FPTAS
  • Greedy Algorithm
  • Vertex Cover
  • Knapsack Problem
  • TSP
  • Christofides Algorithm
  • Dynamic Programming
  • Heuristics
  • Inapproximability
  • Performance Guarantee
  • Complexity Classes
  • Reduction
  • Suboptimal Solutions
  • Metaheuristics
  • Resource-Constrained Optimization