Description
Quadratic convergence is a term from numerical analysis that describes how rapidly an iterative method approaches its final solution. Specifically, a sequence {xₖ} is said to converge quadratically to a solution x* if the number of correct digits roughly doubles at each step.
In mathematical terms, the convergence is quadratic if there exists a constant C > 0 such that:
|xₖ₊₁ - x*| ≤ C · |xₖ - x*|² for sufficiently large k
This means the error at step k+1 is bounded by a constant times the square of the error at step k, indicating extremely fast convergence. Quadratic convergence is faster than linear convergence (and is a particularly strong form of superlinear convergence) and is a hallmark of efficient, second-order iterative algorithms such as Newton’s Method.
How It Works
The convergence of an iterative method describes how fast the current estimate xₖ approaches the actual solution x*. The rate of convergence depends on how the error evolves:
- Let eₖ = |xₖ - x*| be the error at iteration k.
- If eₖ₊₁ ≤ C · eₖ², then the convergence is quadratic.
This condition implies that:
- As the error gets smaller, it shrinks even faster in subsequent steps.
- The sequence {xₖ} moves extremely rapidly toward x* once it is close.
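To see the digit-doubling effect concretely, here is a toy simulation (not a real solver; C = 1 and e₀ = 0.1 are arbitrary choices) that simply iterates the defining recurrence:

```python
# Toy simulation of a quadratically convergent error sequence:
# e_{k+1} = C * e_k**2, with C = 1 and e_0 = 0.1 chosen arbitrarily.
C, e = 1.0, 0.1
for k in range(5):
    print(f"e_{k} = {e:.1e}")   # 1.0e-01, 1.0e-02, 1.0e-04, 1.0e-08, 1.0e-16
    e = C * e**2
```

The number of leading zeros in the error roughly doubles at every step, which is exactly the "doubling of correct digits" described above.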
Key Components
1. Error Sequence (eₖ)
Represents the difference between the current iterate and the true solution:
eₖ = |xₖ - x*|
2. Rate Constant (C)
A positive constant bounding the error progression:
eₖ₊₁ ≤ C · eₖ²
3. Order of Convergence (p)
Quadratic convergence corresponds to:
p = 2
Where p is the exponent in:
limₖ→∞ (eₖ₊₁ / eₖᵖ) = C, with 0 < C < ∞
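In practice, p can be estimated from three consecutive errors, since p ≈ log(eₖ₊₁/eₖ) / log(eₖ/eₖ₋₁). A small sketch (the sample errors are taken from the √2 example later in this article):

```python
import math

def estimate_order(errors):
    """Estimate the order of convergence p from consecutive errors,
    using p ≈ log(e_{k+1}/e_k) / log(e_k/e_{k-1})."""
    return [math.log(e2 / e1) / math.log(e1 / e0)
            for e0, e1, e2 in zip(errors, errors[1:], errors[2:])]

# Error sequence from Newton's method for sqrt(2) (see the example below):
errors = [8.6e-2, 2.5e-3, 2.1e-6, 1.6e-12]
print(estimate_order(errors))   # values close to 2 ⇒ quadratic convergence
```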
Comparison of Convergence Rates
| Type | Mathematical Form | Speed |
|---|---|---|
| Linear | eₖ₊₁ ≤ C · eₖ | Slow |
| Superlinear | eₖ₊₁ / eₖ → 0 | Faster |
| Quadratic | eₖ₊₁ ≤ C · eₖ² | Very fast |
| Cubic | eₖ₊₁ ≤ C · eₖ³ | Extremely fast |
Quadratic convergence means the number of correct digits roughly doubles after each iteration.
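To make the table concrete, the sketch below (with arbitrarily chosen constants) counts how many iterations a linear rate versus a quadratic rate needs to drive an initial error of 0.1 below 10⁻¹²:

```python
def iterations_to(tol, update, e0=0.1, cap=1000):
    """Count iterations until the simulated error drops below tol."""
    e, k = e0, 0
    while e > tol and k < cap:
        e, k = update(e), k + 1
    return k

print("linear   :", iterations_to(1e-12, lambda e: 0.5 * e))  # 37 iterations
print("quadratic:", iterations_to(1e-12, lambda e: e * e))    # 4 iterations
```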
Use Cases
🧮 Root Finding
- Newton’s Method exhibits quadratic convergence when the function is sufficiently smooth and the initial guess is close to the root.
🔁 Optimization
- In second-order methods (e.g., Newton-based optimization), convergence to the optimum is locally quadratic under convexity and smoothness assumptions (a minimal sketch follows this list).
🧠 Machine Learning
- While deep learning models often use first-order methods like SGD, second-order approximations can offer faster convergence for convex objectives in smaller-scale models.
📊 Scientific Computing
- Used to assess the efficiency of solvers in engineering simulations, finite element methods, and systems of nonlinear equations.
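As a minimal sketch of the optimization use case above (assuming a smooth one-dimensional objective; f(x) = eˣ - 2x with minimizer x* = ln 2 is chosen purely for illustration), Newton's method for minimization applies the root-finding update to the derivative:

```python
import math

# Newton's method for 1-D minimization: find a zero of f'(x).
# Illustrative objective: f(x) = exp(x) - 2x, minimized at x* = ln(2).
df  = lambda x: math.exp(x) - 2   # f'(x)
d2f = lambda x: math.exp(x)       # f''(x), positive everywhere (convex f)

x = 0.0
for k in range(5):
    x -= df(x) / d2f(x)           # Newton step on the gradient
    print(f"iter {k}: x = {x:.12f}, error = {abs(x - math.log(2)):.1e}")
```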
Example: Newton’s Method
Let’s find the square root of 2 using:
f(x) = x² - 2
Then:
xₖ₊₁ = xₖ - f(xₖ)/f'(xₖ) = xₖ - (xₖ² - 2)/(2xₖ)
Starting with x₀ = 1.0, we compute:
- x₁ = 1.5 → error ≈ 8.6e-02
- x₂ = 1.4166667 → error ≈ 2.5e-03
- x₃ = 1.4142157 → error ≈ 2.1e-06
- x₄ = 1.4142135623747 → error ≈ 1.6e-12

Each error is roughly proportional to the square of the previous one, illustrating quadratic convergence.
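We can check the defining ratio numerically: for quadratic convergence, eₖ₊₁ / eₖ² should settle near a constant (here |f''(x*)| / (2·|f'(x*)|) = 1/(2√2) ≈ 0.354). A quick sketch:

```python
from math import sqrt

# Newton iterates for f(x) = x**2 - 2, starting from x0 = 1.0.
x, errors = 1.0, []
for _ in range(4):
    x = x - (x**2 - 2) / (2 * x)        # Newton update
    errors.append(abs(x - sqrt(2)))

# The ratio e_{k+1} / e_k**2 should approach 1/(2*sqrt(2)) ≈ 0.354.
for e0, e1 in zip(errors, errors[1:]):
    print(f"{e1 / e0**2:.3f}")          # 0.333, 0.354, 0.354
```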
Real-World Analogy
Imagine trying to land a spacecraft on a tiny spot on the Moon. If your navigation system converges quadratically, each correction shrinks your landing error to roughly the square of the previous one, making your descent not just smooth but extraordinarily fast, so long as you start close enough to the right trajectory. That is what quadratic convergence offers in mathematics: once you are near the solution, you converge extremely quickly.
Benefits and Limitations
✅ Benefits
- Highly Efficient: Requires very few iterations once near the solution.
- Fewer Function Evaluations: Saves computational resources.
- Predictable Behavior: Error reduction follows a clear squared pattern.
❌ Limitations
- Needs Good Initialization: May fail or diverge if the starting point is too far from the solution.
- Requires Derivatives: Methods like Newton require first or second derivatives.
- Sensitivity: May not handle discontinuities or flat regions well.
Theoretical Condition for Quadratic Convergence
Let f(x) be a function with:
- A root at x*
- f ∈ C² (twice continuously differentiable)
- f'(x*) ≠ 0
Then Newton’s Method converges quadratically to x* if the initial guess x₀ is sufficiently close to x*.
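Why this yields a squared error term: a standard one-step Taylor sketch, stated informally. Expand f around xₖ and evaluate at the root x*:

0 = f(x*) = f(xₖ) + f'(xₖ)·(x* - xₖ) + ½·f''(ξ)·(x* - xₖ)², for some ξ between xₖ and x*.

Dividing by f'(xₖ) and inserting the update xₖ₊₁ = xₖ - f(xₖ)/f'(xₖ) gives:

xₖ₊₁ - x* = (f''(ξ) / (2·f'(xₖ))) · (xₖ - x*)²

Hence, near the root, eₖ₊₁ ≈ C · eₖ² with C = |f''(x*)| / (2·|f'(x*)|).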
Mathematical Definition
A sequence {xₖ} converges to x* with order p and rate constant C if:
limₖ→∞ |xₖ₊₁ - x*| / |xₖ - x*|ᵖ = C, where C > 0 and p ≥ 1
If p = 2, it is quadratic convergence.
Python Code Illustration
```python
def newton_method(f, df, x0, tol=1e-12, max_iter=20):
    """Find a root of f using Newton's method; df is the derivative of f."""
    x = x0
    for i in range(max_iter):
        fx = f(x)
        dfx = df(x)
        if dfx == 0:
            raise ZeroDivisionError("Derivative is zero.")
        x_new = x - fx / dfx
        # |x_new - x| is the step size, a practical proxy for the true error.
        print(f"Iter {i}: x = {x_new}, step = {abs(x_new - x)}")
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x
```
Usage:
```python
import math

f = lambda x: x**2 - 2
df = lambda x: 2 * x
root = newton_method(f, df, 1.0)
print("Approximate root:", root)                # ≈ 1.41421356
print("True error:", abs(root - math.sqrt(2)))  # ≈ 0 at machine precision
```
Observe how the step size roughly squares from one iteration to the next.
Key Formulas Summary
- Error at iteration k: eₖ = |xₖ - x*|
- Quadratic convergence condition: eₖ₊₁ ≤ C · eₖ²
- Order of convergence (general): limₖ→∞ (eₖ₊₁ / eₖᵖ) = C ⇒ order p
- Newton’s Method update rule: xₖ₊₁ = xₖ - f(xₖ)/f'(xₖ)
Related Keywords
- Bisection Method
- Convergence Rate
- Cubic Convergence
- Error Bound
- Fixed Point Iteration
- Newton Method
- Numerical Optimization
- Order of Convergence
- Root-Finding
- Superlinear Convergence