Chebyshev Approximation

Chebyshev approximation is a method that utilizes Chebyshev polynomials to minimize the maximum error between a function and its approximation over an interval. These orthogonal polynomials are defined over [-1,1] and exhibit properties such as minimax error, numerical stability, and exponential convergence for smooth functions. The technique is particularly useful in avoiding issues like Runge's phenomenon in polynomial interpolation and has applications in quantum mechanics for efficiently computing wavefunction evolution.

Chebyshev Approximations

What is Chebyshev Approximation?

Definition: Chebyshev approximation is a technique that uses Chebyshev polynomials to approximate a function so that the maximum deviation (error) between the function and the approximation is minimized across the interval.
Chebyshev Polynomials
Chebyshev polynomials are a special set of orthogonal polynomials defined over [−1,1].
Denoted T_n(x), the first few are:
T_0(x) = 1
T_1(x) = x
T_2(x) = 2x² − 1, ...
Recurrence: T_{n+1}(x) = 2x·T_n(x) − T_{n−1}(x)
Explicit form: T_n(x) = cos(n·arccos(x)), for x ∈ [−1,1]
They are orthogonal with respect to the weight w(x) = 1/√(1 − x²).
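The recurrence and the explicit cosine form above can be checked against each other in a few lines; this is a minimal sketch (the function name `chebyshev_T` is ours, not from any library):

```python
import numpy as np

def chebyshev_T(n, x):
    """Evaluate T_0..T_n at points x via the recurrence
    T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)."""
    x = np.asarray(x, dtype=float)
    T = np.empty((n + 1,) + x.shape)
    T[0] = 1.0
    if n >= 1:
        T[1] = x
    for k in range(1, n):
        T[k + 1] = 2 * x * T[k] - T[k - 1]
    return T

# Check against the explicit form T_n(x) = cos(n arccos x) on [-1, 1]
x = np.linspace(-1, 1, 101)
T = chebyshev_T(5, x)
for k in range(6):
    assert np.allclose(T[k], np.cos(k * np.arccos(x)))
```

The recurrence is how Chebyshev polynomials are evaluated in practice: it is numerically stable and costs one multiply-add per degree, with no trigonometric calls.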
How is the orthogonality helpful?

The orthogonality of Chebyshev polynomials is exactly what allows us to compute the coefficients in a Chebyshev series: projecting f onto each T_n isolates a single coefficient, just as in a Fourier series.
Chebyshev Series Expansion

A function f on [−1,1] can be expanded as

f(x) ≈ a_0/2 + Σ_{n=1}^{N} a_n·T_n(x)

where the coefficients a_n are given by (just as in a Fourier series)

a_n = (2/π) ∫_{−1}^{1} f(x)·T_n(x) / √(1 − x²) dx

Properties
Minimax Property: Chebyshev approximation minimizes the
maximum error between the function and the approximation over
the interval.
Numerical Stability: It avoids the large oscillations near the endpoints seen in equally spaced polynomial interpolation (Runge's phenomenon), and it approximates over a whole interval (unlike a Taylor expansion, which is local to one point).
Exponential Convergence: For smooth (analytic) functions,
Chebyshev approximations converge exponentially fast with
increasing degree.
Examples

f(x) = 1/(1 + 25x²)
This function is famous for Runge's phenomenon, where high-degree interpolation at equally spaced points oscillates wildly at the edges.
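The effect is easy to reproduce numerically. This sketch interpolates the Runge function with a degree-10 polynomial twice, once at equally spaced nodes and once at Chebyshev nodes, and compares the maximum errors:

```python
import numpy as np

runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)
deg = 10
x = np.linspace(-1, 1, 1001)

# Degree-10 interpolation at equally spaced nodes (Runge's phenomenon)
eq_nodes = np.linspace(-1, 1, deg + 1)
p_eq = np.polynomial.Polynomial.fit(eq_nodes, runge(eq_nodes), deg)

# Same degree, but at Chebyshev nodes x_k = cos((2k+1)pi/(2n))
ch_nodes = np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1)))
p_ch = np.polynomial.Polynomial.fit(ch_nodes, runge(ch_nodes), deg)

err_eq = np.max(np.abs(p_eq(x) - runge(x)))
err_ch = np.max(np.abs(p_ch(x) - runge(x)))
# Equispaced error is roughly 2 near the endpoints;
# Chebyshev-node error stays around 0.1 over the whole interval.
```

With 11 points and degree 10 the least-squares fit is exactly the interpolant, so the comparison isolates the effect of node placement alone.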

Next, we compare interpolation and Taylor approximations to Chebyshev approximations.
Inferences

The Taylor approximations perform well close to the center, where they were expanded.
As you move away from the center, the Taylor error quickly blows up, since the expansion is purely local.
The Chebyshev approximations, while not perfect near the center, remain stable across the whole interval, and they don't oscillate wildly at the edges.
Error analysis
Taylor Error
Error is tiny near the center.
Diverges rapidly away from the center: the approximation is local in nature.
High-degree equispaced interpolation fails similarly at the edges (Runge's phenomenon).
Chebyshev Error
Uses orthogonal polynomials to minimize the maximum error over the whole interval.
Error is evenly spread and remains small.
Slight oscillations but no blow-up: great for global approximation.
Chebyshev nodes
Why Runge's phenomenon is not seen here: Chebyshev interpolation samples the function at the nodes x_k = cos((2k+1)π/(2n)), which cluster near the endpoints of [−1,1]. This clustering keeps the interpolation error controlled near the edges, exactly where equally spaced nodes fail.
Application in Quantum Mechanics
We computed the time evolution of a quantum wavefunction using a Chebyshev expansion.

Directly computing the matrix exponential e^(−iHt) is expensive for large systems, so instead e^(−iHt)ψ is expanded in Chebyshev polynomials of H, which requires only repeated matrix–vector products.
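A minimal sketch of this idea (the Chebyshev propagator of Tal-Ezer and Kosloff), assuming H has already been scaled so its spectrum lies in [−1,1]; the function name `cheb_propagate` and the toy matrix are ours. It uses the expansion e^(−ixt) = J_0(t) + 2·Σ_{n≥1} (−i)^n J_n(t) T_n(x), where J_n are Bessel functions:

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import jv  # Bessel functions J_n(t)

def cheb_propagate(H, psi, t, n_terms=40):
    """Approximate exp(-i H t) @ psi via the Chebyshev expansion
    exp(-i x t) = J_0(t) + 2 sum_{n>=1} (-i)^n J_n(t) T_n(x),
    assuming the spectrum of H lies in [-1, 1]."""
    phi_prev = psi.astype(complex)   # T_0(H) psi
    phi_curr = H @ phi_prev          # T_1(H) psi
    result = jv(0, t) * phi_prev + 2 * (-1j) * jv(1, t) * phi_curr
    for n in range(2, n_terms):
        phi_next = 2 * (H @ phi_curr) - phi_prev   # Chebyshev recurrence
        result += 2 * (-1j) ** n * jv(n, t) * phi_next
        phi_prev, phi_curr = phi_curr, phi_next
    return result

# Toy check on a small Hermitian matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
H = (A + A.T) / 2
H /= np.max(np.abs(np.linalg.eigvalsh(H)))  # scale spectrum into [-1, 1]
psi = rng.standard_normal(6) + 0j
psi /= np.linalg.norm(psi)
t = 2.0
exact = expm(-1j * H * t) @ psi
approx = cheb_propagate(H, psi, t)
# J_n(t) decays super-exponentially once n >> t, so 40 terms is ample here
```

The cost per term is a single matrix–vector product, which is why this scales to Hamiltonians far too large to exponentiate directly.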
