HW 5
February 8, 2024
1 Homework 5
In this homework we test the approximation of functions with polynomials.
We start by loading three Julia packages. The first one is to deal with polynomials. The second
one provides pre-made methods for numerical integration. The third is the standard package for
plotting functions.
[ ]: using Polynomials
using QuadGK
using Plots
Polynomials can be defined from a list of coefficients. Arguably, it looks better to define a
global variable x that refers to the 𝑥 polynomial and then do basic arithmetic with it. Let us see.
[ ]: global x = Polynomial([0,1])
display(1 + 2x + 3x^2 + 5x^3)
display((x-1.5)*(x-2))
1 + 2 ⋅ 𝑥 + 3 ⋅ 𝑥² + 5 ⋅ 𝑥³
3.0 − 3.5 ⋅ 𝑥 + 1.0 ⋅ 𝑥²
Now, the following function returns the Bernstein basis polynomial 𝛽_{𝑛,𝑖}.
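The cell that defined it did not survive this export. Here is a minimal sketch, assuming the standard basis 𝛽_{𝑛,𝑖}(𝑥) = C(𝑛,𝑖) 𝑥^𝑖 (1 − 𝑥)^{𝑛−𝑖}; the name bernstein_beta is taken from the call in the exercise below.

```julia
using Polynomials

# Bernstein basis polynomial beta_{n,i}(x) = binomial(n,i) * x^i * (1-x)^(n-i).
function bernstein_beta(n::Integer, i::Integer)
    x = Polynomial([0, 1])
    return binomial(n, i) * x^i * (1 - x)^(n - i)
end
```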
1.1 Exercise 1
It is your job to implement a function that returns the Bernstein polynomial corresponding to any
function f.
[ ]: function bernstein(f::Function, n::Integer)
         b = 0
         for i = 0:n
             b += f(i/n) * bernstein_beta(n,i)
         end
         return b
     end
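A quick sanity check: the Bernstein operator reproduces affine functions exactly, so the Bernstein polynomial of f(t) = t should be (numerically) the polynomial x itself. The names beta and B below are hypothetical shorthands restating the two cells above, so the check runs on its own:

```julia
using Polynomials

# Self-contained restatement of the Bernstein construction above.
x = Polynomial([0, 1])
beta(n, i) = binomial(n, i) * x^i * (1 - x)^(n - i)   # Bernstein basis
B(f, n) = sum(f(i/n) * beta(n, i) for i in 0:n)       # Bernstein operator

# The operator reproduces f(t) = t exactly, up to roundoff.
p = B(t -> t, 5)
println(p(0.3))   # ≈ 0.3
```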
2 Lagrange interpolation
You implemented Lagrange interpolation in last week’s homework. Here is a possible way to do it.
[ ]: # newton computes the divided difference f[x_1, ..., x_n] recursively.
     function newton(f::Function, x::Vector{<:Real})
         n = length(x)
         if n == 1 return f(x[1]) end
         yn = x[1:end-1]
         yl = vcat(x[1:end-2], [x[n]])
         return (newton(f,yn) - newton(f,yl)) / (x[n-1] - x[n])
     end
     # Newton-form interpolation polynomial of f at the nodes x.
     function interpolate(f::Function, x::Vector{<:Real})
         p = Polynomial(f(x[1]))
         q = Polynomial(1.0)
         for k in 2:length(x)
             q *= Polynomial([-x[k-1], 1.0])
             p += newton(f, x[1:k]) * q
         end
         return p
     end
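Any interpolation routine can be smoke-tested the same way: on n+1 nodes it must reproduce every polynomial of degree ≤ n exactly. The compact variant below (with hypothetical names divdiff and newton_interp) restates the Newton form so the check is self-contained:

```julia
using Polynomials

# Divided difference f[x_1, ..., x_k], written with the usual recursion.
divdiff(f, x) = length(x) == 1 ? f(x[1]) :
    (divdiff(f, x[1:end-1]) - divdiff(f, x[2:end])) / (x[1] - x[end])

# Newton-form interpolation polynomial through the nodes x.
function newton_interp(f, x)
    p = Polynomial(divdiff(f, x[1:1]))
    q = Polynomial(1.0)
    for k in 2:length(x)
        q *= Polynomial([-x[k-1], 1.0])
        p += divdiff(f, x[1:k]) * q
    end
    return p
end

# Three nodes determine a quadratic, so this recovers t^2 exactly.
p = newton_interp(t -> t^2, [0.0, 0.5, 1.0])
println(p(2.0))   # ≈ 4.0
```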
2.1 Exercise 2
We now want to generate a sequence of orthogonal polynomials in [0,1] and use them to find the
least squares approximation of any given function as a polynomial of degree ≤ 𝑛 in the interval
[0,1].
First of all, we need to generate a list of orthogonal polynomials. The following function is supposed
to return an array with the orthogonal polynomials 𝑝0, 𝑝1, 𝑝2, 𝑝3, … so that each 𝑝𝑘 has degree 𝑘.
[ ]: function orthogonal_polynomials(n::Integer)
         # This is going to be a list of orthogonal polynomials.
         p = []
         for k in 0:n
             # Start with the standard basis polynomial x^k
             pk = Polynomial([zeros(k); 1])
             for j in 1:length(p)
                 pj = p[j]
                 # Define functions for the inner products
                 f_pk_pj(x) = pk(x) * pj(x)
                 f_pj_pj(x) = pj(x)^2
                 # Gram-Schmidt: remove the component of pk along pj
                 pk -= quadgk(f_pk_pj, 0, 1)[1] / quadgk(f_pj_pj, 0, 1)[1] * pj
             end
             push!(p, pk)
         end
         return p
     end
[ ]: 4-element Vector{Any}:
Polynomial(1)
Polynomial(-0.5 + 1.0*x)
Polynomial(0.16666666666666669 - 1.0*x + 1.0*x^2)
Polynomial(-0.04999999999999993 + 0.5999999999999999*x - 1.5*x^2 + 1.0*x^3)
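The printed polynomials can be verified directly: any two of them should have zero inner product on [0,1]. A minimal check with quadgk, copying the coefficients from the output above:

```julia
using Polynomials, QuadGK

# The first three polynomials from the output above (coefficients rounded).
p0 = Polynomial([1.0])
p1 = Polynomial([-0.5, 1.0])
p2 = Polynomial([1/6, -1.0, 1.0])

# All pairwise inner products on [0,1] should vanish (up to quadrature error).
for (a, b) in ((p0, p1), (p0, p2), (p1, p2))
    println(quadgk(t -> a(t) * b(t), 0, 1)[1])   # each ≈ 0
end
```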
I provide the function that generates the polynomial of a given degree that best approximates 𝑓 with
respect to the 𝐿2 distance. It requires a correct implementation of orthogonal_polynomials to
work.
Note that we use the function integrate to integrate polynomials. It should be fast and essentially
exact. We use quadgk to integrate any generic function numerically. It is presumably slower and
maybe less accurate. We will study how this is done in the next chapter.
[ ]: function least_squares_approximation(f::Function, degree::Integer)
         ops = orthogonal_polynomials(degree)
         p = Polynomial(0)
         for po in ops
             p += quadgk(t -> f(t)*po(t), 0, 1, rtol=1e-14)[1] / integrate(po*po, 0, 1) * po
         end
         return p
     end
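To see the projection formula in isolation, here is a self-contained sketch (the name least_squares_linear is hypothetical): the best 𝐿2 linear fit to 𝑒^𝑡 on [0,1] is its projection onto the orthogonal pair {1, 𝑡 − 1/2}.

```julia
using Polynomials, QuadGK

# Degree-1 least squares fit on [0,1] via projection onto an orthogonal basis.
function least_squares_linear(f)
    t = Polynomial([0, 1])
    basis = [Polynomial(1.0), t - 0.5]   # orthogonal on [0,1]
    p = Polynomial(0.0)
    for q in basis
        # Fourier coefficient <f,q>/<q,q>: quadgk for f, exact polynomial
        # integration for the denominator.
        p += quadgk(s -> f(s) * q(s), 0, 1)[1] / integrate(q * q, 0, 1) * q
    end
    return p
end

p = least_squares_linear(exp)
println(p(0.5))   # ≈ e - 1, the average of exp on [0,1]
```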
2.2 Some pictures.
Let us plot a few functions and their corresponding polynomials.
We use a uniform grid in [0,1] for the interpolation. We use polynomials of degree n with (n+1)
points.
[ ]: function plot_all_polynomials(f::Function, n::Integer)
         pB = bernstein(f,n)
         pI = interpolate(f,Array(0:1/n:1))
         pL = least_squares_approximation(f,n)
         xi = 0:0.002:1
         scatter(0:1/n:1, map(f,0:1/n:1), label="Interpolation points")
         plot!(xi, map(f,xi), label="function")
         plot!(xi, map(pB,xi), label="Bernstein polynomial")
         plot!(xi, map(pI,xi), label="Lagrange interpolation")
         plot!(xi, map(pL,xi), label="Least squares approximation")
     end
[ ]: plot_all_polynomials(t->sin(2*pi*t),4)
In this first example, the sine function is very smooth and looks very much like a polynomial. The
Lagrange interpolation does a very good job, and so does the least squares approximation.
Let us now try a Gaussian that is sharply localized near 0.5. We use 9 points.
[ ]: plot_all_polynomials(t->exp(-50*(t-0.5)^2),8)
The Lagrange interpolation polynomial no longer looks so good. The least squares approximation
stays somewhere close to the Gaussian. The Bernstein polynomial is still far from the graph, but
at least it has more or less the same visual shape.
What if we do it with a few more points?
[ ]: plot_all_polynomials(t->exp(-50*(t-0.5)^2),12)
If I try with more points, the least squares approximation seems to lose its symmetry. Honestly, I
do not understand why; there is no reason for that to happen. I can only think of two explanations:
* The truncation errors in the numerical integration grow out of control.
* I made a mistake in the implementation of the function.

I would appreciate it if someone sheds some light on this issue.
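The first hypothesis can be probed directly: repeat the Gram-Schmidt sweep with quadgk inner products and measure the worst normalized off-diagonal inner product as the degree grows. This sketch is self-contained and only suggestive; if the defect grows by orders of magnitude with the degree, accumulated integration and cancellation errors are a plausible culprit:

```julia
using Polynomials, QuadGK

ip(a, b) = quadgk(t -> a(t) * b(t), 0, 1)[1]

# Gram-Schmidt on the monomials, as in orthogonal_polynomials above.
ps = []
for k in 0:12
    pk = Polynomial([zeros(k); 1.0])
    for pj in ps
        pk -= ip(pk, pj) / ip(pj, pj) * pj
    end
    push!(ps, pk)
end

# Worst normalized inner product between distinct polynomials: it would be
# exactly 0 in exact arithmetic.
defect = maximum(abs(ip(ps[i], ps[j])) / sqrt(ip(ps[i], ps[i]) * ip(ps[j], ps[j]))
                 for i in 2:length(ps) for j in 1:i-1)
println(defect)
```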
Let us now try other functions. Recall that the error in Lagrange interpolation had to do with some
higher order derivative of the function. What would happen, then, if we applied it to a continuous
function that is not differentiable? Let us try the tent function 𝑓(𝑥) = (1 − 4|𝑥 − 0.5|)_+.
[ ]: pos(t) = t>0 ? t : 0.
plot_all_polynomials(t->pos(1 - 4*abs(t-0.5)),5)
[ ]: plot_all_polynomials(t->pos(1 - 4*abs(t-0.5)),15)
Noticeably, the Lagrange interpolation exhibits severe oscillations near the endpoints of the
interval.
Let us now try an even more singular function: the Heaviside step!
[ ]: plot_all_polynomials(t-> t>0.5 ? 0. : 1.,5)
2.3 We want a movie!!
In all the pictures above, the Bernstein polynomials seem to do a bad job of approximating
the function. However, the Bernstein polynomials always converge to the function as 𝑛 → ∞. It
just takes a large 𝑛 to get there. Let us see how a sequence of Bernstein polynomials converges to
the Gaussian function.
[ ]: anim = @animate for n in 1:30
         f(t) = exp(-50*(t-0.5)^2)
         pb = bernstein(f,n)
         xg = 0:0.01:1
         yg = map(pb,xg)
         plot(xg, yg, label="Bernstein with degree "*string(n))
         plot!(f, 0, 1, label="Gaussian")
     end
     gif(anim, fps=5)
[ ]: Plots.AnimatedGif("/Users/bradleyyu/Documents/Coding-Things/MATH212/tmp.gif")
I stopped the computation at degree 30 because I was getting rubbish afterwards. The Bernstein
polynomials are supposed to converge to the function as their degree → ∞. But the numerical
errors due to floating point truncation eventually become dominant.
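One workaround (not attempted in the notebook above, so take it as a sketch): the rubbish comes from huge binomial coefficients multiplying nearly cancelling terms in Float64. Evaluating the Bernstein sum pointwise in BigFloat, instead of building the monomial-basis Polynomial, keeps much higher degrees usable. The name bernstein_eval is hypothetical:

```julia
# Pointwise Bernstein evaluation in extended precision.
f(t) = exp(-50 * (t - 0.5)^2)

function bernstein_eval(f, n, t::BigFloat)
    s = big(0.0)
    for i in 0:n
        s += f(big(i) / n) * binomial(big(n), i) * t^i * (1 - t)^(n - i)
    end
    return s
end

v = bernstein_eval(f, 100, big"0.5")
println(Float64(v))   # lies strictly between 0 and f(0.5) = 1
```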