Elementary Numerical Analysis Prof. Rekha P. Kulkarni Department of Mathematics Indian Institute of Technology, Bombay

The document discusses the Gauss two-point quadrature rule for numerical integration. It summarizes the construction of orthogonal polynomials on the interval [-1,1] using Gram-Schmidt orthogonalization. The roots of the quadratic orthogonal polynomial, -1/√3 and 1/√3, are used as the Gauss points for numerical integration. The formula for the Gaussian integration and its error term are derived. The error is expressed in terms of the fourth derivative of the function evaluated at some point d, and equals f^{(4)}(d)/135.


Elementary Numerical Analysis

Prof. Rekha P. Kulkarni


Department of Mathematics
Indian Institute of Technology, Bombay

Lecture No # 13
Gauss 2-point Rule: Error

So, we are considering Gaussian integration. Last time, we looked at three functions - f x equal to 1, f x equal to x, and f x equal to x square - for x belonging to the interval minus 1 to 1. Then using the Gram-Schmidt orthonormalization process, we constructed three orthonormal functions. The third orthonormal function, which is quadratic, is perpendicular to any linear polynomial.

This quadratic function has got two distinct roots; those are known as the Gauss points. Based on them, we fit a polynomial of degree less than or equal to 1 and integrate; that is our Gaussian integration. So, today we are going to find a formula for this Gaussian integration based on these two points. We will do it first for the interval minus 1 to 1, and we will also find the error on the interval minus 1 to 1.

Then using the result on the interval minus 1 to 1, we will look at a general interval a b. After that we will consider the composite Gauss two-point rule, and then we are going to prove the convergence of Gaussian quadrature; that means, first we are looking at Gaussian integration based on two points, then we will define a general Gaussian quadrature based on, say, n plus 1 points, and we are going to prove convergence of the numerical quadrature method.
(Refer Slide Time: 02:28)

So, let us look at what we did last time. Our functions were f 0 x equal to 1, the constant function, f 1 x equal to x, and f 2 x equal to x square on the interval minus 1 to 1. From these three functions, we constructed g 0 x to be 1 by root 2, again a constant polynomial; g 1 x to be root 3 by 2 x, a linear polynomial; and g 2 x to be x square minus 1 third multiplied by a constant.

The constant is the normalization factor, which makes the norm of g 2 equal to 1. This function g 2 is perpendicular to the constant polynomial g 0, and it is also perpendicular to the linear polynomial g 1. Then look at x square minus 1 third; that factorizes as x plus 1 by root 3 into x minus 1 by root 3. Because g 2 is perpendicular to the constant polynomial, we get integral minus 1 to 1 x plus 1 by root 3 into x minus 1 by root 3 dx is equal to 0, and because g 2 is perpendicular to g 1, we get integral minus 1 to 1 x plus 1 by root 3 into x minus 1 by root 3 into x dx is equal to 0.
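These two orthogonality relations are easy to check symbolically. A quick sketch with sympy (my own check, not part of the lecture):

```python
# Verifying the two orthogonality relations on [-1, 1]:
# the factored quadratic (x + 1/sqrt(3))(x - 1/sqrt(3)) integrates to 0
# against the constant function 1 and against x.
import sympy as sp

x = sp.symbols('x')
q = (x + 1/sp.sqrt(3)) * (x - 1/sp.sqrt(3))   # = x**2 - 1/3

print(sp.integrate(q, (x, -1, 1)))       # 0, since g2 is perpendicular to g0
print(sp.integrate(q * x, (x, -1, 1)))   # 0, since g2 is perpendicular to g1
```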

So, last time, we had seen that when we want to find a numerical Quadrature formula of
the type w 0 f of x 0 plus w 1 f of x 1, if you want this to be exact for polynomials of
degree less than or equal to 3, then our points x 0 and x 1 should be so chosen that
integral a to b x minus x 0 x minus x 1 d x is 0 and integral a to b x minus x 0 x minus x
1 x d x is equal to 0. So, this fact, now we have achieved on the interval minus 1 to 1.

So, our x 0 is going to be minus 1 by root 3, and x 1 is going to be plus 1 by root 3. We are going to fit a linear polynomial based on the interpolation points x 0 and x 1, that is, minus 1 by root 3 and 1 by root 3. We will integrate, obtain an approximate formula, and then look at the error.

(Refer Slide Time: 05:13)

Look at the polynomial f x 0 plus the divided difference based on x 0, x 1 into x minus x 0. This is the interpolating polynomial, and the error is the divided difference based on x 0, x 1, x multiplied by x minus x 0 into x minus x 1. Let us integrate f x between minus 1 and 1. So, it is the integration of f x 0 plus the divided difference based on x 0, x 1 into x minus x 0 dx; the first term will give us 2 f x 0. The divided difference f x 0 x 1 is f x 1 minus f x 0 divided by x 1 minus x 0, and the integral of x minus x 0 is x minus x 0 square by 2.

Evaluate it between minus 1 and 1. Putting 1 and minus 1 gives the term 1 minus x 0 whole square by 2, minus, minus 1 minus x 0 whole square by 2. Use the fact that x 1 minus x 0 is 2 by root 3, which equals minus 2 x 0. When you expand, the two squares give minus 4 x 0, and divided by 2 that is minus 2 x 0, which is exactly x 1 minus x 0. That cancels with the x 1 minus x 0 in the denominator of the divided difference, and together with the first term this gives us f x 0 plus f x 1. Next, let us look at the error. The error is the divided difference f of x 0, x 1, x multiplied by our function w x, integrated from minus 1 to 1. As we had done before, we will use the recurrence relation for the divided difference f x 0 x 1 x.

You cannot take out f of x 0, x 1, x as such, but using the recurrence relation, we write it as f of x 0, x 0, x 1 plus some term; then f of x 0, x 0, x 1, being a constant, comes out of the integration sign. Then we use the fact that our x 0 and x 1 are special points. Using those properties, we are going to get our error in terms of the divided difference of f with x 0 repeated twice, x 1 repeated twice, and then some point c, times integral minus 1 to 1 x minus x 0 whole square into x minus x 1 whole square dx. In order to obtain this term, we will use the mean value theorem for integrals, and that will give us an error bound.

(Refer Slide Time: 08:21)

So, this is our expression for the error. We have got integral minus 1 to 1 x minus x 0 into x minus x 1 dx is 0, and also integral minus 1 to 1 x minus x 0 into x minus x 1 into x dx is equal to 0. Since the divided difference depends on x, I cannot take it out of the integration; but using the recurrence relation for divided differences, we write f x 0 x 1 x as equal to f x 0 x 0 x 1 plus the divided difference based on x 0, x 0, x 1, x into x minus x 0. You can take this term to the other side and divide by x minus x 0; that is the divided difference formula for f of x 0 x 0 x 1 x.

Now, again for this divided difference, we use a similar formula. We write the divided difference based on x 0, x 0, x 1, x as f of x 0 x 0 x 1 x 1 plus the divided difference based on x 0 repeated twice, x 1 repeated twice, and x, multiplied by x minus x 1. Substituting back, we get f x 0 x 1 x equal to f of x 0 x 0 x 1, plus f of x 0 x 0 x 1 x 1 multiplied by x minus x 0, plus the divided difference based on x 0 repeated twice, x 1 repeated twice, and x multiplied by x minus x 0 into x minus x 1 - now we have introduced x 1, so that is why you have x minus x 0 into x minus x 1. This we have obtained using the recurrence formula for divided differences.
This expression we will substitute in our error. When I substitute, the first term has no dependence on x, so it will come out of the integration sign, and integral minus 1 to 1 x minus x 0 into x minus x 1 dx is 0; so, no contribution from here. Again, the divided difference f of x 0 x 0 x 1 x 1 is independent of x; it will come out of the integration sign and you will have integral of x minus x 0 into x minus x 0 into x minus x 1 dx. Expanding one factor, this integral is integral of x into x minus x 0 into x minus x 1 dx minus x 0 times integral of x minus x 0 into x minus x 1 dx; using the two relations above, the contribution from this term is also 0. You are left with integral minus 1 to 1 of the divided difference based on x 0 repeated twice, x 1 repeated twice, and x: you have got x minus x 0 into x minus x 1 from the recurrence, and you had x minus x 0 into x minus x 1 from the error term, so you get x minus x 0 whole square into x minus x 1 whole square dx. So, this is the error term.

Now, in this error term, the divided difference is going to be a continuous function provided our function is sufficiently differentiable. We have got the divided difference based on x 0 repeated twice, x 1 repeated twice, and x, and x can take all values in the interval; it can take the value x 0, it can take the value x 1. In order that such a divided difference be defined, we will need the function to be three times differentiable. Look at the term x minus x 0 whole square into x minus x 1 whole square. This term is going to be bigger than or equal to 0.

So, we have got two functions - one function is continuous; the other function is bigger than or equal to 0 - and hence the mean value theorem for integrals is applicable. Using the mean value theorem, we take the divided difference out of the integration as the divided difference based on x 0 repeated twice, x 1 repeated twice, and some point c, and what remains is integral minus 1 to 1 x minus x 0 whole square into x minus x 1 whole square dx.

Now, recall our x 0 is minus 1 by root 3 x 1 is plus 1 by root 3. So, when you consider x
minus x 0 into x minus x 1, that is x square minus 1 third. We have got x minus x 0
square x minus x 1 square, so, that is going to be x square minus 1 third whole square,
and that is something you can integrate between minus 1 to 1, which is what we will do
in order to get a formula for the error.

Also, the divided difference is based on five points. If our function f is four times differentiable, then that divided difference we can write as the fourth derivative evaluated at some point, say d, divided by 4 factorial. So, that gives us the error in this Gaussian rule based on the two points minus 1 by root 3 and plus 1 by root 3.

(Refer Slide Time: 13:39)

So, consider the integration of minus 1 to 1 x square minus 1 third whole square dx. You will have the term x raise to 4, whose integral is x raise to 5 by 5; then you have minus 2 by 3 x square, whose integral is minus 2 by 9 x cube; plus 1 by 9, whose integral is x by 9. The divided difference we write as f 4 d upon 4 factorial, where d is some point in the interval minus 1 to 1. Then you simplify: put 1, subtract the value obtained by putting minus 1, and you can check that the integral comes out to be 8 by 45. That gives the error to be f 4 d divided by 135.
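As a quick symbolic check of the two constants just derived (a sketch of mine, not from the lecture):

```python
# The integral of (x**2 - 1/3)**2 over [-1, 1] is 8/45, and dividing
# by 4! = 24 gives the error coefficient 1/135 quoted above.
import sympy as sp

x = sp.symbols('x')
val = sp.integrate((x**2 - sp.Rational(1, 3))**2, (x, -1, 1))
print(val)                      # 8/45
print(val / sp.factorial(4))    # 1/135
```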
(Refer Slide Time: 14:34)

And thus the Gauss two-point rule is: integral minus 1 to 1 f x dx is approximately equal to f of minus 1 by root 3 plus f of 1 by root 3, and the error is f 4 d divided by 135. Now, look at the error term. It contains the fourth derivative of the function. So, if the fourth derivative is identically 0, which is the case if our function is a cubic polynomial, then the error is going to be 0.

So, we have got only two function evaluations, and then, the formula is exact for cubic
polynomial. When we had looked at the trapezoidal rule, there also we had only two
interpolation points. The interpolation points were end points - point a and point b, and in
that case, the formula was exact for polynomials of degree less than or equal to one.
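This exactness is easy to see numerically. A small sketch (the cubic chosen here is my own example):

```python
# The two-point rule f(-1/sqrt(3)) + f(1/sqrt(3)) integrates any cubic
# exactly on [-1, 1]; the trapezoidal rule, which also uses only two
# points, is already wrong for the quadratic term.
import math

def gauss2(f):
    s = 1 / math.sqrt(3)
    return f(-s) + f(s)

def trapezoid(f):
    # (b - a)/2 * (f(a) + f(b)) with a = -1, b = 1, so (b - a)/2 = 1
    return f(-1.0) + f(1.0)

f = lambda x: 7*x**3 - x**2 + 4   # exact integral on [-1, 1] is 22/3
print(gauss2(f))      # 7.333... (exact, up to rounding)
print(trapezoid(f))   # 6.0 (wrong already because of the x**2 term)
```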

So, this is the advantage of the Gaussian rule. Now, this formula is valid on the interval minus 1 to 1. We want to derive a formula on a general interval a to b. What we are going to do is look at a one-to-one, onto map from the interval minus 1 to 1 to the interval a b, and then we will look at integral a to b f x dx. If we have a map phi from the interval minus 1 to 1 to a b, then we will use the change of variable formula and obtain the Gauss two-point rule on a general interval a b.

And we will also look at the error term. In this case, our error term is f 4 d divided by 135. When we look at the interval a b, then b minus a comes into the picture: you will have b minus a by 2 raised to the power 5 and then some constant.
(Refer Slide Time: 17:11)

So, let us now look at a one-to-one, onto, affine map from the interval minus 1 to 1 to a general interval a b. This map is given by t plus 1 into b plus 1 minus t into a, divided by 2. So, the domain of our map is minus 1 to 1. You can simplify this to say that it is equal to a plus b by 2 plus t into b minus a by 2. First of all, notice that when t is equal to minus 1, the term t plus 1 into b will vanish and you will get phi of minus 1 to be equal to a. When you put t equal to plus 1, there is no contribution from 1 minus t into a; so, you are going to have b.

Then the derivative of phi is equal to b minus a by 2. So, phi dash of t is going to be strictly bigger than 0; that means phi is a strictly increasing function. So, we have a map with minus 1 going to a and 1 going to b, and it is strictly increasing. That is why the range of phi is going to be the closed interval a b, which proves that such a map is onto. Phi dash t bigger than 0 means it is strictly increasing, and hence it is going to be one-to-one.

And it is called affine, because it is of this form. If you had no constant term, only t times something, that would be known as a linear map; this one is an affine map. We have got phi double dash t equal to 0. So, thus we have a one-to-one, onto map from the interval minus 1 to 1 to the interval a b.
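The map and its endpoint properties can be sketched as follows (the sample values of a and b are my own):

```python
# The affine map phi from [-1, 1] onto [a, b]:
# phi(t) = (a + b)/2 + t * (b - a)/2.
def phi(t, a, b):
    return (a + b) / 2 + t * (b - a) / 2

a, b = 2.0, 5.0
print(phi(-1, a, b))   # 2.0  -> a
print(phi(1, a, b))    # 5.0  -> b
print(phi(0, a, b))    # 3.5  -> the midpoint (a + b)/2
# phi'(t) = (b - a)/2 > 0, so phi is strictly increasing,
# hence one-to-one and onto [a, b].
```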
Now, look at integral a to b f x dx; x is varying in the interval a to b. Any point in the interval a b is going to be phi of t for some t in the interval minus 1 to 1. So, we will put x equal to phi t. Then dx by dt is going to be phi dash t, and phi dash t is b minus a by 2. So, our integral a to b f x dx we will replace by the integral over minus 1 to 1 of the composite map f composed with phi of t, times phi dash t, and on minus 1 to 1 we have our formula for Gauss two-point integration.

(Refer Slide Time: 20:01)

Look at integral a to b g x dx; that will be equal to integral minus 1 to 1 g of phi t into phi dash t dt. We have seen that phi dash t is b minus a by 2; it comes out of the integration sign, and we get b minus a by 2 times integral minus 1 to 1 f t dt, where f is the composite map g composed with phi. Now, if we approximate integral minus 1 to 1 f t dt by f of minus 1 by root 3 plus f of 1 by root 3, then the error is given by f 4 d divided by 135. Hence, integral a to b g x dx we approximate by b minus a by 2 times f of minus 1 by root 3 plus f of 1 by root 3; in terms of g, that is g of phi of minus 1 by root 3 plus g of phi of 1 by root 3. The error is then going to be b minus a by 2 times f 4 d divided by 135.

Look at our Gauss points. In the interval minus 1 to 1, the Gauss points were minus 1 by root 3 and 1 by root 3. The images of minus 1 by root 3 and 1 by root 3 under this affine map phi are going to be the Gauss points in the interval a b. In the interval minus 1 to 1, the Gauss points are minus 1 by root 3 and plus 1 by root 3.

They are symmetric about the origin 0, and 0 is the midpoint of the interval minus 1 to 1. Here, in the interval a b, the Gauss points are going to be symmetric about the midpoint a plus b by 2. Now look at our error. The error has got one term b minus a by 2 and then the fourth derivative of the function f, and our f was g composed with phi. So, our g is the function which we are trying to integrate over the interval a b, and we would like to have the error formula in terms of the derivatives of g.

So, this f 4 of d - the fourth derivative - is the fourth derivative of g composed with phi. We will use the chain rule and obtain a formula in terms of the derivatives of g. When we do that, in the process we will get powers of b minus a. So, let us do that now.

(Refer Slide Time: 23:19)

So, this is the error: f 4 d upon 135, times b minus a by 2. Our function f of t is g composed with phi at t. So, we use the chain rule: f dash of t is equal to g dash of phi t into phi dash t, and phi dash t is b minus a by 2. So, we get g dash phi t into b minus a by 2; that is the first derivative.

Now, look at the second derivative. f double dash at t will be g double dash of phi t into phi dash t square, plus g dash of phi t into the derivative of phi dash; but phi double dash t is 0. So, that is why we have got only g double dash of phi t, and phi dash t being b minus a by 2, the factor is b minus a by 2 whole square.

(Refer Slide Time: 25:13)

When we look at the third derivative, f triple dash t is going to be equal to g triple dash of phi t with one more factor b minus a by 2, so you will get b minus a by 2 cube; and the fourth derivative will be the fourth derivative of g at phi t times b minus a by 2 raise to 4. Hence, the error is the fourth derivative of g evaluated at some point phi d in the interval a b, divided by 135, times b minus a by 2 raise to 5, and the Gauss two-point rule for the interval a b is b minus a by 2 into f x 0 plus f x 1, where x 0 is the Gauss point a plus b by 2 minus 1 by root 3 into b minus a by 2, and x 1 is a plus b by 2 plus 1 by root 3 into b minus a by 2.

So, x 0 and x 1 are symmetric about the midpoint. The error is the fourth derivative evaluated at some point, divided by 135, into b minus a by 2 raise to 5. So, we have got only two function evaluations. The formula is of the type w 0 f x 0 plus w 1 f x 1, with the weight w 0 equal to w 1 equal to b minus a by 2, and x 0 and x 1 are the Gauss points.
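The rule on a general interval can be sketched as follows (the test integrand is my own example):

```python
# Basic Gauss two-point rule on [a, b]: weights w0 = w1 = (b - a)/2,
# Gauss points symmetric about the midpoint (a + b)/2.
import math

def gauss2_ab(f, a, b):
    mid, half = (a + b) / 2, (b - a) / 2
    s = 1 / math.sqrt(3)
    x0, x1 = mid - s * half, mid + s * half   # Gauss points in [a, b]
    return half * (f(x0) + f(x1))

# Exact for cubics: the integral of x**3 on [0, 2] is 4.
print(gauss2_ab(lambda x: x**3, 0.0, 2.0))    # 4.0 (up to rounding)
```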
Now, we have found the basic Gauss two-point rule. What did we do for the trapezoidal rule, Simpson’s rule, midpoint rule, etcetera? We considered composite rules; that means we divided our interval a b into subintervals of equal length. On each subinterval, we applied our basic rule, and we added up the results to get integral a to b f x dx. So, now the same thing we are going to do for the Gauss two-point rule.

In the case of the composite rules which we studied earlier, we saw that if you have b minus a raise to k in the error formula for the basic rule, then when you add up the contributions from each subinterval, each contribution is some constant times h raise to k, where h is the length of the subinterval. We are adding n such terms, and our h is b minus a by n. So, we associate one h with the sum of n quantities and get a constant, but in the process we have lost one power of h. That cannot be helped; in a composite rule, that is going to happen.

So here, when we apply our Gauss two-point rule to, say, the interval t i to t i plus 1 of length h, we are going to have the term h by 2 raise to 5. In the basic rule, we had b minus a by 2 raise to 5; now it will be h by 2 raise to 5.

(Refer Slide Time: 28:48)

And one h will get lost. So, our composite Gauss two-point rule will have order of convergence equal to h raise to 4. Now, this type of argument we have used before, so I will go quickly through it, showing that for the composite Gauss two-point rule the order of convergence is going to be h raise to 4. So, consider a uniform partition of the interval a b; each subinterval t i to t i plus 1 will be of length h, which is b minus a by n.

In order to find the two Gauss points in the interval t i to t i plus 1, we have to look at the affine map from minus 1 to 1, which I denote by phi i. The value of phi i at t will be given by the midpoint t i plus t i plus 1 by 2, plus t times - earlier we had b minus a by 2 - t i plus 1 minus t i divided by 2, which is nothing but h by 2. Then in each interval, we are going to have two Gauss points. The Gauss points in the interval t i to t i plus 1 we denote by u 2 i plus 1, which is the image of minus 1 by root 3 under this affine map phi i, and u 2 i plus 2, which is the image of 1 by root 3 under the same map phi i.

So, thus the two Gauss points in the interval t i to t i plus 1 are given by t i plus t i plus 1 by 2 minus h by 2 root 3 - t i plus 1 minus t i being h - and t i plus t i plus 1 by 2 plus h by 2 root 3, and i is going to vary from 0, 1, up to n minus 1.
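The construction just described - n subintervals of length h, two Gauss points in each - can be sketched as follows, together with an empirical check that the error behaves like h raise to 4 (the test integrand e raise to x is my own choice):

```python
# Composite Gauss two-point rule on a uniform partition of [a, b]
# into n subintervals, using the 2n Gauss points built above.
import math

def composite_gauss2(f, a, b, n):
    h = (b - a) / n
    s = h / (2 * math.sqrt(3))       # offset of each Gauss point from the midpoint
    total = 0.0
    for i in range(n):
        mid = a + (i + 0.5) * h      # midpoint of [t_i, t_{i+1}]
        total += f(mid - s) + f(mid + s)
    return (h / 2) * total

f, exact = math.exp, math.e - 1      # integral of e**x over [0, 1]
e1 = abs(composite_gauss2(f, 0.0, 1.0, 10) - exact)
e2 = abs(composite_gauss2(f, 0.0, 1.0, 20) - exact)
print(e1 / e2)   # roughly 16: halving h divides the error by about 2**4
```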

(Refer Slide Time: 30:37)

So, you are going to have in all 2 n Gauss points. Next, integral a to b f x dx we split as the sum of integrals t i to t i plus 1 f x dx, i going from 0 to n minus 1, which is approximately equal to summation, i going from 0 to n minus 1, of h by 2 - that was our b minus a by 2 - times the value of f at u 2 i plus 1 plus the value of f at u 2 i plus 2. These are the two Gauss points in the interval t i to t i plus 1. When we look at the error, it will be summation, i going from 0 to n minus 1, of f 4 d i by 135 times h by 2 raise to 5. Assuming the fourth derivative of f to be continuous, we can replace summation i from 0 to n minus 1 of f 4 d i divided by n by f 4 eta.

h is b minus a by n; so, that n we associate with the sum of the f 4 d i to get f 4 eta, and n times h gives b minus a. One h is gone from h by 2 raise to 5, and the leftover 2 makes the 135 into 270. So, we have got f 4 eta times b minus a times h by 2 raise to 4, divided by 270. Now, compare this with the composite Simpson’s rule.

In the case of Simpson’s rule, we had three interpolation points: the two end points and the midpoint. We fit a parabola, so as such we expect that the error should be 0 for quadratic polynomials; but it is a property of even-degree interpolation that if your interpolation points are chosen symmetrically, you get one extra degree of exactness. That means we expect no error for quadratic polynomials, but there is also no error for cubic polynomials.

So, for Simpson’s rule, we achieve exactness for cubic polynomials with three points. For Gaussian quadrature, we achieve no error for cubic polynomials with just two points. Now, in each interval, we have got two Gauss points, so in total there are 2 n Gauss points. For Simpson’s rule, in each interval we have got three points, but the two end points are included among the Simpson points.

In the case of Gaussian quadrature, both interpolation points are interior points of the interval t i to t i plus 1. In Simpson’s rule, because the end points are interpolation points, they are common to the adjoining intervals, and hence the total number of points which comes into the picture for the composite Simpson’s rule is 2 n plus 1; for the Gaussian rule, we have got 2 n. There is no real difference between 2 n and 2 n plus 1, because when the value of n is large, 2 n and 2 n plus 1 are considered the same.

So, that means Simpson’s rule and the Gauss two-point rule are going to be on par when we compare the order of convergence and the number of function evaluations. When we compare two rules, that is generally our criterion: how many times do I need to evaluate the function, and what is the order of convergence. We also saw that the corrected trapezoidal rule needs the derivative values as well; so, that becomes an additional criterion. So: whether you need to evaluate only the function or also its derivatives, how many function evaluations, and the order of convergence. Based on these criteria, Simpson’s rule and the Gauss two-point rule are going to be on par.

(Refer Slide Time: 30:37)

Suppose our function f is only continuous. As I had remarked in the last lecture, the differentiability properties of the function are assumed for obtaining orders of convergence; but if I am interested only in knowing whether there is convergence or not, then we may not need differentiability. For example, in the case of the composite rectangle rule or the composite midpoint rule, we saw that our rule is nothing but a Riemann sum, and hence for a continuous function we had convergence. The same was the case for the trapezoidal rule: we can write the trapezoidal rule as the sum of two Riemann sums, both of which converge to integral a to b f x dx, and then we divide by 2.

In Simpson’s rule also, a similar thing can be done. In all these cases, what we were doing so far was fixing the degree of the polynomial and then applying it to small intervals. Now, for the Gaussian rule, can we instead increase the degree of the polynomial? That means, we have found the Gauss two points; are there Gauss three points, Gauss four points, and so on? Our method has been: replace the function by an interpolating polynomial - we know how to integrate a polynomial - and get a numerical quadrature rule. But we do not have a set of interpolation points which guarantees convergence of the interpolating polynomials for all continuous functions.

Now, just as we have defined the Gauss two points, we are going to define Gauss k points. So, we have got a set of rules: for any n, we can specify n Gauss points, and you fit a polynomial.

Now, this set of interpolating polynomials need not converge to the given function in the maximum norm; for all continuous functions, that is not possible. But what we are going to show is that even though the interpolating polynomials may not converge, our numerical quadrature is going to converge, and the important point in that is that the weights at the Gauss points are going to be bigger than 0.

So, that is what we are going to show: the Gaussian integration converges for all continuous functions; the only assumption is that the function f should be continuous. So, for that, we first have to define what the Gauss points are.

The Gauss two points were obtained as the roots of a certain quadratic polynomial, and that quadratic polynomial was perpendicular to the constant polynomial 1 and to the polynomial x. So, instead of applying the Gram-Schmidt orthonormalization process to the three functions 1, x, x square, we can apply it to n functions, obtain orthonormal polynomials, and then look at the roots of the appropriate orthonormal polynomial; those are going to be our Gauss points.
(Refer Slide Time: 40:14)

Those are known as the Legendre polynomials. So, recall that X is C a b, the vector space of continuous real-valued functions defined on the interval a b. We define the inner product of f and g to be integral a to b f x g x dx. Norm f 2 is the norm induced by this inner product: norm f 2 is the positive square root of the inner product of f with itself. Look at the functions f 0 x equal to 1, f 1 x equal to x, and in general f k x equal to x raise to k. The Gram-Schmidt orthonormalization process defines g 0 to be f 0 upon norm f 0, and for k equal to 1, 2, and so on, r k is defined as f k minus the summation, j going from 0 to k minus 1, of the inner product of f k with g j times g j; then g k is defined as r k divided by norm of r k. So, it is the same process we had considered before, where we had looked at the three functions f 0, f 1, f 2.
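The Gram-Schmidt process just described can be sketched in sympy for the interval minus 1 to 1 (a sketch of mine, not the lecturer's code); it reproduces the g 0, g 1, g 2 from earlier in the lecture, and the roots of g 2 are the Gauss two points:

```python
# Gram-Schmidt orthonormalization of 1, x, x**2, ... on [-1, 1],
# producing the orthonormal (Legendre-type) polynomials g_k.
import sympy as sp

x = sp.symbols('x')

def inner(f, g):
    return sp.integrate(f * g, (x, -1, 1))

def orthonormal(n):
    gs = []
    for k in range(n):
        # r_k = f_k - sum over j of <f_k, g_j> g_j, with f_k = x**k
        r = x**k - sum(inner(x**k, g) * g for g in gs)
        # g_k = r_k / ||r_k||
        gs.append(sp.expand(r / sp.sqrt(inner(r, r))))
    return gs

g0, g1, g2 = orthonormal(3)
print(g2)                 # a constant multiple of x**2 - 1/3
print(sp.solve(g2, x))    # roots ±1/sqrt(3) (printed as ±sqrt(3)/3)
```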

Now, we are looking at an infinite set; we construct such polynomials, and these are known as the Legendre polynomials, and the zeroes of g k are known as the Gauss points. So, we will show that the polynomial g k which we construct is going to be a polynomial of degree k; so, it is going to have k roots, but what is important is that it is going to have k distinct roots. Those roots, or zeroes, are the Gauss points, and those are going to be our interpolation points for fitting a polynomial of degree less than or equal to k minus 1.

Now, because of the orthogonality property, our g k is going to be a polynomial of degree k and it is going to be perpendicular to the functions 1, x, up to x raise to k minus 1; that means our g k is going to be perpendicular to any polynomial of degree less than or equal to k minus 1.

As I said, g k is going to have k distinct roots, so I can factorize it and write g k x as x minus x 0 into x minus x 1, up to into x minus x k minus 1, multiplied by some constant, the leading coefficient.

So, the orthogonality of g k to the functions 1, x, up to x raise to k minus 1 means the following: if I look at integral a to b of x minus x 0 into, up to, x minus x k minus 1 multiplied by the constant function 1, that will be 0; if I multiply by x and take the integral, that is going to be 0, and so on. It is this property of the interpolation points that allows us to obtain exactness for higher degree polynomials. So, that is what we are going to do next time; we have now defined these Gauss points.

We will consider Gaussian integration. Then in the Gaussian integration - the formula is of the type summation w i f x i, i going from 0 to n - we will show that these weights are bigger than 0, and using that, we are going to prove the convergence of the Gaussian rule.

So, we are going to do this, and then, we are going to consider some problems next time.
So, thank you.
