Among the references:
Eli Maor, e: The Story of a Number.
Simmons, Calculus with Analytic Geometry, New York, McGraw-Hill, 1985, pp. 734–739.
How Chebyshev polynomials arise
Set $y(x) = x^n - p^{ott}_{n-1}(x)$, where $p^{ott}_{n-1}$ is the unique polynomial of degree at most $n-1$ solving the minimum problem
\[ \min_{p \in P_{n-1}} \max_{[-1,1]} |x^n - p(x)|, \]
where $P_{n-1}$ is the set of all polynomials of degree less than or equal to $n-1$. Moreover, set $\lambda = \max_{[-1,1]} |y(x)|$.

Graphical considerations let us find such $p^{ott}_{n-1}$ for $n = 1, 2, 3$:
For $n = 1$ the given problem $\min_{p \in P_0} \max_{[-1,1]} |x - p(x)|$ has the obvious solution $p^{ott}_0(x) = 0$. Moreover, observe that $y(x_i) = (-1)^i\lambda$, $\lambda = 1$, $x_1 = -1$, $x_0 = 1$.
For $n = 2$ the problem $\min_{p \in P_1} \max_{[-1,1]} |x^2 - p(x)|$ is solved by $p^{ott}_1(x) = \frac12$. Moreover, $y(x_i) = (-1)^i\lambda$, $\lambda = \frac12$, $x_2 = -1$, $x_1 = 0$, $x_0 = 1$.
For $n = 3$ the solution $p^{ott}_2$ of the problem $\min_{p \in P_2} \max_{[-1,1]} |x^3 - p(x)|$ must be a straight line with positive slope which intersects $x^3$ in three distinct points whose abscissas are $\alpha_2 \in (-1, 0)$, $\alpha_1 = 0$, $\alpha_0 \in (0, 1)$, i.e. $p^{ott}_2(x) = \alpha x$ with $0 < \alpha < 1$. Consider the function $g(x) = x^3 - \alpha x$ in the interval $[-1, 1]$ and notice that $g'(x) = 3x^2 - \alpha$, and thus $g'(\pm\sqrt{\alpha/3}) = 0$, $g(\mp\sqrt{\alpha/3}) = \pm\frac{2\alpha}{3}\sqrt{\frac{\alpha}{3}}$. Moreover, $g(\pm 1) = \pm(1 - \alpha)$. So, we have to choose $\alpha \in (0, 1)$ so that $\max\{1 - \alpha,\, \frac{2\alpha}{3}\sqrt{\frac{\alpha}{3}}\}$ is minimum, i.e. $\alpha \in (0, 1)$ such that
\[ 1 - \alpha = \frac{2\alpha}{3}\sqrt{\frac{\alpha}{3}} \quad\Rightarrow\quad \alpha = \frac34. \]
Thus, $p^{ott}_2(x) = \frac34 x$. Moreover, observe that $y(x_i) = (-1)^i\lambda$, $\lambda = \frac14$, $x_3 = -1$, $x_2 = -\frac12$, $x_1 = \frac12$, $x_0 = 1$.
In general, Chebyshev–Tonelli theory states that $y(x) = x^n - p^{ott}_{n-1}(x)$ must assume the values $\lambda$ and $-\lambda$ alternately in $n+1$ points $x_j$ of $[-1,1]$, $-1 \le x_n < x_{n-1} < \ldots < x_2 < x_1 < x_0 \le 1$: $y(x_j) = (-1)^j\lambda$. Obviously $y'(x_i) = 0$, $i = 1, \ldots, n-1$, whereas $y'(x_0)\,y'(x_n) \ne 0$, since $y'(x)$, a polynomial of degree $n-1$, is already zero in $x_1, x_2, \ldots, x_{n-1}$. It follows that
\[ y(x)^2 - \lambda^2 = c\,(x^2 - 1)\,y'(x)^2 \]
for some real constant $c$. Noting that the coefficient of $x^{2n}$ is $1$ on the left and $cn^2$ on the right, we conclude that $c = 1/n^2$, i.e.
\[ \frac{n^2}{1 - x^2} = \frac{y'(x)^2}{\lambda^2 - y(x)^2}, \qquad \frac{n}{\sqrt{1 - x^2}} = \frac{\pm\,y'(x)}{\sqrt{\lambda^2 - y(x)^2}}. \]
The latter equality is solved by $y(x) = \lambda\cos(n\arccos x + c)$, $c \in \mathbb{R}$. Then the identity $y(1) = \lambda$ implies $c = 2k\pi$, and thus
\[ y(x) = x^n - p^{ott}_{n-1}(x) = \lambda\cos(n\arccos x), \quad -1 \le x \le 1. \]
Finally, observe that
\[ \cos(0\arccos x) = 1|_{[-1,1]} =: T_0(x)|_{[-1,1]}, \]
\[ \cos(\arccos x) = x|_{[-1,1]} =: T_1(x)|_{[-1,1]}, \]
\[ \cos(2\arccos x) = 2\cos(\arccos x)\cos(\arccos x) - \cos(0\arccos x) = 2x^2 - 1|_{[-1,1]} =: T_2(x)|_{[-1,1]}, \]
\[ \cos((j+1)\arccos x) = 2\cos(\arccos x)\cos(j\arccos x) - \cos((j-1)\arccos x) = 2xT_j(x) - T_{j-1}(x)|_{[-1,1]} =: T_{j+1}(x)|_{[-1,1]}. \]
Thus $\lambda = \frac{1}{2^{n-1}}$, because $T_n(x) = 2^{n-1}x^n + \cdots$. So, we have the important result:
\[ y(x) = x^n - p^{ott}_{n-1}(x) = \frac{1}{2^{n-1}}\cos(n\arccos x) = \frac{1}{2^{n-1}}\,T_n(x), \quad -1 \le x \le 1. \]
Let us see two examples. The already studied specific case $n = 3$ is now immediately obtained:
\[ y(x) = x^3 - p^{ott}_2(x) = \frac14\cos(3\arccos x) = \frac14(4x^3 - 3x) = x^3 - \frac34 x, \]
\[ y(x_j) = (-1)^j\frac14, \qquad p^{ott}_2(x) = x^3 - \Big(x^3 - \frac34 x\Big) = \frac34 x. \]
The cases $n > 3$ are analogously easily solved. In particular, for $n = 4$ we have
\[ y(x) = x^4 - p^{ott}_3(x) = \frac18\cos(4\arccos x) = \frac18(8x^4 - 8x^2 + 1) = x^4 - x^2 + \frac18, \]
\[ y(x_j) = (-1)^j\frac18, \qquad p^{ott}_3(x) = x^4 - \Big(x^4 - x^2 + \frac18\Big) = x^2 - \frac18. \]
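As a numerical sanity check of these two examples, the following plain-Python sketch (all helper names are ours) samples $|x^n - p(x)|$ on a fine grid of $[-1,1]$: the polynomials found above attain the predicted minimal deviation $\lambda = 1/2^{n-1}$, while a slightly different candidate does worse.

```python
import math

def T(n, x):
    # Chebyshev polynomial via the trigonometric definition, valid on [-1, 1]
    return math.cos(n * math.acos(max(-1.0, min(1.0, x))))

def max_dev(n, p):
    # max over a fine grid of |x^n - p(x)|
    xs = [-1 + 2 * k / 10000 for k in range(10001)]
    return max(abs(x**n - p(x)) for x in xs)

p2 = lambda x: 0.75 * x        # optimal for n = 3
p3 = lambda x: x * x - 0.125   # optimal for n = 4

assert abs(max_dev(3, p2) - 1 / 4) < 1e-6
assert abs(max_dev(4, p3) - 1 / 8) < 1e-6
assert max_dev(3, lambda x: 0.7 * x) > 1 / 4   # another line does worse
# and x^3 - p2(x) coincides with T_3(x)/4
for x in [-0.9, -0.3, 0.2, 0.8]:
    assert abs((x**3 - p2(x)) - T(3, x) / 4) < 1e-12
print("ok")
```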
Deflation

Let $A$ be an $n \times n$ matrix. Denote by $\lambda_i$, $i = 1, \ldots, n$, the eigenvalues of $A$ and by $y_i$ the corresponding eigenvectors. So, we have $Ay_i = \lambda_i y_i$, $i = 1, \ldots, n$. Assume that $\lambda_1, y_1$ are given and that $\lambda_1 \ne 0$. Choose $w \in \mathbb{C}^n$ such that $w^* y_1 \ne 0$ (given $y_1$, choose $w$ not orthogonal to $y_1$) and set
\[ W = A - \frac{\lambda_1}{w^* y_1}\,y_1 w^*. \]
It is known that the eigenvalues of $W$ are
\[ 0, \lambda_2, \ldots, \lambda_j, \ldots, \lambda_n, \]
i.e. they are the same as those of $A$, except $\lambda_1$, which is replaced with $0$. Let us prove this fact. Consider a matrix $S$ whose first column is $y_1$ and whose remaining columns $x_2, \ldots, x_n$ are chosen such that $S$ is non singular. Observe that
\[ S^{-1}AS = S^{-1}[\,Ay_1 \;\; Ax_2 \;\; \cdots \;\; Ax_n\,] = [\,\lambda_1 e_1 \;\; S^{-1}Ax_2 \;\; \cdots \;\; S^{-1}Ax_n\,]. \]
So, if we call $B$ the $(n-1) \times (n-1)$ lower right submatrix of $S^{-1}AS$, then $p_A(\lambda) = (\lambda_1 - \lambda)\,p_B(\lambda)$. But we also have
\[ S^{-1}WS = S^{-1}AS - S^{-1}\frac{\lambda_1}{w^* y_1}\,y_1 w^* S
= \begin{bmatrix} \lambda_1 & c^T \\ 0 & B \end{bmatrix} - \frac{\lambda_1}{w^* y_1}\,e_1\,[\,w^* y_1 \;\; w^* x_2 \;\; \cdots \;\; w^* x_n\,]
= \begin{bmatrix} \lambda_1 & c^T \\ 0 & B \end{bmatrix} - \begin{bmatrix} \lambda_1 & d^T \\ 0 & O \end{bmatrix}
= \begin{bmatrix} 0 & c^T - d^T \\ 0 & B \end{bmatrix}, \]
and thus the identity $p_W(\lambda) = -\lambda\,p_B(\lambda)$, from which the thesis follows.
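A minimal numerical illustration of the deflation formula (plain Python, real arithmetic, so $w^*y_1$ is an ordinary dot product; the example matrix is ours): for $A = \begin{bmatrix}2&1\\1&2\end{bmatrix}$, with $\lambda_1 = 3$, $y_1 = (1,1)^T$ and the choice $w = y_1$, the matrix $W$ must have eigenvalues $0$ and $1$.

```python
A = [[2.0, 1.0], [1.0, 2.0]]        # eigenvalues 3 and 1
lam1, y1 = 3.0, [1.0, 1.0]          # known eigenpair
w = y1[:]                           # choice w = y1 (A symmetric => W symmetric)

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
coef = lam1 / dot(w, y1)
# W = A - (lambda1 / (w* y1)) y1 w*
W = [[A[i][j] - coef * y1[i] * w[j] for j in range(2)] for i in range(2)]

# eigenvalues of a 2x2 matrix from trace and determinant
tr = W[0][0] + W[1][1]
det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
disc = (tr * tr - 4 * det) ** 0.5
eigs = sorted([(tr - disc) / 2, (tr + disc) / 2])
print(eigs)   # lambda1 = 3 is replaced by 0, the eigenvalue 1 survives
```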
Let $w_1, w_2, \ldots, w_j, \ldots, w_n$ be the corresponding eigenvectors of $W$ ($Ww_1 = 0$, $Ww_j = \lambda_j w_j$, $j = 2, \ldots, n$). Is it possible to obtain the $w_j$ from the $y_j$?

First observe that
\[ Ay_1 = \lambda_1 y_1 \;\Rightarrow\; Wy_1 = 0: \quad w_1 = y_1. \tag{a} \]
Then, for $j = 2, \ldots, n$,
\[ Wy_j = Ay_j - \frac{\lambda_1}{w^* y_1}\,y_1 w^* y_j = \lambda_j y_j - \lambda_1\frac{w^* y_j}{w^* y_1}\,y_1. \tag{1} \]
If we impose $y_j = w_j + cy_1$, $j = 2, \ldots, n$, then (1) becomes
\[ Ww_j + cWy_1 = \lambda_j w_j + c\lambda_j y_1 - \lambda_1\frac{w^* w_j}{w^* y_1}\,y_1 - c\lambda_1 y_1 = \lambda_j w_j + y_1\Big[c\lambda_j - \lambda_1\frac{w^* w_j}{w^* y_1} - \lambda_1 c\Big]. \]
So, if $\lambda_j \ne \lambda_1$ and
\[ w_j = y_j - \frac{\lambda_1}{\lambda_j - \lambda_1}\,\frac{w^* w_j}{w^* y_1}\,y_1, \tag{2} \]
then $Ww_j = \lambda_j w_j$. If, moreover, $\lambda_j \ne 0$, then $w^* y_j = w^* w_j + \frac{\lambda_1}{\lambda_j - \lambda_1}\,w^* w_j = \frac{\lambda_j}{\lambda_j - \lambda_1}\,w^* w_j$. So, by (2),

for all $j \in \{2, \ldots, n\}$ such that $\lambda_j \ne \lambda_1, 0$:
\[ Ay_j = \lambda_j y_j \;\Rightarrow\; W\Big(y_j - \frac{\lambda_1}{\lambda_j}\frac{w^* y_j}{w^* y_1}\,y_1\Big) = \lambda_j\Big(y_j - \frac{\lambda_1}{\lambda_j}\frac{w^* y_j}{w^* y_1}\,y_1\Big): \quad w_j = y_j - \frac{\lambda_1}{\lambda_j}\frac{w^* y_j}{w^* y_1}\,y_1. \tag{b} \]
Note that a formula for $y_j$ in terms of $w_j$ also holds: see (2).
As regards the case $\lambda_j = \lambda_1$, it is simple to show that

for all $j \in \{2, \ldots, n\}$ such that $\lambda_j = \lambda_1$:
\[ Ay_j = \lambda_j y_j \;\Rightarrow\; W\Big(y_j - \frac{w^* y_j}{w^* y_1}\,y_1\Big) = \lambda_j\Big(y_j - \frac{w^* y_j}{w^* y_1}\,y_1\Big): \quad w_j = y_j - \frac{w^* y_j}{w^* y_1}\,y_1. \tag{c} \]
Note that the vectors $y_j - \frac{w^* y_j}{w^* y_1}\,y_1$ are orthogonal to $w$. Is it possible to find from (c) an expression of $y_j$ in terms of $w_j$?
It remains the case $\lambda_j = 0$: find the ? in

for all $j \in \{2, \ldots, n\}$ such that $\lambda_j = 0$:
\[ Ay_j = \lambda_j y_j = 0 \;\Rightarrow\; W(?) = \lambda_j(?) = 0: \quad w_j = \,? \tag{d?} \]
($y_j = w_j - \frac{w^* w_j}{w^* y_1}\,y_1 \;\Rightarrow\; w^* y_j = 0$) $\ldots$
Choices of $w$. Since $y_1^* y_1 \ne 0$, one can set $w = y_1$. In this way, if $A$ is hermitian, also $W$ is hermitian. If $i$ is such that $(y_1)_i \ne 0$, then $e_i^T Ay_1 = \lambda_1(y_1)_i \ne 0$. So one can set $w^* = e_i^T A = $ row $i$ of $A$. In this way the row $i$ of $W$ is null, and therefore we can introduce a matrix of order $n-1$ whose eigenvalues are $\lambda_2, \ldots, \lambda_n$ (the unknown eigenvalues of $A$).
Exercise on deflation

The matrix
\[ G = \begin{bmatrix} \frac14 & \frac14 & \frac12 \\[2pt] \frac34 & \frac18 & \frac18 \\[2pt] \frac{11}{16} & \frac14 & \frac{1}{16} \end{bmatrix} \]
satisfies the identity $Ge = e$, $e = [1\;1\;1]^T$. So, $G$ has the eigenvalue $1$ with corresponding eigenvector $e$. Moreover, the matrix
\[ W = G - \frac{1}{w^* e}\,e\,w^* = G - \frac{1}{e_i^T Ge}\,e\,e_i^T G = G - e\,e_i^T G, \]
for any $i = 1, 2, 3$, has $0, \lambda_2, \lambda_3$ as eigenvalues. For $i = 1$ we obtain
\[ W = \begin{bmatrix} 0 & 0 & 0 \\[2pt] \frac12 & -\frac18 & -\frac38 \\[2pt] \frac{7}{16} & 0 & -\frac{7}{16} \end{bmatrix}, \]
thus the remaining eigenvalues of $G$ are $-\frac18$ and $-\frac{7}{16}$.
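This computation can be reproduced in exact arithmetic with Python's `fractions` (a sketch, names ours): build $W = G - e\,e_1^TG$ and check $-\frac18$ and $-\frac{7}{16}$ against the characteristic polynomial of the trailing $2 \times 2$ block.

```python
from fractions import Fraction as F

G = [[F(1, 4), F(1, 4), F(1, 2)],
     [F(3, 4), F(1, 8), F(1, 8)],
     [F(11, 16), F(1, 4), F(1, 16)]]

assert all(sum(row) == 1 for row in G)    # Ge = e

# W = G - e e_1^T G : subtract row 1 of G from every row
W = [[G[i][j] - G[0][j] for j in range(3)] for i in range(3)]
assert W[0] == [0, 0, 0]                  # row 1 of W is null

# since row 1 of W vanishes, the other eigenvalues are those of the
# trailing 2x2 block; check -1/8 and -7/16 on its characteristic polynomial
M = [[W[1][1], W[1][2]], [W[2][1], W[2][2]]]
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
for lam in (F(-1, 8), F(-7, 16)):
    assert lam * lam - tr * lam + det == 0
print(tr, det)
```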
Now observe that $1$, $\lambda_2 = -\frac18$, $\lambda_3 = -\frac{7}{16}$ are eigenvalues also of $G^T$. In particular, there exists $p$ such that $G^T p = p$, but $p$ has to be computed. The following inverse power iterations
\[ v_0,\; \|v_0\|_1 = 1, \qquad a_k = (G^T - (1+\varepsilon)I)^{-1}v_k, \qquad v_{k+1} = a_k/\|a_k\|_1, \;\ldots \]
generate $v_k$ convergent to $p$, $\|p\|_1 = 1$, with a convergence rate $O\big(\big(\tfrac{\varepsilon}{1+\varepsilon+\frac18}\big)^k\big)$.
One eigenvalue at a time with power iterations
Assume $A$ diagonalizable, with eigenvalues $\lambda_j$ such that $|\lambda_1| > |\lambda_k|$, $k = 2, \ldots, n$. Let $v \ne 0$ be a vector, $v = \sum_j \alpha_j x_j$, with $x_j$ eigenvectors of $A$ and $\alpha_1 \ne 0$. Then
\[ A^k v = \sum_j \alpha_j A^k x_j = \sum_j \alpha_j\lambda_j^k x_j, \qquad \frac{1}{\lambda_1^k}A^k v = \alpha_1 x_1 + \sum_{j \ne 1}\alpha_j\frac{\lambda_j^k}{\lambda_1^k}\,x_j. \]
Thus
\[ \frac{1}{\lambda_1^k}\,z^* A^k v = \alpha_1 z^* x_1 + \sum_{j \ne 1}\alpha_j\frac{\lambda_j^k}{\lambda_1^k}\,z^* x_j, \qquad \frac{1}{\lambda_1^{k+1}}\,z^* A^{k+1} v = \alpha_1 z^* x_1 + \sum_{j \ne 1}\alpha_j\frac{\lambda_j^{k+1}}{\lambda_1^{k+1}}\,z^* x_j, \]
\[ \frac{z^* A^{k+1} v}{z^* A^k v} \to \lambda_1, \quad k \to \infty. \]
So, if an eigenvalue dominates the other eigenvalues, then it can be approximated better and better by computing the quantities
\[ Av,\;\; \frac{z^* Av}{z^* v},\;\; A^2v = A(Av),\;\; \frac{z^* A^2v}{z^* Av},\;\; A^3v = A(A^2v),\;\; \frac{z^* A^3v}{z^* A^2v},\;\; \ldots \]
It is clear that each new approximation requires one more matrix–vector multiplication by $A$.
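A minimal sketch of these quotients in plain Python (example matrix and vectors are ours; in practice one normalizes $A^kv$ at each step to avoid overflow):

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

dot = lambda u, v: sum(a * b for a, b in zip(u, v))

A = [[2.0, 1.0], [1.0, 2.0]]     # eigenvalues 3 and 1
v, z = [1.0, 0.0], [1.0, 0.0]

Akv = v[:]
for k in range(40):
    Anext = matvec(A, Akv)               # one multiplication per new estimate
    ratio = dot(z, Anext) / dot(z, Akv)  # z* A^{k+1} v / z* A^k v
    Akv = Anext
print(ratio)   # close to 3, the dominant eigenvalue
```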
Positive definite matrices and the choice $w = y_1$

Let $A$ be a positive definite $n \times n$ matrix and let $\lambda_j, y_j$ be such that $Ay_j = \lambda_j y_j$. Assume that $0 < \lambda_n < \lambda_{n-1} < \cdots < \lambda_2 < \lambda_1$. Then compute $\lambda_1$ via power iterations, and $y_1$ from a weak approximation $\tilde\lambda_1$ of $\lambda_1$ via inverse power iterations, both applied to $A$. Then the eigenvalues of
\[ A - \frac{\lambda_1}{\|y_1\|_2^2}\,y_1 y_1^* \]
are $0, \lambda_n, \lambda_{n-1}, \ldots, \lambda_2$. Compute $\lambda_2$ via power iterations, and $y_2$ from a weak approximation $\tilde\lambda_2$ of $\lambda_2$ via inverse power iterations, both applied to $A - \frac{\lambda_1}{\|y_1\|_2^2}y_1 y_1^*$. Then the eigenvalues of
\[ \Big(A - \frac{\lambda_1}{\|y_1\|_2^2}\,y_1 y_1^*\Big) - \frac{\lambda_2}{\|y_2\|_2^2}\,y_2 y_2^* \]
are $0, 0, \lambda_n, \ldots, \lambda_3$. Proceeding in this way, the eigenvalues of
\[ \Big(\cdots\Big(A - \frac{\lambda_1}{\|y_1\|_2^2}\,y_1 y_1^*\Big) - \frac{\lambda_2}{\|y_2\|_2^2}\,y_2 y_2^* \cdots\Big) - \frac{\lambda_n}{\|y_n\|_2^2}\,y_n y_n^* \]
are $0, 0, \ldots, 0$. It follows that
\[ A = \sum_{j=1}^n \frac{\lambda_j}{\|y_j\|_2^2}\,y_j y_j^* = QDQ^*, \qquad Q = \Big[\tfrac{1}{\|y_1\|_2}y_1 \;\; \tfrac{1}{\|y_2\|_2}y_2 \;\; \cdots \;\; \tfrac{1}{\|y_n\|_2}y_n\Big]. \]
Note that the matrix $Q$ is unitary (eigenvectors corresponding to distinct eigenvalues of a hermitian matrix must be orthogonal).
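The whole procedure (power iteration, then deflation with $w = y_j$, repeated) can be sketched in plain Python on a small symmetric example of ours; the inverse power refinement is skipped and the Rayleigh quotient of the normalized iterate is used instead.

```python
import math

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def power(A, v, iters=200):
    # normalized power iteration; returns (Rayleigh quotient, unit vector)
    for _ in range(iters):
        w = matvec(A, v)
        nrm = math.sqrt(dot(w, w))
        v = [x / nrm for x in w]
    return dot(v, matvec(A, v)), v

A = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]   # eigenvalues 2 + sqrt(2), 2, 2 - sqrt(2)
B = [row[:] for row in A]
pairs = []
for _ in range(3):
    lam, y = power(B, [1.0, 0.3, 0.1])
    pairs.append((lam, y))
    # deflate: B <- B - (lam/||y||^2) y y^T  (here ||y||_2 = 1)
    B = [[B[i][j] - lam * y[i] * y[j] for j in range(3)] for i in range(3)]

# reconstruct A = sum_j lambda_j y_j y_j^T
R = [[sum(l * y[i] * y[j] for l, y in pairs) for j in range(3)] for i in range(3)]
err = max(abs(R[i][j] - A[i][j]) for i in range(3) for j in range(3))
print([round(l, 6) for l, _ in pairs], err)
```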
The QR method for $2 \times 2$ matrices

Set
\[ A = \begin{bmatrix} x & y \\ z & w \end{bmatrix}, \quad x, y, w, z \in \mathbb{R}. \]
Choose $\alpha, \beta \in \mathbb{R}$ such that $\alpha^2 + \beta^2 = 1$ and $[Q_1 A]_{21} = 0$, where
\[ Q_1 = \begin{bmatrix} \alpha & \beta \\ -\beta & \alpha \end{bmatrix}, \]
i.e. $\alpha = \frac{x}{\sqrt{x^2+z^2}}$, $\beta = \frac{z}{\sqrt{x^2+z^2}}$. Then
\[ Q_1 A = \begin{bmatrix} \sqrt{x^2+z^2} & \frac{xy+zw}{\sqrt{x^2+z^2}} \\[4pt] 0 & \frac{-zy+xw}{\sqrt{x^2+z^2}} \end{bmatrix} =: R. \]
Now define the matrix $B = RQ_1^T$:
\[ B = \begin{bmatrix} x + \frac{z(xy+zw)}{x^2+z^2} & -z + \frac{x(xy+zw)}{x^2+z^2} \\[4pt] \frac{z(xw-zy)}{x^2+z^2} & \frac{x(xw-zy)}{x^2+z^2} \end{bmatrix}. \]
Note that $B = Q_1 A Q_1^T$, $Q_1^T = Q_1^{-1}$, so that $B$ has the same eigenvalues as $A$. (Moreover, $B$ is real symmetric if $A$ is real symmetric.)
So, by setting $x_0 = x$, $y_0 = y$, $w_0 = w$, $z_0 = z$, we can define the four sequences
\[ x_{k+1} = x_k + \frac{z_k(x_k y_k + z_k w_k)}{x_k^2 + z_k^2}, \qquad y_{k+1} = -z_k + \frac{x_k(x_k y_k + z_k w_k)}{x_k^2 + z_k^2}, \]
\[ z_{k+1} = \frac{z_k(x_k w_k - z_k y_k)}{x_k^2 + z_k^2}, \qquad w_{k+1} = \frac{x_k(x_k w_k - z_k y_k)}{x_k^2 + z_k^2}, \qquad k = 0, 1, 2, \ldots, \]
which satisfy (by the theory of the QR method) the properties
\[ z_k \to 0, \qquad x_k,\, w_k \to \text{eigenvalues of } A, \qquad k \to +\infty, \]
provided the eigenvalues of $A$ are distinct in modulus (try to prove this assertion). For example, if $x = w = 2$ and $y = z = 1$, then $x_1 = \frac{14}{5}$, $y_1 = \frac35$, $w_1 = \frac65$, $z_1 = \frac35$, $x_2 = \frac{122}{41}$, $y_2 = \frac{9}{41}$, $w_2 = \frac{42}{41}$, $z_2 = \frac{9}{41}$, $\ldots$. It is clear that $x_k$ and $w_k$ tend to $3$ and $1$, the eigenvalues of $A$.
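These recurrences are easy to check in exact rational arithmetic; a sketch with Python's `fractions` (the function name is ours) reproduces the values above and the convergence of $x_k, w_k$ to $3$ and $1$:

```python
from fractions import Fraction as F

def qr_step(x, y, z, w):
    # one step of the four recurrences above
    n2 = x * x + z * z
    u, s = x * y + z * w, x * w - z * y
    return x + z * u / n2, -z + x * u / n2, z * s / n2, x * s / n2

x, y, z, w = F(2), F(1), F(1), F(2)
x, y, z, w = qr_step(x, y, z, w)
assert (x, y, z, w) == (F(14, 5), F(3, 5), F(3, 5), F(6, 5))
x, y, z, w = qr_step(x, y, z, w)
assert (x, y, z, w) == (F(122, 41), F(9, 41), F(9, 41), F(42, 41))

# iterate in floating point: z_k -> 0, x_k -> 3, w_k -> 1
xf, yf, zf, wf = 2.0, 1.0, 1.0, 2.0
for _ in range(30):
    xf, yf, zf, wf = qr_step(xf, yf, zf, wf)
print(xf, wf, zf)
```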
Some results on matrix algebras
Given an $n \times n$ matrix $X$, set
\[ K_X = \{A : AX - XA = 0\}, \qquad P(X) = \{p(X) : p \text{ polynomial}\}. \]
Note that $P(X) \subset K_X$, and
\[ P(X) = K_X \iff \dim P(X) = \dim K_X = n. \]
Let $Z$ denote the $n \times n$ shift-forward matrix, i.e. $[Z]_{ij} = 1$ if $i = j+1$, and $[Z]_{ij} = 0$ otherwise. Note that
\[ K_Z = P(Z) = \{\text{lower triangular Toeplitz matrices}\}, \]
\[ K_{Z^T} = P(Z^T) = \{\text{upper triangular Toeplitz matrices}\}, \]
\[ K_{Z^T + e_n e_1^T} = P(Z^T + e_n e_1^T) = \{\text{circulant matrices}\}, \]
\[ K_{Z^T + Z} = P(Z^T + Z) = \{\tau \text{ matrices}\}, \]
\[ \{\text{symmetric circulant matrices}\} = P(Z^T + Z + e_n e_1^T + e_1 e_n^T) \subset K_{Z^T + Z + e_n e_1^T + e_1 e_n^T} = \{A + JB : A, B \text{ circulant matrices}\} \]
($e_i^T J = e_{n-i+1}^T$, $i = 1, \ldots, n$, $J$ = counteridentity).
Set $X = Z + Z^T$. Then the condition $AX = XA$, $A = (a_{ij})_{i,j=1}^n$, is equivalent to the $n^2$ conditions
\[ a_{i,j-1} + a_{i,j+1} = a_{i-1,j} + a_{i+1,j}, \quad 1 \le i, j \le n, \]
where $a_{i,0} = a_{i,n+1} = a_{0,j} = a_{n+1,j} = 0$. Thus a generic matrix of $\tau$ has the form (in the case $n = 5$):
\[ \begin{bmatrix}
a & b & c & d & e \\
b & a+c & b+d & c+e & d \\
c & b+d & a+c+e & b+d & c \\
d & c+e & b+d & a+c & b \\
e & d & c & b & a
\end{bmatrix}. \]
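A quick check in plain Python (the sample entries are ours) that this generic matrix indeed commutes with $X = Z + Z^T$:

```python
a, b, c, d, e = 2, 3, 5, 7, 11      # arbitrary sample entries
A = [[a, b, c, d, e],
     [b, a + c, b + d, c + e, d],
     [c, b + d, a + c + e, b + d, c],
     [d, c + e, b + d, a + c, b],
     [e, d, c, b, a]]
n = 5
X = [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]  # Z + Z^T

def mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

assert mul(A, X) == mul(X, A)       # AX = XA: A commutes with Z + Z^T
print("commutes")
```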
Since $XS = SD$, with $S_{ij} = \sqrt{\frac{2}{n+1}}\sin\frac{ij\pi}{n+1}$ ($S^2 = I$) and $D = \mathrm{diag}\big(2\cos\frac{j\pi}{n+1}\big)$, and matrices from $\tau$ are determined by their first row $z^T$, we have the representation
\[ \tau(z) = S\,d(Sz)\,d(Se_1)^{-1}S \]
($\tau(z)$ = the matrix of $\tau$ whose first row is $z^T$).

Given a generic non singular matrix $M$, we have the representation
\[ \{M d(z) M^{-1} : z \in \mathbb{C}^n\} = \{M d(z) d(M^T v)^{-1} M^{-1} : z \in \mathbb{C}^n\} \]
for any vector $v$ such that $(M^T v)_j \ne 0$ $\forall j$ (note that $v^T M d(z) d(M^T v)^{-1} M^{-1} = z^T M^{-1}$). For $M$ = Fourier, sine matrices, one can choose $v = e_1$ (so circulant and $\tau$ matrices are determined by their first row). But there are significant matrices $M$ (associated with fast discrete transforms) for which $v$ cannot be chosen equal to $e_1$ (i.e. the matrices diagonalized by $M$ are not determined by their first row).
An example of a matrix algebra which is not commutative is $L = \{A + JB : A, B \text{ circulants}\}$. The best approximation (in the Frobenius norm) in $L$ of a given matrix $A$, call it $L_A$, is well defined. It is known that $L_A$ is hermitian any time $A$ is hermitian. But it is not known if (in case $A$ is hermitian) $z^*Az > 0$ $\forall z \ne 0$ implies $z^*L_A z > 0$ $\forall z \ne 0$.
Assume $\{t_k\}_{k=0}^{+\infty}$, $t_k \in \mathbb{R}$, is such that
\[ \sum_{k=0}^{+\infty}|t_k| < +\infty. \tag{1} \]
Set $t(\theta) = \sum_{k=-\infty}^{+\infty} t_{|k|}e^{\mathrm{i}k\theta}$, $t_{\min} = \min t(\theta)$, $t_{\max} = \max t(\theta)$. Then the eigenvalues of $T^{(n)} = (t_{|i-j|})_{i,j=1}^n$ are in the interval $[t_{\min}, t_{\max}]$ for all $n$ (proof omitted).
Let $C_{T^{(n)}}$ be the best circulant approximation of $T^{(n)}$. Since
\[ C_{T^{(n)}} = F\,\mathrm{diag}\big((F^*T^{(n)}F)_{ii}\big)F^*, \qquad F_{ij} = \frac{1}{\sqrt{n}}\,\omega_n^{(i-1)(j-1)}, \quad \omega_n = e^{\mathrm{i}2\pi/n}, \]
we have
\[ t_{\min} \le \lambda_{\min}(T^{(n)}) \le \lambda_{\min}(C_{T^{(n)}}), \qquad \lambda_{\max}(C_{T^{(n)}}) \le \lambda_{\max}(T^{(n)}) \le t_{\max}. \]
In particular, if
\[ t_{\min} > 0, \tag{2} \]
then the $T^{(n)}$ and the $C_{T^{(n)}}$ are positive definite, and $\mu_2(C_{T^{(n)}}) \le \mu_2(T^{(n)}) \le \frac{t_{\max}}{t_{\min}}$; moreover, if $E_n E_n^T = C_{T^{(n)}}$, and $\gamma_j^{(n)}$ and $\delta_j^{(n)}$ are the eigenvalues, respectively, of $I - E_n^{-1}T^{(n)}E_n^{-T}$ and $C_{T^{(n)}} - T^{(n)}$ in nondecreasing order, then
\[ \frac{1}{t_{\max}}|\delta_j^{(n)}| \le \frac{1}{\lambda_{\max}(C_{T^{(n)}})}|\delta_j^{(n)}| \le |\gamma_j^{(n)}| \le \frac{1}{\lambda_{\min}(C_{T^{(n)}})}|\delta_j^{(n)}| \le \frac{1}{t_{\min}}|\delta_j^{(n)}| \tag{2.5} \]
(apply the Courant–Fischer minimax characterization of the eigenvalues of a real symmetric matrix to $I - E_n^{-1}T^{(n)}E_n^{-T}$).
Theorem. If (1) holds, then the eigenvalues of $C_{T^{(n)}} - T^{(n)}$ are clustered around $0$. If (1) and (2) hold, then the eigenvalues of $I - C_{T^{(n)}}^{-1}T^{(n)}$ are clustered around $0$.
Proof. For the sake of simplicity, set $T = T^{(n)}$. Fix a number $N$, $n > 2N$, and let $W^{(N)}$ and $E^{(N)}$ be the $n \times n$ matrices defined by
\[ [W^{(N)}]_{ij} = \begin{cases} [C_T - T]_{ij} & i, j \le n - N, \\ 0 & \text{otherwise}, \end{cases} \]
and
\[ C_T - T = E^{(N)} + W^{(N)}. \tag{3} \]
Note that $[C_T]_{1j} = \big((n-j+1)t_{j-1} + (j-1)t_{n-j+1}\big)/n$, $j = 1, \ldots, n$, and thus, for $i, j = 1, \ldots, n$, we have
\[ [C_T - T]_{ij} = -s_{|i-j|}\,\frac{|i-j|}{n}, \qquad s_k = t_k - t_{n-k}. \]
Now observe that the rank of $E^{(N)}$ is less than or equal to $2N$, so $E^{(N)}$ has at least $n - 2N$ null eigenvalues. Also observe that $C_T - T$, $E^{(N)}$ and $W^{(N)}$ are all real symmetric matrices. In the following we prove that, for any fixed $\varepsilon > 0$, there exist $N_\varepsilon$ and $\nu_\varepsilon$ such that
\[ \|W^{(N_\varepsilon)}\|_1 < \varepsilon \quad \forall\, n > \nu_\varepsilon. \tag{4} \]
As a consequence of this fact and of the identity (3) for $N = N_\varepsilon$, we shall have that for all $n > \nu_\varepsilon$ at least $n - 2N_\varepsilon$ eigenvalues of $C_T - T$ are in $(-\varepsilon, \varepsilon)$. Moreover, if $t_{\min} > 0$, then, by (2.5), we shall also obtain the clustering around $0$ of the eigenvalues of $I - C_T^{-1}T$.
So, let us prove (4). First we have
\[ \|W^{(N)}\|_1 \le \frac{2}{n}\sum_{j=1}^{n-N-1} j|s_j| \le 2\sum_{j=N+1}^{n-1}|t_j| + \frac{2}{n}\sum_{j=1}^{N} j|t_j|. \tag{5} \]
Then, for any $\varepsilon > 0$, choose $N_\varepsilon$ such that $2\sum_{j=N_\varepsilon+1}^{+\infty}|t_j| < \frac{\varepsilon}{2}$ and set $N = N_\varepsilon$; there exists $\nu_\varepsilon$ such that, for all $n > \nu_\varepsilon$, $\frac{2}{n}\sum_{j=1}^{N_\varepsilon} j|t_j| < \frac{\varepsilon}{2}$ (the sequence $\frac{1}{n}\sum_{j=1}^{n-1} j|t_j|$ tends to $0$ if (1) holds); then by (5) we have the thesis (4).
Are you using the following algorithm (the first one at p. 18 of the article), which directly computes a sequence of vectors $x_k$ convergent to a vector $x$ such that $p = \frac{1}{\|x\|_1}x$? If you are not using it, then read it carefully and implement it accurately, answering the questions you will find below.
\[ x_{k+1} = x_k + F\big(I - \sqrt{n}\,d(Fc)\big)^{-1}F^*\big(v - A^T x_k\big) \]
\[ F_{s,j} = \frac{1}{\sqrt{n}}\,\omega_n^{(s-1)(j-1)}, \quad s, j = 1, \ldots, n, \qquad \omega_n = e^{\mathrm{i}2\pi/n} \]
\[ d(z) = \mathrm{diag}(z_1, z_2, \ldots, z_n) \]
\[ c = [c_0\; c_1\; \cdots\; c_{n-1}]^T, \quad c_0 = s_0/n = 0, \quad c_i = (s_i + s_{-n+i})/n, \; i = 1, \ldots, n-1, \]
\[ s_1 = \sum_{i=1}^{n-1}[P]_{i,i+1}, \qquad s_{-1} = \sum_{i=1}^{n-1}[P]_{i+1,i}, \qquad \ldots \]
So, every time $n$ is a power of $2$: compute the $c_j$, $j = 0, \ldots, n-1$ ($c_0 = 0$); compute $Fc$; compute $\overline{Fc}$ (the conjugate vector of $Fc$); compute the diagonal matrix $D = (I - \sqrt{n}\,d(Fc))^{-1}$. Then, for each $k = 0, 1, \ldots$, compute
\[ x_{k+1} = x_k + FDF^*\big(v - (I - P^T)x_k\big) \]
(choosing $x_0 = v = [1/n\; \cdots\; 1/n]^T$).

Note that there exists a permutation matrix $Q$ such that $F^* = QF$, $F = QF^*$; have you used this fact to compute $x_{k+1}$? So $F^*z$ is simply a permutation of $Fz$ (and vice versa); does the FFT you have compute $Fz$ or $F^*z$?
The vectors $x_k$ should converge to a vector $x$ which, once normalized, should coincide with the page-rank vector $p$, i.e. $p = \frac{1}{\|x\|_1}x$.

Can you write for me, in detail, the three stopping criteria you use? The one for the power method should differ from those used for RE and for preconditioned RE, because the vectors generated by the power method are already normalized.
\[ y'(t) = -\frac{1}{2y(t)}, \quad y(0) = 1 \qquad \Big(y(t) = \sqrt{1-t}\Big) \]
\[ \sqrt{1-t} = \frac{1}{\sqrt{p}} \iff t = 1 - \frac{1}{p} \]
Integrate the Cauchy problem in $[0,\, 1 - \frac{1}{p}]$ to obtain an approximation of $\frac{1}{\sqrt{p}}$.

$p = 3$: Euler for $h = \frac13$, two steps; for $h = \frac16$, four steps.
\[ \varphi(x_i + h) = \varphi(x_i) + hf(x_i, \varphi(x_i)) = \varphi(x_i) - h\,\frac{1}{2\varphi(x_i)} \]
\[ \varphi\Big(0 + \frac13\Big) = \varphi(0) - \frac13\cdot\frac{1}{2\varphi(0)} = 1 - \frac16 = \frac56 \]
\[ \varphi\Big(\frac13 + \frac13\Big) = \varphi\Big(\frac13\Big) - \frac13\cdot\frac{1}{2\varphi(\frac13)} = \frac56 - \frac15 = \frac{19}{30} \]
Same with implicit Euler: $h = \frac13$ not ok; $h = \frac16$ ok?
\[ \varphi(x_i + h) = \varphi(x_i) + hf(x_i + h, \varphi(x_i + h)) = \varphi(x_i) - h\,\frac{1}{2\varphi(x_i + h)} \]
\[ \Rightarrow\; \varphi(x_i + h)^2 - \varphi(x_i + h)\varphi(x_i) + \frac{h}{2} = 0 \]
\[ \Rightarrow\; \varphi(x_i + h) = \frac12\Big(\varphi(x_i) + \sqrt{\varphi(x_i)^2 - 2h}\Big) \]
\[ \varphi\Big(0 + \frac13\Big) = \frac12\Big(\varphi(0) + \sqrt{\varphi(0)^2 - \frac23}\Big) = \frac12\Big(1 + \sqrt{1 - \frac23}\Big) = \frac12 + \frac{1}{2\sqrt{3}} \]
\[ \varphi\Big(\frac13 + \frac13\Big) = \frac12\Bigg(\varphi\Big(\frac13\Big) + \sqrt{\varphi\Big(\frac13\Big)^2 - \frac23}\Bigg) \quad \text{not real!} \]
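The two computations can be replayed exactly (explicit Euler with Python's `fractions`) and in floating point (implicit Euler), including the breakdown at $h = \frac13$ and the successful run at $h = \frac16$; a sketch:

```python
from fractions import Fraction as F
import math

f = lambda y: -1 / (2 * y)           # right-hand side of y' = -1/(2y)

# explicit Euler, h = 1/3, two steps on [0, 2/3]
h, phi = F(1, 3), F(1)
for _ in range(2):
    phi = phi + h * f(phi)
assert phi == F(19, 30)              # matches the hand computation

# implicit Euler: phi_next = (phi + sqrt(phi^2 - 2h)) / 2
phi1 = (1 + math.sqrt(1 - 2 / 3)) / 2     # first step with h = 1/3
assert phi1 * phi1 - 2 / 3 < 0            # second step discriminant < 0: "not real!"

# with h = 1/6 all four discriminants stay positive
h, phi = 1 / 6, 1.0
for _ in range(4):
    disc = phi * phi - 2 * h
    assert disc > 0
    phi = (phi + math.sqrt(disc)) / 2
print(phi, 1 / math.sqrt(3))         # approximation of y(2/3) = 1/sqrt(3)
```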
The given matrix is non negative and stochastic by columns:
\[ \lambda^3 - (1-a-b)\lambda^2 - b\lambda - a = (\lambda - 1)\big(\lambda^2 + (a+b)\lambda + a\big). \]
Eigenvalues:
\[ 1, \qquad \frac{-(a+b) \pm \sqrt{(a+b)^2 - 4a}}{2}. \]
We know that their absolute value is less than or equal to $1$. Question: when is it equal to $1$?

Assume they are real. Then the question becomes:
\[ \frac{-(a+b) + \sqrt{(a+b)^2 - 4a}}{2} = 1, \qquad \frac{-(a+b) - \sqrt{(a+b)^2 - 4a}}{2} = -1. \]
Assume they are not real. Then they can be rewritten as follows:
\[ \frac{-(a+b) \pm \mathrm{i}\sqrt{4a - (a+b)^2}}{2}. \]
Thus, the question becomes:
\[ \frac{(a+b)^2}{4} + \frac{4a - (a+b)^2}{4} = 1, \]
an equality which is satisfied iff $a = 1$.
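The complex case can be double-checked numerically: the two non-unit eigenvalues are the roots of $\lambda^2 + (a+b)\lambda + a$, whose product is $a$, so conjugate complex roots have $|\lambda|^2 = a$. A small sketch (the sample values are ours):

```python
import cmath

a, b = 0.5, 0.1                      # sample values giving complex eigenvalues
disc = (a + b)**2 - 4 * a
assert disc < 0                      # complex case
lam = (-(a + b) + cmath.sqrt(complex(disc))) / 2
assert abs(abs(lam)**2 - a) < 1e-12  # |lambda|^2 = a, so |lambda| = 1 iff a = 1
print(abs(lam))
```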
An equivalent definition of Bernoulli polynomials
The degree $n$ Bernoulli polynomial $B_n(x)$ is uniquely determined by the conditions
\[ B_n(x+1) - B_n(x) = nx^{n-1}, \qquad \int_0^1 B_n(x)\,dx = 0. \tag{1} \]
Note that the first condition in (1) implies:
\[ \int_t^{t+1} B_n(x+1)\,dx - \int_t^{t+1} B_n(x)\,dx = n\Big[\frac{x^n}{n}\Big]_t^{t+1}, \]
\[ \int_{t+1}^{t+2} B_n(y)\,dy - \int_t^{t+1} B_n(x)\,dx = (t+1)^n - t^n. \]
By writing the latter identity for $t = 0, 1, \ldots, x-1$, taking into account the second condition in (1), and summing, we obtain:
\[ \int_x^{x+1} B_n(y)\,dy = x^n, \quad x \in \mathbb{R}. \tag{2} \]
So, (1) implies (2). Of course, (2) implies the second condition in (1) (choose $x = 0$). It can be shown that (2) also implies that $B_n$ must be a polynomial of degree at least $n$ and must satisfy the first condition in (1).
Assume that we know that (2) implies that $B_n$ must be a polynomial. Let us show that then its degree is at least $n$. If, on the contrary, $B_n(y) = a_0 y^{n-1} + \ldots$, then
\[ \int_x^{x+1} B_n(y)\,dy = \Big[\frac{a_0}{n}y^n + \ldots\Big]_x^{x+1} = \frac{a_0}{n}\big[(x+1)^n - x^n\big] + \ldots \]
is a polynomial of degree $n-1$, and thus cannot be equal to $x^n$.
Finally, the fact that (2) implies the first condition in (1) can be shown by differentiating (2) with respect to $x$ and remembering the rule:
\[ \frac{d}{dx}\int_{f(x)}^{g(x)} h(x,y)\,dy = h(x,g(x))\,g'(x) - h(x,f(x))\,f'(x) + \int_{f(x)}^{g(x)} \frac{\partial}{\partial x}h(x,y)\,dy. \]
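Conditions (1) and (2) can be verified in exact arithmetic for the first few Bernoulli polynomials; the closed forms $B_2(x) = x^2 - x + \frac16$ and $B_3(x) = x^3 - \frac32 x^2 + \frac12 x$ are standard and are assumed here (they are not derived in the text):

```python
from fractions import Fraction as F

B2 = lambda x: x * x - x + F(1, 6)
B3 = lambda x: x**3 - F(3, 2) * x * x + F(1, 2) * x

# first condition in (1): B_n(x+1) - B_n(x) = n x^{n-1}, at rational points
for x in [F(0), F(1, 3), F(-2), F(7, 5)]:
    assert B2(x + 1) - B2(x) == 2 * x
    assert B3(x + 1) - B3(x) == 3 * x * x

# condition (2) via exact antiderivatives of B_2 and B_3
I2 = lambda x: x**3 / 3 - x * x / 2 + x / F(6)
I3 = lambda x: x**4 / 4 - x**3 / 2 + x * x / F(4)
for x in [F(0), F(1, 2), F(-3), F(9, 4)]:
    assert I2(x + 1) - I2(x) == x * x
    assert I3(x + 1) - I3(x) == x**3

# x = 0 in (2) recovers the second condition in (1): int_0^1 B_n = 0
assert I2(F(1)) - I2(F(0)) == 0 and I3(F(1)) - I3(F(0)) == 0
print("verified")
```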
10