Eigenvalue Estimates For Non-Normal Matrices and The Zeros of Random Orthogonal Polynomials On The Unit Circle


Journal of Approximation Theory 141 (2006) 189–213

www.elsevier.com/locate/jat
Eigenvalue estimates for non-normal matrices and the zeros of random orthogonal polynomials on the unit circle

E.B. Davies^{a,1}, B. Simon^{b,∗,2}

^a Department of Mathematics, King's College London, Strand, London WC2R 2LS, UK
^b Mathematics 253-37, California Institute of Technology, Pasadena, CA 91125, USA

Received 30 August 2005; accepted 3 March 2006
Communicated by Paul Nevai
Available online 15 May 2006
Abstract

We prove that for any n × n matrix, A, and z with |z| ≥ ‖A‖, we have

  ‖(z − A)^{−1}‖ ≤ cot(π/4n) dist(z, spec(A))^{−1}.

We apply this result to the study of random orthogonal polynomials on the unit circle.
© 2006 Elsevier Inc. All rights reserved.
1. Introduction
This paper concerns a sharp bound on the approximation of eigenvalues of general non-normal
matrices that we found in a study of the zeros of orthogonal polynomials. We begin with a brief
discussion of the motivating problem, which we return to in Section 7.
Given a probability measure dμ on ℂ with

  ∫ |z|^n dμ(z) < ∞   (1.1)

∗ Corresponding author. Fax: +1 626 585 1728.
E-mail addresses: [email protected] (E.B. Davies), [email protected] (B. Simon).
1 Supported in part by EPSRC grant GR/R81756.
2 Supported in part by NSF grant DMS-0140592.
0021-9045/$ - see front matter © 2006 Elsevier Inc. All rights reserved.
doi:10.1016/j.jat.2006.03.006
we define the monic orthogonal polynomials, Φ_n(z), by

  Φ_n(z) = z^n + lower order,   (1.2)

  ∫ z̄^j Φ_n(z) dμ(z) = 0,  j = 0, 1, …, n − 1.   (1.3)

If

  P_n = orthogonal projection in L²(ℂ, dμ) onto polynomials of degree n − 1 or less   (1.4)

then

  Φ_n = (1 − P_n) z^n.   (1.5)

A key role is played by the operator

  A_n = P_n M_z P_n ↾ Ran(P_n),   (1.6)

where M_z is the operator of multiplication by z and A_n is an operator on the n-dimensional space Ran(P_n).
If z_0 is a zero of Φ_n(z) of order k, then f_{z_0} ≡ (z − z_0)^{−k} Φ_n(z) is in Ran(P_n) and

  (A_n − z_0)^k f_{z_0} = 0,  (A_n − z_0)^{k−1} f_{z_0} ≠ 0,   (1.7)

which implies

  Φ_n(z) = det(z − A_n).   (1.8)

Also, Φ_n(z) is the minimal polynomial for A_n.
In the study of orthogonal polynomials on the real line (OPRL), a key role is played by the fact that for any y ∈ Ran(P_n) with ‖y‖_{L²} = 1,

  dist(z_0, {zeros of Φ_n}) ≤ ‖(A_n − z_0)y‖  (OPRL case).   (1.9)

This holds because, in the OPRL case, A_n is self-adjoint. Indeed, for any normal operator, B, (throughout ‖·‖ is a Hilbert space norm; for n × n matrices, the usual matrix norm induced by the Euclidean inner product)

  dist(z_0, spec(B)) = ‖(B − z_0)^{−1}‖^{−1}   (1.10)

and, of course, for any invertible operator C,

  inf{‖Cy‖ | ‖y‖ = 1} = ‖C^{−1}‖^{−1}.   (1.11)
We were motivated by seeking a replacement of (1.9) in a case where A_n is non-normal. Indeed, we had a specific situation of orthogonal polynomials on the unit circle (OPUC; see [17,18]) where one has a sequence z_n ∈ ∂D = {z | |z| = 1} and corresponding unit trial vectors, y_n, so that

  ‖(A_n − z_n)y_n‖ ≤ C_1 e^{−C_2 n}   (1.12)

for all n with C_2 > 0. We would like to conclude that Φ_n(z) has zeros near z_n.

It is certainly not sufficient that ‖(A_n − z_n)y_n‖ → 0. For the case dμ(z) = dθ/2π, one has Φ_n(z) = z^n and dist(1, spec(A_n)) = 1, but if y_n = (1 + z + ⋯ + z^{n−1})/√n, then ‖(A_n − 1)y_n‖ = ‖P_n(z − 1)y_n‖ = n^{−1/2} ‖P_n(z^n − 1)‖ = n^{−1/2} ‖1‖ = n^{−1/2}. As we will see later, by a clever choice of y_n, one can even get trial vectors with ‖(A_n − 1)y_n‖ = O(n^{−1}).
Of course, by (1.11), we are really seeking some kind of bound relating ‖(A_n − z_n)^{−1}‖ to dist(z_n, spec(A_n)). At first sight, the prognosis for this does not seem hopeful. The n × n matrix

  N_n =
  | 0 1       |
  |   ⋱  ⋱    |
  |      ⋱  1 |
  | 0       0 |   (1.13)

has

  ‖(z − N_n)^{−1}‖ ≥ |z|^{−n}   (1.14)

since (z − N_n)^{−1} = Σ_{j=0}^{n−1} z^{−j−1} (N_n)^j has z^{−n} in the 1, n position. Thus, as is well known, ‖(A_n − z)^{−1}‖ for general n × n matrices A_n and general z cannot be bounded by better than dist(z, spec(A_n))^{−n}. Indeed, the existence of such bounds by Henrici [4] is part of an extensive literature on general variational bounds on eigenvalues. Translated to a variational bound, this would give dist(z_n, {zeros of Φ_n}) ≤ C‖(A_n − z_n)y‖^{1/n}, which would not give anything useful from (1.12).

We note that as n → ∞, there can be difficulties even if z_0 stays away from spec(A_n). For, by (1.14),

  ‖(1 − 2N_n)^{−1}‖ ≥ 2^{n−1}   (1.15)

diverges as n → ∞ even though ‖2N_n‖ is bounded in n.
Despite these initial negative indications, we have found a linear variational principle that lets us get information from (1.12). The key realization is that z_n and A_n are not general. Indeed,

  |z_n| = ‖A_n‖ = 1.   (1.16)

It is not a new result that a linear bound holds in the generality we discuss. In [11], Nikolski presents a general method for estimating norms of inverses in terms of minimal polynomials (see the proof of Lemma 3.2 of [11]) that is related to our argument in Section 6.1. His ideas yield a linear bound but not with the optimal constant we find.
Our main theorem is

Theorem 1. Let 𝓜_n be the set of pairs (A, z) where A is an n × n matrix, z ∈ ℂ with

  |z| ≥ ‖A‖   (1.17)

and

  z ∉ spec(A).   (1.18)

Then

  c(n) ≡ sup_{𝓜_n} dist(z, spec(A)) ‖(A − z)^{−1}‖ = cot(π/4n).   (1.19)
Of course, the remarkable fact, given (1.14), is that c(n) < ∞ when we only use the first power of dist(z, spec(A)). It implies that so long as (1.17) holds,

  dist(z, spec(A)) ≤ c(n) ‖(A − z)y‖   (1.20)

for any unit vector y. For this to be useful in the context of (1.12), we need only mild growth conditions on c(n); see (1.21) below.

As an amusing aside, we note that

  c(1) = 1 = 0 + √1,
  c(2) = 1 + √2,
  c(3) = 2 + √3,

but the obvious extrapolation from this fails. Instead, because of properties of cot(x),

  c(n) ≤ (4/π) n,   (1.21)

and c(n)/n is monotone increasing to 4/π, so, in fact, for n ≥ 3,

  (2 + √3)/3 ≤ c(n)/n ≤ 4/π,

a spread of 2.3%.
We note that, by replacing A by A/z and z by 1, it suffices to prove

  sup_{‖A‖≤1} dist(1, spec(A)) ‖(1 − A)^{−1}‖ = cot(π/4n)   (1.22)

and it is this that we will establish by proving three statements. We will use the special n × n matrix

  M_n =
  | 1 2 … 2 |
  | 0 1 … 2 |
  | ⋮   ⋱ ⋮ |
  | 0 0 … 1 |   (1.23)

given by

  (M_n)_{kℓ} = 2 if k < ℓ,  1 if k = ℓ,  0 if k > ℓ.
Our three sub-results are

Theorem 2. ‖M_n‖ = cot(π/4n).

Theorem 3. For each 0 < a < 1, there exist n × n matrices A_n(a) with

  ‖A_n(a)‖ ≤ 1,  spec(A_n(a)) = {a}   (1.24)

and

  lim_{a↑1} (1 − a) ‖(1 − A_n(a))^{−1}‖ = ‖M_n‖.   (1.25)

Theorem 4. Let A be an upper triangular matrix with ‖A‖ ≤ 1 and 1 ∉ spec(A). Then

  dist(1, spec(A)) |(1 − A)^{−1}_{kℓ}| ≤ 2 if k < ℓ,  1 if k = ℓ,  0 if k > ℓ.   (1.26)

Proof that Theorems 2–4 ⇒ Theorem 1. Any matrix has an orthonormal basis in which it is upper triangular: one constructs such a Schur basis by applying Gram–Schmidt to any algebraic basis in which A has Jordan normal form. In such a basis, (1.26) says that

  dist(1, spec(A)) ‖(1 − A)^{−1} y‖ ≤ ‖M_n |y|‖ ≤ ‖M_n‖ ‖y‖

so Theorem 2 implies LHS of (1.22) ≤ cot(π/4n).

On the other hand, using A_n(a) in dist(1, spec(A)) ‖(1 − A)^{−1}‖ implies LHS of (1.22) ≥ cot(π/4n). We thus have (1.22) and, as noted, this implies (1.19). ∎
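Theorem 2 can be checked numerically before reading the proofs in Sections 4 and 5; a small sketch (ours):

```python
import numpy as np

def M(n):
    # (M_n)_{kl} = 2 if k < l, 1 if k = l, 0 if k > l  -- the matrix (1.23)
    return np.triu(2 * np.ones((n, n)), k=1) + np.eye(n)

# spectral norm of M_n agrees with cot(pi/4n) (Theorem 2)
for n in range(1, 30):
    norm = np.linalg.norm(M(n), ord=2)
    assert abs(norm - 1 / np.tan(np.pi / (4 * n))) < 1e-8 * n
```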
To place Theorem 1 in context, we note that if |z| > ‖A‖,

  ‖(z − A)^{−1}‖ ≤ Σ_{j=0}^∞ |z|^{−j−1} ‖A‖^j = (|z| − ‖A‖)^{−1}.   (1.27)

So (1.19) provides a borderline between the dimension-independent bound (1.27) for |z| > ‖A‖ and the exponential growth that may happen if |z| < ‖A‖, essentially the phenomenon of pseudospectra which is well documented in [24]; see also [15].

The structure of this paper is as follows. In Section 2, we will prove Theorem 4, the most significant result in this paper since it implies c(n) < ∞ and, indeed, with no effort, that c(n) ≤ 2n. Our initial proofs of c(n) < ∞ were more involved; the fact that our final proof is quite simple should not obscure the fact that c(n) < ∞ is a result we find both surprising and deep.
In Section 3, we use upper triangular Toeplitz matrices to construct A_n(a) and prove Theorem 3. Sections 4 and 5 prove Theorem 2; indeed, we also find that if

  (Q_n(a))_{kℓ} = 1 if k < ℓ,  a if k = ℓ,  0 if k > ℓ   (1.28)

then

  ‖Q_n(1)‖ = 1 / (2 sin(π/(4n + 2)))   (1.29)

which means we can compute ‖Q_n(a)‖ for a = 0, ½, 1. While the calculation of ‖M_n‖ and ‖Q_n(1)‖ is based on explicit formulae for all the eigenvalues and eigenvectors of certain associated operators, we could just pull them out of a hat. Instead, in Section 4, we discuss the motivation that led to our guess of eigenvectors, and in Section 5 explicitly prove Theorem 2.

Section 6 contains a number of remarks and extensions concerning Theorem 1, most importantly to numerical range concerns. Section 7 contains the application to random OPUC.
2. The key bound

Our goal in this section is to prove Theorem 4. A is an upper triangular n × n matrix. Let z_1, …, z_n be its diagonal elements. Since

  det(z − A) = Π_{j=1}^n (z − z_j)   (2.1)

the z_j's are the eigenvalues of A counting algebraic multiplicity. In particular,

  sup_j |1 − z_j|^{−1} = dist(1, spec(A))^{−1}.   (2.2)

Define

  C = (1 − A)^{−1} + (1 − A*)^{−1} − 1.   (2.3)
Proposition 2.1. Suppose ‖A‖ ≤ 1. Then
(a) C_{jj} = |1 − z_j|^{−2} (1 − |z_j|²) ≤ 2 |1 − z_j|^{−1},   (2.4)
(b) C ≥ 0,
(c) |C_{jk}| ≤ |C_{jj}|^{1/2} |C_{kk}|^{1/2},   (2.5)
(d) If j < k, then (1 − A)^{−1}_{jk} = C_{jk}.

Proof. (a) Since A is upper triangular,

  [(1 − A)^{−1}]_{jj} = (1 − z_j)^{−1}   (2.6)

so (2.4) comes from

  (1 − z_j)^{−1} + (1 − z̄_j)^{−1} − 1 = |1 − z_j|^{−2} (1 − |z_j|²)   (2.7)

and the fact that for |z| ≤ 1,

  |1 − z|^{−2} (1 − |z|²) = (1 + |z|)(1 − |z|)(|1 − z|^{−1})² ≤ 2 |1 − z|^{−1}

since 1 − |z| ≤ |1 − z|.

(b) The operator analog of (2.7) is the direct computation

  C = [(1 − A)^{−1}]* (1 − A*A) (1 − A)^{−1} ≥ 0   (2.8)

since ‖A‖ ≤ 1 implies A*A ≤ 1.

(c) This is true for any positive definite matrix.

(d) (1 − A*)^{−1} is lower triangular and 1 is diagonal. ∎
Proof of Theorem 4. (1 − A)^{−1} is upper triangular, so [(1 − A)^{−1}]_{kℓ} = 0 if k > ℓ. By (2.6) and (2.2),

  |[(1 − A)^{−1}]_{kk}| = |1 − z_k|^{−1} ≤ dist(1, spec(A))^{−1}.   (2.9)

By (a), (c), (d) of the proposition, if k < ℓ,

  |[(1 − A)^{−1}]_{kℓ}| ≤ [|1 − z_k|^{−2} |1 − z_ℓ|^{−2} (1 − |z_k|²)(1 − |z_ℓ|²)]^{1/2}
                       ≤ 2 [|1 − z_k|^{−1} |1 − z_ℓ|^{−1}]^{1/2}
                       ≤ 2 [dist(1, spec(A))]^{−1}

by (2.2). ∎
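A randomized check of the entrywise bound (1.26) (our illustration, not part of the paper's argument):

```python
import numpy as np

rng = np.random.default_rng(0)

for trial in range(50):
    n = int(rng.integers(2, 8))
    # random upper triangular matrix, rescaled to be a strict contraction
    A = np.triu(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    A /= 1.01 * np.linalg.norm(A, ord=2)          # ||A|| < 1
    d = np.min(np.abs(1 - np.diag(A)))            # dist(1, spec(A)): diagonal = spectrum
    R = np.linalg.inv(np.eye(n) - A)
    # entrywise bound (1.26): 2 above the diagonal, 1 on it, 0 below
    bound = np.triu(2 * np.ones((n, n)), k=1) + np.eye(n)
    assert np.all(d * np.abs(R) <= bound + 1e-10)
```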
3. Upper triangular Toeplitz matrices

A Toeplitz matrix [1] is one that is constant along diagonals, that is, A_{jk} is a function of j − k. An n × n upper triangular Toeplitz matrix (UTTM) is thus of the form

  | a_0 a_1 a_2 … a_{n−1} |
  |  0  a_0 a_1 … a_{n−2} |
  |  ⋮        ⋱     ⋮    |
  |  0   0   0  …  a_0    |.   (3.1)

These concern us because M_n is of this form and because the operators, A_n(a), of Theorem 3 will be of this form. In this section, after recalling the basics of UTTM, we will prove Theorem 3. Then we will state some results, essentially due to Schur [16], on the norms of UTTM that we will need in Section 5 in one calculation of the norm of M_n.

Given any function, f, which is analytic near zero, we write T_n(f) for the matrix in (3.1) if

  f(z) = a_0 + a_1 z + ⋯ + a_{n−1} z^{n−1} + O(z^n).   (3.2)

f is called a symbol for T_n(f).

We note that

  T_n(fg) = T_n(f) T_n(g).   (3.3)

This can be seen by multiplying matrices and Taylor series or by manipulating projections on ℓ² (see, e.g., Corollary 6.2.3 of [17]).

In addition, if f is analytic in {z | |z| < 1}, then

  ‖T_n(f)‖ ≤ sup_{|z|<1} |f(z)|.   (3.4)
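Both (3.3) and (3.4) are easy to test numerically; a sketch (ours), with a helper T(n, coeffs) building the matrix (3.1):

```python
import numpy as np

def T(n, coeffs):
    # upper triangular Toeplitz matrix (3.1) from Taylor coefficients a_0..a_{n-1}
    A = np.zeros((n, n), dtype=complex)
    for j, a in enumerate(coeffs[:n]):
        A += a * np.diag(np.ones(n - j), k=j)
    return A

n = 6
rng = np.random.default_rng(1)
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# (3.3): T_n(fg) = T_n(f) T_n(g), with fg truncated to order n (coefficient convolution)
ab = np.convolve(a, b)[:n]
assert np.allclose(T(n, ab), T(n, a) @ T(n, b))

# (3.4) for f(z) = z^2: ||T_n(z^2)|| = 1 = sup_{|z|<1} |z^2|
e2 = np.zeros(n)
e2[2] = 1
assert abs(np.linalg.norm(T(n, e2), ord=2) - 1) < 1e-12
```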
To see this well-known fact, associate an analytic function

  v(z) = v_0 + v_1 z + ⋯   (3.5)

to the vector π_n(v) ∈ ℂ^n by

  π_n(v) = (v_{n−1}, v_{n−2}, …, v_0)^T   (3.6)

and note that with ‖·‖_2 the H² norm,

  ‖π_n(v)‖ = inf{‖w‖_2 | π_n(w) = π_n(v)},   (3.7)

  T_n(f) π_n(v) = π_n(f v)   (3.8)

and

  ‖f v‖_2 ≤ ‖f‖_∞ ‖v‖_2.   (3.9)

If N_n is given by (1.13), then T_n(f) = f(N_n), so an alternate proof of (3.4) may be based on von Neumann's theorem; see Section 6.5.
Proof of Theorem 3. For a with 0 < a < 1, define

  f_a(z) = (z + a)/(1 + az)   (3.10)

and define

  A_n(a) = T_n(f_a).   (3.11)

Then f_a(e^{iθ}) = e^{iθ}(1 + a e^{−iθ})/(1 + a e^{iθ}) has |f_a(e^{iθ})| = 1, so sup_{|z|<1} |f_a(z)| = 1 and thus, by (3.4),

  ‖A_n(a)‖ ≤ 1.   (3.12)

By (3.1),

  spec(A_n(a)) = {f_a(0)} = {a}.   (3.13)

By (3.3),

  (1 − A_n(a))^{−1} = T_n((1 − f_a(z))^{−1}).   (3.14)

Now

  (1 − a)(1 − f_a(z))^{−1} = (1 + az)/(1 − z)   (3.15)

so

  lim_{a↑1} (1 − a)(1 − f_a(z))^{−1} = (1 + z)/(1 − z).   (3.16)

Thus,

  lim_{a↑1} (1 − a)(1 − A_n(a))^{−1} = T_n((1 + z)/(1 − z)) = M_n   (3.17)

since (1 + z)/(1 − z) = 1 + 2z + 2z² + ⋯. ∎
We now want to refine (3.4) to get equality for a suitable f. A key role is played by

Lemma 3.1. Let α ∈ D and A an operator with ᾱ^{−1} ∉ spec(A). Define

  B = (A − α)(1 − ᾱA)^{−1}.   (3.18)

Then

  (1) ‖B‖ ≤ 1 ⟺ ‖A‖ ≤ 1,   (3.19)
  (2) ‖B‖ = 1 ⟺ ‖A‖ = 1.   (3.20)

Proof. By a direct calculation,

  1 − B*B = (1 − αA*)^{−1} [(1 − |α|²)(1 − A*A)] (1 − ᾱA)^{−1}.   (3.21)

Eq. (3.19) follows since 1 − B*B ≥ 0 ⟺ 1 − A*A ≥ 0, and (3.20) follows since (3.21) implies

  inf_{‖φ‖=1} ⟨φ, (1 − B*B)φ⟩ = 0 ⟺ inf_{‖φ‖=1} ⟨φ, (1 − A*A)φ⟩ = 0. ∎

Remark. This lemma is further discussed in Section 6.5.
Theorem 3.2. If A is an n × n UTTM with ‖A‖ ≤ 1, then there exists an analytic function, f, on D such that

  sup_{|z|<1} |f(z)| ≤ 1   (3.22)

and

  A = T_n(f).   (3.23)

Proof. The proof is by induction on n. If n = 1, ‖A‖ ≤ 1 means |a_0| ≤ 1 and we can take f(z) ≡ a_0. For general n, ‖A‖ ≤ 1 means |a_0| ≤ 1. If |a_0| = 1, then A = a_0·1 and we can take f(z) ≡ a_0. If |a_0| < 1, define B by (3.18) with α = a_0. B is a UTTM with zero diagonal terms, so

  B = | 0  B̃ |
      | 0  0 |   (3.24)

where ‖B̃‖ = ‖B‖ ≤ 1 by the lemma.

By the induction hypothesis, B̃ = T_{n−1}(g) where

  sup_{|z|<1} |g(z)| ≤ 1.   (3.25)

Then (3.23) holds with

  f = (a_0 + zg)/(1 + ā_0 zg).   (3.26)

(3.25) and (3.26) imply (3.22). ∎

Remarks. (1) By iterating f → g, we see that one constructs f via the Schur algorithm; see Section 1.3 of [17].
(2) Combining this and (3.4), one obtains Schur's celebrated result that a_0 + a_1 z + ⋯ + a_{n−1} z^{n−1} is the start of the Taylor series of a Schur function if and only if the matrix A of (3.1) obeys A*A ≤ 1. This result is intimately connected to Nehari's theorem on the norm of Hankel operators [8,13]; see Partington [12].
(3) This is classical; see [1,10,13].
To state the last result of this section, we need a definition:

Definition. A Blaschke factor is a function on D of the form

  f(z, w) = (z − w)/(1 − w̄z),   (3.27)

where w ∈ D. A (finite) Blaschke product is a function of the form

  f(z) = c Π_{j=1}^k f(z, w_j),   (3.28)

where c ∈ ∂D. k is called the order of f. We allow k = 0, in which case f(z) is a constant value in ∂D.

Theorem 3.3. An n × n UTTM, A, has ‖A‖ = c if and only if A = T_n(f) for an f so that c^{−1}f is a Blaschke product of order k ≤ n − 1.

Proof (See as alternates: [10,13]). Without loss, we can take c = 1. The proof is by induction on n. If n = 1, k must be 0, and the theorem says |a_0| = 1 if and only if f(0) = c ∈ ∂D, which is true.

It is not hard to see that if f and f_1 are related by

  f_1(z) = z^{−1} (f(z) − f(0)) / (1 − f̄(0) f(z))

then f is a Blaschke product of order k ≥ 1 if and only if f_1 is a Blaschke product of order k − 1.

Given A a UTTM with ‖A‖ ≤ 1, |a_0| = 1 if and only if A = T_n(a_0), that is, A is given by a Blaschke product of order 0. If |a_0| < 1, we define B by (3.18). ‖B‖ = 1 if and only if ‖A‖ = 1. B̃ given by (3.25) is related to A by A = T_n(f) if and only if B̃ = T_{n−1}(f_1). Thus, by induction, ‖A‖ = 1 if and only if f is a Blaschke product of order k ≤ n − 1. ∎
4. Inverse of differential/difference operators

In this section and the next, we will find explicit formulae for the norms of M_n and Q_n ≡ Q_n(1) given by (1.28). Indeed, we will find all the eigenvalues and eigenvectors for |M_n| and |Q_n| where |A| = √(A*A). A key to our finding this was understanding a kind of continuum limit of M_n: Let K be the Volterra-type operator on H = L²([0, 1], dx) with integral kernel

  K(x, y) = 1 if 0 ≤ x ≤ y ≤ 1,  0 if 0 ≤ y < x ≤ 1.

In some formal sense, K is a limit of either M_n or Q_n, but in a precise sense, M_n is a restriction of K:

Proposition 4.1. Let π_n be the projection of H onto the space of functions constant on each interval [j/n, (j+1)/n), j = 0, 1, …, n − 1. Then

  π_n K π_n   (4.1)

is unitarily equivalent to (2n)^{−1} M_n. In particular,

  ‖M_n‖ ≤ 2n ‖K‖,   (4.2)

  lim_{n→∞} ‖M_n‖/n = 2 ‖K‖.   (4.3)

Proof. Let {f_j^{(n)}}_{j=0}^{n−1} be the functions

  f_j^{(n)}(x) = √n if j/n ≤ x < (j+1)/n,  0 otherwise   (4.4)

which form an orthonormal basis for Ran(π_n). Since

  n ⟨f_j^{(n)}, K f_k^{(n)}⟩ = ½ (M_n)_{jk}   (4.5)

we have the claimed unitary equivalence. Eq. (4.2) is immediate from ‖π_n K π_n‖ ≤ ‖K‖. Eq. (4.3) follows if we note s-lim_{n→∞} π_n = 1, so lim_{n→∞} ‖π_n K π_n‖ = ‖K‖. ∎
Notice that

  (Kf)(x) = ∫_x^1 f(y) dy   (4.6)

so

  (d/dx)(Kf) = −f,  (Kf)(1) = 0   (4.7)

and K is an inverse of a derivative. That means K*K will be the inverse of a second-order operator. Indeed,

  (K*K)(x, y) = ∫_0^1 K(z, x) K(z, y) dz = ∫_0^{min(x,y)} dz = min(x, y)   (4.8)

which, as is well known, is the integral kernel of the inverse of −d²/dx² with u(0) = 0, u′(1) = 0 boundary conditions.

We can therefore write down a complete orthonormal basis of eigenfunctions for K*K:

  φ_n(x) = √2 sin(½(2n − 1)πx),  n = 1, 2, …   (4.9)

  (K*K)φ_n = (4/((2n − 1)²π²)) φ_n   (4.10)

so

  ‖K‖ = ‖K*K‖^{1/2} = 2/π.   (4.11)
By (4.2), (4.3), we have

Corollary 4.2.

  ‖M_n‖ ≤ 4n/π,   (4.12)

  lim_{n→∞} ‖M_n‖/n = 4/π.   (4.13)

Of course, we will see this when we have proven Theorem 2, but it is interesting to have it now.

While M_n is related to differential operators via (4.5), we can compute the norm of Q_n by realizing it as the inverse of a difference operator. Specifically, let N_n be given by (1.13). Then

  (1 − N_n)^{−1} = 1 + N_n + N_n² + ⋯ + N_n^{n−1} = Q_n.   (4.14)
Theorem 4.3. Let

  D_n = (1 − N_n)(1 − N_n)*.   (4.15)

Then D_n has a complete set of eigenvectors:

  v_j^{(ℓ)} = sin((2ℓ + 1)πj/(2n + 1)),  j = 1, …, n;  ℓ = 0, …, n − 1,   (4.16)

  D_n v^{(ℓ)} = 4 sin²((2ℓ + 1)π/(2(2n + 1))) v^{(ℓ)},   (4.17)

  ‖Q_n‖ = (min eigenvalue of D_n)^{−1/2} = [2 sin(π/(4n + 2))]^{−1}.   (4.18)

Proof. By a direct calculation,

  D_n =
  |  2 −1            |
  | −1  2 −1         |
  |     ⋱  ⋱  ⋱     |
  |       −1  2 −1   |
  |          −1  1   |   (4.19)

is a discrete Laplacian with Dirichlet boundary condition at 0 and Neumann at n. Since

  −sin(q(j + 1)) + 2 sin(qj) − sin(q(j − 1)) = 4 sin²(q/2) sin(qj)

(4.16)/(4.17) hold so long as q is such that sin(q(n + 1)) = sin(qn), that is,

  ½ [q(n + 1) + qn] = π(ℓ + ½)

or q = (2ℓ + 1)π/(2n + 1). ∎
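A numerical check of (4.14), (4.17), and (4.18) (ours):

```python
import numpy as np

n = 10
N = np.diag(np.ones(n - 1), k=1)                 # N_n of (1.13)
Q = np.linalg.inv(np.eye(n) - N)                 # (4.14): Q_n = (1 - N_n)^{-1}
# Q_n is the a = 1 case of (1.28): ones on and above the diagonal
assert np.allclose(Q, np.triu(np.ones((n, n))))

D = (np.eye(n) - N) @ (np.eye(n) - N).T          # (4.15)
eigs = np.sort(np.linalg.eigvalsh(D))
predicted = np.sort(4 * np.sin((2 * np.arange(n) + 1) * np.pi / (2 * (2 * n + 1))) ** 2)
assert np.allclose(eigs, predicted)              # (4.17)

norm_Q = np.linalg.norm(Q, ord=2)
assert abs(norm_Q - 1 / (2 * np.sin(np.pi / (4 * n + 2)))) < 1e-10   # (4.18)
```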
Remark. For OPUC with dμ = dθ/2π, in the basis 1, z, …, z^{n−1}, A_n is given by the matrix, N_n, of (1.13), and so ‖(1 − N_n)^{−1}‖ = ‖Q_n‖ ≥ 2n/π. Thus, there are unit vectors, y_n, in this case with ‖(1 − A_n)y_n‖ ≤ π/2n.
5. The norm of M_n

In this section, we will give two distinct but related proofs of Theorem 2. Both depend on a generating function relation:

Theorem 5.1. For θ ∈ (0, π) and z ∈ D, define

  S_θ(z) = Σ_{j=0}^∞ sin((2j + 1)θ) z^j,   (5.1)

  C_θ(z) = Σ_{j=0}^∞ cos((2j + 1)θ) z^j.   (5.2)

Then

  ((1 + z)/(1 − z)) C_θ(z) = cot(θ) S_θ(z).   (5.3)

Proof. Let c = e^{iθ} so, summing the geometric series,

  S_θ(z) = (2i)^{−1} Σ_{j=0}^∞ (c^{2j+1} − c̄^{2j+1}) z^j
         = (2i)^{−1} [c/(1 − zc²) − c̄/(1 − zc̄²)]   (5.4)
         = sin(θ)(1 + z) / ((1 − zc²)(1 − zc̄²)).   (5.5)

For C_θ(z), the calculation is similar; in (5.4), (2i)^{−1} is replaced by 2^{−1} and the minus sign becomes a plus:

  C_θ(z) = cos(θ)(1 − z) / ((1 − zc²)(1 − zc̄²)).   (5.6)

(5.5) and (5.6) imply (5.3). ∎
Our first proof of Theorem 2 depends on looking at the Hankel matrix [12,13]

  M̃_n =
  | 2 2 … 2 1 |
  | 2 2 … 1 0 |
  | ⋮       ⋮ |
  | 1 0 … 0 0 |.   (5.7)

If W is the unitary permutation matrix

  (Wv)_j = v_{n+1−j}   (5.8)

then

  M_n = M̃_n W,  M̃_n = M_n W   (5.9)

and so

  ‖M_n‖ = ‖M̃_n‖.   (5.10)
Here is our first proof of Theorem 2:

Theorem 5.2. Let

  c_j^{(n;ℓ)} = cos((2ℓ + ½)(π/2n)(2j − 1)),  j = 1, 2, …, n;  ℓ = 0, …, n − 1.   (5.11)

Then

  M̃_n c^{(n;ℓ)} = cot((2ℓ + ½)(π/2n)) c^{(n;ℓ)}.   (5.12)

Thus,

  ‖M_n‖ = ‖M̃_n‖ = cot(π/4n).   (5.13)

Proof. Let

  c_j^{(n;θ)} = cos(θ(2j − 1)),  j = 1, 2, …, n   (5.14)

and

  s_j^{(n;θ)} = sin(θ(2j − 1)),  j = 1, …, n.   (5.15)

Then (5.3) implies that

  M_n W c^{(n;θ)} = cot(θ) W s^{(n;θ)}   (5.16)

by looking at coefficients of 1, z, …, z^{n−1}. The W comes from (3.6)/(3.8). If

  2nθ = π/2 + 2πℓ,  ℓ = 0, …, n − 1   (5.17)

then

  W s^{(n;θ)} = c^{(n;θ)}   (5.18)

and (5.16) becomes (5.12).

Since M̃_n is self-adjoint, (5.13) follows from (5.12) either by noting that max_ℓ |cot((2ℓ + ½)(π/2n))| = cot(π/4n) or by noting that c^{(n;θ=π/4n)} is a positive eigenvector of a positive self-adjoint matrix, so its eigenvalue is the norm by the Perron–Frobenius theorem. ∎
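The eigenvector relation (5.12) can be verified directly; a check of ours:

```python
import numpy as np

n = 8
# Hankel matrix (5.7): entries 2 when j + k < n - 1, 1 on the antidiagonal, else 0
Mt = np.zeros((n, n))
for j in range(n):
    for k in range(n):
        if j + k < n - 1:
            Mt[j, k] = 2
        elif j + k == n - 1:
            Mt[j, k] = 1

for ell in range(n):
    theta = (2 * ell + 0.5) * np.pi / (2 * n)
    c = np.cos(theta * (2 * np.arange(1, n + 1) - 1))     # (5.11)
    lam = 1 / np.tan(theta)
    assert np.allclose(Mt @ c, lam * c, atol=1e-9)        # (5.12)

# the largest |eigenvalue| is cot(pi/4n), confirming (5.13)
assert abs(np.linalg.norm(Mt, ord=2) - 1 / np.tan(np.pi / (4 * n))) < 1e-9
```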
Our second proof relies on the following known result (see [5, p. 272], and references therein; this result is called the Eneström–Kakeya theorem; see also [14, problem 22 on pp. 107 and 301], who also mention Hurwitz):
Lemma 5.3. Suppose

  0 < a_0 < a_1 < ⋯ < a_n.   (5.19)

Then

  P(z) = a_0 + a_1 z + ⋯ + a_n z^n   (5.20)

has all its zeros in D.
Theorem 5.4. Let

  S^{(n)}(z) = Σ_{j=0}^{n−1} sin((2j + 1)π/4n) z^j,   (5.21)

  C^{(n)}(z) = Σ_{j=0}^{n−1} cos((2j + 1)π/4n) z^j.   (5.22)

Then

  b^{(n)}(z) = S^{(n)}(z)/C^{(n)}(z)   (5.23)

is a Blaschke product of order n − 1. Moreover,

  cot(π/4n) b^{(n)}(z) = 1 + 2 Σ_{j=1}^{n−1} z^j + O(z^n)   (5.24)

and

  ‖M_n‖ = cot(π/4n).   (5.25)

Proof. The coefficients of S^{(n)} obey (5.19) so, by the lemma, S^{(n)} has all its zeros in D. Moreover, by (5.18), C^{(n)}(z) = z^{n−1} S^{(n)}(1/z̄)̄, which implies (5.23) is a Blaschke product.

Eq. (5.24) is just a translation of (5.3). Eq. (5.24) implies (5.25) by Theorem 3.3. ∎
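Both halves of this proof are easy to test numerically (our sketch): the zeros of S^{(n)} lie in D, and cot(π/4n) S^{(n)}/C^{(n)} has Taylor coefficients 1, 2, 2, …:

```python
import numpy as np

n = 6
j = np.arange(n)
S = np.sin((2 * j + 1) * np.pi / (4 * n))   # coefficients of S^(n), (5.21)
C = np.cos((2 * j + 1) * np.pi / (4 * n))   # coefficients of C^(n), (5.22)

# coefficients of S^(n) increase, so by Enestrom-Kakeya all zeros lie in D
assert all(abs(z) < 1 for z in np.roots(S[::-1]))

# (5.24): Taylor series of S/C, computed by solving the triangular convolution system
q = np.zeros(n)
for k in range(n):
    q[k] = (S[k] - np.dot(q[:k], C[1:k + 1][::-1])) / C[0]
taylor = q / np.tan(np.pi / (4 * n))
assert np.allclose(taylor, [1] + [2] * (n - 1))
```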
6. Some remarks and extensions

In this section, we make some remarks that shed light on or extend Theorem 1, our main result.

6.1. An alternate proof

We give a simple proof of a weakened version of Theorem 4 which suffices for applications like those in Section 7. This argument is related to ones in Section 3 of Nikolski [11].

Theorem 6.1. If ‖A‖ ≤ 1 and 1 ∉ spec(A), then

  dist(1, spec(A)) ‖(1 − A)^{−1}‖ ≤ 2m,   (6.1)

where m is the degree of the minimal polynomial for A.
Proof. We prove the result for ‖A‖ < 1. The general result follows by taking limits. We make repeated use of Lemma 3.1, which implies that if, for z ∈ D, we define

  B(z) = ((A − z)/(1 − z̄A)) ((1 − z̄)/(1 − z))   (6.2)

then

  ‖B(z)‖ ≤ 1.   (6.3)

By algebra,

  (1 − x)^{−1} [1 − ((x − z)/(1 − z̄x))((1 − z̄)/(1 − z))] = (1/(1 − z)) [1 + z̄ ((x − z)/(1 − z̄x))]   (6.4)

so, by Lemma 3.1 again,

  ‖(1 − A)^{−1} (1 − B(z))‖ ≤ |1 − z|^{−1} (1 + |z|).   (6.5)

Now let Π_{j=1}^m (x − z_j) be the minimal polynomial for A. Then

  Π_{j=1}^m B(z_j) = 0

so

  (1 − A)^{−1} = (1 − A)^{−1} [1 − Π_{j=1}^m B(z_j)]
              = Σ_{j=1}^m (1 − A)^{−1} [1 − B(z_j)] Π_{k=j+1}^m B(z_k)   (6.6)

(the empty product for j = m is interpreted as the identity operator) which, by (6.3) and (6.5), implies

  LHS of (6.1) ≤ Σ_{j=1}^m dist(1, spec(A)) |1 − z_j|^{−1} (1 + |z_j|) ≤ 2m

since 1 + |z_j| ≤ 2 and z_j ∈ spec(A) so dist(1, spec(A)) |1 − z_j|^{−1} ≤ 1. ∎

Remarks. (1) The factor (1 − z̄)/(1 − z) is taken in (6.2) so that f_z(x) = ((x − z)/(1 − z̄x))((1 − z̄)/(1 − z)) has 1 − f_z(1) = 0.

(2) In place of the algebra (6.4), one can compute that the sup over |x| < 1 of the LHS of (6.4) is |1 − z|^{−1} [1 + |z|] and use von Neumann's theorem as discussed in Section 6.5.
6.2. Minimal polynomials

While the constant 2 in (6.1) is worse than 4/π in (1.19)/(1.21), (6.1) appears to be stronger in that m, not n, appears, but we can also strengthen (1.19) in this way:

Theorem 6.2. If ‖A‖ ≤ 1, 1 ∉ spec(A), and m is the degree of the minimal polynomial for A, then

  dist(1, spec(A)) ‖(1 − A)^{−1}‖ ≤ cot(π/4m).   (6.7)

Proof. Let ‖y‖ = 1. Since A^m y is a linear combination of {A^j y}_{j=0}^{m−1}, the cyclic subspace, V_y, has dim(V_y) ≡ m_y ≤ m. Since A ↾ V_y is an operator on a space of dimension m_y, we have

  dist(1, spec(A)) ‖(1 − A)^{−1} y‖ ≤ c(m_y) = cot(π/4m_y) ≤ cot(π/4m). ∎
6.3. Numerical range

For any bounded operator, A, on a Hilbert space, the numerical range, Num(A), is defined by

  Num(A) = {⟨φ, Aφ⟩ | ‖φ‖ = 1}.   (6.8)

It is a bounded convex set (see [3, p. 150]), and when A is a finite matrix, also closed. Theorem 1 can be improved to read:

Theorem 6.3. Let 𝓜̃_n be the set of pairs (A, z) where A is an n × n matrix, z ∈ ℂ with

  z ∉ spec(A),  z ∉ Num(A)^{int}.   (6.9)

Then

  sup_{𝓜̃_n} dist(z, spec(A)) ‖(A − z)^{−1}‖ = cot(π/4n).   (6.10)

Remarks. (1) Since Num(A) ⊆ {z | |z| ≤ ‖A‖}, 𝓜_n ⊆ 𝓜̃_n, and this is a strict improvement of (1.19).

(2) We need only prove

  dist(z, spec(A)) ‖(A − z)^{−1}‖ ≤ cot(π/4n)

since the equality then follows from 𝓜_n ⊆ 𝓜̃_n.

(3) By replacing A by e^{iθ}(A − z) for suitable θ and z, we need only prove

  Re(A) ≥ 0, 0 ∉ spec(A) ⇒ dist(0, spec(A)) ‖A^{−1}‖ ≤ cot(π/4n)   (6.11)

for by convexity of Num(A), if z ∉ Num(A)^{int}, there is a half-plane, P, with Num(A) ⊆ P and z ∈ ∂P. It is (6.11) we will prove below.
First Proof of Theorem 6.3. Let

  C = A^{−1} + (A*)^{−1}   (6.12)
    = (A*)^{−1} (2 Re A) A^{−1} ≥ 0.   (6.13)

Thus,

  |C_{jk}| ≤ |C_{jj}|^{1/2} |C_{kk}|^{1/2}.   (6.14)

Now just follow the proof of Theorem 4 in Section 2. ∎
Second Proof of Theorem 6.3. We use Cayley transforms. For 0 < s, define

  B(s) = (1 − sA)(1 + sA)^{−1}.   (6.15)

Since

  ‖(1 + sA)φ‖² − ‖(1 − sA)φ‖² = 4s Re⟨φ, Aφ⟩ ≥ 0

we have that

  ‖B(s)‖ ≤ 1.   (6.16)

Because

  1 − B(s) = 2sA(1 + sA)^{−1}   (6.17)

we have for s small that

  dist(1, spec(B(s))) = 2s dist(0, spec(A)) + O(s²).   (6.18)

Thus, by Theorem 1,

  2s dist(0, spec(A)) ‖(1 − B(s))^{−1}‖ ≤ cot(π/4n) + O(s).   (6.19)

By (6.17),

  (1 − B(s))^{−1} = (2s)^{−1} [A^{−1} + s]

so

  ‖A^{−1}‖ ≤ |s| + 2s ‖(1 − B(s))^{−1}‖.   (6.20)

This plus (6.19) implies (6.11) as s → 0. ∎
6.4. Bounded powers

We note that there is also a result if

  sup_{m≥0} ‖A^m‖ = c < ∞.   (6.21)

We suspect the 3/2 power in the following is not optimal. We note that one can also use this method if ‖A^m‖ is polynomially bounded in m.

Theorem 6.4. If (6.21) holds, then

  ‖(1 − A)^{−1}‖ ≤ c (3n)^{3/2} dist(1, spec(A))^{−3/2}.   (6.22)
Proof. By the argument of Section 1 (using (1.11)), this is equivalent to

  dist(1, spec(A)) ≤ 3n (c ‖(1 − A)y‖)^{2/3}   (6.23)

for all unit vectors y.

Define for 1 < r,

  ⟨f, g⟩_r = Σ_{m=0}^∞ r^{−2m} ⟨A^m f, A^m g⟩.   (6.24)

By (6.21),

  ‖f‖ ≤ ‖f‖_r ≤ c r (r² − 1)^{−1/2} ‖f‖.   (6.25)

By (6.24),

  ‖Af‖_r² ≤ r² ‖f‖_r²   (6.26)

so

  ‖A‖_r ≤ r   (6.27)

so if C = r^{−1}A, then

  ‖C‖_r ≤ 1.   (6.28)

Clearly, for ‖y‖ = 1 ≤ ‖y‖_r,

  ‖(C − 1)y‖_r ≤ |r^{−1} − 1| ‖y‖_r + r^{−1} ‖(A − 1)y‖_r
              ≤ |r^{−1} − 1| ‖y‖_r + c (r² − 1)^{−1/2} ‖(A − 1)y‖
              ≤ ((r − 1) + c [2(r − 1)]^{−1/2} ‖(A − 1)y‖) ‖y‖_r.   (6.29)

It follows by Theorem 1 and the fact that spec(A) is independent of ⟨·,·⟩_r that

  dist(1, r^{−1} spec(A)) ≤ (4n/π) {c ‖(A − 1)y‖ (2(r − 1))^{−1/2} + (r − 1)}   (6.30)

and thus

  dist(1, spec(A)) ≤ (r − 1) + (4n/π) {c ‖(A − 1)y‖ (2(r − 1))^{−1/2} + (r − 1)}.   (6.31)

Choosing r = 1 + ½ (c ‖(A − 1)y‖)^{2/3} and using ½ + 6n/π ≤ 3n, we obtain (6.23). ∎
6.5. von Neumann's theorem

Lemma 3.1 is a special case of a theorem of von Neumann. The now standard proof of this result uses Nagy dilations [23]; we have found a simple alternative that relies on

Lemma 6.5. For any A with ‖A‖ < 1 and A = U|A|, U unitary, there exists an operator-valued function, g, analytic in a neighborhood of D̄ so that g(e^{iθ}) is unitary and g(0) = A.

Proof. Let

  g(z) = U [(z + |A|)(1 + z|A|)^{−1}].   (6.32)

The factor in [⋯] is unitary if z = e^{iθ}, since

  (e^{iθ} + |A|)*(e^{iθ} + |A|) = 1 + A*A + 2 cos θ |A| = (1 + e^{iθ}|A|)*(1 + e^{iθ}|A|). ∎
Theorem 6.6 (von Neumann [25]). Let f : D → D̄. If ‖A‖ < 1, define f(A) by

  f(z) = Σ_{n=0}^∞ a_n z^n,  f(A) ≡ Σ_{n=0}^∞ a_n A^n.   (6.33)

Then

  ‖f(A)‖ ≤ 1.   (6.34)

Proof of von Neumann's theorem, given the lemma. Suppose first that A obeys the hypotheses of the lemma. By a limiting argument, suppose f is analytic in a neighborhood of D̄. Applying the maximum principle to f(g(z)), we see

  ‖f(A)‖ = ‖f(g(0))‖ ≤ sup_θ ‖f(g(e^{iθ}))‖ ≤ sup_θ |f(e^{iθ})| ≤ 1,   (6.35)

where (6.35) uses the spectral theorem for the unitary g(e^{iθ}).

For general A, if Ã = A ⊕ 0 on H ⊕ H, then Ã = U|Ã| with U unitary and we obtain ‖f(Ã)‖ ≤ 1. But f(Ã) = f(A) ⊕ f(0). ∎

Remarks. (1) In general, A = V|A| with V a partial isometry. We can extend this to a unitary U so long as dim(Ran(V)^⊥) = dim(ker(V)). This is automatic in the finite-dimensional case and also if dim(H) = ∞ for A ⊕ 0 since then both spaces are infinite-dimensional.

(2) This proof is close to one of Nelson [9] who also uses the maximum principle and polar decomposition, but uses a different method for interpolating the self-adjoint part (see also [10]).
7. Zeros of random OPUC

In this section, we apply Theorem 1 to obtain results on certain OPUC. We begin by recalling the recursion relations for OPUC [17–19]. For each non-trivial probability measure, dμ, on ∂D, there is a sequence of complex numbers, {α_n(dμ)}_{n=0}^∞, called Verblunsky coefficients, so that

  Φ_{n+1}(z) = z Φ_n(z) − ᾱ_n Φ_n*(z),   (7.1)

where

  Φ_n*(z) = z^n Φ̄_n(1/z̄).   (7.2)

The α_n obey |α_n| < 1 and Verblunsky's theorem [17,19] says that μ ↦ {α_n(dμ)}_{n=0}^∞ is a bicontinuous bijection from the non-trivial measures on ∂D with the topology of vague convergence to D^∞ with the product topology.
For each ρ ∈ (0, 1), we define the ρ-model to be the set of random Verblunsky coefficients where the α_n are independent, identically distributed random variables, each uniformly distributed in {z | |z| ≤ ρ}. A point in the model space of α's will be denoted ω; Φ_n(z; ω) will be the corresponding OPUC and {z_j^{(n)}(ω)}_{j=1}^n the zeros of Φ_n counting multiplicity. Our results here depend heavily on earlier results of Stoiciu [20,21], who studied a closely related problem (see below). In turn, Stoiciu relied, in part, on earlier work on eigenvalues of random Schrödinger operators [7,6].
We will prove the following three theorems:

Theorem 7.1. Let 0 < ρ < 1. Let k ∈ {1, 2, …}. Then for a.e. ω in the ρ-model,

  limsup_{n→∞} #{j | |z_j^{(n)}(ω)| < 1 − n^{−k}} / [log(n)]² < ∞.   (7.3)
Thus, the overwhelming bulk of zeros are polynomially close to ∂D. If we look at a small slice of the argument, we can say more:

Theorem 7.2. Let 0 < ρ < 1. Let θ_0 ∈ [0, 2π) and a < b real. Let p < 1. Then with probability 1, for large n, there are no zeros in {z | arg z ∈ (θ_0 + 2πa/n, θ_0 + 2πb/n); |z| < 1 − exp(−n^p)}.
Finally and most importantly, we can describe the statistical distribution of the arguments:

Theorem 7.3. Let 0 < ρ < 1. Let θ_0 ∈ [0, 2π). Let a_1 < b_1 ≤ a_2 < b_2 ≤ ⋯ ≤ a_ℓ < b_ℓ and let k_1, …, k_ℓ be in {0, 1, 2, …}. Then as n → ∞,

  Prob( #{j | arg z_j^{(n)}(ω) ∈ (θ_0 + 2πa_m/n, θ_0 + 2πb_m/n)} = k_m for m = 1, …, ℓ )   (7.4)

converges to

  Π_{m=1}^ℓ [(b_m − a_m)^{k_m} / k_m!] e^{−(b_m − a_m)}.   (7.5)

This says the zeros are asymptotically Poisson distributed. As we stated, our proofs rely on ideas of Stoiciu, essentially using Theorem 1 to complete his program. To state the results of his that we use, we need a definition.
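Theorem 7.3 can be illustrated by simulation. The sketch below (ours, with an arbitrary seed, radius, and window) counts zeros in an arc of length 2π/n, which should be approximately Poisson with mean 1:

```python
import numpy as np

def monic_opuc(alphas):
    # Szego recursion (7.1): Phi_{n+1} = z Phi_n - conj(alpha_n) Phi_n*
    phi = np.array([1.0 + 0j])
    for a in alphas:
        phi = np.concatenate(([0], phi)) - np.conj(a) * np.concatenate((np.conj(phi[::-1]), [0]))
    return phi

rng = np.random.default_rng(3)
rho, n, trials = 0.5, 40, 200
counts = []
for _ in range(trials):
    alphas = rho * np.sqrt(rng.uniform(size=n)) * np.exp(2j * np.pi * rng.uniform(size=n))
    zeros = np.roots(monic_opuc(alphas)[::-1])
    # count zeros with argument in a window of length 2*pi/n (mean 1 in the Poisson limit)
    args = np.mod(np.angle(zeros), 2 * np.pi)
    counts.append(np.sum(args < 2 * np.pi / n))
mean = np.mean(counts)
assert 0.5 < mean < 1.5   # consistent with the unit-rate Poisson limit of Theorem 7.3
```

By rotation invariance of the ρ-model, the expected count per window is exactly 1 for every n; the Poisson form (7.5) is the n → ∞ statement.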
For β ∈ ∂D, the paraorthogonal polynomials (POPUC) are defined by

  Φ_n^{(β)}(z) = z Φ_{n−1}(z) − β̄ Φ*_{n−1}(z).   (7.6)

These have zeros on ∂D. Indeed, they are eigenvalues of a rank one unitary perturbation of the operator A_n of (1.6). We extend the ρ-model to include an additional set of independent parameters {β_j}_{j=0}^∞ in ∂D, each uniformly distributed on ∂D. z̃_j^{(n)}(ω) denotes the zeros of Φ_n^{(β_n)}(z; ω). Stoiciu [20,21] completely analyzed these POPUC zeros. We will need three of his results:
Theorem 7.4 (=Theorem 6.1.3 of [21] = Theorem 6.3 of [20]). Let I be an interval in ∂D. Then

  Prob(2 or more z̃_j^{(n)}(ω) lie in I) ≤ ½ (n|I|/2π)²,   (7.7)

where |I| is the dθ measure of I.
For the next theorem, we need the fact that there is an explicit realization of A_n and the associated rank one perturbations as n × n complex CMV matrices (see [2,17–19]), C_n, whose eigenvalues are the z_j^{(n)}, and C̃_n^{(β_n)}, whose eigenvalues are the z̃_j^{(n)}, so that

  ‖C_n − C̃_n^{(β_n)}‖ ≤ |α_{n−1}| + |β_n|.   (7.8)

The next theorem uses the components in the basis for which (7.8) holds.
Theorem 7.5 (=Theorem 1.1.2 of [21] = Theorem 2.2 of [20]). There exists a constant D_2 (depending only on ρ) so that for every eigenvector φ^{(j,ω;n)} of C̃_n^{(β_n)}, we have for

  |m − m(φ^{(j,ω;n)})| ≥ D_2 (log n)   (7.9)

that

  |φ_m^{(j,ω;n)}| ≤ C_ω e^{−4|m − m(φ^{(j,ω;n)})|/D_2},   (7.10)

where C_ω is an a.e. finite constant and

  m(φ) = first k so that |φ_k| = max_m |φ_m|.   (7.11)
We will also need the results that Stoiciu proves along the way: that, for each $C_0$,
$$\Omega_{C_0} \equiv \{\omega \mid C_\omega < C_0\} \quad (7.12)$$
is invariant under rotation of the measures $d\mu_\omega$, and that, for each $C_0$ fixed and all $\omega \in \Omega_{C_0}$,
$$\#\{j \mid m(\varphi^{(j,\omega;n)}) = m_0\} \le D_3 (\log n), \quad (7.13)$$
where $D_3$ is only $C_0$-dependent and is independent of $\omega$, $m_0$, and $n$. (7.13) comes from the fact that, by (7.10), for $D_3$ only depending on $C_0$,
$$\sum_{|m - m(\varphi)| \ge \frac{1}{4} D_3 (\log n)} |\varphi_m|^2 \le \frac{1}{2} \quad (7.14)$$
so, by (7.11), for $\varphi$'s with $m(\varphi) = m_0$,
$$\tfrac{1}{2}\, D_3 (\log n)\, |\varphi_{m_0}|^2 \ge \tfrac{1}{2} \quad (7.15)$$
which, given
$$\sum_j |\varphi_{m_0}^{(j,\omega;n)}|^2 = 1 \quad (7.16)$$
implies (7.13).
The last of Stoiciu's results we will need is

Theorem 7.6 (=Theorem 1.0.6 of [21] = Theorem 1.1 of [20]). For $\theta_0 \in [0, 2\pi)$ and $a_1 < b_1 \le a_2 < b_2 \le \cdots \le a_\ell < b_\ell$ and $k_1, \ldots, k_\ell$ in $\{0, 1, 2, \ldots\}$, we have, as $n \to \infty$, that (7.4) with $z_j^{(n)}$ replaced by $\tilde z_j^{(n)}$ converges to (7.5).
With this background out of the way, we begin the proofs of the new Theorems 7.1–7.3 with

Theorem 7.7. Fix $\rho \in (0, 1)$. Then for a.e. $\omega$, there exists $N_\omega$ so that if $n \ge N_\omega$, then
$$\min_{j \ne k} |\tilde z_j^{(n)} - \tilde z_k^{(n)}| \ge 2n^{-4}. \quad (7.17)$$

Remark. $n^{-3}$ will work in place of $n^{-4}$.
Proof. For each $n$, cover $\partial\mathbb{D}$ by two sets of intervals of size $4n^{-4}$: one set non-overlapping, except at the ends, starting with $[0, 4n^{-4}]$, and the other set starting with $[2n^{-4}, 6n^{-4}]$. If (7.17) fails for some $n$, then there are two zeros within one of these intervals. By (7.7), the probability of two zeros in one of these intervals is $O((nn^{-4})^2) = O(n^{-6})$. The number of intervals at order $n$ is $O(n^4)$. Since $\sum_{n=1}^{\infty} n^4 n^{-6} < \infty$, the probability that some interval at order $n$ contains two zeros is summable in $n$. By the Borel–Cantelli lemma [22], for a.e. $\omega$, only finitely many intervals have two zeros. Hence, for large $n$, (7.17) holds. $\square$
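The summability step is elementary arithmetic, and a quick numerical check (purely illustrative; the $O(\cdot)$ constants are schematic) confirms it: the series is just $\sum_n n^{-2}$, which converges to $\pi^2/6$.

```python
import math

# The Borel-Cantelli step needs
#   sum_n (number of intervals) * (probability per interval)
#   = sum_n O(n^4) * O(n^(-6)) = O(sum_n n^(-2)) < infinity.
partial = sum(n ** 4 * n ** (-6) for n in range(1, 200000))
```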
Proof of Theorem 7.1. Obviously, if (7.3) holds for some $k$, it holds for all smaller $k$, so we will prove it for $k \ge 4$. We also need only prove it on any $\Omega_{C_0}$ given by (7.12), since $\bigcup_{C_0} \Omega_{C_0}$ has probability 1 by Theorem 7.5. Consider those $\varphi^{(j,\omega;n)}$ with
$$|m(\varphi^{(j,\omega;n)}) - n| \ge K(\log n). \quad (7.18)$$
By (7.13), the number of $j$ for which (7.18) fails is $O((\log n)^2)$.
By (7.10) and (7.8) and the fact that $\varphi^{(j,\omega;n)}$ is a unit eigenfunction, we have
$$\|(\mathcal{C}_n - \tilde z_j^{(n)})\varphi^{(j,\omega;n)}\| \le 2C_\omega n^{-4K/D_2} \quad (7.19)$$
so, picking $K$ large enough and $n$ large enough that $4\pi^{-1}\, 2C_\omega n^{-1} < 1$, we have
$$\|(\mathcal{C}_n - \tilde z_j^{(n)})\varphi^{(j,\omega;n)}\| \le \frac{\pi}{4n}\, n^{-k}. \quad (7.20)$$
Thus, by Theorem 1 and $\|\mathcal{C}_n\| = 1 = |\tilde z_j^{(n)}|$, we see that for each $j$ obeying (7.18), there is a $z_j^{(n)}$ so
$$|z_j^{(n)} - \tilde z_j^{(n)}| \le n^{-k}. \quad (7.21)$$
By Theorem 7.7 and $k \ge 4$, the $z_j^{(n)}$ are distinct for $n$ large, so we have $n - O((\log n)^2)$ zeros with $|z_j^{(n)}| \ge 1 - n^{-k}$. This is (7.3). $\square$
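The step from (7.19) to (7.21) is just Theorem 1 in contrapositive form: if $\|(A - z)\varphi\| \le \varepsilon$ for a unit vector $\varphi$ and $|z| \ge \|A\|$, then $\operatorname{dist}(z, \operatorname{spec}(A)) \le \cot(\frac{\pi}{4n})\,\varepsilon$. A hedged numerical sketch of both forms of the bound, on random data of our choosing:

```python
import numpy as np

# Theorem 1: for an n x n matrix A and |z| >= ||A||,
#   ||(z - A)^{-1}|| <= cot(pi/(4n)) / dist(z, spec(A)).
# Equivalently, a unit vector phi with small residual ||(A - z)phi||
# forces an eigenvalue of A within cot(pi/(4n)) * residual of z.
rng = np.random.default_rng(2)
n = 8
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = A / np.linalg.norm(A, 2)                  # normalize so ||A|| = 1
z = np.exp(0.3j)                              # |z| = 1 >= ||A||

cot = 1.0 / np.tan(np.pi / (4 * n))
dist = np.min(np.abs(z - np.linalg.eigvals(A)))
resolvent_norm = np.linalg.norm(np.linalg.inv(z * np.eye(n) - A), 2)

phi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = phi / np.linalg.norm(phi)
residual = np.linalg.norm((A - z * np.eye(n)) @ phi)
```

Both inequalities hold with room to spare here; the point of Theorem 1 is that $\cot(\frac{\pi}{4n}) \approx 4n/\pi$ is the sharp uniform constant over all contractions.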
Proof of Theorem 7.2. In place of (7.18), we look for $\varphi$'s so
$$|m(\varphi^{(j,\omega;n)}) - n| \ge \frac{D_2}{2}\, n^{p}. \quad (7.22)$$
For such $j$'s, using the above arguments, there are zeros $z_j^{(n)}$ with
$$|z_j^{(n)} - \tilde z_j^{(n)}| \le C_\omega \exp(-2n^{p}). \quad (7.23)$$
As in Stoiciu [20,21], the distribution of the $z_j^{(n)}$ for which (7.22) fails is rotation invariant. Since their number is $O(n^{p} \log n)$ out of $O(n)$ zeros, the probability of any of these bad zeros lying in $\{z \mid \arg z \in (\theta_0 + 2\pi a/n,\ \theta_0 + 2\pi b/n)\}$ goes to zero as $n \to \infty$. $\square$
Proof of Theorem 7.3. By the last proof, the zeros of $\Phi_n$ with the given arguments lie within $O(e^{-n^p})$ of those of $\Phi_n^{(\beta_n)}$ and, by Theorem 7.7, these zeros are distinct. Theorem 7.6 completes the proof if one gets upper and lower bounds by slightly increasing/decreasing the intervals on an $O(1/n)$ scale. $\square$
We close with a remark about improving these theorems. While (7.13) is the best one can hope for as a uniform bound, with overwhelming probability the number should be bounded. Thus, we expect in Theorem 7.1 that one can obtain $O(\log n)$ in place of $O((\log n)^2)$. It is possible in Theorem 7.2 that one can improve $O(e^{-n^p})$ for all $p < 1$ to $O(e^{-An})$ for some $A$.
Acknowledgments
This work was done while B. Simon was a visitor at King's College London. He would like to thank A.N. Pressley and E.B. Davies for the hospitality of King's College, and the London Mathematical Society for partial support. The calculations of M. Stoiciu [20,21] were an inspiration for our pursuing the estimate we found. We appreciate useful correspondence/discussions with M. Haase, N. Higham, R. Nagel, N.K. Nikolski, V. Totik, and L.N. Trefethen.
References
[1] A. Böttcher, B. Silbermann, Analysis of Toeplitz Operators, Springer, Berlin, 1990.
[2] M.J. Cantero, L. Moral, L. Velázquez, Five-diagonal matrices and zeros of orthogonal polynomials on the unit circle, Linear Algebra Appl. 362 (2003) 29–56.
[3] E.B. Davies, One-Parameter Semigroups, London Mathematical Society Monographs, vol. 15, Academic Press, London, New York, 1980.
[4] P. Henrici, Bounds for iterates, inverses, spectral variation and fields of values of non-normal matrices, Numer. Math. 4 (1962) 24–40.
[5] G.V. Milovanović, D.S. Mitrinović, Th.M. Rassias, Topics in Polynomials: Extremal Problems, Inequalities, Zeros, World Scientific Publishing, River Edge, NJ, 1994.
[6] N. Minami, Local fluctuation of the spectrum of a multidimensional Anderson tight binding model, Comm. Math. Phys. 177 (1996) 709–725.
[7] S.A. Molchanov, The local structure of the spectrum of the one-dimensional Schrödinger operator, Comm. Math. Phys. 78 (1980/81) 429–446.
[8] Z. Nehari, On bounded bilinear forms, Ann. Math. 65 (1957) 153–162.
[9] E. Nelson, The distinguished boundary of the unit operator ball, Proc. Amer. Math. Soc. 12 (1961) 994–995.
[10] N.K. Nikolski, Operators, Functions, and Systems: An Easy Reading, vol. 2: Model Operators and Systems, Mathematical Surveys and Monographs, vol. 93, American Mathematical Society, Providence, RI, 2002.
[11] N.K. Nikolski, Condition numbers of large matrices, and analytic capacities, St. Petersburg Math. J., to appear.
[12] J.R. Partington, An Introduction to Hankel Operators, London Mathematical Society Student Texts, vol. 13, Cambridge University Press, Cambridge, 1988.
[13] V.V. Peller, Hankel Operators and Their Applications, Springer Monographs in Mathematics, Springer, New York, 2003.
[14] G. Pólya, G. Szegő, Problems and Theorems in Analysis. I, reprint of the 1978 English translation, Classics in Mathematics, Springer, Berlin, 1998.
[15] Pseudospectra Gateway, https://web.comlab.ox.ac.uk/projects/pseudospectra/.
[16] I. Schur, Über Potenzreihen, die im Innern des Einheitskreises beschränkt sind, I, II, J. Reine Angew. Math. 147 (1917) 205–232; 148 (1918) 122–145; English translation in: I. Gohberg (Ed.), I. Schur Methods in Operator Theory and Signal Processing, pp. 31–59, 66–88, Operator Theory: Advances and Applications, vol. 18, Birkhäuser, Basel, 1986.
[17] B. Simon, Orthogonal Polynomials on the Unit Circle, Part 1: Classical Theory, AMS Colloquium Series, American Mathematical Society, Providence, RI, 2005.
[18] B. Simon, Orthogonal Polynomials on the Unit Circle, Part 2: Spectral Theory, AMS Colloquium Series, American Mathematical Society, Providence, RI, 2005.
[19] B. Simon, OPUC on one foot, Bull. Amer. Math. Soc. 42 (2005) 431–460.
[20] M. Stoiciu, The statistical distribution of the zeros of random paraorthogonal polynomials on the unit circle, J. Approx. Theory 139 (2006) 29–64.
[21] M. Stoiciu, Zeros of Random Orthogonal Polynomials on the Unit Circle, Ph.D. dissertation, 2005, https://etd.caltech.edu/etd/available/etd-05272005-110242/.
[22] D. Stroock, A Concise Introduction to the Theory of Integration, Series in Pure Mathematics, vol. 12, World Scientific Publishing, River Edge, NJ, 1990.
[23] B. Sz.-Nagy, C. Foias, Harmonic Analysis of Operators on Hilbert Space, North-Holland Publishing, Amsterdam, London; American Elsevier Publishing, New York; Akadémiai Kiadó, Budapest, 1970.
[24] L.N. Trefethen, M. Embree, Spectra and Pseudospectra: The Behavior of Non-normal Matrices and Operators, Princeton University Press, Princeton, NJ, 2005.
[25] J. von Neumann, Eine Spektraltheorie für allgemeine Operatoren eines unitären Raumes, Math. Nachr. 4 (1951) 258–281.