Operator Algebras
John M. Erdman
Portland State University
Chapter 5. C ∗ -ALGEBRAS
5.1. Adjoints of Hilbert Space Operators
5.2. Algebras with Involution
5.3. C ∗ -Algebras
5.4. The Gelfand-Naimark Theorem—Version I
CHAPTER 1

LINEAR ALGEBRA AND THE SPECTRAL THEOREM
1.1.18. Example. Let P be the plane in R3 whose equation is x − z = 0 and L be the line whose equations are y = 0 and x = −z. Let E be the projection of R3 along L onto P and F be the projection of R3 along P onto L. Then
[E] = \begin{pmatrix} \frac{1}{2} & 0 & \frac{1}{2} \\ 0 & 1 & 0 \\ \frac{1}{2} & 0 & \frac{1}{2} \end{pmatrix} \qquad and \qquad [F] = \begin{pmatrix} \frac{1}{2} & 0 & -\frac{1}{2} \\ 0 & 0 & 0 \\ -\frac{1}{2} & 0 & \frac{1}{2} \end{pmatrix}.
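These two matrices can be checked numerically. The following is a minimal sketch (not part of the text, assuming numpy is available; all names are ad hoc): it builds E by expressing vectors in a basis consisting of a basis of P followed by a basis of L and discarding the L-component.

```python
import numpy as np

# Columns: a basis of the plane P = {x = z}, then a basis of the line L = {y = 0, x = -z}.
B = np.column_stack([(1, 0, 1), (0, 1, 0), (1, 0, -1)]).astype(float)

keep_P = np.diag([1.0, 1.0, 0.0])          # keep the P-coordinates, drop the L-coordinate
E = B @ keep_P @ np.linalg.inv(B)          # projection of R^3 along L onto P
F = np.eye(3) - E                          # projection of R^3 along P onto L

print(np.round(E, 3))                      # [[0.5 0 0.5], [0 1 0], [0.5 0 0.5]]
print(np.round(F, 3))                      # [[0.5 0 -0.5], [0 0 0], [-0.5 0 0.5]]
print(np.allclose(E @ E, E), np.allclose(E @ F, np.zeros((3, 3))))   # True True
```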
1.1.19. Definition. A complex number λ is an eigenvalue of an operator T on a vector space
V if ker(T − λIV ) contains a nonzero vector. Any such vector is an eigenvector of T associated
with λ and ker(T − λIV ) is the eigenspace of T associated with λ. The set of all eigenvalues of
the operator T is its point spectrum and is denoted by σp (T ).
If M is an n × n matrix, then det(M − λIn ) (where In is the n × n identity matrix) is a
polynomial in λ of degree n. This is the characteristic polynomial of M . A standard way
of computing the eigenvalues of an operator T on a finite dimensional vector space is to find the
zeros of the characteristic polynomial of its matrix representation. It is an easy consequence of
the multiplicative property of the determinant function that the characteristic polynomial of an
operator T on a vector space V is independent of the basis chosen for V and hence of the particular
matrix representation of T that is used.
1.1.20. Example. The eigenvalues of the operator on (the real vector space) R3 whose matrix representation is
\begin{pmatrix} 0 & 0 & 2 \\ 0 & 2 & 0 \\ 2 & 0 & 0 \end{pmatrix}
are −2 and +2, the latter having (both algebraic and geometric) multiplicity 2. The eigenspace associated with the negative eigenvalue is span{(1, 0, −1)} and the eigenspace associated with the positive eigenvalue is span{(1, 0, 1), (0, 1, 0)}.
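A quick numerical confirmation of this example (an optional sketch, not from the text; it assumes numpy):

```python
import numpy as np

M = np.array([[0.0, 0.0, 2.0],
              [0.0, 2.0, 0.0],
              [2.0, 0.0, 0.0]])

# eigh applies because M is symmetric; eigenvalues come back in ascending order.
eigenvalues, eigenvectors = np.linalg.eigh(M)
print(eigenvalues)                                            # [-2.  2.  2.]
# The eigenvector for -2 is proportional to (1, 0, -1).
print(np.round(eigenvectors[:, 0] / eigenvectors[0, 0], 3))   # [ 1.  0. -1.]
```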
The central fact asserted by the finite dimensional vector space version of the spectral theorem
is that every diagonalizable operator on such a space can be written as a linear combination of
projection operators where the coefficients of the linear combination are the eigenvalues of the
operator and the ranges of the projections are the corresponding eigenspaces. Thus if T is a
diagonalizable operator on a finite dimensional vector space V , then V has a basis consisting of
eigenvectors of T .
Here is a formal statement of the theorem.
1.1.21. Theorem (Spectral Theorem: vector space version). Suppose that T is a diagonalizable
operator on a finite dimensional vector space V . Let λ1 , . . . , λn be the (distinct) eigenvalues of T .
Then there exists a resolution of the identity IV = E1 + · · · + En , where for each k the range of the
projection Ek is the eigenspace associated with λk , and furthermore
T = λ1 E1 + · · · + λn En .
Proof. A good proof of this theorem can be found in [17] on page 212.
1.1.22. Example. Let T be the operator on (the real vector space) R2 whose matrix representation is \begin{pmatrix} -7 & 8 \\ -16 & 17 \end{pmatrix}.
(a) The characteristic polynomial for T is cT (λ) = λ2 − 10λ + 9.
(b) The eigenspace M1 associated with the eigenvalue 1 is span{(1, 1)}.
(c) The eigenspace M2 associated with the eigenvalue 9 is span{(1, 2)}.
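The conclusion of theorem 1.1.21 can be watched working in this example. The sketch below (not from the text; it assumes numpy and uses ad hoc names) builds the projections E1 onto M1 along M2 and E2 onto M2 along M1 and verifies that E1 + E2 = I and T = 1·E1 + 9·E2.

```python
import numpy as np

T = np.array([[-7.0, 8.0],
              [-16.0, 17.0]])

# Columns: eigenvectors (1,1) for eigenvalue 1 and (1,2) for eigenvalue 9.
B = np.array([[1.0, 1.0],
              [1.0, 2.0]])
Binv = np.linalg.inv(B)

E1 = B @ np.diag([1.0, 0.0]) @ Binv      # projection onto M1 along M2
E2 = B @ np.diag([0.0, 1.0]) @ Binv      # projection onto M2 along M1

print(np.allclose(E1 + E2, np.eye(2)))   # resolution of the identity
print(np.allclose(1 * E1 + 9 * E2, T))   # T = lambda_1 E1 + lambda_2 E2
```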
Then l2 is an inner product space. (It must be shown, among other things, that the series in the
preceding definition actually converges.)
1.2.6. Example. For a < b let C([a, b], C) be the family of all continuous complex valued functions on the interval [a, b]. For every f, g ∈ C([a, b]) define
\langle f, g \rangle = \int_a^b f(x)\overline{g(x)}\, dx.
Then C([a, b]) is an inner product space.
1.2.7. Theorem. In every inner product space the Schwarz inequality
|\langle x, y\rangle| \le \|x\|\,\|y\|
holds for all vectors x and y.
1.2.8. Proposition. If (xn ) is a sequence in an inner product space V which converges to a vector
a ∈ V , then hxn , yi → ha, yi for every y ∈ V .
1.2.9. Definition. Let V be a vector space. A function k k : V → R : x 7→ kxk is a norm on V if
(i) kx + yk ≤ kxk + kyk for all x, y ∈ V ;
(ii) kαxk = |α| kxk for all x ∈ V and α ∈ R; and
(iii) if kxk = 0, then x = 0.
The expression kxk may be read as “the norm of x” or “the length of x”. If the function k k
satisfies (i) and (ii) above (but perhaps not (iii)) it is a seminorm on V .
A vector space on which a norm has been defined is a normed linear space (or normed
vector space). A vector in a normed linear space which has norm 1 is a unit vector.
1.2.10. Proposition. If k k is a norm (or a seminorm) on a vector space V , then kxk ≥ 0 for
every x ∈ V and k0k = 0.
Every inner product space is a normed linear space.
1.2.11. Proposition. Let V be an inner product space. The map x 7→ kxk defined on V in 1.2.2
is a norm on V .
Every normed linear space is a metric space. More precisely, a norm on a vector space induces
a metric d, which is defined by d(x, y) = kx − yk. That is, the distance between two vectors is the
length of their difference.
If no other metric is specified we always regard a normed linear space as a metric space under
this induced metric. Thus every metric (and hence every topological) concept makes sense in a
(semi)normed linear space.
1.2.12. Proposition. Let V be a normed linear space. Define d : V × V → R by d(x, y) = kx − yk.
Then d is a metric on V . If V is only a seminormed space, then d is a pseudometric.
When there is a topology on a vector space, in particular in normed linear spaces, we reserve
the word “operator” for those linear mappings from the space into itself which are continuous. We
are usually not made aware of this conflicting terminology in elementary linear algebra because that
subject focuses primarily on finite dimensional vector and inner product spaces where the question
is moot: on finite dimensional normed linear spaces all linear maps are automatically continuous
(see proposition 1.2.14 below).
1.2.13. Definition. An operator on a normed linear space V is a continuous linear map from
V into itself.
1.2.14. Proposition. If V and W are normed linear spaces and V is finite dimensional, then
every linear map T : V → W is continuous.
Proof. See [5], proposition III.3.4.
1.2.15. Definition. Vectors x and y in an inner product space V are orthogonal (or perpen-
dicular) if hx, yi = 0. In this case we write x ⊥ y. Subsets A and B of V are orthogonal if
a ⊥ b for every a ∈ A and b ∈ B. In this case we write A ⊥ B.
1.2.16. Definition. If M and N are subspaces of an inner product space V we use the notation
V = M ⊕ N to indicate not only that V is the (vector space) direct sum of M and N but also that
M and N are orthogonal. Thus we say that V is the (internal) orthogonal direct sum of M
and N .
1.2.17. Proposition. Let a be a vector in an inner product space V . Then a ⊥ x for every x ∈ V
if and only if a = 0.
1.2.18. Proposition (The Pythagorean theorem). If x ⊥ y in an inner product space, then
kx + yk2 = kxk2 + kyk2 .
1.2.19. Definition. Let V and W be inner product spaces. For (v, w) and (v′, w′) in V × W and α ∈ C define
(v, w) + (v′, w′) = (v + v′, w + w′)
and
α(v, w) = (αv, αw).
This results in a vector space, which is the (external) direct sum of V and W. To make it into an inner product space define
\langle (v, w), (v′, w′)\rangle = \langle v, v′\rangle + \langle w, w′\rangle.
This makes the direct sum of V and W into an inner product space. It is the (external orthogonal) direct sum of V and W and is denoted by V ⊕ W.
Notice that the same notation ⊕ is used for both internal and external direct sums and for both
vector space direct sums (see definition 1.1.8) and orthogonal direct sums. So when we see the
symbol V ⊕ W it is important to know which category we are in: vector spaces or inner product
spaces, especially as it is common practice to omit the word “orthogonal” as a modifier to “direct
sum” even in cases when it is intended.
1.2.20. Example. In R2 let M be the x-axis and L be the line whose equation is y = x. If we
think of R2 as a (real) vector space, then it is correct to write R2 = M ⊕ L. If, on the other
hand, we regard R2 as a (real) inner product space, then R2 ≠ M ⊕ L (because M and L are not
perpendicular).
1.2.21. Proposition. Let V be an inner product space. The inner product on V , regarded as a
map from V ⊕ V into C, is continuous. So is the norm, regarded as a map from V into R.
Concerning the proof of the preceding proposition, notice that the maps (v, v′) ↦ \|v\| + \|v′\|, (v, v′) ↦ \sqrt{\|v\|^2 + \|v′\|^2}, and (v, v′) ↦ max{\|v\|, \|v′\|} are all norms on V ⊕ V. Which one is induced by the inner product on V ⊕ V? Why does it not matter which one we use in proving that the inner product is continuous?
1.2.22. Proposition (The parallelogram law). If x and y are vectors in an inner product space,
then
kx + yk2 + kx − yk2 = 2kxk2 + 2kyk2 .
1.2.23. Example. Consider the space C([0, 1]) of continuous complex valued functions defined on
[0, 1]. Under the uniform norm
kf ku := sup{|f (x)| : 0 ≤ x ≤ 1}
C([0, 1]) is a normed linear space. There is no inner product on C([0, 1]) which induces this norm.
Hint for proof . Use the preceding proposition.
1.2.24. Proposition (The polarization identity). If x and y are vectors in an inner product space, then
\langle x, y\rangle = \tfrac{1}{4}\bigl(\|x + y\|^2 − \|x − y\|^2 + i\,\|x + iy\|^2 − i\,\|x − iy\|^2\bigr).
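Both the parallelogram law and the polarization identity are easy to confirm numerically for the inner product ⟨x, y⟩ = Σ x_k \overline{y_k} on C^n, which, as in these notes, is linear in the first variable. A minimal sketch, not from the text, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4) + 1j * rng.normal(size=4)
y = rng.normal(size=4) + 1j * rng.normal(size=4)

inner = lambda u, v: np.sum(u * np.conj(v))     # linear in u, conjugate linear in v
norm = lambda u: np.sqrt(inner(u, u).real)

# Parallelogram law.
print(np.isclose(norm(x + y)**2 + norm(x - y)**2, 2 * norm(x)**2 + 2 * norm(y)**2))

# Polarization identity.
polar = 0.25 * (norm(x + y)**2 - norm(x - y)**2
                + 1j * norm(x + 1j * y)**2 - 1j * norm(x - 1j * y)**2)
print(np.isclose(polar, inner(x, y)))
```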
1.2.25. Notation. Let V be an inner product space, x ∈ V , and A, B ⊆ V . If x ⊥ a for every a ∈ A, we write x ⊥ A; and if a ⊥ b for every a ∈ A and b ∈ B, we write A ⊥ B. We define A⊥, the orthogonal complement of A, to be {x ∈ V : x ⊥ A}. We write A⊥⊥ for (A⊥)⊥.
1.2.26. Proposition. If A is a subset of an inner product space V , then A⊥ is a closed linear
subspace of V .
1.2.27. Theorem (Gram-Schmidt Orthogonalization). If {v 1 , . . . , v n } is a linearly independent
subset of an inner product space V , then there exists an orthogonal set {e1 , . . . , en } of vectors such
that span{v 1 , . . . , v n } = span{e1 , . . . , en }.
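The Gram–Schmidt process is entirely constructive; a short sketch (not part of the text, written in Python with the same convention of an inner product linear in the first slot) is given below. It produces an orthogonal set with the same span as the input.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a linearly independent list of vectors in C^n:
    e_k = v_k - sum_{j<k} (<v_k, e_j> / <e_j, e_j>) e_j."""
    inner = lambda u, v: np.sum(u * np.conj(v))
    es = []
    for v in vectors:
        e = v.astype(complex)
        for prev in es:
            e = e - (inner(v, prev) / inner(prev, prev)) * prev
        es.append(e)
    return es

vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs)
# All pairwise inner products of distinct e's should vanish.
print([abs(np.sum(es[i] * np.conj(es[j]))) < 1e-12 for i in range(3) for j in range(i)])
```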
1.2.28. Corollary. If M is a subspace of a finite dimensional inner product space V then V =
M ⊕ M ⊥.
For a counterexample showing that the preceding result need not hold in an infinite dimensional
space, see example 2.1.6.
1.2.29. Definition. A linear functional on a vector space V is a linear map from V into its
scalar field. The set of all linear functionals on V is the (algebraic) dual space of V . We will
use the notation V # (rather than L(V, C)) for the algebraic dual space.
1.2.30. Theorem (Riesz-Fréchet Theorem). If f ∈ V # where V is a finite dimensional inner
product space, then there exists a unique vector a in V such that
f (x) = hx, ai
for all x in V .
We will prove shortly that every continuous linear functional on an arbitrary inner product
space has the above representation. The finite dimensional version stated here is a special case,
since every linear map on a finite dimensional inner product space is continuous.
1.2.31. Definition. Let T : V → W be a linear transformation between complex inner product
spaces. If there exists a function T ∗ : W → V which satisfies
hT v, wi = hv, T ∗ wi
for all v ∈ V and w ∈ W , then T ∗ is the adjoint (or conjugate transpose, or Hermitian
conjugate) of T .
1.2.32. Proposition. If T : V → W is a linear map between finite dimensional inner product
spaces, then T ∗ exists.
Hint for proof . The functional φ : V × W → C : (v, w) 7→ hT v, wi is sesquilinear. Fix w ∈ W
and define φw : V → C : v 7→ φ(v, w). Then φw ∈ V # . Use the Riesz-Fréchet theorem (1.2.30).
1.2.33. Proposition. If T : V → W is a linear map between finite dimensional inner product
spaces, then the function T ∗ defined above is linear and T ∗∗ = T .
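For finite dimensional spaces (with orthonormal bases) the matrix of T ∗ is the conjugate transpose of the matrix of T, which is where the alternative name comes from. A brief numerical sanity check, offered only as an optional sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))    # a map from C^2 to C^3
T_star = T.conj().T                                            # conjugate transpose

inner = lambda u, v: np.sum(u * np.conj(v))
v = rng.normal(size=2) + 1j * rng.normal(size=2)
w = rng.normal(size=3) + 1j * rng.normal(size=3)

print(np.isclose(inner(T @ v, w), inner(v, T_star @ w)))   # <Tv, w> = <v, T*w>
print(np.allclose(T_star.conj().T, T))                     # T** = T
```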
With this inner product (and the obvious pointwise vector space operations) L2 (S) is a Hilbert
space.
Here is a standard example of a space which is a Banach space but not a Hilbert space.
2.1.9. Example. As above let µ be a measure on a σ-algebra A of subsets of a set S. A function f ∈ M(S) is integrable if \int_S |f(x)|\, dµ(x) < ∞. We denote the family of (equivalence classes of) integrable functions on S by L1(S). An attempt to define an inner product on L1(S) as we did on L2(S) by setting \langle f, g\rangle = \int_S f(x)\overline{g(x)}\, dµ(x) fails. (Why?) Nevertheless, the function f ↦ \int_S |f(x)|\, dµ(x) is a norm on L1(S). It is denoted by \|\cdot\|_1. With respect to this norm (and the obvious pointwise vector space operations), L1(S) is a Banach space.
2.1.10. Example. If X is a compact Hausdorff space the family C(X) of continuous complex
valued functions on X is a Banach space under the uniform norm, which is defined by
kf ku := sup{|f (x)| : x ∈ X}.
We have seen in example 1.2.23 that this norm does not arise from an inner product.
2.1.11. Example. If X is a locally compact Hausdorff space the uniform norm may not be defined on the family C(X) of continuous complex valued functions on X. (Why not?) However, it is defined on the family Cb(X) of bounded continuous complex valued functions on X and on C0(X), the family of continuous complex valued functions on X that vanish at infinity. (A complex valued function f on X is said to vanish at infinity if for every ε > 0 there exists a compact subset K of X such that |f(x)| < ε whenever x ∉ K.) Both Cb(X) and C0(X) are Banach spaces under the uniform norm (and the obvious pointwise vector space operations).
2.1.12. Example. Let H be the set of all absolutely continuous functions f on [0, 1] such that f′ belongs to L2([0, 1]) and f(0) = 0. For f and g in H define
\langle f, g\rangle = \int_0^1 f′(t)\overline{g′(t)}\, dt.
This is an inner product on H under which H becomes a Hilbert space.
2.1.13. Convention. In the context of Hilbert (and, more generally, Banach) spaces the word
“subspace” will always mean closed vector subspace. To indicate that M is a subspace of H we
write M ≼ H. A (not necessarily closed) vector subspace of a Hilbert space is often called by other
names such as linear subspace or linear manifold.
2.1.14. Definition. Let A be a nonempty subset of a Banach space B. We define the closed linear span of A (denoted by ⋁ A) to be the intersection of the family of all subspaces of B which contain A. This is frequently referred to as the smallest subspace of B containing A.
2.1.15. Proposition. The preceding definition makes sense. It is equivalent to defining ⋁ A to be the closure of the (linear) span of A.
2.1.16. Definition. Let V be a vector space and a, b ∈ V . Then the segment between a and b,
denoted by Seg[a, b], is defined to be
{(1 − t)a + tb : 0 ≤ t ≤ 1}.
A subset C of V is convex if Seg[a, b] ⊆ C whenever a, b ∈ C.
2.1.17. Proposition. The intersection of a family of convex subsets of a vector space is convex.
2.1.18. Proposition. If T : V → W is a linear map between vector spaces and C is a convex
subset of V , then T → (C) is a convex subset of W .
2.1.19. Definition. Let a be an element of a normed linear space V and r > 0. The open
ball of radius r about a is {x ∈ V : kx − ak < r}. The closed ball of radius r about a is
{x ∈ V : kx − ak ≤ r}. And the sphere of radius r about a is {x ∈ V : kx − ak = r}.
2.1.20. Proposition. Every open ball (and every closed ball) in a normed linear space is convex.
2.1.21. Theorem (Minimizing Vector Theorem). If C is a nonempty closed convex subset of a
Hilbert space H and a ∈ C c , then there exists a unique b ∈ C such that kb − ak ≤ kx − ak for every
x ∈ C.
2.1.22. Example. The vector space R2 under the uniform metric is a Banach space. To see that
in this space the minimizing vector theorem does not hold take C to be the closed unit ball about
the origin and a to be the point (2, 0).
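The failure of uniqueness in this example is easy to see numerically: in the uniform (sup) norm on R2 every boundary point (1, t) of C with |t| ≤ 1 is at distance 1 from a = (2, 0). A small sketch (not from the text; numpy assumed, names ad hoc):

```python
import numpy as np

a = np.array([2.0, 0.0])
sup_norm = lambda v: np.max(np.abs(v))

# Several points of the closed unit ball (for the sup norm) that are all closest to a.
candidates = [np.array([1.0, t]) for t in (-1.0, -0.5, 0.0, 0.5, 1.0)]
print([sup_norm(c - a) for c in candidates])   # all equal to 1.0: no unique minimizer
```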
2.1.23. Example. The sets C1 = {f ∈ C([0, 1], R) : \int_0^{1/2} f − \int_{1/2}^1 f = 1} and C2 = {f ∈ L1([0, 1], R) : \int_0^1 f = 1} are examples that show that neither the existence nor the uniqueness claims of the minimizing vector theorem necessarily holds in a Banach space.
2.1.24. Theorem (Vector decomposition theorem). Let H be a Hilbert space, M be a subspace
of H, and x ∈ H. Then there exist unique vectors y ∈ M and z ∈ M ⊥ such that x = y + z.
2.1.25. Proposition. Let H be a Hilbert space. Then the following hold:
(a) if M ⊆ H, then M ⊆ M⊥⊥;
(b) if M ⊆ N ⊆ H, then N⊥ ⊆ M⊥;
(c) if M is a subspace of H, then M = M⊥⊥; and
(d) if M ⊆ H, then ⋁ M = M⊥⊥.
2.3. Algebras
2.3.1. Definition. A (complex) algebra is a (complex) vector space A together with a binary operation (x, y) ↦ xy, called multiplication, which satisfies
(i) (ab)c = a(bc),
(ii) (a + b)c = ac + bc,
(iii) a(b + c) = ab + ac, and
(iv) α(ab) = (αa)b = a(αb)
for all a, b, c ∈ A and α ∈ C. In other words, an algebra is a vector space which is also a ring and
satisfies (iv). If an algebra A has a multiplicative identity (or unit), that is, a nonzero element 1
(or 1A ) which satisfies
(v) 1 a = a 1 = a
for every a ∈ A, then the algebra is unital. An algebra A for which ab = ba for all a, b ∈ A is a
commutative algebra.
A map φ : A → B between algebras is an algebra homomorphism if it is linear and multi-
plicative (meaning that φ(aa0 ) = φ(a)φ(a0 ) for all a, a0 ∈ A). If the algebras A and B are unital a
homomorphism φ : A → B is unital if φ(1A ) = 1B .
A subset B of an algebra A is a subalgebra of A if it is an algebra under the operations it
inherits from A. A subalgebra B of a unital algebra A is a unital subalgebra if it contains the
multiplicative identity of A. CAUTION. To be a unital subalgebra it is not enough for B to have
a multiplicative identity of its own; it must contain the identity of A. Thus, an algebra can be both
unital and a subalgebra of A without being a unital subalgebra of A. (An example is given later
in 3.1.4.)
2.3.2. Convention. As with vector spaces, all algebras in the sequel will be assumed to be complex
algebras unless the contrary is explicitly stated.
2.3.3. Proposition. An algebra can have at most one multiplicative identity.
2.3.4. Definition. An element a of a unital algebra A is left invertible if there exists an element a_l in A (called a left inverse of a) such that a_l a = 1 and is right invertible if there exists an element a_r in A (called a right inverse of a) such that a a_r = 1. The element is invertible
if it is both left invertible and right invertible. The set of all invertible elements of A is denoted
by inv A.
An element of a unital algebra can have at most one multiplicative inverse. In fact, more is
true.
2.3.5. Proposition. If an element of a unital algebra has both a left inverse and a right inverse,
then these two inverses are equal (and so the element is invertible).
When an element a of a unital algebra is invertible its (unique) inverse is denoted by a−1 .
2.3.6. Proposition. If a is an invertible element of a unital algebra, then a−1 is also invertible and
(a^{−1})^{−1} = a.
2.3.7. Proposition. If a and b are invertible elements of a unital algebra, then their product ab is
also invertible and
(ab)−1 = b−1 a−1 .
2.3.8. Proposition. If a and b are invertible elements of a unital algebra, then
a−1 − b−1 = a−1 (b − a)b−1 .
2.3.9. Proposition. Let a and b be elements of a unital algebra. If both ab and ba are invertible,
then so are a and b.
2.4. Spectrum
2.4.1. Definition. Let a be an element of a unital algebra A. The spectrum of a, denoted by
σA (a) or just σ(a), is the set of all complex numbers λ such that a − λ1 is not invertible.
2.4.2. Example. If z is an element of the algebra C of complex numbers, then σ(z) = {z}.
2.4.3. Example. Let X be a compact Hausdorff space. If f is an element of the algebra C(X) of
continuous complex valued functions on X, then the spectrum of f is its range.
2.4.4. Example. Let X be an arbitrary topological space. If f is an element of the algebra Cb (X)
of bounded continuous complex valued functions on X, then the spectrum of f is the closure of its
range.
2.4.5. Example. Let S be a positive measure space and [f ] ∈ L∞ (S) be an (equivalence class of)
essentially bounded function(s) on S. Then the spectrum of [f ] is its essential range.
2.4.6. Example. The family M3 of 3 × 3 matrices of complex numbers is a unital algebra under the usual matrix operations. The spectrum of the matrix
\begin{pmatrix} 5 & -6 & -6 \\ -1 & 4 & 2 \\ 3 & -6 & -4 \end{pmatrix}
is {1, 2}.
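For matrices the spectrum is just the set of eigenvalues, so this example can be checked directly (an optional sketch, assuming numpy):

```python
import numpy as np

a = np.array([[5.0, -6.0, -6.0],
              [-1.0, 4.0, 2.0],
              [3.0, -6.0, -4.0]])

# Approximately [1, 2, 2] (possibly in a different order), so sigma(a) = {1, 2}.
print(np.round(np.linalg.eigvals(a), 6))
```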
2.4.7. Proposition. Let a be an element of a unital algebra such that a2 = 1. Then either
(i) a = 1, in which case σ(a) = {1}, or
(ii) a = −1, in which case σ(a) = {−1}, or
(iii) σ(a) = {−1, 1}.
Hint for proof. In (iii), to prove σ(a) ⊆ {−1, 1}, consider \frac{1}{1 − λ^2}(a + λ1).
CHAPTER 3

BANACH ALGEBRAS
The operation ∗ is called convolution. (To see where the definition comes from try multiplying the power series \sum_{k=-\infty}^{\infty} a_k z^k and \sum_{k=-\infty}^{\infty} b_k z^k.) With this additional operation l1(Z) becomes a unital commutative Banach algebra.
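For finitely supported sequences the convolution is an ordinary finite sum, so the algebra structure of l1(Z) can be explored numerically. A minimal sketch, not from the text (finitely supported sequences, indexed here from 0, stand in for elements of l1(Z); numpy assumed):

```python
import numpy as np

# Finitely supported elements of l1(Z), recorded as coefficient arrays.
a = np.array([1.0, -2.0, 0.5])    # a_0 = 1, a_1 = -2, a_2 = 0.5, zero elsewhere
b = np.array([3.0, 0.0, 1.0])

ab = np.convolve(a, b)            # (a * b)_n = sum_k a_k b_{n-k}
one_norm = lambda c: np.sum(np.abs(c))

print(ab)                                          # coefficients of the product a * b
print(one_norm(ab) <= one_norm(a) * one_norm(b))   # the norm is submultiplicative
```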
3.1.9. Example. The Banach space L1(R) (see example 2.1.9) can be made into a commutative Banach algebra. For f, g ∈ L1(R) define
h(x) = \int_{-\infty}^{\infty} f(x − y)g(y)\, dy   (1)
whenever the function y ↦ f(x − y)g(y) belongs to L1(R). Then h(x) is defined and finite for almost all x ∈ R. Set h(x) = 0 whenever (1) is undefined. Furthermore h belongs to L1(R) and \|h\|_1 ≤ \|f\|_1 \|g\|_1. The function h is usually denoted by f ∗ g; this is the convolution of f and g. (In this definition does any problem arise from the fact that members of L1 are in fact equivalence classes of functions?)
3.1.10. Definition. Let (a_k) be a sequence of vectors in a normed linear space V . We say that the sequence (a_k) is summable, or that the series \sum_{k=1}^{\infty} a_k converges, if there exists an element b ∈ V such that \|b − \sum_{k=1}^{n} a_k\| → 0 as n → ∞. In this case we write \sum_{k=1}^{\infty} a_k = b.
3.1.11. Proposition (The Neumann series). Let a be an element of a unital Banach algebra A. If \|a\| < 1, then 1 − a ∈ inv A and (1 − a)^{−1} = \sum_{k=0}^{\infty} a^k.
Hint for proof. In a unital algebra we take a^0 to mean 1_A. Start by proving that the sequence (1, a, a^2, . . . ) is summable by showing that the sequence of partial sums \sum_{k=0}^{n} a^k is Cauchy.
3.1.12. Proposition. If A is a unital Banach algebra, then inv A is an open subset of A.
Hint for proof . Let a ∈ inv A. Show, for sufficiently small h, that 1 − a−1 h is invertible.
3.1.13. Proposition. If a belongs to a unital Banach algebra and \|a\| < 1, then
\|(1 − a)^{−1} − 1\| \le \frac{\|a\|}{1 − \|a\|}.
3.1.14. Proposition. Let A be a unital Banach algebra. The map a 7→ a−1 from inv A into itself
is a homeomorphism.
3.1.15. Notation. Let f be a complex valued function on some set S. Denote by Zf the set of all
points x in S such that f (x) = 0. This is the zero set of f .
3.1.16. Proposition. The invertible elements in the Banach algebra C(X) of all continuous com-
plex valued functions on a compact Hausdorff space X are the functions which vanish nowhere.
That is,
inv C(X) = {f ∈ C(X) : Zf = ∅} .
3.1.17. Proposition. Let A be a unital Banach algebra. The map r : a 7→ a−1 from inv A into
itself is differentiable and at each invertible element a, we have dra (h) = −a−1 ha−1 for all h ∈ A.
3.1.18. Proposition. Let a be an element of a unital Banach algebra A. Then the spectrum of a
is compact and |λ| ≤ kak for every λ ∈ σ(a).
Hint for proof . Use the Heine-Borel theorem. To prove that the spectrum is closed notice that
(σ(a))c = f ← (inv A) where f (λ) = a − λ1 for every complex number λ. Also show that if |λ| > kak,
then 1 − λ−1 a is invertible.
3.1.19. Definition. Let a be an element of a unital Banach algebra. The resolvent mapping
for a is defined by
Ra : C \ σ(a) → A : λ 7→ (a − λ1)−1 .
3.1.20. Definition. Let U be an open subset of C and A be a unital Banach algebra. A function f : U → A is analytic on U if
f′(a) := \lim_{z \to a} \frac{f(z) − f(a)}{z − a}
exists for every a ∈ U . A complex valued function which is analytic on all of C is an entire function.
3.1.21. Proposition. For a an element of a unital Banach algebra A and φ a bounded linear
functional on A let f := φ ◦ Ra : C \ σ(a) → C. Then
(i) f is analytic on its domain, and
(ii) f (λ) → 0 as |λ| → ∞.
Hint for proof . For (i) use proposition 2.3.8.
In order to prove our next major result, that the spectrum of an element is never empty (see
theorem 3.1.25), we need two theorems: Liouville’s theorem from complex variables and the Hahn-
Banach theorem from functional analysis.
3.1.22. Theorem (Liouville’s theorem). Every bounded entire function on C is constant.
A proof of this theorem can be found in nearly any text on complex variables.
What is known as the Hahn-Banach theorem is really a family of related theorems that guarantee
the existence of a generous supply of linear functionals. Some authors refer to the version given
below, which says that linear functionals on subspaces can be extended without increasing their
norm, as the Bohnenblust-Sobczyk-Suhomlinov theorem.
3.1.23. Theorem (Hahn-Banach theorem). If M is a linear subspace of a normed linear space V
and f ∈ M ∗ , then there exists an extension fˆ of f to all of V such that kfˆk = kf k.
Proof. See [15], theorem 14.12.
3.1.24. Corollary. Let M be a linear subspace of a normed linear space V . If z is a vector in M c
such that the distance d(z, M ) between z and M is strictly greater than zero, then there exists a
linear functional g ∈ V ∗ such that g → (M ) = {0}, g(z) = d(z, M ), and kgk = 1.
Proof. See [15], corollary 14.13.
3.1.25. Theorem. The spectrum of every element of a unital Banach algebra is nonempty.
Hint for proof . Argue by contradiction. Use Liouville’s theorem to show that φ◦Ra is constant
for every bounded linear functional φ on A. Then use the (corollary to the) Hahn-Banach theorem
to prove that Ra is constant. Why must this constant be 0?
It is important to keep in mind that we are working only with complex algebras. This result is false for real Banach algebras. An easy counterexample is the matrix \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} regarded as an element of the (real) Banach algebra of all 2 × 2 matrices of real numbers.
The next result says that essentially the only (complex) Banach division algebra is the field of
complex numbers.
3.2.4. Proposition. Let I be a proper closed ideal in a unital commutative Banach algebra A.
Then I is maximal if and only if A/I is a field.
The following is an immediate consequence of the preceding proposition and the Gelfand-Mazur
theorem (3.1.26).
3.2.5. Corollary. If I is a maximal ideal in a commutative unital Banach algebra A, then A/I is isometrically isomorphic to C.
3.2.6. Example. For every subset C of a topological space X the set
JC := {f ∈ C(X) : f → (C) = {0}}
3.3. Characters
3.3.1. Definition. A character (or nonzero multiplicative linear functional) on an
algebra A is a nonzero homomorphism from A into C. The set of all characters on A is denoted
by ∆A.
3.3.2. Proposition. Let A be a unital algebra and φ ∈ ∆A. Then
(a) φ(1) = 1;
(b) if a ∈ inv A, then φ(a) ≠ 0;
(c) if a is nilpotent (that is, if an = 0 for some n ∈ N), then φ(a) = 0;
(d) if a is idempotent (that is, if a2 = a), then φ(a) is 0 or 1; and
(e) φ(a) ∈ σ(a) for every a ∈ A.
We note in passing that part (e) of the preceding proposition does not give us an easy way of
showing that the spectrum σ(a) of an algebra element is nonempty. This would depend on knowing
that ∆(A) is nonempty.
3.3.3. Example. The identity map is the only character on the algebra C.
3.3.4. Example. Let A be the algebra of 2 × 2 matrices a = [aij] such that a12 = 0. This algebra has exactly two characters, φ(a) = a11 and ψ(a) = a22. Hint. Use proposition 3.3.2.
3.3.5. Example. The algebra of all 2 × 2 matrices of complex numbers has no characters.
3.3.6. Proposition. Let A be a unital algebra and φ be a linear functional on A. Then φ ∈ ∆A
if and only if ker φ is closed under multiplication and φ(1) = 1.
Hint for proof . For the converse apply φ to the product of a − φ(a)1 and b − φ(b)1 for a, b ∈ A.
3.3.7. Proposition. Every multiplicative linear functional on a unital Banach algebra A is con-
tinuous. In fact, if φ ∈ ∆(A), then φ is contractive and kφk = 1.
3.3.8. Example. Let X be a topological space and x ∈ X. We define the evaluation func-
tional at x, denoted by EX x, by
EX x : C(X) → C : f 7→ f (x) .
This functional is a character on C(X) and its kernel is Jx . When there is only one topological
space under discussion we simplify the notation from EX x to Ex . Thus, in particular, for f ∈ C(X)
we often write Ex (f ) for the more cumbersome EX x(f ).
Proposition 3.3.7 turns out to be very important: it says that characters on a unital Banach
algebra A all live on the unit sphere of the dual space A∗ . The trouble with the unit sphere in the
dual space is that, while it is closed and bounded, it is not compact in the usual (norm) topology
on A∗ . We need a new topology on A∗ , one that is weak enough to make the closed unit ball (and
hence the unit sphere) compact and yet strong enough to be Hausdorff.
3.4.8. Proposition. Let (fλ) be a net in the dual B∗ of a Banach space B and g ∈ B∗. Then fλ \xrightarrow{w^∗} g if and only if fλ(x) → g(x) in C for every x ∈ B.
3.4.9. Definition. In proposition 3.3.7 we discovered that every character on a unital Banach
algebra A lives on the unit sphere of the dual A∗ . Thus we may give the set ∆(A) of characters on
A the relative w∗ -topology it inherits from A∗ . This is the Gelfand topology on ∆(A) and the
resulting topological space we call the character space (or the structure space) of A.
In order to show that the character space is compact we need an important theorem from
functional analysis.
3.4.10. Theorem (Alaoglu’s theorem). If V is a normed linear space, then the closed unit ball of
its dual V ∗ is compact in the w∗ -topology.
Proof. See [5], theorem V.3.1.
3.4.11. Proposition. The character space of a unital Banach algebra is a compact Hausdorff
space.
3.4.12. Example. The maximal ideal space of the unital Banach algebra l1 (Z) (see example 3.1.8)
is (homeomorphic to) the unit circle T.
Hint for proof. For each z ∈ T define
ψ_z : l1(Z) → C : a ↦ \sum_{k=-\infty}^{\infty} a_k z^k.
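The defining property of a character, multiplicativity, can be verified numerically for ψ_z: convolution of sequences corresponds to multiplication of the associated functions on T, so ψ_z(a ∗ b) = ψ_z(a)ψ_z(b). A sketch under the same finite-support convention as before (not from the text; numpy assumed):

```python
import numpy as np

# Finitely supported elements of l1(Z), indexed here from 0 for simplicity.
a = np.array([1.0, 2.0, -1.0])
b = np.array([0.5, 0.0, 3.0])
z = np.exp(1j * 0.7)                      # a point of the unit circle T

psi = lambda c: sum(ck * z**k for k, ck in enumerate(c))   # psi_z(c) = sum c_k z^k

# psi_z sends convolution to multiplication in C, i.e. it is multiplicative.
print(np.isclose(psi(np.convolve(a, b)), psi(a) * psi(b)))
```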
3.5.8. Proposition. Let a be an element of a unital commutative Banach algebra A. Then the
following are equivalent:
(a) a is quasinilpotent;
(b) ρ(a) = 0;
(c) σ(a) = {0};
(d) Γa = 0;
(e) φ(a) = 0 for every φ ∈ ∆A;
(f) a ∈ ⋂ Max A.
3.5.9. Definition. A Banach algebra is semisimple if it has no nonzero quasinilpotent elements.
3.5.10. Proposition. Let A be a unital commutative Banach algebra. Then the following are
equivalent:
(a) A is semisimple;
(b) if ρ(a) = 0, then a = 0;
(c) if σ(a) = {0}, then a = 0;
(d) the Gelfand transform ΓA is a monomorphism (that is, an injective homomorphism);
(e) if φ(a) = 0 for every φ ∈ ∆A, then a = 0;
(f) ⋂ Max A = {0}.
3.5.11. Proposition. Let A be a unital commutative Banach algebra. Then the following are
equivalent:
(a) ka2 k = kak2 for all a ∈ A;
The function t 7→ eit is a bijection from the interval [−π, π) to the unit circle T in the complex
plane. One consequence of this is that we need not distinguish between
(i) 2π-periodic functions on R,
(ii) all functions on [−π, π),
(iii) functions f on [−π, π] such that f (−π) = f (π), and
(iv) functions on T.
In the sequel we will frequently without further explanation identify these classes of functions.
Another convenient identification is the one between the unit circle T in C and the maximal
ideal space of the algebra l1 (Z). The homeomorphism ψ between these two compact Hausdorff
spaces was defined in example 3.4.12. It is often technically more convenient in working with the
Gelfand transform Γa of an element a ∈ l1 (Z) to treat it as a function, let’s call it Ga , whose domain
is T as the following diagram suggests.
[Diagram: Ga = Γa ◦ ψ, where ψ : T → ∆l1(Z) and Γa : ∆l1(Z) → C, so that Ga : T → C.]
3.6.6. Definition. If f ∈ L1([−π, π)), the Fourier series for f is the series
\sum_{n=-\infty}^{\infty} \tilde{f}(n) \exp(int), \qquad −π ≤ t ≤ π,
where
\tilde{f}(n) = \frac{1}{2π} \int_{-π}^{π} f(t) \exp(−int)\, dt
for all n ∈ Z. The doubly infinite sequence \tilde{f} is the Fourier transform of f , and the number \tilde{f}(n) is the nth Fourier coefficient of f . If \tilde{f} ∈ l1(Z) we say that f has an absolutely convergent Fourier series. The set of all continuous functions on T with absolutely convergent Fourier series is denoted by AC(T).
3.6.7. Proposition. If f is a continuous 2π-periodic function on R whose Fourier transform is
zero, then f = 0.
3.6.8. Corollary. The Fourier transform on C(T) is injective.
3.6.9. Proposition. The Fourier transform on C(T) is a left inverse of the Gelfand transform
on l1 (Z).
3.6.10. Proposition. The range of the Gelfand transform on l1 (Z) is the unital commutative
Banach algebra A C(T).
3.6.11. Proposition. There are continuous functions whose Fourier series diverge at 0.
Proof. See, for example, [15], exercise 18.45.
What does the preceding result say about the Gelfand transform Γ : l1 (Z) → C(T)?
Suppose a function f belongs to AC(T) and is never zero. Then 1/f is certainly continuous
on T, but does it have an absolutely convergent Fourier series? One of the first triumphs of the
abstract study of Banach algebras was a very simple proof of the affirmative answer to this question, which was first given by Norbert Wiener. Wiener's original proof, by comparison, was quite difficult.
3.6.12. Theorem (Wiener’s theorem). Let f be a continuous function on T which is never zero.
If f has an absolutely convergent Fourier series, then so does its reciprocal 1/f .
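Wiener's theorem can be illustrated, though of course not proved, numerically: take a nonvanishing trigonometric polynomial f, approximate the Fourier coefficients of 1/f with the FFT, and watch them decay rapidly. The sketch below is only suggestive; it assumes numpy and uses the normalization of definition 3.6.6.

```python
import numpy as np

N = 1024
t = 2 * np.pi * np.arange(N) / N          # sample points on [0, 2*pi)
f = 3.0 + np.cos(t)                       # never zero; coefficients 3 at n=0, 1/2 at n=+-1
g = 1.0 / f

# Riemann-sum approximation of g~(n) = (1/2pi) * integral g(t) exp(-int) dt,
# which is exactly (1/N) times the FFT of the samples.
coeffs = np.fft.fftshift(np.fft.fft(g) / N)
n = np.arange(-N // 2, N // 2)

print(np.sum(np.abs(coeffs)))                    # partial l1 sum; stays bounded as N grows
print(np.max(np.abs(coeffs[np.abs(n) > 50])))    # essentially zero: geometric decay (to roundoff)
```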
3.6.13. Example. The Laplace transform can also be viewed as a special case of the Gelfand
transform. For details see [2], pages 173–175.
CHAPTER 4

INTERLUDE: THE LANGUAGE OF CATEGORIES
morphisms Mor(M, M ) and the morphisms in this class are the elements of the monoid M . The
composite of two morphisms a and b is their product a ∗ b in M .
4.1.11. Notation. If C is a category we denote by C0 the class of objects in C and by C1 the
class of morphism in C. Thus, for example, the notation X ∈ CpH0 would indicate that X is
a compact Hausdorff space and f ∈ CpH1 would mean that f is a continuous function between
compact Hausdorff spaces.
4.1.12. Notation. In these notes we restrict the notation A \xrightarrow{φ} B to morphisms. When this notation appears it should be clear from context what category is being discussed. We then infer that A and B are objects in that category and φ : A → B is a morphism.
4.1.13. Definition. The terminology for inverses of morphisms in categories is essentially the same as for functions. Let S \xrightarrow{α} T and T \xrightarrow{β} S be morphisms in a category. If β ◦ α = I_S, then β is a left inverse of α and, equivalently, α is a right inverse of β. We say that the morphism α is an isomorphism (or is invertible) if there exists a morphism T \xrightarrow{β} S which is both a left and a right inverse for α. Such a morphism is denoted by α^{−1} and is called the inverse of α. To indicate that objects A and B in some category are isomorphic we will, in general, use the notation A ≅ B. There is one exception however: in various categories of topological spaces X ≈ Y means that the spaces X and Y are homeomorphic (topologically isomorphic).
4.2. Functors
4.2.1. Definition. If A and B are categories a (covariant) functor F from A to B (written A \xrightarrow{F} B) is a pair of maps: an object map F which associates with each object S in A an object F(S) in B and a morphism map (also denoted by F) which associates with each morphism f ∈ Mor(S, T) in A a morphism F(f) ∈ Mor(F(S), F(T)) in B, in such a way that
(1) F(g ◦ f) = F(g) ◦ F(f) whenever g ◦ f is defined in A; and
(2) F(id_S) = id_{F(S)} for every object S in A.
The definition of a contravariant functor from A to B differs from the preceding definition only in that, first, the morphism map associates with each morphism f ∈ Mor(S, T) in A a morphism F(f) ∈ Mor(F(T), F(S)) in B and, second, condition (1) above is replaced by
(1′) F(g ◦ f) = F(f) ◦ F(g) whenever g ◦ f is defined in A.
4.2.2. Definition. A lattice is a partially ordered set in which every pair of elements has a
supremum and an infimum. A lattice L is order complete if the sup A and inf A exist (in L) for
every nonempty subset A of L.
4.2.3. Example. Let S be a nonempty set.
(a) The power set P(S) of S partially ordered by ⊆ is an order complete lattice.
(b) The class of order complete lattices and order preserving maps is a category.
(c) For each function f between sets let P(f ) = f → . Then P is a covariant functor from
the category of sets and functions to the category of order complete lattices and order
preserving maps.
(d) For each function f between sets let P(f ) = f ← . Then P is a contravariant functor from
the category of sets and functions to the category of order complete lattices and order
preserving maps.
4.2.4. Example. Let T : B → C be a bounded linear map between Banach spaces. Define
T∗ : C∗ → B∗ : g ↦ g ◦ T.
The map T∗ is the adjoint of T.
are injective if and only if they are left-cancellable. The agreement between surjective and right-
cancellable is less general. In the category of topological spaces and continuous maps, for example,
morphisms are right-cancellable if and only if they have dense range.
4.2.10. Example. If C is a category let C2 be the category whose objects are ordered pairs of objects in C and whose morphisms are ordered pairs of morphisms in C. Thus if A \xrightarrow{f} C and B \xrightarrow{g} D are morphisms in C, then (A, B) \xrightarrow{(f, g)} (C, D) (where (f, g)(a, b) = (f(a), g(b)) for all a ∈ A and b ∈ B) is a morphism in C2. Composition of morphisms is defined in the obvious way: (f, g) ◦ (h, j) = (f ◦ h, g ◦ j). We define the diagonal functor D : C → C2 by D(A) := (A, A). This is a covariant functor.
[Diagram: the naturality square, with horizontal arrows τA : F(A) → G(A) and τA′ : F(A′) → G(A′) and vertical arrows F(f) and G(f).] (The analogous definition for two contravariant functors should be obvious: just reverse the horizontal arrows in the preceding diagram.)
A natural transformation τ : F → G is a natural equivalence (or a natural isomorphism) if each morphism τA is an isomorphism in B. Two functors are naturally equivalent if there exists a natural equivalence between them.
4.3.2. Example. On the category BAN∞ of Banach spaces and bounded linear transformations
let I be the identity functor and ( · )∗∗ be the second dual functor (see example 4.2.4(a) ). Show
that the map J which takes each Banach space B to its natural injection JB (see definition 3.4.2) is
a natural transformation from I to ( · )∗∗ . In the category of reflexive Banach spaces and bounded
linear maps this natural transformation is a natural equivalence.
4.3.3. Example. The mapping E : X 7→ EX , which takes compact Hausdorff spaces to their
corresponding evaluation maps is a natural equivalence between the identity functor and the functor
∆C in the category of compact Hausdorff spaces and continuous maps. (See example 3.3.8 and
proposition 3.4.21.)
Thus X and ∆C(X) are not only homeomorphic, they are naturally homeomorphic. This is the
justification for the very common informal assertion that the maximal ideals of C(X) “are” just the
points of X.
4.3.4. Example. The Gelfand transform Γ is a natural transformation from the identity functor to
the C∆ functor on the category UCBA of unital commutative Banach algebras and unital algebra
homomorphisms.
[Diagram (4.1): u : B → F(A); for every f : B → F(A′) there is a unique f̃ : A → A′ with F(f̃) ◦ u = f.]
4.4.2. Example. Let S be a set and let VEC → SET be the forgetful functor from the category VEC to the category SET. If there exists a vector space V and an injection ι : S → V which constitute a universal morphism for S (with respect to this forgetful functor), then V is the free vector space over S. Of course merely defining an object does not guarantee its existence. In fact, free vector spaces exist over arbitrary sets. Given the set S let V be the set of all complex valued functions on S which have finite support. Define addition and scalar multiplication pointwise. The map ι : s ↦ χ_{\{s\}} of each element s ∈ S to the characteristic function of {s} is the desired injection. To verify that V is free over S it must be shown that for every vector space W and every function f : S → W there exists a unique linear map f̃ : V → W which makes the following diagram commute.
[Diagram: ι : S → V and f : S → W, with the unique linear map f̃ : V → W satisfying f̃ ◦ ι = f.]
4.4.3. Example. Let S be a nonempty set and let S 0 = S∪{1} where 1 is any element not belonging
to S. A word in the language S is a sequence s of elements of S 0 which is eventually 1 and satisfies:
if sk = 1, then sk+1 = 1. The constant sequence (1, 1, 1, . . . ) is called the empty word. Let F
be the set of all words in the language S. Suppose that s = (s1 , . . . , sm , 1, 1, . . . ) and t = (t1 , . . . , tn , 1, 1, . . . ) are words (where s1 , . . . , sm , t1 , . . . , tn ∈ S). Define
s ∗ t := (s1 , . . . , sm , t1 , . . . , tn , 1, 1, . . . ).
This operation is called concatenation. It is not difficult to see that the set F under concatenation
is a monoid (a semigroup with identity) where the empty word is the identity element. This is the
free monoid generated by S. If we exclude the empty word we have the free semigroup
[Diagram: ι : S → F and f : S → G, with the unique homomorphism f̃ : F → G satisfying f̃ ◦ ι = f]
where ι is the obvious injection s ↦ (s, 1, 1, . . . ) (usually treated as an inclusion mapping), G is an arbitrary semigroup, f : S → G is an arbitrary function, and the relevant functor is the forgetful functor from the category of monoids and homomorphisms (or the category of semigroups and homomorphisms) to the category SET.
4.4.4. Example. Here is the usual presentation of the coproduct of two objects in a category. Let A1 and A2 be two objects in a category C. A coproduct of A1 and A2 is a triple (Q, ι1, ι2) with Q an object in C and ι_k : A_k → Q (k = 1, 2) morphisms in C which satisfies the following condition: if B is an arbitrary object in C and f_k : A_k → B (k = 1, 2) are arbitrary morphisms in C, then there exists a unique C-morphism f : Q → B which makes the following diagram commute.
[Diagram (4.2): ι1 : A1 → Q, ι2 : A2 → Q, and f : Q → B with f ◦ ι_k = f_k for k = 1, 2.]
It may not be obvious at first glance that this construction is universal in the sense of definition 4.4.1. To see that it in fact is, let D be the diagonal functor from a category C to the category of pairs C2 (see example 4.2.10). Suppose that (Q, ι1, ι2) is a coproduct of the C-objects A1 and A2 in the sense defined above. Then A = (A1, A2) is an object in C2, ι : A → D(Q) is a C2-morphism, and the pair (A, ι) is universal in the sense of 4.4.1. The diagram corresponding to diagram (4.1) is
[Diagram (4.3): ι : A → D(Q); for every f : A → D(B) there is a unique f̃ : Q → B with D(f̃) ◦ ι = f]
where B is an arbitrary object in C and (for k = 1, 2) f_k : A_k → B are arbitrary C-morphisms so that f = (f1, f2) is a C2-morphism.
4.4.5. Example. The coproduct of two objects H and K in the category HIL of Hilbert spaces
and bounded linear maps (and more generally in the category of inner product spaces and linear
maps) is their (external orthogonal) direct sum H ⊕ K (see 1.2.19).
4.4.6. Example. The coproduct of two objects S and T in the category SET is their disjoint
union S ] T .
4.4.7. Example. Let A and B be Banach spaces. On the Cartesian product A × B define addition
and scalar multiplication pointwise. For every (a, b) ∈ A × B let k(a, b)k = max{kak, kbk}. This
makes A × B into a Banach space, which is denoted by A ⊕ B and is called the direct sum of A
and B. The direct sum is a coproduct in the topological category BAN∞ of Banach spaces but
not in the corresponding geometrical category BAN1 .
4.4.8. Example. To construct a coproduct on the geometrical category BAN1 of Banach spaces
define the vector space operations pointwise on A × B but as a norm use k(a, b)k1 = kak + kbk for
all (a, b) ∈ A × B.
Virtually everything in category theory has a dual concept—one that is obtained by reversing
all the arrows. We can, for example, reverse all the arrows in diagram (4.1).
[Diagram (4.4): diagram (4.1) with all arrows reversed, so that u : F(A) → B; for every f : F(A′) → B there is a unique f̃ : A′ → A with u ◦ F(f̃) = f.]
Some authors reverse the convention and call the morphism in 4.4.1 co-universal and the one here
universal. Other authors, this one included, call both universal morphisms.
4.4.10. Convention. Morphisms of both the types defined in 4.4.1 and 4.4.9 will be referred to
as universal morphisms.
4.4.11. Example. The usual categorical approach to products is as follows. Let A1 and A2 be two objects in a category C. A product of A1 and A2 is a triple (P, π1, π2) with P an object in C and π_k : P → A_k (k = 1, 2) morphisms in C which satisfies the following condition: if B is an arbitrary object in C and f_k : B → A_k (k = 1, 2) are arbitrary morphisms in C, then there exists a unique C-morphism f : B → P which makes the following diagram commute.
[Diagram (4.5): π1 : P → A1, π2 : P → A2, and f : B → P with π_k ◦ f = f_k for k = 1, 2.]
Hint for proof . Let (M, d) be a metric space and fix a ∈ M . For each x ∈ M define φx : M → R
by φx (u) = d(x, u) − d(u, a). Show first that for every x ∈ M the function φx belongs to the space
Cb (M, R) of bounded real valued continuous functions on M . Then show that φ : M → Cb (M, R)
is an isometry. (To verify that du (φx , φy ) ≥ d(x, y) notice that kφx − φy ku ≥ |φx (y) − φy (y)|. )
Explain why the closure of ran φ in Cb (M ) is a completion of M .
4.4.18. Example. Let MS1 be the category of metric spaces and contractive maps and CMS1
be the category of complete metric spaces and contractive maps. The map M 7→ M c from MS1 to
CMS1 which takes a metric space to its completion is universal in the sense of definition 4.4.1.
The following consequence of proposition 4.4.15 allows us to speak of the completion of a metric
space.
4.4.19. Corollary. Metric space completions are unique (up to isometry).
Every inner product space is a metric space by propositions 1.2.11 and 1.2.12. It is an important
fact that completing an inner product space as a metric space produces a Hilbert space.
4.4.20. Proposition. Let V be an inner product space and H be the metric space completion
of V . Then the inner product on V can be extended to an inner product on H and the metric on
H induced by this inner product is the same as the original metric on H.
4.4.21. Proposition. Let V be an inner product space and H be its completion (see proposi-
tion 4.4.20). Then every operator on V extends to an operator on H.
Next is a closely related result which we will need in the sequel.
4.4.22. Proposition. Let D be a dense subset of a Hilbert space H and suppose that S and T are
operators on H such that hSu, vi = hu, T vi for all u, v ∈ D. Then hSx, yi = hx, T yi for all x,
y ∈ H.
4.4.23. Definition. A category A is a subcategory of category B if A0 ⊆ B0 and A1 ⊆ B1. It is a full subcategory of B if, additionally, for all objects A1 and A2 in A every B-morphism τ : A1 → A2 is also an A-morphism.
4.4.24. Example. The category of unital algebras and unital algebra homomorphisms is a subcategory of the category of algebras and algebra homomorphisms, but not a full subcategory.
4.4.25. Example. The category of complete metric spaces and continuous maps is a full subcat-
egory of the category of metric spaces and continuous maps.
CHAPTER 5
C ∗ -ALGEBRAS
Hint for proof . Show that for every x ∈ H the map y 7→ φ(x, y) is a bounded linear functional
on K. Use the Riesz-Fréchet theorem (2.2.23).
The next proposition provides an entirely satisfactory extension of proposition 1.2.32 to the
infinite dimensional setting: adjoints of Hilbert space operators always exist.
5.1.6. Proposition. Let A : H → K be a bounded linear map between Hilbert spaces. The mapping (x, y) ↦ \langle Ax, y\rangle from H × K into C is a bounded sesquilinear functional, so there exists a unique bounded linear map A∗ : K → H such that
\langle Ax, y\rangle = \langle x, A^∗ y\rangle
for all x ∈ H and y ∈ K. This is the adjoint of A (see definition 1.2.31). Also \|A^∗\| = \|A\|.
for all x, y ∈ A and α ∈ C. An algebra on which an involution has been defined is a ∗ -algebra
(pronounced “star algebra”). An algebra homomorphism φ between ∗ -algebras which preserves
involution (that is, such that φ(a∗ ) = (φ(a))∗ ) is a ∗ -homomorphism (pronounced “star homomorphism”). A ∗ -homomorphism φ : A → B between unital algebras is said to be unital if
φ(1A ) = 1B . In the category of ∗ -algebras and ∗ -homomorphisms, the isomorphisms (called for
emphasis ∗ -isomorphisms) are the bijective ∗ -homomorphisms.
5.2.2. Example. In the algebra C of complex numbers the map z ↦ \bar{z} of a number to its complex conjugate is an involution.
5.2.3. Example. The map of an n × n matrix to its conjugate transpose is an involution on the
unital algebra Mn (see example 2.3.14).
5.2.4. Example. Let X be a compact Hausdorff space. The map f ↦ \bar{f} of a function to its complex conjugate is an involution in the algebra C(X).
5.2.5. Example. The map T 7→ T ∗ of a Hilbert space operator to its adjoint is an involution in
the algebra B(H) (see proposition 5.1.10).
5.2.6. Proposition. Let a and b be elements of a ∗ -algebra. Then a commutes with b if and only
if a∗ commutes with b∗ .
5.2.7. Proposition. In a unital ∗ -algebra 1∗ = 1.
5.2.8. Proposition. If a ∗ -algebra A has a left multiplicative identity e, then A is unital and
e = 1A .
5.2.9. Proposition. Let a be an element of a unital ∗ -algebra. Then a∗ is invertible if and only if a is. And when a is invertible we have
(a^∗)^{−1} = (a^{−1})^∗.
5.2.10. Proposition. Let a be an element of a unital ∗ -algebra. Then λ ∈ σ(a) if and only if \bar{λ} ∈ σ(a∗ ).
5.2.11. Corollary. For every element a of a ∗ -algebra ρ(a∗ ) = ρ(a).
5.2.12. Definition. An element a of a ∗ -algebra A is self-adjoint (or Hermitian) if a∗ = a. It
is normal if a∗ a = aa∗ . And it is unitary if a∗ a = aa∗ = 1. The set of all self-adjoint elements of
A is denoted by H(A), the set of all normal elements by N(A), and the set of all unitary elements
by U(A).
5.2.13. Proposition. Let a be an element of a ∗ -algebra. Then there exist unique self-adjoint
elements u and v such that a = u + iv.
Hint for proof . Think of the special case of writing a complex number in terms of its real and
imaginary parts.
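In B(H), or in Mn, the decomposition is the familiar one into "real and imaginary parts": u = (a + a∗)/2 and v = (a − a∗)/(2i). A quick numerical illustration of this, offered as an optional sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

u = (a + a.conj().T) / 2          # self-adjoint "real part"
v = (a - a.conj().T) / (2j)       # self-adjoint "imaginary part"

print(np.allclose(u, u.conj().T), np.allclose(v, v.conj().T))   # both self-adjoint
print(np.allclose(u + 1j * v, a))                                # a = u + iv
```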
5.2.14. Definition. Let S be a subset of a ∗ -algebra A. Then S ∗ = {s∗ : s ∈ S}. The subset
S is self-adjoint if S ∗ = S.
A nonempty self-adjoint subalgebra of A is a ∗ -subalgebra (or a sub-∗ -algebra).
CAUTION. The preceding definition does not say that the elements of a self-adjoint subset of a
∗ -algebra are themselves self-adjoint.
5.2.15. Definition. In an algebra A with involution a ∗ -ideal is a self-adjoint ideal in A.
5.2.16. Proposition. Let φ : A → B be a ∗ -homomorphism between ∗ -algebras. Then the kernel
of φ is a ∗ -ideal in A and the range of φ is a ∗ -subalgebra of B.
5.2.17. Proposition. If J is a ∗ -ideal in a ∗ -algebra A, then defining [a]∗ = [a∗ ] for each a ∈ A
makes the quotient A/J into a ∗ -algebra and the quotient map a 7→ [a] is a ∗ -homomorphism. The
∗ -algebra A/J is, of course, the quotient of A by J.
5.3. C ∗ -Algebras
5.3.1. Definition. A C ∗ -algebra is a Banach algebra A with involution which satisfies
ka∗ ak = kak2
for every a ∈ A. This property of the norm is usually referred to as the C ∗ -condition. An algebra
norm satisfying this condition is a C ∗ -norm. A C ∗ -subalgebra of a C ∗ -algebra A is a closed
∗ -subalgebra of A.
5.3.2. Example. The vector space C of complex numbers with the usual multiplication of complex
numbers and complex conjugation z 7→ z as involution is a unital commutative C ∗ -algebra.
5.3.3. Example. If X is a compact Hausdorff space, the algebra C(X) of continuous complex
valued functions on X is a unital commutative C ∗ -algebra when involution is taken to be complex
conjugation.
5.3.4. Example. If X is a locally compact Hausdorff space, the Banach algebra C0 (X) = C0 (X, C)
of continuous complex valued functions on X which vanish at infinity is a (not necessarily unital)
commutative C ∗ -algebra when involution is taken to be complex conjugation.
5.3.5. Example. If (X, µ) is a measure space, the algebra L∞ (X, µ) of essentially bounded measur-
able complex valued functions on X (again with complex conjugation as involution) is a C ∗ -algebra.
(Technically, of course, the members of L∞ (X, µ) are equivalence classes of functions which differ
on sets of measure zero.)
5.3.6. Example. The algebra B(H) of bounded linear operators on a Hilbert space H is a unital
C ∗ -algebra when addition and scalar multiplication of operators are defined pointwise, composition
is taken as multiplication, the map T 7→ T ∗ , which takes an operator to its adjoint, is the involution,
and the norm is the usual operator norm. (See proposition 5.1.11 for the crucial C ∗ -property
kT ∗ T k = kT k2 .)
5.3.7. Example. A special case of the preceding example is the set Mn of n × n matrices of
complex numbers. We saw in example 5.2.3 that Mn is a unital algebra with involution. To make
it into a C ∗ -algebra simply identify each matrix with the (necessarily bounded) linear operator in
B(Cn ) which it represents.
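The C ∗ -condition for Mn can be checked numerically: with the operator norm (the largest singular value), ‖A∗A‖ = ‖A‖² for every matrix A. A minimal sketch, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

op_norm = lambda M: np.linalg.norm(M, 2)        # operator norm = largest singular value
print(np.isclose(op_norm(A.conj().T @ A), op_norm(A)**2))   # the C*-condition
```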
5.3.8. Proposition. In every C ∗ -algebra involution is an isometry. That is, ka∗ k = kak for every
element a in the algebra.
In definition 3.1.1 of normed algebra we made the special requirement that the identity element
of a unital normed algebra have norm one. In C ∗ -algebras this requirement is redundant.
5.3.9. Corollary. In a unital C ∗ -algebra k1k = 1.
5.3.10. Corollary. Every unitary element in a unital C ∗ -algebra has norm one.
5.3.11. Corollary. If a is an element of a C ∗ -algebra A such that ab = 0 for every b ∈ A, then
a = 0.
5.3.12. Proposition. Let a be a normal element of a unital C ∗ -algebra. Then ka2 k = kak2 and
therefore ρ(a) = kak.
5.3.13. Corollary. Let A be a unital commutative C ∗ -algebra. Then ka2 k = kak2 and ρ(a) = kak
for every a ∈ A.
5.3.14. Corollary. On a unital commutative C ∗ -algebra A the Gelfand transform Γ is an isometry; that is, \|Γa\|_u = \|\hat{a}\|_u = \|a\| for every a ∈ A.
5.3.15. Corollary. The norm of a unital C ∗ -algebra is unique in the sense that given a unital
algebra A with involution there is at most one norm which makes A into a C ∗ -algebra.
6.1.12. Definition. Let A be an algebra. A left ideal J in A is a modular left ideal if there
exists an element u in A such that au − a ∈ J for every a ∈ A. Such an element u is called a
right identity with respect to J. Similarly, a right ideal J in A is a modular right ideal
if there exists an element v in A such that va − a ∈ J for every a ∈ A. Such an element v is called
a left identity with respect to J. A two-sided ideal J is a modular ideal if there exists
an element e which is both a left and a right identity with respect to J.
6.1.13. Proposition. An ideal J in an algebra is modular if and only if it is both left modular and
right modular.
Hint for proof . Show that if u is a right identity with respect to J and v is a left identity with
respect to J, then vu is both a right and left identity with respect to J.
6.1.14. Proposition. An ideal J in an algebra A is modular if and only if the quotient algebra
A/J is unital.
6.1.15. Example. Let X be a locally compact Hausdorff space. For every x ∈ X the ideal Jx is
a maximal modular ideal in the C ∗ -algebra C0 (X) of continuous complex valued functions on X.
Proof. By the locally compact Hausdorff space version of Urysohn’s lemma (see, for example,
[11], theorem 17.2.10) there exists a function g ∈ C0 (X) such that g(x) = 1. Thus Jx is mod-
ular because g is an identity with respect to Jx . Since C0 (X) = Jx ⊕ span{g} the ideal Jx has
codimension 1 and is therefore maximal.
6.1.16. Proposition. If J is a proper modular ideal in a Banach algebra, then so is its closure.
6.1.17. Corollary. Every maximal modular ideal in a Banach algebra is closed.
6.1.18. Proposition. Every proper modular ideal in a Banach algebra is contained in a maximal
modular ideal.
6.1.19. Proposition. Let A be a commutative Banach algebra and φ ∈ ∆A. Then ker φ is a
maximal modular ideal in A and A/ ker φ is a field. Furthermore, every maximal modular ideal is
the kernel of exactly one character in ∆A.
6.1.20. Proposition. Every multiplicative linear functional on a commutative Banach algebra A
is continuous. In fact, every character is contractive.
6.1.21. Example. Let Ae be the unitization of a commutative Banach algebra A (see proposi-
tion 6.1.7). Define
φ∞ : Ae → C : (a, λ) 7→ λ .
Then φ∞ is a character on Ae .
6.1.22. Proposition. Every character φ on a commutative Banach algebra A has a unique exten-
sion to a character φe on the unitization Ae of A. And the restriction to A of any character on
Ae , with the obvious exception of φ∞ , is a character on A.
6.1.23. Proposition. If A is a commutative Banach algebra, then
(a) ∆A is a locally compact Hausdorff space,
(b) ∆Ae = ∆A ∪ {φ∞ },
(c) ∆Ae is the one-point compactification of ∆A, and
(d) the map φ 7→ φe is a homeomorphism from ∆A onto ∆Ae \ {φ∞ }.
If A is unital (so that ∆A is compact), then φ∞ is an isolated point of ∆Ae .
6.1.24. Theorem. If A is a commutative Banach algebra with a nonempty character space, then the Gelfand transform
Γ = Γ_A : A → C0(∆A) : a ↦ \hat{a} = Γa
is a contractive algebra homomorphism and ρ(a) = \|\hat{a}\|_u. Furthermore, if A is not unital, then σ(a) = ran \hat{a} ∪ {0} for every a ∈ A.
6.2. Exact Sequences and Extensions
is said to be exact at An if ran φn = ker φn+1 . A sequence is exact if it is exact at each of its
constituent C ∗ -algebras. A sequence of C ∗ -algebras and ∗ -homomorphisms of the form
    0 ──→ A ──φ──→ E ──ψ──→ B ──→ 0          (6.1)
is a short exact sequence. (Here 0 denotes the trivial 0-dimensional C ∗ -algebra, and the
unlabeled arrows are the obvious ∗-homomorphisms.) The short exact sequence of C ∗ -algebras (6.1)
is split exact if there exists a ∗ -homomorphism ξ : B → E such that ψ ◦ ξ = idB .
The preceding definitions were for the category CSA of C ∗ -algebras and ∗ -homomorphisms.
Of course there is nothing special about this particular category. Exact sequences make sense in
many situations, in, for example, various categories of Banach spaces, Banach algebras, Hilbert
spaces, vector spaces, Abelian groups, modules, and so on.
Often in the context of C ∗ -algebras the exact sequence (6.1) is referred to as an extension.
Some authors refer to it as an extension of A by B (for example, [29] and [7]) while others say it
is an extension of B by A ([20], [12], and [3]). In [12] and [3] the extension is defined to be the
sequence 6.1; in [29] it is defined to be the ordered triple (φ, E, ψ); and in [20] and [7] it is defined
to be the C ∗ -algebra E itself. Regardless of the formal definitions it is common to say that E is
an extension of A by B (or of B by A).
6.2.2. Convention. In a C ∗ -algebra the word ideal will always mean a closed two-sided ∗ -ideal
(unless, of course, the contrary is explicitly stated). We will show shortly (in proposition 6.6.6)
that requiring an ideal in a C ∗ -algebra to be self-adjoint is redundant. A two-sided algebra ideal
of a C ∗ -algebra which is not necessarily closed will be called an algebraic ideal. A self-adjoint
algebraic ideal of a C ∗ -algebra will be called an algebraic ∗-ideal.
6.2.3. Proposition. The kernel of a ∗ -homomorphism φ : A → B between C ∗ -algebras is an ideal
in A and its range is a ∗ -subalgebra of B.
6.2.4. Proposition. Consider the following diagram in the category of C ∗ -algebras and ∗ -homomorphisms
    0 ──→ A ──j──→ B ──k──→ C ──→ 0
          f↓         g↓        h↓
    0 ──→ A′ ──j′──→ B′ ──k′──→ C′ ──→ 0
If the rows are exact and the left square commutes, then there exists a unique ∗ -homomorphism
h : C → C 0 which makes the right square commute.
6.2.5. Proposition (The Five Lemma). Suppose that in the following diagram of C ∗ -algebras and
∗ -homomorphisms
    0 ──→ A ──j──→ B ──k──→ C ──→ 0
          f↓         g↓        h↓
    0 ──→ A′ ──j′──→ B′ ──k′──→ C′ ──→ 0
the rows are exact and the squares commute. Prove the following.
(1) If g is surjective, so is h.
(2) If f is surjective and g is injective, then h is injective.
(3) If f and h are surjective, so is g.
(4) If f and h are injective, so is g.
6.2.6. Definition. Let A and B be C ∗ -algebras. We define the (external) direct sum of A
and B, denoted by A ⊕ B, to be the Cartesian product A × B with pointwise defined algebraic
operations and norm given by
k(a, b)k = max{kak, kbk}
for all a ∈ A and b ∈ B. An alternative notation for the element (a, b) in A ⊕ B is a ⊕ b.
6.2.7. Example. Let A and B be C ∗ -algebras. Then the direct sum of A and B is a C ∗ -algebra
and the following sequence is split short exact:
    0 ──→ A ──ι1──→ A ⊕ B ──π2──→ B ──→ 0
6.2.9. Definition. Extensions
    0 ──→ A ──φ──→ E ──ψ──→ B ──→ 0
and
    0 ──→ A ──φ′──→ E′ ──ψ′──→ B ──→ 0
of C∗-algebras are strongly equivalent if there exists a ∗-isomorphism θ : E → E′ which makes the diagram
    0 ──→ A ──φ──→ E ──ψ──→ B ──→ 0
          ∥          θ↓        ∥
    0 ──→ A ──φ′──→ E′ ──ψ′──→ B ──→ 0
commute.
6.2.10. Proposition. In the preceding definition it is enough to require θ to be a ∗ -homomorphism.
6.2.11. Proposition. Let A and B be C ∗ -algebras. An extension
φ ψ
0 /A /E /B /0
is strongly equivalent to the direct sum extension A⊕B if and only if there exists a ∗ -homomorphism
ν : E → A such that ν ◦ φ = idA .
6.2.12. Proposition. If the sequences of C ∗ -algebras
φ ψ
0 /A /E /B /0 (6.2)
and
φ0 ψ0
0 /A / E0 /B /0 (6.3)
are strongly equivalent and (6.2) splits, then so does (6.3).
6.3. Unitization of C∗-algebras
Hint for proof . The proof of this result is a little complicated. Everyone should go through all
the details at least once in his/her life. What follows is an outline of a proof.
Notice that we speak of the unitization of C ∗ -algebra A whether or not A already has a unit
(multiplicative identity). We divide the argument into two cases.
Case 1: the algebra A is unital.
(1) On the algebra A ./ C define
        ‖(a, λ)‖ := max{‖a + λ1A‖, |λ|}
    and let Ã := A ./ C together with this norm.
(2) Prove that the map (a, λ) ↦ ‖(a, λ)‖ is a norm on A ./ C.
(3) Prove that this norm is an algebra norm.
(4) Show that it is, in fact, a C∗-norm on A ./ C.
(5) Observe that it is an extension of the norm on A.
(6) Prove that A ./ C is a C ∗ -algebra by verifying completeness of the metric space induced
by the preceding norm.
(7) Prove that the sequence
        0 ──→ A ──ι──→ Ã ──Q──→ C ──→ 0
    (together with the map ψ : C → Ã in the opposite direction)
show that this sequence converges it suffices to show that it has a convergent subsequence.
Showing that the sequence (λn ) is bounded allows us to extract from it a convergent
subsequence λnk . Prove that (Lank ) converges.)
(18) Prove that the sequence
        0 ──→ A ──→ Ã ──→ C ──→ 0          (6.5)
is split exact.
(19) The C∗-algebra Ã is not equivalent to A ⊕ C.
In contrast to the situation in general Banach algebras there is no distinction between topo-
logical and geometric categories of C ∗ -algebras. One of the most remarkable aspects of C ∗ -algebra
theory is that ∗-homomorphisms between such algebras are automatically continuous—in fact, con-
tractive. It follows that if two C ∗ -algebras are algebraically ∗-isomorphic, then they are isometrically
isomorphic.
6.3.13. Proposition. Every ∗-homomorphism between C ∗ -algebras is contractive.
6.3.14. Proposition. Every injective ∗-homomorphism between C ∗ -algebras is an isometry.
6.3.15. Proposition. Let X be a locally compact Hausdorff space and X̃ = X ∪ {∞} be its one-point compactification. Define
    ι : C0(X) → C(X̃) : f ↦ f̃
where
    f̃(x) = f(x) if x ∈ X, and f̃(x) = 0 if x = ∞.
Also let E∞ be defined on C(X̃) by E∞(g) = g(∞). Then the sequence
    0 ──→ C0(X) ──ι──→ C(X̃) ──E∞──→ C ──→ 0
is exact.
In the preceding proposition we refer to X e as the one-point compactification of X even in the
case that X is compact to begin with. Most definitions of compactification require a space to be
dense in any compactification. (See my remarks in the beginning of section 17.3 of [11].) We have
previously adopted the convention that the unitization of a unital algebra gets a new multiplicative
identity. In the spirit of consistency with this choice we will in the sequel subscribe to the convention
that the one-point compactification of a compact space gets an additional (isolated) point. (See
also the terminology introduced in 9.3.1.)
From the point of view of the Gelfand-Naimark theorem (6.3.11) the fundamental insight
prompted by the next proposition is that the unitization of a commutative C ∗ -algebra is, in some
sense, the “same thing” as the one-point compactification of a locally compact Hausdorff space.
6.3.16. Proposition. If X is a locally compact Hausdorff space, then the unital C∗-algebras C0(X)∼ and C(X̃) are isometrically ∗-isomorphic.
Proof. Define
    θ : C0(X)∼ → C(X̃) : (f, λ) ↦ f̃ + λ1X̃
(where 1X̃ is the constant function 1 on X̃). Then consider the diagram

    0 ──→ C0(X) ──→ C0(X)∼ ──→ C ──→ 0
           ∥             θ↓        ∥
    0 ──→ C0(X) ──ι──→ C(X̃) ──E∞──→ C ──→ 0

The top row is exact by proposition 6.3.1, the bottom row is exact by proposition 6.3.15, and the
diagram obviously commutes. It is routine to check that θ is a ∗ -homomorphism. Therefore θ is
an isometric ∗ -isomorphism by proposition 6.2.10 and corollary 6.3.14.
6.4. Quasi-inverses
6.4.1. Definition. An element b of an algebra A is a left quasi-inverse for a ∈ A if ba = a + b.
It is a right quasi-inverse for a if ab = a + b. If b is both a left and a right quasi-inverse for a
it is a quasi-inverse for a. When a has a quasi-inverse denote it by a′.
6.4.6. Proposition. Let A be a Banach algebra and a ∈ A. If ‖a‖ < 1, then a′ exists and
    ‖a‖/(1 + ‖a‖) ≤ ‖a′‖ ≤ ‖a‖/(1 − ‖a‖).
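The following numerical sketch (Python with numpy; nothing in it comes from the text, and the helper names are ad hoc) illustrates 6.4.6: when ‖a‖ < 1 one can check that the partial sums of −(a + a² + a³ + ⋯) converge to a quasi-inverse a′, whose norm then satisfies the stated bounds.

```python
import numpy as np

# Sketch: for ||a|| < 1 the series a' = -(a + a^2 + a^3 + ...) gives a quasi-inverse,
# since then a a' = a' a = a + a'.  We also check the norm estimate of 6.4.6.
rng = np.random.default_rng(7)
a = rng.standard_normal((3, 3))
a *= 0.4 / np.linalg.norm(a, 2)                 # scale so that ||a|| = 0.4

aprime = np.zeros_like(a)
power = a.copy()
for _ in range(200):                            # partial sums of the series
    aprime -= power
    power = power @ a

print(np.allclose(a @ aprime, a + aprime), np.allclose(aprime @ a, a + aprime))
na, nap = np.linalg.norm(a, 2), np.linalg.norm(aprime, 2)
print(na / (1 + na) <= nap <= na / (1 - na))    # True: the bounds of 6.4.6 hold
```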
6.4.7. Proposition. Let A be a unital Banach algebra and a ∈ A. If ρ(1 − a) < 1, then a is
invertible in A.
6.4.8. Proposition. Let A be a Banach algebra and a, b ∈ A. If b′ exists and ‖a‖ < (1 + ‖b′‖)⁻¹, then (a + b)′ exists and
    ‖(a + b)′ − b′‖ ≤ ‖a‖(1 + ‖b′‖)² / (1 − ‖a‖(1 + ‖b′‖)).
Hint for proof . Show first that u = (a − b′a)′ exists and that u + b′ − ub′ is a left quasi-inverse for a + b.
6.4.15. Proposition. If an algebra A is not unital and a ∈ A, then σ̆A(a) = σ̆Ã(a), where Ã is the unitization of A.
6.5. Positive Elements in C∗-algebras
(i) c ≥ 0;
(ii) there exists b ≥ 0 such that c = b2 ; and
(iii) there exists a ∈ A such that c = a∗ a,
6.5.15. Example. If T is an operator on a Hilbert space H, then T is a positive member of the
C ∗ -algebra B(H) if and only if hT x, xi ≥ 0 for all x ∈ H.
Hint for proof . Showing that if T ≥ 0 in B(H), then hT x, xi ≥ 0 for all x ∈ H is easy: use
proposition 6.5.14 to write T as S ∗ S for some operator S.
For the converse suppose that hT x, xi ≥ 0 for all x ∈ H. It is easy to see that this implies
that T is self-adjoint. Use the Jordan decomposition theorem 6.5.11 to write T as T + − T − . For
arbitrary u ∈ H let x = T − u and verify that 0 ≤ hT x, xi = −h(T − )3 u, ui. Now (T − )3 is a positive
element of B(H). (Why?) Conclude that (T − )3 = 0 and therefore T − = 0. (For additional detail
see [8], page 37.)
6.5.16. Definition. For an arbitrary element a of a C∗-algebra we define |a| to be √(a∗a).
6.5.17. Proposition. If a is a self-adjoint element of a C ∗ -algebra, then
|a| = a+ + a− .
6.5.18. Example. The absolute value in a C∗-algebra need not be subadditive; that is, |a + b| need not be less than |a| + |b|. For example, in M2(C) take
    a = [1 1; 1 1]   and   b = [0 0; 0 −2].
Then |a| = a, |b| = [0 0; 0 2], and |a + b| = [√2 0; 0 √2]. If |a + b| − |a| − |b| were positive, then, according to example 6.5.15, ⟨(|a + b| − |a| − |b|) x , x⟩ would be positive for every vector x ∈ C². But this is not true for x = (0, 1).
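Here is a small numerical check of this example (a sketch in Python with numpy; the helper computes |m| from the spectral decomposition of the self-adjoint matrix m).

```python
import numpy as np

# |m| = sqrt(m* m); for a self-adjoint matrix this is obtained by taking absolute
# values of the eigenvalues in a spectral decomposition.
def absolute_value(m):
    lam, v = np.linalg.eigh(m)
    return v @ np.diag(np.abs(lam)) @ v.conj().T

a = np.array([[1.0, 1.0], [1.0, 1.0]])
b = np.array([[0.0, 0.0], [0.0, -2.0]])

d = absolute_value(a + b) - absolute_value(a) - absolute_value(b)
x = np.array([0.0, 1.0])
print(x @ d @ x)                  # about sqrt(2) - 3 < 0, so d is not positive
print(np.linalg.eigvalsh(d))      # one eigenvalue is negative
```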
6.5.19. Proposition. If φ : A → B is a ∗ -homomorphism between C ∗ -algebras, then φ(a) ∈ B +
whenever a ∈ A+ . If φ is a ∗ -isomorphism, then φ(a) ∈ B + if and only if a ∈ A+ .
6.5.20. Proposition. Let a be a self-adjoint element of a C ∗ -algebra A and f a continuous complex
valued function on the spectrum of a. Then f ≥ 0 in C(σ(a)) if and only if f (a) ≥ 0 in A.
6.5.21. Proposition. If a is a self-adjoint element of a C ∗ -algebra A, then kak1A ± a ≥ 0.
6.5.22. Proposition. If a and b are self-adjoint elements of a C ∗ -algebra A and a ≤ b, then
x∗ ax ≤ x∗ bx for every x ∈ A.
6.5.23. Proposition. If a and b are elements of a C ∗ -algebra with 0 ≤ a ≤ b, then kak ≤ kbk.
6.5.24. Proposition. Let A be a unital C∗-algebra and c ∈ A+ . Then c is invertible if and only if c ≥ ε1 for some ε > 0.
6.5.25. Proposition. Let A be a unital C ∗ -algebra and c ∈ A. If c ≥ 1, then c is invertible and
0 ≤ c−1 ≤ 1.
6.5.26. Proposition. If a is a positive invertible element in a unital C ∗ -algebra, then a−1 is
positive.
Next we show that the notation a^{−1/2} is unambiguous.
6.5.27. Proposition. Let a ∈ A+ where A is a unital C∗-algebra. If a is invertible, so is a^{1/2} and (a^{1/2})⁻¹ = (a⁻¹)^{1/2}.
6.5.28. Proposition. Let a and b be elements of a C ∗ -algebra. If 0 ≤ a ≤ b and a is invertible,
then b is invertible and b−1 ≤ a−1 .
6.5.29. Proposition. If a and b are elements of a C∗-algebra and 0 ≤ a ≤ b, then √a ≤ √b.
6.6. Approximate Identities
is short exact.
6.6.9. Proposition. The range of a ∗-homomorphism between C ∗ -algebras is closed (and therefore
itself a C ∗ -algebra).
6.6.10. Definition. A C ∗ -subalgebra B of a C ∗ -algebra A is hereditary if a ∈ B whenever
a ∈ A, b ∈ B, and 0 ≤ a ≤ b.
6.6.11. Proposition. Suppose x∗x ≤ a in a C∗-algebra A. Then there exists b ∈ A such that x = ba^{1/4} and ‖b‖ ≤ ‖a‖^{1/4}.
Proof. See [7], page 13.
    A ──φ──→ B
    π↓     ↗ φ̃
    A/J
Furthermore, φe is injective if and only if ker φ = J; and φe is surjective if and only if φ is.
7.1.10. Proposition. Let E be an orthonormal set in a Hilbert space H. Then the following are equivalent.
(a) E is maximal (that is, E is a basis).
(b) E is total (that is, if x ⊥ E, then x = 0).
(c) E is complete (that is, ⋁E = H).
(d) x = Σ_{e∈E} ⟨x, e⟩e for all x ∈ H. (Fourier expansion)
(e) ⟨x, y⟩ = Σ_{e∈E} ⟨x, e⟩⟨e, y⟩ for all x, y ∈ H. (Parseval's identity)
(f) ‖x‖² = Σ_{e∈E} |⟨x, e⟩|² for all x ∈ H. (Parseval's identity)
The coefficients hx, ei in the Fourier expansion of the vector x (in part (d) of proposition 7.1.10)
are the Fourier coefficients of x with respect to the basis E. It is easy to see that these
coefficients are unique.
7.1.11. Proposition. Let E be a basis for a Hilbert space H and x ∈ H. If x = Σ_{e∈E} α_e e, then α_e = ⟨x, e⟩ for each e ∈ E.
7.1.12. Example. By writing out the Fourier expansion of the identity function f : x ↦ x in the Hilbert space L²([0, 2π]) with respect to the basis given in example 7.1.5, we demonstrate that the sum of the infinite series Σ_{k=1}^∞ 1/k² is π²/6.
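A quick numerical sketch of this computation follows (Python with numpy; it assumes the usual orthonormal basis e_n(x) = e^{inx}/√(2π), n ∈ Z, which may differ from the basis of example 7.1.5 only in indexing).

```python
import numpy as np

# Parseval's identity for f(x) = x on [0, 2*pi], truncated at |n| <= N.
M = 200000
x = np.linspace(0.0, 2 * np.pi, M, endpoint=False)
dx = x[1] - x[0]
f = x

N = 100
ns = np.arange(-N, N + 1)
coeffs = np.array([np.sum(f * np.exp(-1j * n * x)) * dx for n in ns]) / np.sqrt(2 * np.pi)

norm_sq = np.sum(f ** 2) * dx                  # ||f||^2 = 8 pi^3 / 3
print(norm_sq, np.sum(np.abs(coeffs) ** 2))    # close, up to truncation error

# The n != 0 terms contribute 4*pi * sum(1/n^2), which recovers pi^2/6.
tail = np.sum(np.abs(coeffs[ns != 0]) ** 2)
print(tail / (4 * np.pi), np.pi ** 2 / 6)      # approximately equal
```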
7.1.13. Definition. A mapping T : V → W between vector spaces is conjugate linear if T(u + v) = Tu + Tv and T(αv) = ᾱ Tv for all u, v ∈ V and all α ∈ C. A bijective conjugate linear map between vector spaces is an anti-isomorphism.
7.1.14. Example. Every inner product is conjugate linear in its second variable.
7.1.15. Example. For a Hilbert space H, the mapping ψ : H → H ∗ : a 7→ ψa defined in exam-
ple 2.2.22 is an isometric anti-isomorphism between H and its dual space.
7.1.16. Definition. A conjugate linear mapping C : V → V from a vector space into itself which
satisfies C 2 = idV is called a conjugation on V .
7.1.17. Example. Complex conjugation z 7→ z is an example of a conjugation in the vector
space C.
7.1.18. Example. Let H be a Hilbert space and B be a basis for H. Then the map x ↦ Σ_{e∈B} ⟨e, x⟩e is a conjugation and an isometry on H.
The term “anti-isomorphism” is a bit misleading. It suggests something entirely different from
an isomorphism. In fact, an anti-isomorphism is nearly as good as an isomorphism. The next
proposition says in essence that a Hilbert space and its dual are isomorphic because they are anti-
isomorphic.
7.1.19. Proposition. Let H be a Hilbert space, ψ : H → H ∗ be the anti-isomorphism defined
in 2.2.22 (see 7.1.15), and C be the conjugation defined in 7.1.18. Then the composite Cψ −1 is an
isometric isomorphism from H ∗ onto H.
7.1.20. Corollary. Every Hilbert space is isometrically isomorphic to its dual.
7.1.21. Proposition. Let H be a Hilbert space and ψ : H → H ∗ be the anti-isomorphism defined
in 2.2.22 (see 7.1.15). If we define hf, gi := hψ −1 g, ψ −1 f i for all f , g ∈ H ∗ , then H ∗ becomes a
Hilbert space isometrically isomorphic to H. The resulting norm on H ∗ is its usual norm.
7.1.22. Corollary. Every Hilbert space is reflexive.
In example 4.2.4 we defined the adjoint of a linear operator between Banach spaces and in
proposition 5.1.6 we defined the adjoint of a Hilbert space operator. In the case of an operator on a
Hilbert space (which is also a Banach space) what is the relationship between these two “adjoints”?
They certainly are not equal since the former acts between the dual spaces and the latter between
the original spaces. In the next proposition we make use of the anti-isomorphism ψ defined in 2.2.22
(see 7.1.15) to demonstrate that the two adjoints are “essentially” the same.
7.1.23. Proposition. Let T be an operator on a Hilbert space H and ψ be the anti-isomorphism defined in 2.2.22. If we denote (temporarily) the Banach space dual of T by T′ : H∗ → H∗, then T′ = ψ T∗ ψ⁻¹. That is, the following diagram commutes.

    H∗ ──T′──→ H∗
    ψ↑           ↑ψ
    H  ──T∗──→ H
7.1.24. Definition. Let A be a subset of a Banach space B. Then the annihilator of A, denoted
by A⊥ , is {f ∈ B ∗ : f (a) = 0 for every a ∈ A}.
It is possible for the conflict in notations between 1.2.25 and 7.1.24 to cause confusion. To see
that the annihilator of a subspace and its orthogonal complement are “essentially” the same thing,
use the isometric anti-isomorphism ψ between H and H ∗ discussed in 7.1.15 to identify them.
7.1.25. Proposition. Let M be a subspace of a Hilbert space H. If we (temporarily) denote the
annihilator of M by M a , then M a = ψ → M ⊥ .
(iii) qp = −pq;
(iv) p + q is a projection.
7.2.7. Definition. Let p and q be projections in a ∗ -algebra. If any of the conditions in the
preceding result holds, then we say that p and q are orthogonal and write p ⊥ q. (Thus for
operators on a Hilbert space we would correctly speak of orthogonal orthogonal projections!)
7.2.8. Proposition. Let P and Q be projections on a Hilbert space H. Then P ⊥ Q if and only
if ran P ⊥ ran Q. In this case P + Q is an orthogonal projection whose kernel is ker P ∩ ker Q and
whose range is ran P + ran Q.
7.2.9. Example. On a Hilbert space (orthogonal) projections need not commute. For example let
P be the projection of the (real) Hilbert space R2 onto the line y = x and Q be the projection of
R2 onto the x-axis. Then P Q 6= QP .
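The following short check (Python with numpy; a sketch, with the two projections written out explicitly) verifies this.

```python
import numpy as np

# P projects R^2 onto the line y = x, Q projects R^2 onto the x-axis.
P = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])   # projection onto span{(1, 1)}
Q = np.array([[1.0, 0.0], [0.0, 0.0]])         # projection onto span{(1, 0)}
print(P @ Q)
print(Q @ P)
print(np.allclose(P @ Q, Q @ P))               # False: PQ != QP
```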
7.2.10. Proposition. Let p and q be projections in a ∗ -algebra. Then pq is a projection if and
only if pq = qp.
7.2.11. Proposition. Let P and Q be projections on a Hilbert space H. If P Q = QP , then P Q
is a projection whose kernel is ker P + ker Q and whose range is ran P ∩ ran Q.
7.2.12. Proposition. Let p and q be projections in a ∗ -algebra. Then the following are equivalent:
(i) pq = p;
(ii) qp = p;
(iii) q − p is a projection.
7.2.13. Definition. Let p and q be projections in a ∗ -algebra. If any of the conditions in the
preceding result holds, then we write p ≼ q.
7.2.14. Proposition. Let P and Q be projections on a Hilbert space H. Then the following are
equivalent:
(i) P ≼ Q;
(ii) kP xk ≤ kQxk for all x ∈ H; and
(iii) ran P ⊆ ran Q.
In this case Q − P is a projection whose kernel is ran P + ker Q and whose range is ran Q ⊖ ran P.
Notation: Let H, M, and N be subspaces of a Hilbert space. The assertion H = M ⊕ N may be rewritten as M = H ⊖ N (or N = H ⊖ M).
7.2.15. Proposition. The operation ≼ defined in 7.2.13 for projections in a ∗-algebra A is a partial ordering on P(A). If p is a projection in A, then 0 ≼ p ≼ 1.
7.2.16. Proposition. Suppose p and q are projections on a ∗ -algebra A. If pq = qp, then the
infimum of p and q, which we denote by p f q, exists with respect to the partial ordering and
p f q = pq. The infimum p f q may exist even when p and q do not commute. A necessary and
sufficient condition that p ⊥ q hold is that both p f q = 0 and pq = qp hold.
7.2.17. Proposition. Suppose p and q are projections on a ∗ -algebra A. If p ⊥ q, then the
supremum of p and q, which we denote by p g q, exists with respect to the partial ordering and
p g q = p + q. The supremum p g q may exist even when p and q are not orthogonal.
7.2.18. Proposition. Let A be a C∗-algebra, a ∈ A+ , and 0 < ε ≤ 1/2. If ‖a² − a‖ < ε/2, then there exists a projection p in A such that ‖p − a‖ < ε.
7.2.19. Definition. An element v of a C ∗ -algebra A is a partial isometry if v ∗ v is a projection
in A. (Since v ∗ v is always self-adjoint, it is enough to require that v ∗ v be idempotent.) The element
v ∗ v is the initial (or support) projection of v and vv ∗ is the final (or range) projection
of v. (It is an obvious consequence of the next proposition that if v is a partial isometry, then vv ∗
is in fact a projection.)
7.3. Finite Rank Operators
7.3.4. Proposition. For a self-adjoint operator T on a Hilbert space the following are equivalent:
(i) σ(T ) ⊆ [0, ∞);
(ii) there exists an operator S on H such that T = S ∗ S; and
(iii) T is positive.
7.3.5. Notation. For vectors x and y in a Hilbert space H let
x ⊗ y : H → H : z 7→ hz, yix .
7.3.6. Proposition. If x and y are nonzero vectors in a Hilbert space H, then x ⊗ y is a rank 1
operator in B(H).
Let us extend the terminology of definition 2.1.29 from complex valued functions to functions
whose codomain is a ∗ -algebra.
7.3.7. Definition. Suppose V is a vector space and A is a ∗-algebra. A function φ : V × V → A is sesquilinear if it is linear in its first variable and conjugate linear in its second. It is conjugate symmetric if φ(x, y)∗ = φ(y, x) for all x, y ∈ V.
If A is a C ∗ -algebra we say that the mapping φ is positive semidefinite if φ(x, x) ≥ 0 for
every x ∈ V .
7.3.8. Proposition. If H is a Hilbert space, then
φ : H × H → B(H) : (x, y) 7→ x ⊗ y
is a positive semidefinite, conjugate symmetric, sesquilinear mapping.
7.3.9. Proposition. If H is a Hilbert space, x and y are elements of H, and T ∈ B(H), then
T (x ⊗ y) = (T x) ⊗ y
and therefore
(x ⊗ y)T = x ⊗ (T ∗ y) .
7.3.10. Lemma. If x is a vector in a Hilbert space H, then x ⊗ x is a rank 1 projection if and
only if x is a unit vector.
7.3.11. Proposition. Let M = span{e1, . . . , en} where {e1, . . . , en} is an orthonormal subset of a Hilbert space H. Then
    PM = Σ_{k=1}^{n} ek ⊗ ek .
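A finite dimensional illustration of 7.3.5 and 7.3.11 (a sketch in Python with numpy; the orthonormal pair and the helper name are invented for the example):

```python
import numpy as np

# x (tensor) y acts as z |-> <z, y> x; its matrix is the outer product of x with conj(y).
def tensor(x, y):
    return np.outer(x, y.conj())

rng = np.random.default_rng(0)
# an orthonormal pair {e1, e2} in C^4, obtained from a QR factorization
q, _ = np.linalg.qr(rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2)))
e1, e2 = q[:, 0], q[:, 1]

P = tensor(e1, e1) + tensor(e2, e2)
print(np.allclose(P, P.conj().T), np.allclose(P @ P, P))   # self-adjoint idempotent
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)
print(np.allclose(P @ z, np.vdot(e1, z) * e1 + np.vdot(e2, z) * e2))  # P z = <z,e1>e1 + <z,e2>e2
```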
7.3.12. Proposition. Every finite rank Hilbert space operator is a linear combination of rank one
projections.
Proof. See [20], theorem 2.4.6.
8.2. Representations
8.2.1. Definition. Let A be a C ∗ -algebra.
A representation of A is a pair (π, H) where H
is a Hilbert space and π : A → B(H) is a ∗ -homomorphism. Usually one says simply that π is a
representation of A. When we wish to emphasize the role of the particular Hilbert space we say
that π is a representation of A on H. Depending on context we may write either πa or π(a) for
the Hilbert space operator which is the image of the algebra element a under π. A representation
π of A on H is nondegenerate if π(A)H is dense in H.
8.2.2. Convention. We add to the preceding definition the following requirement: if the C ∗ -
algebra A is unital, then a representation of A must be a unital ∗ -homomorphism.
8.2.3. Definition. A representation π of a C ∗ -algebra A on a Hilbert space H is faithful if it is
injective. If there exists a vector x ∈ H such that π → (A)x = {πa (x) : a ∈ A} is dense in H, then
we say that the representation π is cyclic and that x is a cyclic vector for π.
8.2.4. Example. Let (S, A, µ) be a σ-finite measure space and L∞ = L∞ (S, A, µ) be the C ∗ -algebra
of essentially bounded µ-measurable functions on S. As we saw in example 5.1.8 for each φ ∈ L∞
the corresponding multiplication operator Mφ is an operator on the Hilbert space L2 = L2 (S, A, µ).
The mapping M : L∞ → B(L2 ) : φ 7→ Mφ is a faithful representation of the C ∗ -algebra L∞ on the
Hilbert space L2 .
8.2.5. Example. Let C([0, 1]) be the C ∗ -algebra of continuous functions on the interval [0, 1]. For
each φ ∈ C([0, 1]) the corresponding multiplication operator Mφ is an operator on the Hilbert space
L2 = L2 ([0, 1]) of functions on [0, 1] which are square-integrable with respect to Lebesgue measure.
The mapping M : C([0, 1]) → B(L2 ) : φ 7→ Mφ is a faithful representation of the C ∗ -algebra C([0, 1])
on the Hilbert space L2 .
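A toy discrete version of these two examples may help (a sketch in Python with numpy; here S = {0, . . . , n − 1} carries counting measure, so both L∞ and L² may be identified with Cⁿ and Mφ is the diagonal matrix diag(φ)).

```python
import numpy as np

n = 5
rng = np.random.default_rng(1)
phi = rng.standard_normal(n) + 1j * rng.standard_normal(n)

M_phi = np.diag(phi)
print(np.linalg.norm(M_phi, 2), np.max(np.abs(phi)))       # operator norm = sup norm (isometric)

psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(np.allclose(np.diag(phi * psi), M_phi @ np.diag(psi)))   # multiplicative
print(np.allclose(np.diag(phi.conj()), M_phi.conj().T))        # *-preserving
```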
8.2.6. Example. Suppose that π is a representation of a unital C ∗ -algebra A on a Hilbert space H
and x is a unit vector in H. If ωx is the corresponding vector state of B(H), then ωx ◦ π is a state
of A.
8.2.7. Exercise. Let X be a locally compact Hausdorff space. Find an isometric (therefore faithful)
representation (π, H) of the C∗-algebra C0(X) on some Hilbert space H.
8.2.8. Definition. Let ρ be a state of a C ∗ -algebra A. Then
Lρ := {a ∈ A : ρ(a∗ a) = 0}
is called the left kernel of ρ.
Recall that as part of the proof of Schwarz inequality 8.1.8 for positive linear functionals we
verified the following result.
8.2.9. Proposition. If ρ is a state of a C ∗ -algebra A, then ha, bi0 := ρ(b∗ a) defines a semi-inner
product on A.
8.2.10. Corollary. If ρ is a state of a C ∗ -algebra A, then its left kernel Lρ is a vector subspace of
A and h [a], [b] i := ha, bi0 defines an inner product on the quotient algebra A/Lρ .
8.2.11. Proposition. Let ρ be a state of a C ∗ -algebra A and a ∈ Lρ . Then ρ(b∗ a) = 0 for every
b ∈ A.
8.2.12. Proposition. If ρ is a state of a C ∗ -algebra A, then its left kernel Lρ is a closed left ideal
in A.
were a bit different since instead of reversing multiplication (which Hilbert spaces don’t have) they
conjugate scalars. In either case an anti-isomorphism is not something terribly different from an
isomorphism but actually something quite similar.
The notion of an A-module (where A is an algebra) was defined in 9.1.3. You may prefer
the following alternative definition, which is more in line with the definition of vector space given
in 9.1.4.
9.1.8. Definition. Let A be an algebra. An A-module is an ordered quadruple (V, +, M, Φ)
where (V, +, M ) is a vector space and Φ : A → L(V ) is an algebra homomorphism. If A is unital
we require also that Φ be unital.
9.1.9. Exercise. Check that the definitions of A-module given in 9.1.3 and 9.1.8 are equivalent.
We now say precisely what it means, when A is a C∗-algebra, to give an A-module an A-valued
inner product.
9.1.10. Definition. Let A be a C ∗ -algebra. A semi-inner product A-module is an A-module
V together with a mapping
β : V × V → A : (x, y) 7→ hx| yi
which is linear in its second variable and satisfies
(i) hx| yai = hx| yia,
(ii) hx| yi = hy| xi∗ , and
(iii) hx| xi ≥ 0
for all x, y ∈ V and a ∈ A. It is an inner product A-module (or a pre-Hilbert A-module)
if additionally
(iv) hx| xi = 0 implies that x = 0
when x ∈ V . We will refer to the mapping β as an A-valued (semi-)inner product on V .
9.1.11. Example. Every inner product space is an inner product C-module.
9.1.12. Proposition. Let A be a C ∗ -algebra and V be a semi-inner product A-module. The semi-
inner product h | i is conjugate linear in its first variable both literally and in the sense that
hva| wi = a∗ hv| wi for all v, w ∈ V and a ∈ A.
9.1.13. Proposition (Schwarz inequality—for inner product A-modules). Let V be an inner prod-
uct A-module where A is a C ∗ -algebra. Then
hx| yi∗ hx| yi ≤ khx| xik hy| yi
for all x, y ∈ V .
Hint for proof . Show that no generality is lost in assuming that khx| xik = 1. Consider the
positive element hxa − y| xa − yi where a = hx| yi. Use propositions 6.5.21 and 6.5.22.
9.1.14. Definition. For every element v of an inner product A-module (where A is a C∗-algebra) define
    ‖v‖ := ‖⟨v| v⟩‖^{1/2} .
9.1.15. Proposition (Yet another Schwarz inequality). Let A be a C ∗ -algebra and V be an inner
product A-module. Then for all v, w ∈ V
khv| wik ≤ kvk kwk.
9.1.16. Corollary. If v and w are elements of an inner product A-module (where A is a C ∗ -
algebra), then kv + wk ≤ kvk + kwk and the map x 7→ kxk is a norm on V .
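The following sketch (Python with numpy) illustrates 9.1.13 and 9.1.15 in the simplest nontrivial case, A = M2(C) and V = A with the A-valued inner product ⟨x| y⟩ := x∗y; the random matrices and helper names are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(2)
def rand():
    return rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

x, y = rand(), rand()
ip = lambda u, v: u.conj().T @ v          # <u|v> = u* v
op_norm = lambda m: np.linalg.norm(m, 2)

lhs = ip(x, y).conj().T @ ip(x, y)        # <x|y>* <x|y>
rhs = op_norm(ip(x, x)) * ip(y, y)        # ||<x|x>|| <y|y>
print(np.linalg.eigvalsh(rhs - lhs))      # all >= 0 (up to rounding): the Schwarz inequality

# and the norm inequality ||<x|y>|| <= ||x|| ||y|| of 9.1.15
print(op_norm(ip(x, y)) <= np.sqrt(op_norm(ip(x, x)) * op_norm(ip(y, y))) + 1e-12)
```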
Notice that in the preceding proposition no claim is made that the algebraic ideal Ic must be
proper. It may well be the case that Ic = A (as, for example, when c is an invertible element of a
unital algebra).
9.2.5. Definition. Let c be an element of a C ∗ -algebra A. Define Jc , the principal ideal
containing c, to be the intersection of the family of all (closed) ideals of A which contain c. Clearly,
Jc is the smallest ideal containing c.
9.2.6. Proposition. In a C ∗ -algebra the closure of an algebraic ideal is an ideal.
9.2.7. Example. The closure of a proper algebraic ideal in a C ∗ -algebra need not be a proper
ideal. For example, lc , the set of sequences of complex numbers which are eventually zero, is dense
in the C ∗ -algebra l0 = C0 (N). (But recall proposition 3.2.1.)
9.2.8. Proposition. If c is an element of a C ∗ -algebra, then Jc = Ic .
9.2.9. Notation. We adopt a standard notational convention. If A and B are nonempty subsets of an algebra, then by AB we mean the linear span of products of elements in A and elements in B;
that is, AB = span{ab : a ∈ A and b ∈ B}. (Note that in definition 2.3.17 it makes no difference
whether we take AJ to mean the set of products of elements in A with elements in J or the span
of that set.)
9.2.10. Proposition. If I and J are ideals in a C ∗ -algebra, then IJ = I ∩ J.
Hausdorff space in which X is an open subset and let B = C(Y ). Then B is a unital commutative
C ∗ -algebra. Regard A as embedded as an ideal in B by means of the map ι : A → B : f 7→ fe where
    f̃(y) = f(y) if y ∈ X, and f̃(y) = 0 otherwise.
Notice that the closed set X^c is ⋂{Zf̃ : f ∈ C0(X)}.
9.2.19. Proposition. Let the notation be as in the preceding paragraph. Then the ideal A is
essential in B if and only if the open subset X is dense in Y .
Thus the property of an ideal being essential in the context of unitizations of nonunital com-
mutative C ∗ -algebras corresponds exactly with the property of an open subspace being dense in
the context of compactifications of noncompact locally compact Hausdorff spaces.
9.3.3. Definition. Let X and Y be Hausdorff topological spaces. We say that X is embedded
in Y if there exists a homeomorphism j from X to a subspace of Y. The homeomorphism j is an embedding of X into Y. As in C∗-algebras it is common practice to identify the range of j with
the space X. The pair (Y, j) is a compactification of X if Y is a compact Hausdorff space and
j : X → Y is an embedding. The compactification (Y, j) is essential if the range of j is dense
in Y .
We have discussed the smallest unitization of a C ∗ -algebra and the smallest compactification
of a locally compact Hausdorff space. Now what about a largest, or even maximal, unital algebra
containing A? Clearly there is no such thing, for if B is a unital algebra containing A, then so is
B ⊕ C where C is any unital C ∗ -algebra. Similarly, there is no largest compact space containing X:
if Y is a compact space containing X, then so is the topological disjoint union Y ] K where K
is any nonempty compact space. However, it does make sense to ask whether there is a maximal
essential unitization of a C ∗ -algebra or a maximal essential compactification of a locally compact
Hausdorff space. The answer is yes in both cases. The well-known Stone-Čech compactification
β(X) is maximal among essential compactifications of a noncompact locally compact Hausdorff
space X. Details can be found in any good topology text. One readable standard treatment is [30],
items 19.3–19.12. More sophisticated approaches make use of some functional analysis—see, for
example, [5], chapter V, section 6. There turns out also to be a maximal essential unitization of a
nonunital C ∗ -algebra A—it is called the multiplier algebra of A.
We say that an essential unitization M of a C ∗ -algebra A is maximal if any C ∗ -algebra that
contains A as an essential ideal embeds in M . Here is a more formal statement.
9.3.4. Definition. An essential unitization (M, j) of a C ∗ -algebra A is said to be maximal if for
every embedding ι : A → B whose range is an essential ideal in B there exists a ∗ -homomorphism
φ : B → M such that φ ◦ ι = j.
9.3.5. Proposition. In the preceding definition the ∗-homomorphism φ, if it exists, must be injective.
9.3.6. Proposition. In the preceding definition the ∗-homomorphism φ, if it exists, must be unique.
Compare the following definition with 8.2.1.
9.3.7. Definition. Let A and B be C ∗ -algebras and V be a Hilbert A-module. A ∗ -homomorphism
φ : B → L(V ) is nondegenerate if φ→ (B) V is dense in V .
9.3.8. Proposition. Let A, B, and J be C ∗ -algebras, V be a Hilbert B-module, and ι : J → A be
an injective ∗ -homomorphism whose range is an ideal in A. If φ : J → L(V ) is a nondegenerate
∗ -homomorphism, then there exists a unique extension of φ to a ∗ -homomorphism φ : A → L(V )
which satisfies φ ◦ ι = φ.
9.3.9. Proposition. If A is a nonzero C ∗ -algebra, then (L(A), L) is a maximal essential unitization
of A. It is unique in the sense that if (M, j) is another maximal essential unitization of A, then
there exists a ∗ -isomorphism φ : M → L(A) such that φ ◦ j = L.
9.3.10. Definition. Let A be a C ∗ -algebra. We define the multiplier algebra of A, to be the
family L(A) of adjointable operators on A. From now on we denote this family by M (A).
CHAPTER 10
FREDHOLM THEORY
have solutions f and h for every given g and j, respectively, the solutions being unique, in which
case the corresponding homogeneous equations
    Tf = 0 and          (3′)
    T∗h = 0             (4′)
have only the trivial solution; —or else—
the homogeneous equations (3’) and (4’) have the same (nonzero) finite number of linearly indepen-
dent solutions f1 , . . . , fn and h1 , . . . , hn , respectively, in which case the nonhomogeneous equations
(1’) and (2’) have a solution if and only if g and j satisfy
hk ⊥ g and (5’)
j ⊥ fk (6’)
for k = 1, . . . , n.
Notice that by making use of a few elementary facts concerning kernels and ranges of operators
and orthogonality in Hilbert spaces, we can compress the statement of 10.1.2 quite a bit.
10.1.3. Proposition (Fredholm Alternative IIIa). If T = λI − K where K is a compact Hilbert
space operator and λ ∈ C, then
(1) T is injective if and only if it is surjective,
(2) ran T ∗ = (ker T )⊥ , and
(3) dim ker T = dim ker T ∗ .
Also, conditions (1) and (2) hold for T ∗ as well as T .
Although it is incidental to our present purposes this is a convenient place to note the fact that
the sum of two subspaces of a Hilbert space need not be a subspace.
10.2.6. Example. Let T be the operator on the Hilbert space l2 defined in example 10.2.5, M =
l2 ⊕ {0}, and N be the graph of T . Then M and N are both (closed) subspaces of the Hilbert
space l2 ⊕ l2 but M + N is not.
Proof. Verification of example 10.2.6 follows easily from the following result and example 10.2.5.
10.2.7. Proposition. Let H be a Hilbert space, T ∈ B(H), M = H ⊕ {0}, and N be the graph
of T . Then
(a) The set N is a subspace of H ⊕ H.
(b) The operator T is injective if and only if M ∩ N = { (0, 0) }.
(c) The range of T is dense in H if and only if M + N is dense in H ⊕ H.
(d) The operator T is surjective if and only if M + N = H ⊕ H.
The good news for the theory of Fredholm operators is that operators with finite dimensional
cokernels automatically have closed range.
10.2.8. Proposition. If a bounded linear map A : H → K between Hilbert spaces has finite di-
mensional cokernel, then ran A is closed in K.
We observe in Fredholm alternative IIIb 10.2.2 that condition (1) is redundant and also that (2)
holds for any Banach space operator with closed range. This enables us to rephrase 10.2.2 more
economically.
10.2.9. Proposition (Fredholm alternative IV). If T is a Riesz-Schauder operator on a Banach
space, then
(1) T has closed range and
(2) dim ker T = dim ker T ∗ < ∞.
10.3.6. Example. The Fredholm index of the unilateral shift operator is −1.
10.3.7. Example. Every linear map T : V → W between finite dimensional vector spaces is
Fredholm and ind T = dim V − dim W .
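A quick check of this formula (a sketch in Python with numpy; the sizes 5 and 3 are arbitrary choices for the example):

```python
import numpy as np

# For a linear map T : R^5 -> R^3, ind T = dim ker T - dim coker T = 5 - 3,
# regardless of the rank of T (rank-nullity).
rng = np.random.default_rng(3)
T = rng.standard_normal((3, 5))

rank = np.linalg.matrix_rank(T)
dim_ker = 5 - rank
dim_coker = 3 - rank
print(dim_ker - dim_coker, 5 - 3)      # both equal 2
```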
10.3.8. Example. If T is a Fredholm operator on a Hilbert space, then ind T ∗ = − ind T .
10.3.9. Example. The index of any normal Fredholm operator is 0.
10.3.10. Lemma. Let S : U → V and T : V → W be linear transformations between vector spaces.
Then (there exist linear mappings such that) the following sequence is exact.
    0 → ker S → ker TS → ker T → coker S → coker TS → coker T → 0.
10.3.11. Lemma. If V0, V1, . . . , Vn are finite dimensional vector spaces and the sequence
    0 → V0 → V1 → · · · → Vn → 0
is exact, then
    Σ_{k=0}^{n} (−1)^k dim Vk = 0 .
10.3.12. Proposition. Let H be a Hilbert space. Then the set F(H) of Fredholm operators on H
is a semigroup under composition and the index function ind is an epimorphism from F(H) onto
the additive semigroup Z of integers.
Proof. Hint. Use 10.3.10, 10.3.11, 10.3.5, 10.3.6, and 10.3.8.
    σe(T) = σQ(H)(π(T)) .
    σe(T) = {λ ∈ C : T − λI ∉ F(H)} .
11.1.3. Proposition. The essential spectrum of a self-adjoint Hilbert space operator T is the union
of the accumulation points of the spectrum of T with the eigenvalues of T having infinite multiplicity.
The members of σ(T ) \ σe (T ) are the isolated eigenvalues of finite multiplicity.
11.1.4. Theorem (Weyl). If S and T are operators on a Hilbert space whose difference is compact,
then their spectra agree except perhaps for eigenvalues.
11.1.6. Definition. Let H and K be Hilbert spaces. Operators S ∈ B(H) and T ∈ B(K) are
essentially unitarily equivalent (or compalent) if there exists a unitary map U : H → K
such that S − U T U ∗ is a compact operator on H. (We extend definitions 1.2.35 and 1.2.36 in the
obvious way: U ∈ B(H, K) is unitary if U ∗ U = IH and U U ∗ = IK ; and S and T are unitarily
equivalent if there exists a unitary map U : H → K such that S = U T U ∗ .)
11.1.7. Proposition. Self-adjoint operators S and T on separable Hilbert spaces are essentially
unitarily equivalent if and only if they have the same essential spectrum.
11.1.9. Example. The unilateral shift operator S (see example 2.2.15) is essentially normal (but
not normal).
phism defined in the preceding proposition 11.2.20). See [1], remark 4.3.3; [7], theorem V.1.5; or
[16], page 35.
11.2.23. Definition. The short exact sequence in the preceding proposition 11.2.22 is the Toeplitz
extension of C(T) by K(H 2 ).
11.2.24. Remark. A version of the diagram for the Toeplitz extension which appears frequently
looks something like
    0 ──→ K(H²) ──→ T ──β──→ C(T) ──→ 0          (1)
                      ←──T───
where β is as in 11.2.22 and T is the mapping φ 7→ Tφ . It is possible to misinterpret this diagram.
It may suggest to the unwary that this is a split extension especially in as much as it is certainly
true that β ◦ T = IC(T) . The trouble, of course, is that this is not a diagram in the category CSA
of C ∗ -algebras and ∗ -homomorphisms. We have already seen in example 11.2.4 that the mapping
T is not a ∗ -homomorphism since it does not always preserve multiplication. Invertible elements
in C(T) need not lift to invertible elements in the Toeplitz algebra T; so the ∗ -epimorphism β does
not have a right inverse in the category CSA.
Some authors deal with the problem by saying that the sequence (1) is semisplit (see, for
example, Arveson [1], page 112). Others borrow a term from category theory where the word
“section” means “right invertible”. Davidson [7], for example, on page 134, refers to the mapping
β as a “continuous section” and Douglas [9], on page 179, says it is an “isometrical cross section”.
While it is surely true that β has T as a right inverse in the category SET of sets and maps and
even in the category of Banach spaces and bounded linear maps, it has no right inverse in CSA.
(where ι is the inclusion mapping) is exact. In fact, we will be concerned almost exclusively with
the case where A = C(X) for some compact metric space X. We will say that two extensions (E, φ)
and (E 0 , φ 0 ) are equivalent if there exists an isomorphism ψ : E → E 0 which makes the following
diagram commute.
    0 ──→ K ──ι──→ E ──φ──→ A ──→ 0          (11.2)
        ψ|K ↓        ψ↓          ∥
    0 ──→ K ──ι──→ E′ ──φ′──→ A ──→ 0
Notice that this differs slightly from the definition of strong equivalence given in 6.2.9. We denote
the family of all equivalence classes of such extensions by Ext A. When A = C(X) we write Ext X
rather than Ext C(X).
11.3.5. Definition. If U : H1 → H2 is a unitary mapping between Hilbert spaces, then the mapping AdU : B(H1) → B(H2) : T ↦ UTU∗ is called conjugation by U.
It is clear that AdU is an isomorphism between the C ∗ -algebras B(H1 ) and B(H2 ). In partic-
ular, if U is a unitary operator on H1 , then AdU is an automorphism of both K(H1 ) and B(H1 ).
Furthermore, conjugations are the only automorphisms of the C ∗ -algebra K(H).
11.3.7. Proposition. If H is a Hilbert space and A is a C ∗ -algebra, then extensions (E, φ) and
(E 0 , φ 0 ) in Ext A are equivalent if and only if there exists a unitary operator U in B(H) such that
E 0 = U EU ∗ and φ = φ 0 AdU .
11.3.8. Example. Suppose that T is an essentially normal operator on a Hilbert space H. Let ET
be the unital C ∗ -algebra generated by T and K(H). Since π(T ) is a normal element of the Calkin
algebra, the unital C ∗ -algebra ET /K(H) that it generates is commutative. Thus the abstract spectral
theorem 5.4.7 gives us a C∗-algebra isomorphism Ψ : C(σe(T)) → ET/K(H). Let φT = Ψ⁻¹ ∘ π|ET .
Then the sequence
    0 ──→ K(H) ──ι──→ ET ──φT──→ C(σe(T)) ──→ 0
11.3.9. Proposition. Let T and T 0 be essentially normal operators on a Hilbert space H. These
operators are essentially unitarily equivalent if and only if the extensions they determine are equiv-
alent.
11.3.10. Proposition. If E is a C ∗ -algebra such that K(H) ⊆ E ⊆ B(H) for some Hilbert space H,
X is a nonempty compact subset of C, and (E, φ) is an extension of K(H) by C(X), then every
element of E is essentially normal.
11.3.11. Definition. Let A1, A2, and B be C∗-algebras and let φ1 : A1 → B and φ2 : A2 → B be ∗-homomorphisms. A triple (P, π1, π2), where P is a C∗-algebra and π1 : P → A1 and π2 : P → A2 are ∗-homomorphisms, is a pullback of A1 and A2 along φ1 and φ2 if
(i) the diagram

    P ──π2──→ A2
    π1↓          ↓φ2
    A1 ──φ1──→ B

commutes and
(ii) if ρ1 : Q → A1 and ρ2 : Q → A2 are unital ∗-homomorphisms of C∗-algebras such that the diagram

    Q ──ρ2──→ A2
    ρ1↓          ↓φ2
    A1 ──φ1──→ B

commutes, then there exists a unique ∗-homomorphism ρ : Q → P such that π1 ∘ ρ = ρ1 and π2 ∘ ρ = ρ2.
11.3.12. Proposition. Let H be a Hilbert space, A be a unital C ∗ -algebra, and τ : A → Q(H) be
a unital ∗ -monomorphism. Then there exists (uniquely up to isomorphism) a pullback (E, π1 , π2 )
of A and B(H) along τ and π such that (E, π2 ) is an extension of K(H) by A which makes the
following diagram commute.
    0 ──→ K(H) ──ι──→ E ──π2──→ A ──→ 0          (11.3)
           ∥            π1↓         τ↓
    0 ──→ K(H) ─────→ B(H) ──π──→ Q(H) ──→ 0
11.3.24. Proposition. Suppose that A is a unital C ∗ -algebra and H is a Hilbert space. Then a
unital ∗ -monomorphism τ : A → Q(H) is semisplit if and only if it is unitarily equivalent to an
abstract Toeplitz extension.
Proof. See [16], proposition 2.7.10.
The isomorphism is the obvious one: just delete the inner brackets. For example, the isomorphism from M2(M2) to M4 is given by

    [ [a11 a12]   [b11 b12] ]          [a11 a12 b11 b12]
    [ [a21 a22]   [b21 b22] ]          [a21 a22 b21 b22]
    [                       ]    ↦     [c11 c12 d11 d12]
    [ [c11 c12]   [d11 d12] ]          [c21 c22 d21 d22]
    [ [c21 c22]   [d21 d22] ]
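A short sketch in Python with numpy (the function name and random entries are invented for the illustration) showing that "deleting the inner brackets" respects multiplication:

```python
import numpy as np

# blocks has shape (2, 2, 2, 2): blocks[i][j] is the (i, j) entry, itself a 2x2 matrix.
def flatten(blocks):
    return np.block([[blocks[0][0], blocks[0][1]],
                     [blocks[1][0], blocks[1][1]]])

rng = np.random.default_rng(4)
a = rng.standard_normal((2, 2, 2, 2))
b = rng.standard_normal((2, 2, 2, 2))

# multiplication in M_2(M_2): (ab)_{ik} = sum_j a_{ij} b_{jk}, with matrix products as entries
ab = np.array([[sum(a[i][j] @ b[j][k] for j in range(2)) for k in range(2)]
               for i in range(2)])
print(np.allclose(flatten(ab), flatten(a) @ flatten(b)))    # True: the map is multiplicative
```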
It is easy to see that if φ preserves multiplication, then so does each φ^(n), and if φ preserves involution, then so does each φ^(n). Positivity, however, is not always preserved, as the next example 11.5.8 shows.
11.5.7. Definition. In Mn the standard matrix units are the matrices ejk (1 ≤ j, k ≤ n)
whose entry in the j th row and k th column is 1 and whose other entries are all 0.
11.5.8. Example. Let A = M2. Then φ : A → A : a ↦ a^t (where a^t is the transpose of a) is a positive mapping. The map φ^(2) : M2(A) → M2(A) is not positive. To see this let e11, e12, e21, and e22 be the standard matrix units for A. Then

    φ^(2) [e11 e12; e21 e22] = [φ(e11) φ(e12); φ(e21) φ(e22)] = [e11 e21; e12 e22] =
        [1 0 0 0]
        [0 0 1 0]
        [0 1 0 0]
        [0 0 0 1]

which is not positive.
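This can be verified numerically (a sketch in Python with numpy; the lambda building matrix units is ad hoc):

```python
import numpy as np

# e(j, k) is the 2x2 matrix unit with a 1 in row j, column k.
e = lambda j, k: np.eye(2)[:, [j]] @ np.eye(2)[[k], :]

X = np.block([[e(0, 0), e(0, 1)], [e(1, 0), e(1, 1)]])           # [[e11, e12], [e21, e22]]
print(np.linalg.eigvalsh(X))                                      # all >= 0: X is positive

Y = np.block([[e(0, 0).T, e(0, 1).T], [e(1, 0).T, e(1, 1).T]])    # transpose applied blockwise
print(np.linalg.eigvalsh(Y))                                      # contains -1: Y is not positive
```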
11.5.9. Definition. A linear mapping φ : A → B between unital C∗-algebras is n-positive (for n ∈ N) if φ^(n) is positive. It is completely positive if it is n-positive for every n ∈ N.
11.5.10. Example. Every ∗ -homomorphism between C ∗ -algebras is completely positive.
11.5.11. Proposition. Let A be a unital C∗-algebra and a ∈ A. Then ‖a‖ ≤ 1 if and only if
    [1    a]
    [a∗   1]
is positive in M2(A).
11.5.12. Proposition. Let A be a unital C∗-algebra and a, b ∈ A. Then a∗a ≤ b if and only if
    [1    a]
    [a∗   b]
is positive in M2(A).
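A numerical illustration of 11.5.11 for A = M2 (a sketch in Python with numpy; the scaling factors 0.5 and 2.0 are chosen only to produce norms below and above 1):

```python
import numpy as np

def block(a):
    one = np.eye(a.shape[0])
    return np.block([[one, a], [a.conj().T, one]])    # the matrix [[1, a], [a*, 1]]

rng = np.random.default_rng(5)
a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

for t in (0.5 / np.linalg.norm(a, 2), 2.0 / np.linalg.norm(a, 2)):
    b = t * a
    positive = np.all(np.linalg.eigvalsh(block(b)) >= -1e-12)
    print(np.linalg.norm(b, 2), positive)             # positive exactly when the norm is <= 1
```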
11.5.13. Proposition. Every unital 2-positive map between unital C ∗ -algebras is contractive.
11.5.14. Proposition (Kadison's inequality). If φ : A → B is a unital 2-positive map between unital C∗-algebras, then
    φ(a)∗ φ(a) ≤ φ(a∗a)
for every a ∈ A.
A / Q(H)
τ
commutes.
Proof. See [16], theorem 3.1.5.
11.5.23. Definition. A C∗-algebra A is nuclear if for every C∗-algebra B there is a unique C∗-norm on the algebraic tensor product A ⊙ B.
11.5.24. Example. Every finite dimensional C ∗ -algebra is nuclear.
A / B/J
φ
commute.
Proof. See [16], theorem 3.3.6.
11.5.30. Corollary. If A is a nuclear separable unital C ∗ -algebra, then Ext A is an Abelian group.
CHAPTER 12
K-THEORY
Saying that the Grothendieck map is well defined means that its definition is independent of
the choice of a. We frequently write just γ for γS .
12.2.8. Example. Both N = {1, 2, 3, . . . } and Z+ = {0, 1, 2, . . . } are commutative semigroups
under addition. They have the same Grothendieck group
G(N) = G(Z+ ) = Z .
12.2.9. Example. Let S be the commutative additive semigroup Z+ ∪ {∞} (see example 12.1.36).
Then G(S) = {0}.
12.2.10. Example. Let Z0 be the (commutative) multiplicative semigroup of nonzero integers.
Then G(Z0 ) = Q0 , the Abelian multiplicative group of nonzero rational numbers.
12.2.11. Proposition. If S is a commutative semigroup, then
G(S) = {γ(x) − γ(y) : x, y ∈ S} .
12.2.12. Proposition (Universality of the Grothendieck map). Let S be a commutative (additive)
semigroup and G(S) be its Grothendieck group. If H is an Abelian group and φ : S → H is
an additive map, then there exists a unique group homomorphism ψ : G(S) → H such that the
following diagram commutes.
    S ──γ──→ □G(S)          G(S)
    φ ↘         ↓ □ψ           ↓ ψ
          □H                   H              (12.1)

In the preceding diagram □G and □H are just G and H regarded as semigroups and □ψ is the corresponding semigroup homomorphism. In other words, the forgetful functor □ "forgets" only about identities and inverses but not about the operation of addition. Thus the triangle on the left
is a commutative diagram in the category of semigroups and semigroup homomorphisms.
12.2.13. Proposition. Let φ : S → T be a homomorphism of commutative semigroups. Then the
map γT ◦ φ : S → G(T ) is additive. By proposition 12.2.12 there exists a unique group homomor-
phism G(φ) : G(S) → G(T ) such that the following diagram commutes.
    S ──φ──→ T
    γS↓        ↓γT
    G(S) ──G(φ)──→ G(T)
12.2.14. Proposition (Functorial property of the Grothendieck construction). The pair of maps
S 7→ G(S), which takes semigroups to their corresponding Grothendieck groups, and φ 7→ G(φ),
which takes semigroup homomorphisms to group homomorphisms (as defined in 12.2.13), is a co-
variant functor from the category of commutative semigroups and semigroup homomorphisms to
the category of Abelian groups and group homomorphisms.
One slight advantage of the rather pedantic inclusion of a forgetful functor in diagram (12.1) is
that it makes it possible to regard the Grothendieck map γ : S 7→ γS as a natural transformation
of functors.
12.2.15. Corollary (Naturality of the Grothendieck map). Let □ be the forgetful functor on Abelian groups which "forgets" about identities and inverses but not the group operation as in 12.2.12. Then □G is a covariant functor from the category of semigroups and semigroup homomorphisms to
itself. Furthermore, the Grothendieck map γ : S ↦ γS is a natural transformation from the identity functor to the functor □G.
    S ──φ──→ T
    γS↓        ↓γT
    □G(S) ──□G(φ)──→ □G(T)
    [ ]0            [ ]0
    K0(A) ──K0(φ)──→ K0(B)
12.3.14. Notation. For C ∗ -algebras A and B let Hom(A, B) be the family of all ∗ -homomorphisms
from A to B.
12.3.15. Definition. Let A and B be C ∗ -algebras and a ∈ A. For φ, ψ ∈ Hom(A, B) let
da (φ, ψ) = kφ(a) − ψ(a)k .
Then da is a pseudometric on Hom(A, B). The weak topology generated by the family {da : a ∈ A}
is the point-norm topology on Hom(A, B).
12.3.16. Definition. Let A and B be C∗-algebras and φ0, φ1 ∈ Hom(A, B). A homotopy from φ0 to φ1 is a ∗-homomorphism φ from A to C([0, 1], B) such that φ0 = E0 ∘ φ and φ1 = E1 ∘ φ. (Here,
Et is the evaluation functional at t ∈ [0, 1].) We say that φ0 and φ1 are homotopic if there exists
a homotopy from φ0 to φ1 , in which case we write φ0 ∼h φ1 .
12.3.17. Proposition. Two ∗ -homomorphisms φ0 , φ1 : A → B between C ∗ -algebras are homotopic
if and only if there exists a point-norm continuous path from φ0 to φ1 in Hom(A, B).
12.3.18. Definition. We say that C ∗ -algebras are homotopically equivalent if there exist ∗ -
homomorphisms φ : A → B and ψ : B → A such that ψ ◦ φ ∼h idA and φ ◦ ψ ∼h idB . A C ∗ -algebra
is contractible if it is homotopically equivalent to {0}.
12.3.19. Proposition. Let A and B be unital C ∗ -algebras. If φ ∼h ψ in Hom(A, B), then K0 (φ) =
K0 (ψ).
Proof. See [27], proposition 3.2.6.
12.3.20. Proposition. If unital C∗-algebras A and B are homotopically equivalent, then K0(A) ≅ K0(B).
Proof. See [27], proposition 3.2.6.
12.3.21. Proposition. If A is a unital C∗-algebra, then the split exact sequence
    0 ──→ A ──ι──→ Ã ──π──→ C ──→ 0          (12.2)
                      ←──λ───
12.4. K0(A)—The Nonunital Case
is exact.
12.4.4. Proposition. For both unital and nonunital C ∗ -algebras the group K0 (A) is (isomorphic
to) ker(K0 (π)).
12.4.5. Proposition. If φ : A → B is a ∗-homomorphism between C∗-algebras, then there exists a unique group homomorphism K0(φ) which makes the following diagram commute.

    K0(A) ──→ K0(Ã) ──K0(πA)──→ K0(C)
    K0(φ)↓        K0(φ̃)↓             ∥
    K0(B) ──→ K0(B̃) ──K0(πB)──→ K0(C)
12.4.6. Proposition. The pair of maps A 7→ K0 (A), φ 7→ K0 (φ) is a covariant functor from the
category CSA of C ∗ -algebras and ∗ -homomorphisms to the category of Abelian groups and group
homomorphisms.
In propositions 12.3.19 and 12.3.20 we asserted the homotopy invariance of the functor K0 for
unital C ∗ -algebras. We now extend the result to arbitrary C ∗ -algebras.
12.4.7. Proposition. Let A and B be C ∗ -algebras. If φ ∼h ψ in Hom(A, B), then K0 (φ) = K0 (ψ).
Proof. See [27], proposition 4.1.4.
12.4.8. Proposition. If C∗-algebras A and B are homotopically equivalent, then K0(A) ≅ K0(B).
Proof. See [27], proposition 4.1.4.
12.4.9. Definition. Let π and λ be the ∗-homomorphisms in the split exact sequence (12.2) for the unitization Ã of a C∗-algebra A. Define the scalar mapping s : Ã → Ã for Ã by s := λ ∘ π. Every member of Ã can be written in the form a + α1Ã for some a ∈ A and α ∈ C. Notice that s(a + α1Ã) = α1Ã and that x − s(x) ∈ A for every x ∈ Ã. For each natural number n the scalar mapping induces a corresponding map s = sn : Mn(Ã) → Mn(Ã). An element x ∈ Mn(Ã) is a scalar element of Mn(Ã) if s(x) = x.
12.4.10. Proposition (Standard Picture of K0(A) for arbitrary A). If A is a C∗-algebra, then
    K0(A) = { [p]0 − [s(p)]0 : p ∈ P∞(Ã) } .
is exact in A, then
    F(A1) ──F(j)──→ F(A2) ──F(k)──→ F(A3)
is exact in B.
12.5.2. Proposition. The functor K0 is half exact.
Proof. See [27], proposition 4.3.2.
12.5.3. Proposition. The functor K0 is split exact.
Proof. See [27], proposition 4.3.3.
12.5.4. Proposition. The functor K0 preserves direct sums. That is, if A and B are C ∗ -algebras,
then K0 (A ⊕ B) = K0 (A) ⊕ K0 (B).
Despite being both split exact and half exact the functor K0 is not exact. Each of the next two
examples is sufficient to demonstrate this.
12.5.6. Example. The sequence
    0 ──→ C0(0, 1) ──ι──→ C[0, 1] ──ψ──→ C ⊕ C ──→ 0
where ψ(f) = (f(0), f(1)), is clearly exact; but K0(ψ) is not surjective.
12.5.7. Example. If H is a Hilbert space the exact sequence
    0 ──→ K(H) ──ι──→ B(H) ──π──→ Q(H) ──→ 0
associated with the Calkin algebra Q(H) is exact but K0(ι) is not injective. (This example requires a fact we have not yet derived: K0(K(H)) ≅ Z.)
Next is an important stability property of the functor K0 .
12.5.8. Proposition. If A is a C∗-algebra, then K0(A) ≅ K0(Mn(A)).
Proof. See [27], proposition 4.3.8.
12.6. Inductive Limits
    ··· ──→ Aj ──φj──→ Aj+1 ──→ ···
            µj ↘        ↙ µj+1
                   L
                   │ψ
                   ↓
                   M
            λj ↗        ↖ λj+1
12.6.2. Proposition. Inductive limits (if they exist in a category) are unique (up to isomorphism).
12.6.3. Proposition. Every inductive sequence (A, φ) of C ∗ -algebras has an inductive limit. (And
so does every inductive sequence of Abelian groups.)
Proof. See [3], II.8.2.1; [27], proposition 6.2.4; or [29], appendix L.
12.6.4. Proposition. If (L, µ) is the inductive limit of an inductive sequence (A, φ) of C∗-algebras, then L is the closure of ⋃_n µn→(An) and ‖µm(a)‖ = lim_{n→∞} ‖φn,m(a)‖ = inf_{n≥m} ‖φn,m(a)‖ for all m ∈ N and a ∈ Am.
Proof. See [27], proposition 6.2.4.
12.6.5. Definition. An approximately finite dimensional C ∗ -algebra (an AF-algebra for
short) is the inductive limit of a sequence of finite dimensional C ∗ -algebras.
12.6.6. Example. If (An) is an increasing sequence of C∗-subalgebras of a C∗-algebra D (with ιn : An → An+1 being the inclusion map for each n), then (A, ι) is an inductive sequence of C∗-algebras whose inductive limit is (B, j) where B is the closure of ⋃_{n=1}^∞ An and jn : An → B is the inclusion map for each n.
12.6.7. Example. The sequence M1 = C ──φ1──→ M2 ──φ2──→ M3 ──φ3──→ · · · (where φn(a) = diag(a, 0) for each n ∈ N and a ∈ Mn) is an inductive sequence of C∗-algebras whose inductive limit is the C∗-algebra K(H) of compact operators on a Hilbert space H.
is an inductive sequence of Abelian groups whose inductive limit is the set Q of rational numbers.
12.6.9. Example. The sequence Z ──2──→ Z ──2──→ Z ──2──→ Z ──2──→ · · · is an inductive sequence of Abelian groups whose inductive limit is the set of dyadic rational numbers.
The next result is referred to as the continuity property of K0 .
12.6.10. Proposition. If (A, φ) is an inductive sequence of C∗-algebras, then
    K0(lim→ An) = lim→ K0(An) .
Proof. See [27], theorem 6.3.2.
12.6.11. Example. If H is a Hilbert space, then K0(K(H)) ≅ Z.
12.7. Bratteli Diagrams
12.7.1. Proposition. Nonzero algebra homomorphisms from Mk into Mn exist only if n ≥ k, in
which case they are precisely the mappings of the form
a 7→ u diag(a, a, . . . , a, 0) u∗
where u is a unitary matrix. Here there are m copies of a and 0 is the r × r zero matrix where
n = mk + r. The number m is the multiplicity of φ.
Proof. See [24], corollary 1.3.
12.7.2. Example. An example of a homomorphism from M2 into M7 is
    A = [a b; c d] ↦
        [a b 0 0 0 0 0]
        [c d 0 0 0 0 0]
        [0 0 a b 0 0 0]
        [0 0 c d 0 0 0]   = diag(A, A, 0)
        [0 0 0 0 0 0 0]
        [0 0 0 0 0 0 0]
        [0 0 0 0 0 0 0]
where m = 2 and r = 3.
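A concrete version of this map (a sketch in Python with numpy; the function name phi is an invented label) confirms that it is multiplicative and ∗-preserving:

```python
import numpy as np

# A |-> diag(A, A, 0): multiplicity m = 2 and remainder r = 3.
def phi(A):
    out = np.zeros((7, 7), dtype=A.dtype)
    out[0:2, 0:2] = A
    out[2:4, 2:4] = A
    return out

rng = np.random.default_rng(6)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

print(np.allclose(phi(A @ B), phi(A) @ phi(B)))          # multiplicative
print(np.allclose(phi(A.conj().T), phi(A).conj().T))     # *-preserving
```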
12.7.6. Example. Suppose φ : C ⊕ M2 → M3 ⊕ M5 ⊕ M2 is given by
    (λ, b) ↦ ( diag(λ, b), diag(λ, b, b), b ) .
Then m11 = 1, m12 = 1, m21 = 1, m22 = 2, m31 = 0, and m32 = 1. Notice that
    m k = [1 1; 1 2; 0 1] [1; 2] = [3; 5; 2] = n.
(The corresponding Bratteli diagram has left vertices labeled 1 and 2, right vertices labeled 3, 5, and 2, and mij edges joining the jth left vertex to the ith right vertex.)
12.7.7. Example. Suppose φ : M3 ⊕ M2 ⊕ M2 → M9 ⊕ M7 is given by
    (a, b, c) ↦ ( diag(a, a, 0), diag(a, b, c) )
(where 0 denotes the 3 × 3 zero matrix).
Then m11 = 2, m12 = 0, m13 = 0, m21 = 1, m22 = 1, and m23 = 1. Notice that this time something peculiar happens: we have m k ≠ n:
    m k = [2 0 0; 1 1 1] [3; 2; 2] = [6; 7] ≤ [9; 7] = n.
2
What’s the problem here? Well, the result stated earlier was for unital ∗ -homomorphisms, and this
φ is not unital. In general, this will be the best we can expect: m k ≤ n.
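The dimension bookkeeping of the two preceding examples is easy to check (a sketch in Python with numpy; the variable names are ad hoc):

```python
import numpy as np

# m acts on the vector k of matrix sizes of the domain; m k = n for a unital map,
# and m k <= n (entrywise) in general.
m1 = np.array([[1, 1], [1, 2], [0, 1]]); k1 = np.array([1, 2]); n1 = np.array([3, 5, 2])
print(m1 @ k1, n1)                      # equal: the map of 12.7.6 is unital

m2 = np.array([[2, 0, 0], [1, 1, 1]]); k2 = np.array([3, 2, 2]); n2 = np.array([9, 7])
print(m2 @ k2, n2)                      # (6, 7) <= (9, 7): the map of 12.7.7 is not unital
```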
Here is the resulting Bratteli diagram for φ:
It has left vertices labeled 3, 2, and 2, right vertices labeled 9 and 7, and mij edges joining the jth left vertex to the ith right vertex.
12.7.8. Example. If K is the Cantor set, then C(K) is an AF -algebra. To see this write K as
the intersection of a decreasing family of closed subsets Kj each of which consists of 2j disjoint
closed subintervals of [0, 1]. For each j ≥ 0 let Aj be the subalgebra of functions in C(K) which are
constant on each of the intervals making up Kj. Thus Aj ≅ C^(2^j) for each j ≥ 0. The imbedding splits each minimal projection into the sum of two minimal projections. For φ0 the corresponding matrix m0 of "partial multiplicities" is
    [1]
    [1] ;
the matrix m1 corresponding to φ1 is
    [1 0]
    [1 0]
    [0 1]
    [0 1] ;
and so on. Thus the Bratteli diagram for the inductive limit C(K) = lim→ Aj is:
(The diagram is a binary tree: every vertex is labeled 1, and each vertex is joined by an edge to two vertices in the next column.)
    1 ⇉ 2 ⇉ 4 ⇉ 8 ⇉ 16 ⇉ 32 ⇉ · · ·
    m k(j) = [1 1; 1 1] [2^(j−1); 2^(j−1)] = [2^j; 2^j] = n(j).
Then, since C = lim→ Aj = lim→ Bj , we have a second (quite different) Bratteli diagram for the AF-algebra C.
(This diagram has two rows of vertices, each reading 1, 2, 4, 8, 16, . . . , with each vertex joined by an edge to both of the vertices in the next column.)
12.7.10. Example. This is the Fibonacci algebra. For j ∈ N define sequences pj and qj
by the familiar recursion relations:
    p1 = q1 = 1,
    pj+1 = pj + qj , and
    qj+1 = pj .
For all j ∈ N let Aj = Mpj ⊕ Mqj and
    φj : Aj → Aj+1 : (a, b) ↦ ( diag(a, b), a ) .
The multiplicity matrix m for each φj is [1 1; 1 0] and for each j
    m k(j) = [1 1; 1 0] [pj ; qj ] = [pj + qj ; pj ] = [pj+1 ; qj+1 ] = n(j).
(The Bratteli diagram for the Fibonacci algebra has two rows of vertices, labeled by the pj and the qj—that is, 1, 2, 3, 5, 8, . . . and 1, 1, 2, 3, 5, . . .—with edges determined by the multiplicity matrix m.)
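Iterating the multiplicity matrix reproduces the Fibonacci dimension vectors (a sketch in Python with numpy):

```python
import numpy as np

# Applying m = [[1, 1], [1, 0]] to (p_j, q_j) yields (p_{j+1}, q_{j+1}),
# so the matrix sizes in the sequence are Fibonacci numbers.
m = np.array([[1, 1], [1, 0]])
dims = np.array([1, 1])                  # (p_1, q_1)
for j in range(1, 8):
    print(j, tuple(dims))                # (1, 1), (2, 1), (3, 2), (5, 3), (8, 5), ...
    dims = m @ dims
```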
Bibliography
1. William Arveson, A Short Course on Spectral Theory, Springer-Verlag, New York, 2002, [A2]. 26, 83, 88, 89, 90
2. Larry Baggett and Watson Fulks, Fourier Analysis, Anjou Press, Boulder, Colorado, 1979, [BF]. 32, 33
3. Bruce Blackadar, Operator Algebras: Theory of C ∗ -algebras and von Neumann Algebras, Springer-Verlag, Berlin,
2006, [B2]. 51, 83, 97, 107
4. A. L. Brown and A. Page, Elements of Functional Analysis, Van Nostrand Reinhold, London, 1970, [BP]. 67
5. John B. Conway, A Course in Functional Analysis, second ed., Springer-Verlag, New York, 1990, [Cw1]. 8, 18,
29, 65, 67, 71, 79, 82
6. ———, A Course in Operator Theory, American Mathematical Society, Providence, Rhode Island, 2000, [Cw2].
87
7. Kenneth R. Davidson, C*-Algebras by Example, American Mathematical Society, Providence, R.I., 1996, [Da].
51, 59, 65, 87, 88, 89, 90, 91, 108
8. Robert S. Doran and Victor A. Belfi, Characterizations of C*-Algebras, Marcel Dekker, New York, 1986, [DB].
58
9. Ronald G. Douglas, Banach Algebra Techniques in Operator Theory, Academic Press, New York, 1972, [Do1]. 47,
65, 67, 83, 84, 88, 89, 90
10. ———, C*-Algebra Extensions and K-Homology, Princeton University Press, Princeton, 1980, [Do2]. 93
11. John M. Erdman, A Companion to Real Analysis, 2007,
http://web.pdx.edu/~erdman/CRA/CRAlicensepage.html, [E]. 50, 55
12. Peter A. Fillmore, A User’s Guide to Operator Algebras, John Wiley and Sons, New York, 1996, [Fi]. 51, 59, 97
13. Michael Frank, Hilbert C*-modules and related subjects—a guided reference overview, 2010,
http://www.imn.htwk-leipzig.de/~mfrank/mlit.pdf, [Fr]. 76
14. Paul R. Halmos, A Hilbert Space Problem Book, second ed., Springer-Verlag, New York, 1982, [H]. 90
15. Edwin Hewitt and Karl Stromberg, Real and Abstract Analysis, Springer-Verlag, New York, 1965, [HS]. 25, 32,
33
16. Nigel Higson and John Roe, Analytic K-Homology, Oxford University Press, Oxford, 2000, [HR]. 59, 83, 85, 87,
89, 90, 93, 94, 96, 97
17. Kenneth Hoffman and Ray Kunze, Linear Algebra, second ed., Prentice Hall, Englewood Cliffs, N.J., 1971, [HK].
5, 11
18. Richard V. Kadison and John R. Ringrose, Fundamentals of the Theory of Operator Algebras, volumes I–IV,
Academic Press, New York, 1983, [KR]. 65, 69, 71
19. William S. Massey, Algebraic Topology: An Introduction, Harcourt, Brace, and World, New York, 1967, [Ma]. 90
20. Gerard J. Murphy, C*-Algebras and Operator Theory, Academic Press, San Diego, 1990, [Mu]. 51, 65, 66, 88, 89,
90, 97
21. Theodore W. Palmer, Banach Algebras and the General Theory of ∗ -Algebras I–II, Cambridge University Press,
Cambridge, 1994/2001, [Pm]. 35
22. Vern Paulsen, Completely Bounded Maps and Operator Algebras, Cambridge University Press, Cambridge, 2002,
[Pl]. 96, 97
23. Gert K. Pedersen, Analysis Now: Revised Printing, Springer-Verlag, New York, 1995, [Pe1]. 65, 84, 85
24. Stephen C. Power, Limit Algebras: an Introduction to Subalgebras of C*-algebras, Longman Scientific and Tech-
nical, Harlow, Essex, 1992, [Po]. 108
25. Iain Raeburn and Dana P. Williams, Morita Equivalence and Continuous-Trace C*-Algebras, American Mathe-
matical Society, Providence, R.I., 1998, [RW]. 76
26. Steven Roman, Advanced Linear Algebra, second ed., Springer, New York, 2005, [R]. 11
27. M. Rørdam, F. Larsen, and N. J. Laustsen, An Introduction to K-Theory for C*-Algebras, Cambridge University
Press, Cambridge, 2000, [RLL]. 99, 100, 103, 104, 105, 106, 107, 108
28. Andrew H. Wallace, Algebraic Topology: Homology and Cohomology, W. A. Benjamin, New York, 1970, [Wa]. 90
29. N. E. Wegge-Olsen, K-Theory and C*-Algebras, Oxford University Press, Oxford, 1993, [W-O]. 51, 83, 84, 85,
94, 97, 107
30. Stephen Willard, General Topology, Addison-Wesley, Reading, Mass., 1968, [Wi]. 79, 90
Index
A
φ
/ B (φ is a morphism from A to B), 36 p ∼u q (unitary equivalence of projections), 99
−1
α (inverse of a morphism α), 36 A∗ (adjoint of a Hilbert space morphism), 43
C0 (the objects in category C), 36 S ∗ (set of adjoints of elements of S), 45
C1 (the morphisms in category C), 36 T ∗ (adjoint of a Banach space morphism), 36
Aop (opposite algebra of A), 73 V ∗ (dual of a normed linear space), 17
AB (in algebras the span of products of elements), 77 V # (algebraic dual space), 9
[p ]D + [q ]D (addition in D(A)), 101 τ ? (conjugate of τ (a∗ )), 69
p g q (supremum of projections), 64 x∗∗ (image of x under the natural injection), 28
p f q (infimum of projections), 64 4 (closed linear subspace of), 14
p ⊥ q (orthogonality of projections), 64 H N , 64
a ◦ b (operation in an algebra), 56 p ⊕ q (binary operation in P∞ (A)), 100
x ⊗ y, 66 J ⊥ (annihilator of J), 77
A ⊕ B (direct sum of C ∗ -algebras), 52 A
We (unitization of a Banach algebra), 49
a ⊕ b (an element of A ⊕ B), 52 A (closed linear span of A), 14
A ./ C (unitization of an algebra), 49
A∼ absolute value
= B (A and B are isomorphic), 36
in a C ∗ -algebra, 58
X ≈ Y (X and Y are homeomorphic), 30
∗ A C(T) (absolutely convergent Fourier series, 33
A∼ = B (A and B are ∗ -isomorphic), 47 absolutely convergent, 33
x ⊥ y (orthogonal vectors), 8 absolutely summable
[T, T ∗ ] (commutator of T ), 87 bilateral sequence, 24
[p ]0 (image under the Grothendieck map of [p ]D ), 103 sequence, 13
[p]D (∼◦ equivalence class of p), 101 abstract
ha, bi (pair in Grothendieck construction), 101 spectral theorem, 48
hx, yi (inner product), 6 Toeplitz
hx | yi (inner product), 73 extension, 93
V + (positive cone in an ordered vector space), 57 operator, 93
|a| (absolute value in a C ∗ -algebra), 58 addition
kxk (norm of x), 7 on the Grothendieck group, 101
k kcb (norm of completely bounded map), 96 adjoint, 75
k ku (uniform norm), 14 as a functor, 44
A⊥ (annihilator of a set), 63 index of, 84
A⊥ (orthogonal complement of a set), 9 of a bounded linear Banach space map, 36
A+ (positive cone), 57 of a bounded linear Hilbert space map, 43
c+ (positive part of c), 57 of a multiplication operator, 44
c− (negative part of c), 57 of an integral operator, 44
≤ (notation for a partial ordering), 57 of an operator on an inner product space, 9
a ≤ b (order relation in a C ∗ -algebra), 57 existence in HS, 43
g . f (relation between operator valued maps), 96 of the unilateral shift, 44
p q (ordering of projections in a ∗ -algebra), 64 adjointable, 75
(a, b) ∼ (c, d) (equivalence in Grothendieck AdU (conjugation by U ), 91
construction), 101 AF -algebra
p ∼ q (Murray-von Neumann equivalence), 99 C(K) is an (K is the Cantor set), 110
φ0 ∼h φ1 (homotopy between ∗ -homomorphisms), AF-algebra, 107
104 Alaoglu’s theorem, 29
p ∼h q (homotopy equivalence of points), 85, 99 algebra, 19, 73
p ∼◦ q (Murray-von Neumann equivalence), 100 C ∗ -, 46
p ∼s q (stable equivalence), 103 Banach, 23
Calkin, 83 space, 13
commutative, 19 L1 (S) as a, 14
homomorphism, 19 C(X) as a, 14
multiplier, 79 C0 (X) as a, 14
norm, 23 Cb (X) as a, 14
normed, 23 l1 as a, 13
opposite, 73 basis
quotient, 21 for a Hilbert space, 61
simple, 20 orthonormal, 61
Toeplitz, 89 Bessel’s inequality, 61
unital, 19 bilinear, 73
algebraic bounded
∗-ideal, 51 completely, 96
dual space, 9 linear map, 16
ideal, 51 unitary, 87
A-linear, 75 sesquilinear functional, 43
A-module, 73, 74 B(V )
Hilbert, 75 as a C ∗ -algebra, 46
A as a, 75 as a Banach space, 17
inner product, 74 as a unital algebra, 20
morphism, 75 as a unital Banach algebra, 23
pre-Hilbert, 74 operators on a normed linear space, 16
semi-inner product, 74 B(V, W )
analytic, 25 bounded linear maps between normed spaces, 16
annihilator, 63, 77 Bratteli diagram, 108
anti-isomorphism, 62, 73
antihomomorphism, 73 Calkin algebra, 83
approximate CAR-algebras, 111
identity, 59 category, 35
existence of an, 59 N, 35
sequential, 59 BAN1 , 35
unit, 59 BAN∞ , 35
approximately finite dimensional C ∗ -algebra, 107 BA1 , 35
Atkinson’s theorem, 83 BA∞ , 35
A-valued (semi-)inner product, 74 CBA1 , 35
CBA∞ , 35
BA1 CMS1 , 42
Banach algebras and contractive homomorphisms, CSA, 47
35 CpH, 35
products in, 41 HIL, 35
BA∞ MS1 , 42
Banach algebras and continuous homomorphisms, SET, 35
35 UCBA1 , 35
products in, 41 UCBA∞ , 35
ball UCSA, 47
closed, 14 concrete, 37
open, 14 geometric, 35
BAN1 monoid, 35
Banach spaces and contractive linear maps, 35 topological, 35
coproducts in, 41 C2 (pairs of objects and morphisms), 38
BAN∞ CBA1
Banach spaces and continuous linear maps, 35 commutative Banach algebras and contractive
coproducts in, 40 homomorphisms, 35
Banach CBA∞
algebra, 23 commutative Banach algebras and continuous
C(X) as a, 23 homomorphisms, 35
C0 (X) as a, 23 character, 27
Cb (X) as a, 23 space, 29
B(V ) as a, 23 of C(X), 30
direct sum, 23 of l1 (Z), 29
homomorphism of, 23 closed
ball, 14 as a C ∗ -algebra, 46
inverse, 60 as a Banach algebra, 23
linear span, 14 as a Banach space, 14
CMS1 as a nuclear C ∗ -algebra, 97
complete metric spaces and contractive maps, 42 as a unital algebra, 20
co-universal continuous functions on X, 14
morphism, 41 Cb (X)
Coburn’s theorem, 90 as a Banach algebra, 23
codimension, 4, 82 as a Banach space, 14
cofunctor, 36 bounded continuous function on X, 14
cokernel, 82 C0 (X)
commutative, 19 as a C ∗ -algebra, 46
commutator, 87 as a Banach algebra, 23
compact as a Banach space, 14
operator, 66 as a nonunital algebra, 20
compactification, 78, 79 continuous functions vanishing at infinity, 14
essential, 78, 79 contractible
compalent, 87 C ∗ -algebras, 104
compatibility topological space, 105
of orderings with operations, 57 contravariant functor, 36
complement conventions
of a vector subspace, 4 A is a subset of A ⊕ B, 76
orthogonal, 9 about dual space, 17
complete about homomorphisms
order, 36 between Banach algebras, 23
orthonormal set, 62 about morphisms
completely between Banach algebras, 23
bounded, 96 about sesquilinear, 73
positive, 95 about universality, 41
lifting property, 97 algebras are complex, 19
completion categories are concrete, 37
of a metric space, 41 Ext A consists of extensions or morphisms, 93
of an inner product space, 42 Hilbert spaces are separable and infinite
universality of, 42 dimensional (after section 9.2), 90
C ideals are closed, 51
as a C ∗ -algebra, 46 ideals are self-adjoint, 51
Cn ideals are two-sided, 20
as a Hilbert space, 13 in an algebra AB denotes the span of products, 77
components projections in Hilbert spaces are orthogonal, 63
path, 85, 99 representations of unital C∗-algebras are unital, 70
composition subspaces are closed, 14
of morphisms, 35 vector spaces are complex, 3
concatenation, 39 convergence
concrete category, 37 in the w∗ -topology, 28
cone, 57 of a series, 61
positive, 57 convex, 14
proper, 57 convolution
conjugate in L1 (R), 24
linear, 6, 62 in l1 (Z), 24
symmetric, 15, 66 coproduct, 40
conjugation, 62, 91 in BAN1 , 41
connected in BAN∞ , 40
by a path, 85 in HIL, 40
construction in SET, 40
Gelfand-Naimark-Segal, 70 covariant functor, 36
continuity CpH
functor, 37 compact Hausdorff spaces and continuous maps, 35
of K0 , 108 CSA
C (the functor), 30, 37 C ∗ -algebras and ∗-homomorphisms, 47
C(X) products in, 52
ideal left, 36
∗ -, 45 of a morphism, 36
algebraic, 51 right, 36
algebraic ∗-, 51 invertible, 36
essential, 77 in an algebra, 19
in C(X), 27 left
in a C ∗ -algebra, 51 in an algebra, 19
in an algebra, 20 linear map
left, 20 between vector spaces, 3
left modular, 50 operator
maximal, 20 index of, 83
minimal, 20 polar decomposition of, 65
modular, 50 right
principal, 20, 77 in an algebra, 19
proper, 20 inv A (invertible elements in an algebra), 19
right, 20 involution, 44
right modular, 50 is an isometry, 46
trivial, 20 isometry
idempotent, 22, 27 in a unital C ∗ -algebra, 100
identity proper, 90
approximate, 59 isomorphism
left in a category, 36
with respect to an ideal, 50 natural, 38
operator, 17 of vector spaces, 3
idV or IV or I (identity operator), 3 order, 59
index
of a Fredholm operator, 83 JB (natural injection), 28
of a normal Fredholm operator, 84 Jc (principal (closed) ideal containing c), 77
JC (continuous functions vanishing on C), 27
of an invertible operator, 83
Jordan decomposition, 57
of the adjoint of an operator, 84
of the unilateral shift, 84
K(H)
induced
as a nuclear C ∗ -algebra, 97
partial ordering, 57 as minimal ideal in B(H), 67
inductive compact operators on a Hilbert space, 66
limit, 107 K(V, W ), K(V ), 76
sequence, 107 K0
inequality as a covariant functor, 105
Bessel, 61 continuity of, 108
Kadison, 95 homotopy invariance of, 104
Schwarz, 7, 16, 69 universal property of, 103
infimum K0 (A)
of projections in a ∗ -algebra, 64 for nonunital A, 105
initial for unital A, 103
projection, 64 is 0 when A = B(H), 105
space, 65 is Z when A = C(X), X contractible, 105
inner product, 6 is Z when A = Mn , 105
A-module, 74 is Z when A = K(H), 108
A-valued, 74 K0 (φ)
semi-, 16 nonunital case, 105
space unital case, 104
l as a, 13 Kadison’s inequality, 95
integrable, 14 kernel
integral operator, 18 left, 70
adjoint of an, 44 of a linear map, 3
compactness of, 66 of an integral operator, 18
internal
direct sum, 4 L(V )
orthogonal direct sum, 8 as a C ∗ -algebra, 75
inverse L(V, W ), L(V ) (adjointable maps), 75
closed, 60 l1
natural, 38 vector
trivial space, 3, 73
ideal, 20 free, 39
normed, 7
U0 (A) (component of unitaries containing 1), 99 ordered, 57
UCBA1 state, 69
unital commutative Banach algebras and unit, 7
contractive homomorphisms, 35 VEC
UCBA∞ vector spaces and linear maps, 37
unital commutative Banach algebras and vector decomposition theorem, 15
continuous homomorphisms, 35 Voiculescu’s theorem, 96
UCSA Volterra operator, 18
unital C ∗ -algebras and unital ∗-homomorphisms, and integral equations, 26
47 compactness of, 66
uniform
norm, 9, 14 w(φ) (winding number of φ), 90
unilateral shift, 17 Weyl’s theorem, 87
adjoint of, 44 Weyl-von Neumann theorem, 87
essential normality of, 87 Wiener’s theorem, 33
index of, 84 winding number, 90
is a Toeplitz operator, 88 Wold decomposition, 90
unit, 19 word, 39
approximate, 59 empty, 39
vector, 7 w∗ -star topology
unital is Hausdorff, 28
∗ -homomorphism, 45 is weaker than the norm topology, 28
algebra, 19 w∗ -topology, 28
homomorphism, 19 base for, 28
subalgebra, 19
zero
unitarily
operator, 17
diagonalizable, 10
Zf (zero set of a function), 24
unitary
zero set, 24, 77
bounded linear map, 87
ζ (identity function on the unit circle), 88
element, 45
Zf (zero set of f ), 77
spectrum of, 47
equivalence
essential, 87
of ∗ -monomorphisms, 92
of operators, 10, 87
of projections, 99
operator, 10
U(A) (unitary elements of a ∗ -algebra), 45
unitization, 78
essential, 78
maximal essential, 79
of a ∗ -algebra, 49
of a C ∗ -algebra, 54
of a Banach ∗ -algebra, 49
of a normed algebra, 49
units
matrix, 95
universal
morphism, 39
object, 39
uniqueness of, 41
property, 39
of K0 (unital case), 103
of completions, 42
of the Grothendieck map, 102
vanish at infinity, 14