LIE ALGEBRAS
Course Notes
Alberto Elduque
Departamento de Matemáticas
Universidad de Zaragoza
50009 Zaragoza, Spain
© 2005-2015 Alberto Elduque
These notes are intended to provide an introduction to the basic theory of finite
dimensional Lie algebras over an algebraically closed field of characteristic 0 and their
representations. They are aimed at beginning graduate students in either Mathematics
or Physics.
The basic references that have been used in preparing the notes are the books in the
following list. By no means should these notes be considered an alternative to the
reading of these books.
• N. Jacobson: Lie algebras, Dover, New York 1979. Republication of the 1962
original (Interscience, New York).
• W.A. De Graaf: Lie algebras: Theory and Algorithms, North Holland Mathematical Library, Elsevier, Amsterdam 2000.
Contents
2 Lie algebras
§ 1. Theorems of Engel and Lie
§ 2. Semisimple Lie algebras
§ 3. Representations of sl2 (k)
§ 4. Cartan subalgebras
§ 5. Root space decomposition
§ 6. Classification of root systems
§ 7. Classification of the semisimple Lie algebras
§ 8. Exceptional Lie algebras
Chapter 1
Introduction to Lie groups and Lie algebras
This chapter is devoted to giving a brief introduction to the relationship between Lie
groups and Lie algebras. This will be done in a concrete way, avoiding the general
theory of Lie groups.
It is based on the very nice article by R. Howe: “Very basic Lie Theory”, Amer.
Math. Monthly 90 (1983), 600–623.
Lie groups are important since they are the basic objects used to describe symmetry.
This makes them an unavoidable tool in Geometry (think of Klein’s Erlangen Program)
and in Theoretical Physics.
A Lie group is a group endowed with a structure of smooth manifold, in such a way
that both the algebraic group structure and the smooth structure are compatible, in
the sense that both the multiplication ((g, h) ↦ gh) and the inverse map (g ↦ g⁻¹) are
smooth maps.
To each Lie group a simpler object may be attached: its Lie algebra, which almost
determines the group.
Definition. A Lie algebra over a field k is a vector space g, endowed with a bilinear
multiplication
[·, ·] : g × g −→ g, (x, y) ↦ [x, y],
satisfying the following properties:
[x, x] = 0 (anticommutativity)
[[x, y], z] + [[y, z], x] + [[z, x], y] = 0 (Jacobi identity)
for any x, y, z ∈ g.
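The prototypical example is a space of matrices with the commutator bracket [x, y] = xy − yx. The following short Python sketch (a computational illustration, not part of the mathematical development; the size 4 and the random matrices are arbitrary choices) checks both identities numerically.

```python
import numpy as np

def bracket(x, y):
    # Commutator bracket on gl_n: [x, y] = xy - yx
    return x @ y - y @ x

rng = np.random.default_rng(0)
x, y, z = (rng.standard_normal((4, 4)) for _ in range(3))

# [x, x] = 0
assert np.allclose(bracket(x, x), 0)

# Jacobi identity: [[x,y],z] + [[y,z],x] + [[z,x],y] = 0
jacobi = (bracket(bracket(x, y), z)
          + bracket(bracket(y, z), x)
          + bracket(bracket(z, x), y))
assert np.allclose(jacobi, 0)
```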
As for any algebraic structure, one can immediately define in a natural way the
concepts of subalgebra, ideal, homomorphism, isomorphism, ..., for Lie algebras.
The most usual Lie groups and Lie algebras are “groups of matrices” and their Lie
algebras. These concrete groups and algebras are the ones that will be considered in
this chapter, thus avoiding the general theory.
One-parameter groups
A one-parameter group of transformations of V is a continuous group homomorphism
φ : (R, +) −→ GL(V ).
Any such one-parameter group φ satisfies the following properties:
1.1 Properties.
(i) φ is differentiable.
Proof. Let $F(t) = \int_0^t \varphi(u)\,du$. Then $F'(t) = \varphi(t)$ for any $t$ and, for any $t, s$:
$$F(t+s) = \int_0^{t+s}\varphi(u)\,du = \int_0^{t}\varphi(u)\,du + \int_t^{t+s}\varphi(u)\,du = \int_0^{t}\varphi(u)\,du + \int_t^{t+s}\varphi(t)\varphi(u-t)\,du = F(t) + \varphi(t)\int_0^{s}\varphi(u)\,du = F(t) + \varphi(t)F(s).$$
But
$$F'(0) = \lim_{s\to 0}\frac{F(s)}{s} = \varphi(0) = I,$$
so $F(s)$ is invertible for $s$ close enough to $0$. For such an $s$, $\varphi(t) = F(s)^{-1}\bigl(F(t+s) - F(t)\bigr)$, which is differentiable, since so is $F$.
(ii) There is a unique $A \in \operatorname{End}_{\mathbb R}(V)$ such that $\varphi(t) = \exp(tA)$ for any $t \in \mathbb R$.
(Note that the series $\exp(A) = \sum_{n=0}^{\infty}\frac{A^n}{n!}$ converges absolutely, since $\|A^n\| \le \|A\|^n$, and uniformly on each bounded neighborhood of $0$, in particular on $B_s(0) = \{A \in \operatorname{End}_{\mathbb R}(V) : \|A\| < s\}$ for any $0 < s \in \mathbb R$, and hence it defines a smooth, in fact analytic, map from $\operatorname{End}_{\mathbb R}(V)$ to itself.) Besides, $A = \varphi'(0)$.
Proof. For any 0 6= v ∈ V , let v(t) = φ(t)v. In this way, we have defined a map
R → V , t 7→ v(t), which is differentiable and satisfies
v(t + s) = φ(s)v(t)
(iii) Conversely, for any A ∈ EndR (V ), the map t 7→ etA is a one-parameter group.
with
$$R_n(A,B) = \sum_{\substack{1\le p,q\le n\\ p+q>n}}\frac{A^p}{p!}\frac{B^q}{q!},$$
so
$$\|R_n(A,B)\| \le \sum_{\substack{1\le p,q\le n\\ p+q>n}}\frac{\|A\|^p}{p!}\frac{\|B\|^q}{q!} \le \sum_{r=n+1}^{2n}\frac{(\|A\|+\|B\|)^r}{r!},$$
(iv) There exists a positive real number r and an open set U in GL(V ) contained in
Bs (I), with s = er − 1, such that the “exponential map”:
exp : Br (0) −→ U, A ↦ exp(A) = e^A
is a homeomorphism.
Proof. exp is differentiable because of its uniform convergence. Moreover, its dif-
ferential at 0 satisfies:
$$d\exp(0)(A) = \lim_{t\to 0}\frac{e^{tA} - e^{0}}{t} = A,$$
so that
d exp(0) = id (the identity map on EndR (V ))
and the Inverse Function Theorem applies.
Moreover, $e^A - I = \sum_{n=1}^{\infty}\frac{A^n}{n!}$, so $\|e^A - I\| \le \sum_{n=1}^{\infty}\frac{\|A\|^n}{n!} = e^{\|A\|} - 1$. Thus $U \subseteq B_s(I)$.
Note that for V = R (dim V = 1), GL(V ) = R \ {0} and exp : R → R \ {0} is not
onto, since it does not take negative values.
Also, for $V = \mathbb R^2$, identify $\operatorname{End}_{\mathbb R}(V)$ with $\operatorname{Mat}_2(\mathbb R)$. Then, with $A = \begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}$, it follows that $A^2 = \begin{pmatrix}-1 & 0\\ 0 & -1\end{pmatrix}$, $A^3 = \begin{pmatrix}0 & 1\\ -1 & 0\end{pmatrix}$ and $A^4 = I$. It follows that $e^{tA} = \begin{pmatrix}\cos t & -\sin t\\ \sin t & \cos t\end{pmatrix}$. In particular, $e^{tA} = e^{(t+2\pi)A}$ and, therefore, exp is not one-to-one.
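This example is easy to reproduce numerically. The sketch below (an added illustration; SciPy's `expm` computes the matrix exponential, and the value t = 0.7 is an arbitrary choice) confirms both the rotation formula and the failure of injectivity.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
t = 0.7

# e^{tA} is the rotation by angle t
R = expm(t * A)
expected = np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])
assert np.allclose(R, expected)

# exp is not one-to-one: e^{tA} = e^{(t + 2*pi)A}
assert np.allclose(expm(t * A), expm((t + 2 * np.pi) * A))
```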
Adjoint maps
1. For any g ∈ GL(V ), the linear map Ad g : EndR (V ) → EndR (V ), A ↦ gAg⁻¹, is
an inner automorphism of the associative algebra EndR (V ).
The continuous group homomorphism
Ad : GL(V ) −→ GL(EndR (V )), g ↦ Ad g,
is called the adjoint map of GL(V ).
2. For any A ∈ EndR (V ), the linear map adA (or ad A) : EndR (V ) → EndR (V ), B ↦ [A, B] = AB − BA, is an inner derivation of the associative algebra EndR (V ).
By Property 1.1(ii), for any A ∈ EndR (V ) the one-parameter group t ↦ Ad(exp(tA)) equals exp(tȦ) for a unique linear map Ȧ of EndR (V ). This linear map satisfies
$$\dot A(B) = \lim_{t\to 0}\frac{\exp(tA)\,B\exp(-tA) - B}{t} = \frac{d}{dt}\Bigl(\exp(tA)\,B\exp(-tA)\Bigr)\Big|_{t=0} = AB - BA = \operatorname{ad}_A(B).$$
Therefore, Ȧ = adA and Ad(exp(tA)) = exp(t adA ) for any t ∈ R.
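This identity can also be checked numerically. In the added sketch below, adA is represented as an n²×n² matrix acting on vectorized matrices (the Kronecker-product formula used for that representation is a standard linear-algebra fact, not something taken from these notes), and exp(t adA) applied to B is compared with exp(tA) B exp(−tA).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
t = 0.5

# Left-hand side: Ad(exp(tA))(B) = exp(tA) B exp(-tA)
lhs = expm(t * A) @ B @ expm(-t * A)

# Right-hand side: exp(t ad_A)(B), with ad_A acting on vec(B):
# vec(AB - BA) = (I (x) A - A^T (x) I) vec(B)   (column-major vectorization)
I = np.eye(n)
adA = np.kron(I, A) - np.kron(A.T, I)
rhs = (expm(t * adA) @ B.flatten(order="F")).reshape((n, n), order="F")

assert np.allclose(lhs, rhs)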
§ 2. Matrix groups
2.1 Definition. Given a real vector space V , a matrix group on V is a closed subgroup
of GL(V ).
Any matrix group inherits the topology of GL(V ), which is an open subset of the
normed vector space EndR (V ).
2.2 Examples. 1. GL(V ) is a matrix group, called the general linear group. For
V = Rn , we denote it by GLn (R).
$$\mathbb C^n \longrightarrow \mathbb R^{2n},\qquad (x_1 + iy_1, \dots, x_n + iy_n) \mapsto (x_1, \dots, x_n, y_1, \dots, y_n).$$
10. Given any matrix group on V , its normalizer N (G) = {g ∈ GL(V ) : gGg −1 = G}
is again a matrix group.
§ 3. The Lie algebra of a matrix group

Given a matrix group G on the real vector space V , consider the set g = {A ∈ gl(V ) : exp(tA) ∈ G for any t ∈ R}. Our purpose is to prove that g is a Lie algebra, called the Lie algebra of G.
3.1 Technical Lemma. (i) Let $A, B, C \in \mathfrak{gl}(V)$ such that $\|A\|, \|B\|, \|C\| \le \frac12$ and $\exp(A)\exp(B) = \exp(C)$. Then
$$C = A + B + \tfrac12[A,B] + S$$
with $\|S\| \le 65\bigl(\|A\| + \|B\|\bigr)^3$.
$$\text{(3.2)}\qquad \|R_1(C)\| \le \|C\|^2\sum_{n=2}^{\infty}\frac{\|C\|^{n-2}}{n!} \le \|C\|^2\sum_{n=2}^{\infty}\frac{1}{n!} \le \|C\|^2,$$
Hence,
$$\text{(3.3)}\qquad \|R_1(A,B)\| \le \sum_{n=2}^{\infty}\frac{(\|A\|+\|B\|)^n}{n!} \le \bigl(\|A\|+\|B\|\bigr)^2,$$
• Therefore, $C = A + B + R_1(A,B) - R_1(C)$ and, since $\|C\| \le \frac12$ and $\|A\| + \|B\| \le 1$, equations (3.2) and (3.3) give
$$\|C\| \le \|A\| + \|B\| + \bigl(\|A\|+\|B\|\bigr)^2 + \|C\|^2 \le 2\bigl(\|A\|+\|B\|\bigr) + \tfrac12\|C\|,$$
and thus
$$\text{(3.4)}\qquad \|C\| \le 4\bigl(\|A\|+\|B\|\bigr).$$
Moreover,
$$\text{(3.5)}\qquad \|A + B - C\| \le \|R_1(A,B)\| + \|R_1(C)\| \le \bigl(\|A\|+\|B\|\bigr)^2 + \|C\|^2 \le 17\bigl(\|A\|+\|B\|\bigr)^2.$$
• Let us take one more term now, thus $\exp(C) = I + C + \frac{C^2}{2} + R_2(C)$. The arguments in (3.2) give, since $e - 2 - \frac12 < \frac13$,
$$\text{(3.6)}\qquad \|R_2(C)\| \le \tfrac13\|C\|^3.$$
Also,
$$\text{(3.7)}\qquad \exp(A)\exp(B) = I + A + B + \tfrac12\bigl(A^2 + 2AB + B^2\bigr) + R_2(A,B) = I + A + B + \tfrac12[A,B] + \tfrac12(A+B)^2 + R_2(A,B),$$
with
$$\text{(3.8)}\qquad \|R_2(A,B)\| \le \tfrac13\bigl(\|A\|+\|B\|\bigr)^3.$$
Hence,
$$S = C - A - B - \tfrac12[A,B] = R_2(A,B) + \tfrac12\bigl((A+B)^2 - C^2\bigr) - R_2(C)$$
and, because of (3.4), (3.5), (3.6) and (3.8),
$$\begin{aligned}
\|S\| &\le \|R_2(A,B)\| + \tfrac12\bigl\|(A+B)(A+B-C) + (A+B-C)C\bigr\| + \|R_2(C)\|\\
&\le \tfrac13\bigl(\|A\|+\|B\|\bigr)^3 + \tfrac12\bigl(\|A\|+\|B\|+\|C\|\bigr)\|A+B-C\| + \tfrac13\|C\|^3\\
&\le \tfrac13\bigl(\|A\|+\|B\|\bigr)^3 + \tfrac52\bigl(\|A\|+\|B\|\bigr)\cdot 17\bigl(\|A\|+\|B\|\bigr)^2 + \tfrac13\cdot 4^3\bigl(\|A\|+\|B\|\bigr)^3\\
&= \Bigl(\tfrac{65}{3} + \tfrac{85}{2}\Bigr)\bigl(\|A\|+\|B\|\bigr)^3 \le 65\bigl(\|A\|+\|B\|\bigr)^3.
\end{aligned}$$
In other words,
$$\exp\Bigl(\frac{A}{n}\Bigr)\exp\Bigl(\frac{B}{n}\Bigr) = \exp\Bigl(\frac{A+B}{n} + O\Bigl(\frac{1}{n^2}\Bigr)\Bigr).$$
Therefore,
$$\Bigl(\exp\Bigl(\frac{A}{n}\Bigr)\exp\Bigl(\frac{B}{n}\Bigr)\Bigr)^{n} = \exp(C_n)^n = \exp(nC_n) \xrightarrow{\ n\to\infty\ } \exp(A+B),$$
3.2 Theorem. Let G be a matrix group on the vector space V and let g = {A ∈ gl(V ) : exp(tA) ∈ G ∀t ∈ R}. Then:
(i) g is a Lie subalgebra of gl(V ) (called the Lie algebra of G);
(ii) exp(g) contains an open neighborhood of 1 in G.
Proof. By its own definition, g is closed under multiplication by real numbers. Now,
given any A, B ∈ g and t ∈ R, since G is closed, the Technical Lemma shows us that
$$\exp\bigl(t(A+B)\bigr) = \lim_{n\to\infty}\Bigl(\exp\Bigl(\frac{tA}{n}\Bigr)\exp\Bigl(\frac{tB}{n}\Bigr)\Bigr)^{n} \in G,$$
$$\exp\bigl(t[A,B]\bigr) = \lim_{n\to\infty}\Bigl(\exp\Bigl(\frac{tA}{n}\Bigr)\exp\Bigl(\frac{B}{n}\Bigr)\exp\Bigl(-\frac{tA}{n}\Bigr)\exp\Bigl(-\frac{B}{n}\Bigr)\Bigr)^{n^2} \in G.$$
Hence g is closed too under addition and Lie brackets, and so it is a Lie subalgebra of
gl(V ).
To prove the second part of the Theorem, let us first check that, if (An )n∈N is a
sequence in exp−1 (G) with limn→∞ kAn k = 0, and (sn )n∈N is a sequence of real numbers,
then any cluster point B of the sequence (sn An )n∈N lies in g:
Actually, we may assume that limn→∞ sn An = B. Let t ∈ R. For any n ∈ N, take
mn ∈ Z such that |mn − tsn | ≤ 1. Then,
kmn An − tBk = k(mn − tsn )An + t(sn An − B)k
≤ |mn − tsn |kAn k + |t|ksn An − Bk.
Since both kAn k and ksn An − Bk converge to 0, it follows that limn→∞ mn An = tB.
Also, An ∈ exp−1 (G), so that exp(mn An ) = exp(An )mn ∈ G. Since exp is continuous
and G is closed, exp(tB) = limn→∞ exp(mn An ) ∈ G for any t ∈ R, and hence B ∈ g, as
required.
Let now m be a subspace of gl(V ) with gl(V ) = g ⊕ m, and let πg and πm be the
associated projections onto g and m. Consider the analytical function:
E : gl(V ) −→ GL(V ), A ↦ exp(πg (A)) exp(πm (A)).
Then,
$$dE(0)(A) = \frac{d}{dt}\Bigl(\exp\bigl(\pi_{\mathfrak g}(tA)\bigr)\exp\bigl(\pi_{\mathfrak m}(tA)\bigr)\Bigr)\Big|_{t=0} = \Bigl(\frac{d}{dt}\exp\bigl(\pi_{\mathfrak g}(tA)\bigr)\Big|_{t=0}\Bigr)\exp(0) + \exp(0)\Bigl(\frac{d}{dt}\exp\bigl(\pi_{\mathfrak m}(tA)\bigr)\Big|_{t=0}\Bigr) = \pi_{\mathfrak g}(A) + \pi_{\mathfrak m}(A) = A.$$
Hence, the differential of E at 0 is the identity and, thus, E maps homeomorphically a
neighborhood of 0 in gl(V ) onto a neighborhood of 1 in GL(V ). Let us take r > 0 and
a neighborhood V of 1 in GL(V ) such that E|Br (0) : Br (0) → V is a homeomorphism. It
is enough to check that exp(Br (0) ∩ g) = E(Br (0) ∩ g) contains a neighborhood of 1 in
G.
Otherwise, there would exist a sequence (Bn )n∈N ∈ exp−1 (G) with Bn 6∈ Br (0)∩g and
such that limn→∞ Bn = 0. For large enough n, exp(Bn ) = E(An ), with limn→∞ An = 0.
Hence $\exp\bigl(\pi_{\mathfrak m}(A_n)\bigr) = \exp\bigl(\pi_{\mathfrak g}(A_n)\bigr)^{-1}\exp(B_n) \in G$.
Since limn→∞ An = 0, limn→∞ πm (An ) = 0 too and, for large enough m, πm (Am ) ≠ 0, as Am ∉ g (note that if Am ∈ g, then exp(Bm ) = E(Am ) = exp(Am ) and, since exp is a bijection on a neighborhood of 0, we would have Bm = Am ∈ g, a contradiction).
The sequence $\bigl(\frac{1}{\|\pi_{\mathfrak m}(A_n)\|}\pi_{\mathfrak m}(A_n)\bigr)_{n\in\mathbb N}$ is bounded, and hence has cluster points, which
are in m (closed in gl(V ), since it is a subspace). We know that these cluster points are
in g, so in g ∩ m = 0. But the norm of all these cluster points is 1, a contradiction.
3.3 Remark. Given any A ∈ gl(V ), the set {exp(tA) : t ∈ R} is the continuous image
of the real line, and hence it is connected. Therefore, if g is the Lie algebra of the matrix
group G, exp(g) is contained in the connected component Go of the identity. Therefore,
the Lie algebra of G equals the Lie algebra of Go .
Also, exp(g) contains an open neighborhood U of 1 in G. Thus, Go contains the open
neighborhood xU of any x ∈ Go . Hence Go is open in G but, as a connected component,
it is closed too: Go is open and closed in G.
3.4 Examples. 1. The Lie algebra of GL(V ) is obviously the whole general linear
Lie algebra gl(V ).
2. For any A ∈ gl(V ) (or any square matrix A), det eA = etrace(A) . This is better
checked for matrices. Since any real matrix can be considered as a complex matrix,
it is well known that given any such matrix there is a regular complex matrix
P such that J = P AP −1 is upper triangular. Assume that λ1 , . . . , λn are the eigenvalues of A (or J), counted according to their multiplicities. Then $Pe^AP^{-1} = e^J$ and
$$\det e^A = \det e^J = \prod_{i=1}^n e^{\lambda_i} = e^{\sum_{i=1}^n\lambda_i} = e^{\operatorname{trace}(J)} = e^{\operatorname{trace}(A)}.$$
Hence, for any t 6= 0, det etA = 1 if and only if trace(A) = 0. This shows that
the Lie algebra of the special linear group SL(V ) is the special linear Lie algebra
sl(V ) = {A ∈ gl(V ) : trace(A) = 0}.
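A quick numerical sanity check of the identity det e^A = e^{trace(A)} (an added sketch; the matrix size and the random entries are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))

# det(e^A) = e^{trace(A)}
assert np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))

# a traceless matrix exponentiates into SL(V): det(e^{A0}) = 1
A0 = A - np.trace(A) / 5 * np.eye(5)
assert np.isclose(np.linalg.det(expm(A0)), 1.0)
```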
5. The Lie algebra of an intersection of matrix groups is the intersection of the cor-
responding Lie algebras.
6. The Lie algebra of G = G1 × G2 ⊆ GL(V1 ) × GL(V2 ) is the direct sum g1 ⊕ g2
of the corresponding Lie algebras. This follows from the previous items because,
inside GL(V1 × V2 ), GL(V1 ) × GL(V2 ) = P (V1 ) ∩ P (V2 ).
7. Given T1 , . . . , Tn ∈ EndR (V ), similar arguments show that the Lie
algebra of G = {g ∈ GL(V ) : gTi = Ti g, i = 1, . . . , n} is g = {A ∈ gl(V ) : ATi =
Ti A, i = 1, . . . , n}. In particular, the Lie algebra of GLn (C) is gln (C) (the Lie
algebra of complex n × n matrices).
In the remainder of this chapter, the most interesting properties of the relationship
between matrix groups and their Lie algebras will be reviewed.
3.5 Proposition. Let G be a matrix group on a real vector space V , and let Go be its
connected component of 1. Let g be the Lie algebra of G. Then Go is the group generated
by exp(g).
Proof. We already know that exp(g) ⊆ Go and that there exists an open neighborhood
U of 1 ∈ G with 1 ∈ U ⊆ exp(g). Let V = U ∩ U −1 , which is again an open neighborhood
of 1 in G contained in exp(g). It is enough to prove that Go is generated, as a group,
by V.
Let H = ∪n∈N V n . Then H is closed under multiplication and inverses, so it is a subgroup
of G contained in Go . Actually, it is the subgroup of G generated by V. Since V is open,
so is V n = ∪v∈V vV n−1 for any n, and hence H is an open subgroup of G. But any open
subgroup is closed too, as G \ H = ∪x∈G\H xH is a union of open sets. Therefore, H is
an open and closed subset of G contained in the connected component Go , and hence it
fills all of Go .
3.6 Theorem. Let G and H be matrix groups on the real vector space V with Lie
algebras g and h.
(i) If H is a normal subgroup of G, then h is an ideal of g (that is, [g, h] ⊆ h).
3.7 Theorem. Let G be a matrix group on the real vector space V with Lie algebra g,
and let H be a matrix group on the real vector space W with Lie algebra h.
If ϕ : G → H is a continuous homomorphism of groups, then there exists a unique
Lie algebra homomorphism d ϕ : g → h that makes the following diagram commutative:
$$\begin{array}{ccc} \mathfrak g & \xrightarrow{\ d\varphi\ } & \mathfrak h\\ {\exp}\downarrow & & \downarrow{\exp}\\ G & \xrightarrow{\ \ \varphi\ \ } & H\end{array}$$
3.9 Remark. The converse of the Corollary above is false, even if G and H are con-
nected. Take, for instance,
$$G = SO(2) = \left\{\begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix} : \theta\in\mathbb R\right\}$$
(the special orthogonal group on R2 , which is homeomorphic to the unit circle). Its Lie
algebra is
$$\mathfrak g = \left\{\begin{pmatrix}0 & -\alpha\\ \alpha & 0\end{pmatrix} : \alpha\in\mathbb R\right\}$$
(2 × 2 skew-symmetric matrices). Also, take
$$H = \left\{\begin{pmatrix}\alpha & 0\\ 0 & 1\end{pmatrix} : \alpha\in\mathbb R_{>0}\right\}$$
which is isomorphic to the multiplicative group of positive real numbers, whose Lie
algebra is
$$\mathfrak h = \left\{\begin{pmatrix}\alpha & 0\\ 0 & 0\end{pmatrix} : \alpha\in\mathbb R\right\}.$$
Both Lie algebras are one-dimensional vector spaces with trivial Lie bracket, and hence
they are isomorphic as Lie algebras. However, G is not isomorphic to H (inside G one
may find many finite order elements, but the identity is the only such element in H).
(One can show that G and H are ‘locally isomorphic’.)
Given a matrix group G with Lie algebra g, for any g ∈ G and A ∈ g we have exp(t gAg⁻¹) = g exp(tA)g⁻¹ ∈ G for all t ∈ R, so Ad g(g) ⊆ g. Hence Ad restricts to a continuous group homomorphism
Ad : G −→ GL(g).
3.10 Corollary. Let G be a matrix group on the real vector space V and let Ad : G →
GL(g) be the adjoint map. Then d Ad = ad : g → gl(g).
Remember that, given a group G, its center Z(G) is the normal subgroup consisting
of those elements commuting with every element: Z(G) = {g ∈ G : gh = hg ∀h ∈ G}.
3.11 Corollary. Let G be a connected matrix group with Lie algebra g. Then Z(G) =
ker Ad, and this is a closed subgroup of G with Lie algebra the center of g: Z(g) = {X ∈
g : [X, Y ] = 0 ∀Y ∈ g} (= ker ad).
Proof. With g ∈ Z(G) and X ∈ g, exp(tX) = g exp(tX)g⁻¹ = exp(t Ad g(X)) for any t ∈ R, so X = Ad g(X) and g ∈ ker Ad.
3.12 Corollary. Let G be a connected matrix group with Lie algebra g. Then G is
commutative if and only if g is abelian, that is, [g, g] = 0.
Finally, the main concept of this course will be introduced. Groups are important
because they act as symmetries of other structures. The formalization of this idea, in
our setting, leads to the following definition:
3.14 Corollary. Let G be a matrix group with Lie algebra g and let ρ : G → GL(W )
be a representation of G. Then there is a unique representation d ρ : g → gl(W ) such
that the following diagram is commutative:
$$\begin{array}{ccc} \mathfrak g & \xrightarrow{\ d\rho\ } & \mathfrak{gl}(W)\\ {\exp}\downarrow & & \downarrow{\exp}\\ G & \xrightarrow{\ \ \rho\ \ } & GL(W)\end{array}$$
The great advantage of dealing with d ρ above is that this is a Lie algebra homomor-
phism, and it does not involve topology. In this sense, the representation d ρ is simpler
than the representation of the matrix group, but it contains a lot of information about
the latter. The message is that in order to study the representations of the matrix
groups, we will study representations of Lie algebras.
Chapter 2
Lie algebras
The purpose of this chapter is to present the basic structure of the finite dimensional
Lie algebras over fields, culminating in the classification of the simple Lie algebras over
algebraically closed fields of characteristic 0.
$$\varphi : S \longrightarrow \mathfrak{gl}(L/S),\qquad \varphi(x) : L/S \to L/S,\ y + S \mapsto [x,y] + S$$
(L is a module for S through ad, and L/S is a quotient module) satisfies the hypotheses
of the Theorem, but with dimk S < n. By the induction hypothesis, there exists an
element z ∈ L \ S such that [x, z] ∈ S for any x ∈ S. Therefore, S ⊕ kz is a subalgebra
of L which, by maximality of S, is the whole L. In particular S is an ideal of L.
Again, by induction, we conclude that the subspace W = {v ∈ V : x.v = 0 ∀x ∈ S}
is nonzero. But for any x ∈ S, x.(z.W ) ⊆ [x, z].W + z.(x.W ) = 0 ([x, z] ∈ S). Hence
z.W ⊆ W , and since z is a nilpotent endomorphism, there is a nonzero v ∈ W such that
z.v = 0. Hence x.v = 0 for any x ∈ S and for z, so x.v = 0 for any x ∈ L.
(iii) The descending central series of a Lie algebra L is the chain of ideals L = L1 ⊇ L2 ⊇ · · · ⊇ Ln ⊇ · · · , where Ln+1 = [Ln , L] for any n ∈ N. The Lie algebra is said to be nilpotent if there is an n ∈ N such that Ln = 0. Moreover, if L2 = 0, L is said to be abelian. Then
Theorem. (Engel) A Lie algebra L is nilpotent if and only if adx is nilpotent for
any x ∈ L.
1.4 Exercise. The ascending central series of a Lie algebra L is defined as follows:
Z0 (L) = 0, Z1 (L) = Z(L) = {x ∈ L : [x, L] = 0} (the center of L) and Zi+1 (L)/Zi (L) =
Z (L/Zi (L)) for any i ≥ 1. Prove that this is indeed an ascending chain of ideals and
that L is nilpotent if and only if there is an n ∈ N such that Zn (L) = L.
1.5 Definition. Let L be a Lie algebra and consider the descending chain of ideals
defined by L(0) = L and L(m+1) = [L(m) , L(m) ] for any m ≥ 0. Then the chain L =
L(0) ⊇ L(1) ⊇ L(2) ⊇ · · · is called the derived series of L. The Lie algebra L is said to
be solvable if there is an n ∈ N such that L(n) = 0.
1. Any nilpotent Lie algebra is solvable. However, show that L = kx + ky, with
[x, y] = y, is a solvable but not nilpotent Lie algebra.
4. Let I be an ideal of L such that both I and L/I are solvable. Then L is solvable.
Give an example to show that this is no longer valid with nilpotent instead of
solvable.
Therefore the action of ρ(x) on U is given by an upper triangular matrix with λ(x) on
the diagonal and, hence, trace ρ(x)|U = λ(x) dimk U for any x ∈ S. In particular,
$$\operatorname{trace}\bigl(\rho([x,z])|_U\bigr) = \lambda([x,z])\dim_k U = \operatorname{trace}\bigl(\bigl[\rho(x)|_U,\rho(z)|_U\bigr]\bigr) = 0$$
(the trace of any commutator is 0), and since the characteristic of k is 0 we conclude
that λ([S, L]) = 0.
But then, for any 0 6= w ∈ W and x ∈ S,
x.(z.w) = [x, z].w + z.(x.w) = λ([x, z])w + z. λ(x)w = λ(x)z.w,
and this shows that W is invariant under ρ(z). Since k is algebraically closed, there is a
nonzero eigenvector of ρ(z) in W , and this is a common eigenvector for any x ∈ S and
for z, and hence for any y ∈ L.
1.8 Remark. Note that the proof above is valid even if k is not algebraically closed, as
long as the characteristic polynomial of ρ(x) for any x ∈ L splits over k. In this case ρ
is said to be a split representation.
(ii) Let ρ : L → gl(V ) be a split representation of a solvable Lie algebra. Then there
is a basis of V such that the coordinate matrix of any ρ(x), x ∈ L, is upper
triangular.
(iii) Let L be a solvable Lie algebra such that its adjoint representation ad : L → gl(L)
is split. Then there is a chain of ideals 0 = L0 ⊆ L1 ⊆ · · · ⊆ Ln = L with dim Li = i
for any i.
(iv) Let ρ : L → gl(V ) be a representation of a Lie algebra L. Then [L, R(L)] acts
nilpotently on V ; that is, ρ(x) is nilpotent for any x ∈ [L, R(L)]. The same is true
of [L, L] ∩ R(L). In particular, with the adjoint representation, we conclude that
[L, R(L)] ⊆ [L, L] ∩ R(L) ⊆ N (L) and, therefore, L is solvable if and only if [L, L]
is nilpotent.
The symmetric bilinear form
$$\kappa : L\times L \longrightarrow k,\qquad \kappa(x,y) = \operatorname{trace}\bigl(\operatorname{ad}_x\operatorname{ad}_y\bigr)$$
for any x, y ∈ L, that appears in Cartan's criterion for solvability, plays a key role in studying Lie algebras over fields of characteristic 0. It is called the Killing form of the Lie algebra L.
Note that κ is symmetric and invariant (i.e., κ([x, y], z) = κ(x, [y, z]) for any x, y, z ∈
L).
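As a small concrete illustration (added here as a computational aside), the Killing form of sl2(k) can be computed in the basis {e, h, f} directly from the matrices of ad; it turns out to be nondegenerate, in accordance with the criterion proved next.

```python
import numpy as np

# Basis of sl2: e, h, f with [h,e] = 2e, [h,f] = -2f, [e,f] = h
e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [e, h, f]

def bracket(x, y):
    return x @ y - y @ x

def ad_matrix(x):
    # Matrix of ad_x in the basis {e, h, f}; a general element
    # a*e + c*h + d*f equals [[c, a], [d, -c]], so the coordinates
    # can be read off from the 2x2 entries.
    cols = []
    for b in basis:
        v = bracket(x, b)
        cols.append([v[0, 1], v[0, 0], v[1, 0]])
    return np.array(cols).T

kappa = np.array([[np.trace(ad_matrix(x) @ ad_matrix(y)) for y in basis]
                  for x in basis])
print(kappa)                    # [[0, 0, 4], [0, 8, 0], [4, 0, 0]]
print(np.linalg.det(kappa))     # -128, nonzero: kappa is nondegenerate
```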
Proof. The invariance of the Killing form κ of such a Lie algebra L implies that the
subspace I = {x ∈ L : κ(x, L) = 0} is an ideal of L. By Proposition 1.11, ad I is a
solvable subalgebra of gl(L), and this shows that I is solvable (ad I ≅ I/(Z(L) ∩ I)).
Hence, if L is semisimple I ⊆ R(L) = 0, and thus κ is nondegenerate. (Note
that this argument is valid had we started with a Lie subalgebra L of gl(V ) for some
vector space V , and had we replaced κ by the trace form of V : B : L × L → k,
(x, y) 7→ B(x, y) = trace(xy).)
Conversely, assume that κ is nondegenerate, that is, that I = 0. If J were an abelian
ideal of L, then for any x ∈ J and y ∈ L, adx ady (L) ⊆ J and adx ady (J) = 0. Hence
(adx ady )² = 0 and κ(x, y) = trace(adx ady ) = 0. Therefore, J ⊆ I = 0. Thus, L
does not contain proper abelian ideals, so it does not contain proper solvable ideals and,
hence, R(L) = 0 and L is semisimple.
(i) L is semisimple if and only if L is a direct sum of simple ideals. In particular, this
implies that L = [L, L].
(ii) Let K/k be a field extension, then L is semisimple if and only if so is the scalar
extension K ⊗k L.
(iii) If L is semisimple and I is a proper ideal of L, then both I and L/I are semisimple.
(iv) Assume that L is a Lie subalgebra of gl(V ) and that the trace form B : L × L → k,
(x, y) 7→ B(x, y) = trace(xy) is nondegenerate. Then L = Z(L) ⊕ [L, L] and [L, L]
is semisimple (recall that the center Z(L) is abelian). Moreover, the ideals Z(L)
and [L, L] are orthogonal relative to B, and hence the restriction of B to both
Z(L) and [L, L] are nondegenerate.
Proof. Let d be any derivation and consider the linear form L → k, x 7→ trace(d adx ).
Since κ is nondegenerate, there is a z ∈ L such that κ(z, x) = trace(d adx ) for any
x ∈ L. But then, for any x, y ∈ L,
$$\begin{aligned}
\kappa\bigl(d(x), y\bigr) &= \operatorname{trace}\bigl(\operatorname{ad}_{d(x)}\operatorname{ad}_y\bigr)\\
&= \operatorname{trace}\bigl([d, \operatorname{ad}_x]\,\operatorname{ad}_y\bigr)\quad\text{(since $d$ is a derivation)}\\
&= \operatorname{trace}\bigl(d\,[\operatorname{ad}_x, \operatorname{ad}_y]\bigr)\\
&= \operatorname{trace}\bigl(d\,\operatorname{ad}_{[x,y]}\bigr)\\
&= \kappa(z, [x, y]) = \kappa([z, x], y).
\end{aligned}$$
Let V and W be two modules for a Lie algebra L. Then both Homk (V, W ) and
V ⊗k W are L-modules too by means of
$$(x.f)(v) = x.\bigl(f(v)\bigr) - f(x.v),\qquad x.(v\otimes w) = (x.v)\otimes w + v\otimes(x.w),$$
for any x ∈ L, f ∈ Homk (V, W ), v ∈ V and w ∈ W .
2.3 Proposition. Let L be a Lie algebra over an algebraically closed field k of char-
acteristic 0. Then any irreducible module for L is, up to isomorphism, of the form
V = V0 ⊗k Z, with V0 and Z modules such that dimk Z = 1 and V0 is irreducible and
annihilated by R(L). (Hence, V0 is a module for the semisimple Lie algebra L/R(L).)
Proof. By the proof of Consequence 1.9.(iv), we know that there is a linear form λ :
R(L) → k such that x.v = λ(x)v for any x ∈ R(L) and v ∈ V . Moreover, λ [L, R(L)] =
0 = λ [L, L] ∩ R(L) . Thus
we may extend λ to a form L → k, also denoted by λ, in
such a way that λ [L, L] = 0.
Let Z = kz be a one dimensional vector space, which is a module for L by means of
x.z = λ(x)z and let W = V ⊗k Z ∗ (Z ∗ is the dual vector space to Z), which is also an
L-module. Then the linear map
$$W \otimes_k Z \longrightarrow V,\qquad (v \otimes f) \otimes z \mapsto f(z)v$$
This Proposition shows the importance of studying the representations of the semisim-
ple Lie algebras.
Recall the following definition.
2.5 Weyl’s Theorem. Any representation of a semisimple Lie algebra over a field of
characteristic 0 is completely reducible.
Proof. Let L be a semisimple Lie algebra over the field k of characteristic 0, and let
ρ : L → gl(V ) be a representation and W a submodule of V . Does there exist a
submodule W′ such that V = W ⊕ W′?
We may extend scalars and assume that k is algebraically closed, because the existence of W′ is equivalent to the existence of a solution to a system of linear equations:
does there exist π ∈ EndL (V ) such that π(V ) = W and π|W = IW (the identity map on
W )?
Now, assume first that W is irreducible and V /W trivial (that is, L.V ⊆ W ). Then
we may change L by its quotient ρ(L), which is semisimple too (or 0, which is a trivial
case), and hence assume that 0 6= L ≤ gl(V ). Consider the trace form bV : L × L → k,
(x, y) 7→ trace(xy). By Cartan’s criterion for solvability, ker bV is a solvable ideal of L,
hence 0, and thus bV is nondegenerate. Take dual bases {x1 , . . . , xn } and {y1 , . . . , yn }
of L relative to bV (that is, bV (xi , yj ) = δij for any i, j).
Then the element $c_V = \sum_{i=1}^n x_iy_i \in \operatorname{End}_k(V)$ is called the Casimir element, and
$$\operatorname{trace}(c_V) = \sum_{i=1}^n\operatorname{trace}(x_iy_i) = \sum_{i=1}^n b_V(x_i, y_i) = n = \dim_k L.$$
Moreover, for any $x \in L$, there are scalars $\alpha_{ij}, \beta_{ij}$ such that $[x_i, x] = \sum_{j=1}^n\alpha_{ij}x_j$ and $[y_i, x] = \sum_{j=1}^n\beta_{ij}y_j$ for any $i$. Since
$$b_V\bigl([x_i, x], y_j\bigr) + b_V\bigl(x_i, [y_j, x]\bigr) = 0$$
$$c_V|_W = \frac{\dim_k L}{\dim_k W}\,I_W$$
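For example (an added sketch), for L = sl2(k) acting on its natural 2-dimensional module V, dual bases relative to the trace form b_V are {e, h, f} and {f, h/2, e}, and the Casimir element acts as (dim L / dim V)·I = (3/2)·I:

```python
import numpy as np

e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])

basis = [e, h, f]
dual  = [f, h / 2, e]      # dual basis w.r.t. b_V(x, y) = trace(xy)

# Check duality: trace(x_i y_j) = delta_ij
gram = np.array([[np.trace(x @ y) for y in dual] for x in basis])
assert np.allclose(gram, np.eye(3))

# Casimir element c_V = sum_i x_i y_i acts as (dim L / dim V) * Id
c = sum(x @ y for x, y in zip(basis, dual))
assert np.allclose(c, (3 / 2) * np.eye(2))
```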
(i) Let V be a vector space over k and let L be a semisimple Lie subalgebra of gl(V ).
For any x ∈ L, consider its Jordan decomposition x = xs + xn . Then xs , xn ∈ L.
Proof. Since ρ(L) ∼= L/ ker ρ is a quotient of L, ρ(xs ) = ρ(x)s and ρ(xn ) = ρ(x)n
(this is because adρ(L) ρ(xs ) is semisimple and adρ(L) ρ(xn ) is nilpotent). Here
adρ(L) denotes the adjoint map in the Lie algebra ρ(L), to distinguish it from
the adjoint map of gl(V ). By item (i), if ρ(x) = s + n is the Jordan decomposi-
tion of ρ(x), s, n ∈ ρ(L) and we obtain two Jordan decompositions in gl ρ(L) :
adρ(L) ρ(x) = adρ(L) s + adρ(L) n = adρ(L) ρ(xs ) + adρ(L) ρ(xn ). By uniqueness,
s = ρ(xs ) and n = ρ(xn ).
There are other important consequences that can be drawn from Weyl’s Theorem:
where µv (x) = x.v for any x ∈ L and v ∈ V . Moreover, for any x, y ∈ L and
v ∈V,
(x.µv )(y) = x. µv (y) − µv ([x, y]) = x.(y.v) − [x, y].v = y.(x.v) = µx.v (y).
0 = (x.f )(y) = x.f (y) − f ([x, y]) = µf (y) (x) − f ([x, y])
= (y.f )(x) − f ([x, y]) = −f ([x, y]).
Otherwise, [L, R(L)] = R(L). Consider then the module gl(L) for L (x.f = [adx , f ]
for any x ∈ L and f ∈ gl(L)). Let ρ be the associated representation. Then the
subspaces
M = {f ∈ gl(L) : f (L) ⊆ R(L) and there exists λf ∈ k such that f |R(L) = λf id},
N = {f ∈ gl(L) : f (L) ⊆ R(L) and f R(L) = 0},
are submodules of gl(L), with ρ(L)(M ) ⊆ N $ M . Moreover, for any x ∈ R(L),
f ∈ M and z ∈ L:
(2.2) [adx , f ](z) = [x, f (z)] − f [x, z] = −λf adx (z),
since [x, f (z)] ∈ [R(L), R(L)] = 0. Hence, ρ R(L) (M ) ⊆ {adx : x ∈ R(L)} ⊆ N .
Write R = {adx : x ∈ R(L)}. Therefore, M/R is a module for the semisimple
Lie algebra L/R(L) and, by Weyl’s Theorem, there is another submodule Ñ with
R $ Ñ ⊆ M such that M/R = N/R ⊕ Ñ /R. Take g ∈ Ñ \ N with λg = −1. Since
ρ(L)(M ) ⊆ N , ρ(L)(g) ⊆ R, so for any y ∈ L, there is an element α(y) ∈ R(L)
such that
[ady , g] = adα(y) ,
and α : L → R(L) is linear. Equation (2.2) shows that α|R(L) = id, so that
L = R(L) ⊕ ker α and ker α = {x ∈ L : ρ(x)(g) = 0} is a subalgebra of L.
Moreover, if T is a semisimple subalgebra of L, let us prove that there is a suitable
automorphism of L that embeds T into S. Since T is semisimple, T = [T, T ] ⊆
[L, L] = [L, R(L)] ⊕ S ⊆ N (L) ⊕ S. If N (L) = 0, the result is clear. Otherwise,
let I be a minimal ideal of L contained in N (L) (hence I is abelian). Arguing
by induction on dim L, we may assume that there are elements z1 , . . . , zr in N (L)
such that
$$T' = \exp(\operatorname{ad}_{z_1})\cdots\exp(\operatorname{ad}_{z_r})(T) \subseteq I \oplus S.$$
Now, it is enough to prove that there is an element z ∈ I such that exp adz (T′) ⊆ S.
Therefore, it is enough to prove the result assuming that L = R ⊕ S, where R is an
abelian ideal of L. In this case, let ϕ : T → R and ψ : T → S be the projections
of T on R and S respectively (that is, for any t ∈ T , t = ϕ(t) + ψ(t)). For any
t1 , t2 ∈ T ,
[t1 , t2 ] = [ϕ(t1 ) + ψ(t1 ), ϕ(t2 ) + ψ(t2 )]
= [ϕ(t1 ), t2 ] + [t1 , ϕ(t2 )] + [ψ(t1 ), ψ(t2 )],
since [R, R] = 0. Hence ϕ([t1 , t2 ]) = [ϕ(t1 ), t2 ] + [t1 , ϕ(t2 )]. Whitehead's Lemma
shows the existence of an element z ∈ R such that ϕ(t) = [t, z] for any t ∈ T . But
then, since (adz )2 = 0 because R is abelian,
exp adz (t) = t + [z, t] = t − ϕ(t) = ψ(t) ∈ S,
for any t ∈ T . Therefore, exp adz (T ) ⊆ S.
(iii) Let L be a Lie algebra over a field k of characteristic 0, then [L, R(L)] = [L, L] ∩
R(L).
and
$$\rho(x)\rho(y)^i(v) = \rho([x,y])\rho(y)^{i-1}(v) + \rho(y)\rho(x)\rho(y)^{i-1}(v) = \bigl(\lambda - 2(i-1)\bigr)\rho(y)^{i-1}(v) + \rho(y)\bigl(\rho(x)\rho(y)^{i-1}(v)\bigr)$$
3.3 Remark. Actually, the result above can be proven easily without using Weyl’s
Theorem. For k algebraically closed of characteristic 0, let 0 6= v ∈ V be an eigenvector
for ρ(h): h.v = λv. Then, with the same arguments as before, h.ρ(x)n (v) = (λ +
2n)ρ(x)n v and, since the dimension is finite and the characteristic 0, there is a natural
number n such that ρ(x)n (v) = 0. This shows that W = {w ∈ V : x.w = 0} ≠ 0. In the
same vein, for any w ∈ W there is a natural number m such that ρ(y)m (w) = 0. This
is all we need for the proof above.
3.4 Corollary. Let k be a field of characteristic 0 and let ρ : sl2 (k) → gl(V ) be a
representation. Consider the eigenspaces V0 = {v ∈ V : h.v = 0} and V1 = {v ∈ V :
h.v = v}. Then V is a direct sum of dimk V0 + dimk V1 irreducible modules.
Proof. By Weyl's Theorem, $V = \oplus_{i=1}^N W^i$, with $W^i$ irreducible for any $i$. Now, for
any $i$, there is an $n_i \in \mathbb Z_{\ge 0}$ such that $W^i \cong V(n_i)$, and hence $\rho(h)$ has eigenvalues
$n_i, n_i - 2, \dots, -n_i$, all with multiplicity 1, on $W^i$. Hence $\dim_k W^i_0 + \dim_k W^i_1 = 1$ for
any $i$, where $W^i_0 = W^i \cap V_0$, $W^i_1 = W^i \cap V_1$. Since $V_0 = \oplus_{i=1}^N W^i_0$ and $V_1 = \oplus_{i=1}^N W^i_1$, the
result follows.
$$V(n)\otimes_k V(m) \cong V(n+m)\oplus V(n+m-2)\oplus\cdots\oplus V(n-m),$$
and $\dim_k V_{n+m-2p} - \dim_k V_{n+m-2(p-1)} = 1$ for any $p = 1,\dots,m$, while $\dim_k V_{n+m-2p} - \dim_k V_{n+m-2(p-1)} = 0$ for $p = m+1,\dots,\frac{n+m}{2}$.
§ 4. Cartan subalgebras
In the previous section, we have seen the importance of the subalgebra kh of sl2 (k). We
look for similar subalgebras in any semisimple Lie algebra.
4.4 Lemma. (i) Let f, g be two endomorphisms of a nonzero vector space V . Let
µ ∈ k be an eigenvalue of f , and let W = {v ∈ V : (f − µI)n (v) = 0 for some n}
be the corresponding generalized eigenspace. (I denotes the identity map.) If there
exists a natural number m > 0 such that (ad f )m (g) = 0, then W is invariant
under g.
(iii) Any toral subalgebra of a semisimple Lie algebra over an algebraically closed field
k of characteristic 0 is abelian.
Proof. For (i), denote by $l_f$ and $r_f$ the left and right multiplication by $f$ in $\operatorname{End}_k(V)$. Then, for any $n > 0$,
$$f^n g = l_f^n(g) = (\operatorname{ad} f + r_f)^n(g) = \sum_{i=0}^n\binom{n}{i}(\operatorname{ad} f)^i(g)\,f^{n-i},$$
and hence, since $\operatorname{ad}(f - \mu I) = \operatorname{ad} f$, we also obtain $(f - \mu I)^n g = \sum_{i=0}^n\binom{n}{i}(\operatorname{ad} f)^i(g)\,(f - \mu I)^{n-i}$.
4.5 Theorem. Let L be a semisimple Lie algebra over an algebraically closed field k of
characteristic 0, and let H be a subalgebra of L. Then H is a Cartan subalgebra of L if
and only if it is a maximal toral subalgebra of L.
Proof. Assume first that H is a Cartan subalgebra of L so, by the previous lemma,
L = ⊕λ∈H ∗ Lλ , where Lλ = {x ∈ L : ∀h ∈ H (ad h − λ(h)I)n (x) = 0 for some n} for any
λ. But then H acts by nilpotent endomorphisms on L0 , and hence on L0 /H. If H 6= L0 ,
Engel’s Theorem shows that there is an element x ∈ L0 \ H such that [h, x] ∈ H for any
h ∈ H, that is, x ∈ NL (H) \ H, a contradiction with H being self-normalizing. Hence
we have $L = H \oplus \bigl(\oplus_{0\ne\lambda\in H^*} L_\lambda\bigr)$.
One checks immediately that [Lλ , Lµ ] ⊆ Lλ+µ and, thus, κ(Lλ , Lµ ) = 0 if λ ≠ −µ,
where κ is the Killing form of L. Since κ is nondegenerate and κ(H, Lλ ) = 0 for any
0 ≠ λ ∈ H ∗ , the restriction of κ to H is nondegenerate too.
Now, H is nilpotent,
and hence solvable. By Proposition 1.11 applied to ad H ⊆
gl(L), κ [H, H], H = 0 and, since κ|H is nondegenerate, we conclude that [H, H] = 0,
that is, H is abelian.
For any x ∈ H, [x, H] = 0 implies that [xs , H] = 0 = [xn , H]. Hence xn ∈ H
and ad xn is nilpotent. Thus, for any y ∈ H, [xn , y] = 0, so adxn ady is a nilpotent
endomorphism of L. This shows that κ(xn , H) = 0 and hence xn = 0. Therefore H is
toral. On the other hand, if H ⊆ S, for a toral subalgebra S of L, then S is abelian, so
[S, H] = 0 and S ⊆ NL (H) = H. Thus, H is a maximal toral subalgebra of L.
Conversely, let T be a maximal toral subalgebra of L. Then T is abelian. Let
{x1 , . . . , xm } be a basis of T . Then ad x1 , . . . , ad xm are commuting diagonalizable en-
domorphisms of L, so they are simultaneously diagonalizable. This shows that L =
⊕λ∈T ∗ Lλ (T ), where T ∗ is the dual vector space to T and Lλ (T ) = {y ∈ L : [t, y] =
λ(t)y ∀t ∈ T }. As before, [Lλ (T ), Lµ (T )] ⊆ Lλ+µ (T ) for any λ, µ ∈ T ∗ and L0 (T ) =
CL (T ) (= {x ∈ L : [x, T ] = 0}), the centralizer of T .
For any x = xs + xn ∈ CL (T ), both xs , xn ∈ CL (T ). Hence T + kxs is a toral
subalgebra. By maximality, xs ∈ T . Then ad x|CL (T ) = ad xn |CL (T ) is nilpotent, so by
Engel’s Theorem, H = CL (T ) is a nilpotent subalgebra. Moreover, for any x ∈ NL (H)
and t ∈ T , [x, t] ∈ [x, H] ⊆ H, so [[x, t], t] = 0 and, since t is semisimple, we get [x, t] = 0,
so x ∈ CL (T ) = H. Thus NL (H) = H and H is a Cartan subalgebra of L. By the first
part of the proof, H is then abelian and toral, and since T ⊆ H and T is a maximal
toral subalgebra, T = H is a Cartan subalgebra of L.
4.6 Corollary. Let L be a semisimple Lie algebra over a field k of characteristic 0 and
let H be a subalgebra of L. Then H is a Cartan subalgebra of L if and only if it is a
maximal subalgebra among the subalgebras which are both abelian and toral.
Proof. The properties of being nilpotent and self normalizing are preserved under ex-
tension of scalars. Thus, if k̄ is an algebraic closure of k and H is nilpotent and self
normalizing, so is k̄ ⊗k H. Hence k̄ ⊗k H is a Cartan subalgebra of k̄ ⊗k L. By the
previous proof, it follows that k̄ ⊗k H is abelian, toral and self centralizing, hence so is
H. But, since H = CL (H), H is not contained in any bigger abelian subalgebra.
Conversely, if H is a subalgebra which is maximal among the subalgebras which are
both abelian and toral, the arguments in the previous proof show that CL (H) is a Cartan
subalgebra of L, and hence abelian and toral and containing H. Hence H = CL (H) is
a Cartan subalgebra.
4.7 Exercises.
(i) Let L = sl(n) be the Lie algebra of n × n trace zero matrices, and let H be
the subalgebra consisting of the diagonal matrices of L. Prove that H is a Cartan
subalgebra of L and that $L = H \oplus \bigl(\oplus_{1\le i\ne j\le n} L_{\epsilon_i-\epsilon_j}(H)\bigr)$, where $\epsilon_i \in H^*$ is the linear
form that takes any diagonal matrix to its ith entry. Also show that $L_{\epsilon_i-\epsilon_j}(H) = kE_{ij}$,
where Eij is the matrix with 1 in the (i, j) position and 0's elsewhere.
(ii) Check that R3 is a Lie algebra under the usual vector cross product. Prove that
it is toral but not abelian.
4.8 Engel subalgebras. There is another approach to Cartan subalgebras with its
own independent interest.
Let L be a Lie algebra over a field k and let x ∈ L. The subalgebra $E_L(x) = \{y \in L : (\operatorname{ad} x)^n(y) = 0$ for some $n \in \mathbb N\}$ (the generalized 0-eigenspace of ad x) is called the Engel subalgebra associated to x.
The arguments in the previous section show that there is a finite set Φ ⊆ H ∗ \ {0}
of nonzero linear forms on H, whose elements are called roots, such that
$$\text{(5.3)}\qquad L = H \oplus \Bigl(\bigoplus_{\alpha\in\Phi} L_\alpha\Bigr),$$
(iii) Φ spans H ∗ .
(v) For any α ∈ H ∗ , let tα ∈ H such that κ(tα , . ) = α ∈ H ∗ . Then for any α ∈ Φ,
xα ∈ Lα and yα ∈ L−α ,
[xα , yα ] = κ(xα , yα )tα .
Proof. Take xα ∈ Lα and yα ∈ L−α such that κ(xα , yα ) = 1. By the previous item
[xα , yα ] = tα . In case α(tα ) = 0, then [tα , xα ] = 0 = [tα , yα ], so S = kxα +ktα +kyα
is a solvable subalgebra of L. By Lie’s Theorem ktα = [S, S] acts nilpotently on
L under the adjoint representation. Hence tα is both semisimple (H is toral) and
nilpotent, hence tα = 0, a contradiction since α 6= 0.
Proof. With $x_\alpha$, $y_\alpha$ and $t_\alpha$ as above, $S = kx_\alpha + kt_\alpha + ky_\alpha$ is isomorphic to $\mathfrak{sl}_2(k)$,
under an isomorphism that takes $h$ to $\frac{2}{\alpha(t_\alpha)}t_\alpha$, $x$ to $x_\alpha$, and $y$ to $\frac{2}{\alpha(t_\alpha)}y_\alpha$.
Now, $V = H \oplus \bigl(\oplus_{0\ne\mu\in k} L_{\mu\alpha}\bigr)$ is a module for $S$ under the adjoint representation,
and hence it is a module for $\mathfrak{sl}_2(k)$ through the isomorphism above. Besides,
$V_0 = \{v \in V : [t_\alpha, v] = 0\}$ coincides with $H$. The eigenvalues taken by $h = \frac{2}{\alpha(t_\alpha)}t_\alpha$
on $L_{\mu\alpha}$ are $\mu\alpha(h) = \frac{2\mu\alpha(t_\alpha)}{\alpha(t_\alpha)} = 2\mu$ and, thus, $\mu \in \frac12\mathbb Z$, since all these eigenvalues are
integers.
integers. On the other hand, ker α is a trivial submodule of V , and S is another
submodule. Hence ker α ⊕ S is a submodule of V which exhausts the eigenspace of
ad h with eigenvalue 0. Hence by Weyl’s Theorem, V is the direct sum of ker α ⊕ S
and a direct sum of irreducible submodules for S in which 0 is not an eigenvalue
for the action of h. We conclude that the only even eigenvalues of the action of h
are 0, 2 and −2, and this shows that 2α ∉ Φ. That is, the double of a root is never
a root. But then $\frac12\alpha$ cannot be a root either, since otherwise $\alpha = 2\bigl(\frac12\alpha\bigr)$ would not
be a root. As a consequence, 1 is not an eigenvalue of the action of h on V , and
hence V = ker α ⊕ S. In particular, Lα = kxα , L−α = kyα and kα ∩ Φ = {±α}.
(viii) For any α ∈ Φ, let $h_\alpha = \frac{2}{\alpha(t_\alpha)}t_\alpha$, which is the unique element h in [Lα , L−α ] = ktα
such that α(h) = 2, and let xα ∈ Lα and yα ∈ L−α such that [xα , yα ] = hα . Then,
for any β ∈ Φ, β(hα ) ∈ Z.
More precisely, consider the Sα -module V = ⊕m∈Z Lβ+mα . The eigenvalues of the
adjoint action of hα on V are {β(hα ) + 2m : m ∈ Z such that Lβ+mα 6= 0}, which
form a chain of integers:
β(hα ) + 2q, β(hα ) + 2(q − 1), . . . , β(hα ) − 2r,
with r, q ∈ Z≥0 and β(hα ) + 2q = −(β(hα ) − 2r). Therefore, β(hα ) = r − q ∈ Z.
The chain (β + qα, . . . , β − rα) is called the α-string through β. It is contained in
Φ ∪ {0}.
5.2 Remark. Since the restriction of κ to H is nondegenerate, it induces a nonde-
generate symmetric bilinear form (. | .) : H ∗ × H ∗ → k, given by (α|β) = κ(tα , tβ )
(where, as before, tα is determined by α = κ(tα , . ) for any α ∈ H ∗ ). Then for any
α, β ∈ Φ, β(tα ) = κ(tβ , tα ) = (β|α). Hence
$$\beta(h_\alpha) = \frac{2(\beta|\alpha)}{(\alpha|\alpha)}.$$
(ix) For any α ∈ Φ, consider the linear map $\sigma_\alpha : H^* \to H^*$, $\beta \mapsto \beta - \frac{2(\beta|\alpha)}{(\alpha|\alpha)}\alpha$. (This
is the reflection through α, since σα (α) = −α and if β is orthogonal to α, that is,
(β|α) = 0, then σα (β) = β. Hence σα² = 1.)
Then σα (Φ) ⊆ Φ. In particular, the group W generated by {σα : α ∈ Φ} is a finite
subgroup of GL(H ∗ ), which is called the Weyl group.
and this gives a system of linear equations on the µj ’s with a regular integral
matrix. Solving by Cramer's rule, one gets that the µj 's belong to Q.
$$(\beta|\beta) = \kappa(t_\beta, t_\beta) = \operatorname{trace}\bigl((\operatorname{ad} t_\beta)^2\bigr) = \sum_{\alpha\in\Phi}\alpha(t_\beta)^2 = \frac{(\beta|\beta)^2}{4}\sum_{\alpha\in\Phi}\alpha(h_\beta)^2,$$
and, therefore,
$$(\beta|\beta) = \frac{4}{\sum_{\alpha\in\Phi}\alpha(h_\beta)^2} \in \mathbb Q_{>0}.$$
Besides (β|β) = 0 if and only if α(tβ ) = 0 for any α ∈ Φ, if and only if tβ = 0 since
Φ spans H ∗ , if and only if β = 0.
(R3) For any α ∈ Φ, the reflection on the hyperplane (Rα)⊥ leaves Φ invariant (i.e., for
any α, β ∈ Φ, σα (β) ∈ Φ).
(R4) For any α, β ∈ Φ, $\langle\beta|\alpha\rangle = \dfrac{2(\beta|\alpha)}{(\alpha|\alpha)} \in \mathbb Z$.
(ii) ∆ is a basis of E.
(v) If ν 0 ∈ E is a vector such that (ν 0 |α) 6= 0 for any α ∈ Φ and ∆0 is the associated
system of simple roots, then there is an element σ ∈ W such that σ(∆) = ∆0 .
are strictly lower than (ν|α). Now, we proceed in the same way with β and γ. They are
either simple or a sum of “smaller” positive roots. Eventually we end up showing that
α is a sum of simple roots, which gives (iii).
In particular, this shows that ∆ spans E. Assume that ∆ were not a basis; then
there would exist disjoint nonempty subsets $I, J \subseteq \{1,\dots,n\}$ and positive scalars $\mu_i$, $\mu_j$
such that $\sum_{i\in I}\mu_i\alpha_i = \sum_{j\in J}\mu_j\alpha_j$. Let $\gamma = \sum_{i\in I}\mu_i\alpha_i = \sum_{j\in J}\mu_j\alpha_j$. Then $0 \le (\gamma|\gamma) = \sum_{i\in I,\,j\in J}\mu_i\mu_j(\alpha_i|\alpha_j) \le 0$ (because of (i)). Thus γ = 0, but this would imply that $0 < \sum_{i\in I}\mu_i(\nu|\alpha_i) = (\nu|\gamma) = 0$, a contradiction that proves (ii).
In order to prove (iv), we may assume that α = α1 . Let α ≠ β ∈ Φ+ ; then (iii)
shows that $\beta = \sum_{i=1}^n m_i\alpha_i$, with $m_i \in \mathbb Z_{\ge 0}$ for any i. Since β ≠ α, there is a j ≥ 2 such
that mj > 0. Then σα (β) = β − ⟨β|α⟩α = (m1 − ⟨β|α⟩)α1 + m2 α2 + · · · + mn αn ∈ Φ, and
one of the coefficients, mj , is > 0. Hence α ≠ σα (β) ∉ Φ− , so that σα (β) ∈ Φ+ \ {α}.
Finally, let us prove (v). We know that Φ = Φ+ ∪ Φ− = Φ′+ ∪ Φ′− (with obvious
notation). Let $\rho = \frac12\sum_{\alpha\in\Phi^+}\alpha$ (which is called the Weyl vector), and let σ ∈ W such
that (σ(ν′)|ρ) is maximal. Then, for any α ∈ ∆:
$$(\sigma(\nu')|\rho) \ge (\sigma_\alpha\sigma(\nu')|\rho) = \bigl(\sigma(\nu')\,\big|\,\sigma_\alpha(\rho)\bigr)\quad\text{(since $\sigma_\alpha^2 = 1$ and $\sigma_\alpha\in O(E)$)}$$
This matrix Ĉ is symmetric and receives the name of Coxeter matrix of the root
system. It is nothing else but the coordinate matrix of the inner product ( | ) in
the basis $\{\hat\alpha_1,\dots,\hat\alpha_n\}$ with $\hat\alpha_i = \frac{\sqrt 2}{\sqrt{(\alpha_i|\alpha_i)}}\,\alpha_i$. Note that det C = det Ĉ.
6.2 Exercise. What are the possible Cartan and Coxeter matrices for n = 2?
Here ∆ = {α, β}, and you may assume that (α|α) ≤ (β|β).
• The Dynkin diagram of Φ, which is the graph which consists of a node for each
simple root α. The nodes associated to α 6= β ∈ ∆ are connected by Nαβ (=
0, 1, 2 or 3) arcs. Moreover, if Nαβ = 2 or 3, then α and β have different length
and an arrow is put pointing from the long to the short root. For instance,
$$C = \begin{pmatrix} 2 & -1 & 0 & 0\\ -1 & 2 & -2 & 0\\ 0 & -1 & 2 & -1\\ 0 & 0 & -1 & 2\end{pmatrix} \;\longmapsto\; \circ\text{—}\circ > \circ\text{—}\circ \quad (\text{nodes } \alpha_1,\ \alpha_2,\ \alpha_3,\ \alpha_4)$$
• The Coxeter graph is the graph obtained by omitting the arrows in the Dynkin
diagram.
In our previous example it is ◦ ◦ ◦ ◦.
Because of item (v) in Proposition 6.1, these objects depend only on Φ and not on
∆, up to the same permutation of rows and columns in C and up to the numbering of
the vertices in the graphs.
6.3 Theorem.
(a) A root system Φ is irreducible if and only if its Dynkin diagram (or Coxeter graph)
is connected.
(b) Let L be a semisimple Lie algebra over an algebraically closed field k of character-
istic 0. Let H be a Cartan subalgebra of L and let Φ be the associated root system.
Then Φ is irreducible if and only if L is simple.
the coefficients of σ(β) are also either all nonnegative or all nonpositive, we conclude
that either m1 = · · · = mr = 0 or mr+1 = · · · = mn = 0, that is, either β ∈ Φ1 or
β ∈ Φ2 .
For (b), assume first that Φ is reducible, so Φ = Φ1 ∪ Φ2 with (Φ1 |Φ2 ) = 0 and
Φ1 ≠ ∅ ≠ Φ2 . Then the subspace $\sum_{\alpha\in\Phi_1^+}\bigl(L_\alpha + L_{-\alpha} + [L_\alpha, L_{-\alpha}]\bigr)$ is a proper ideal of L,
since
$$[L_\alpha, L_\beta]\ \begin{cases} = 0 & \text{if } \alpha+\beta \notin \Phi\cup\{0\},\ \text{in particular if } \alpha\in\Phi_1 \text{ and } \beta\in\Phi_2,\\ \subseteq L_{\alpha+\beta} & \text{otherwise.}\end{cases}$$
6.4 Remark. The proof above shows that the decomposition of the semisimple Lie
algebra L into a direct sum of simple ideals gives the decomposition of its root system
Φ into an orthogonal sum of irreducible root systems.
6.5 Theorem. The connected Dynkin diagrams are exactly the following:

(An ) ◦—◦—◦— ··· —◦—◦ , n ≥ 1.

(Bn ) ◦—◦— ··· —◦—◦ > ◦ , n ≥ 2.

(Cn ) ◦—◦— ··· —◦—◦ < ◦ , n ≥ 3.

(Dn ) a chain of n − 2 nodes with two further nodes attached to its last node, n ≥ 4.

(E6 ), (E7 ), (E8 ) a chain of 5, 6 or 7 nodes, respectively, with one further node attached to the third node of the chain.

(F4 ) ◦—◦ > ◦—◦.

(G2 ) ◦ < ◦.
Most of the remainder of this section will be devoted to the proof of this Theorem.
First, it will be shown that the 'irreducible Coxeter graphs' are the ones corresponding to (An ), (Bn = Cn ), (Dn ), (E6,7,8 ), (F4 ) and (G2 ). Any Coxeter graph determines
the symmetric matrix $(a_{ij})$ with $a_{ii} = 2$ and $a_{ij} = -\sqrt{N_{ij}}$ for $i \ne j$, where $N_{ij}$ is the
number of lines joining the vertices i and j. We know that this matrix is the coordinate
matrix of a positive definite quadratic form on a real vector space.
Any graph formed by nodes and lines connecting these nodes will be called a ‘Coxeter
type graph’. For each such graph we will take the symmetric matrix aij defined as
before and the associated quadratic form on Rn , which may fail to be positive definite,
such that q(ei , ej ) = aij , where {e1 , . . . , en } denotes the canonical basis of Rn .
6.6 Lemma. Let V be a real vector space with a basis $\{v_1,\dots,v_n\}$ and a positive definite
quadratic form $q : V \to \mathbb R$ such that $q(v_i, v_j) \le 0$ for any $i \ne j$, and $q(v_1, v_2) < 0$. (Here
$q(v, w) = \frac12\bigl(q(v+w) - q(v) - q(w)\bigr)$ gives the associated symmetric bilinear form.)
Let $\tilde q : V \to \mathbb R$ be a quadratic form such that its associated symmetric bilinear form
satisfies $\tilde q(v_i, v_j) = q(v_i, v_j)$ for any $(i, j) \ne (1, 2)$, $i \le j$, and $0 \ge \tilde q(v_1, v_2) > q(v_1, v_2)$.
Then $\tilde q$ is positive definite too and $\det\tilde q > \det q$ (where det denotes the determinant of
the quadratic form in any fixed basis).
Proof. We apply a Gram-Schmidt process to obtain a new suitable basis of Rv2 +· · ·+Rvn
as follows:
$$\begin{aligned}
w_n &= v_n\\
w_{n-1} &= v_{n-1} + \lambda_{n-1,n}w_n\\
&\ \ \vdots\\
w_2 &= v_2 + \lambda_{2,3}w_3 + \cdots + \lambda_{2,n}w_n
\end{aligned}$$
where the λ’s are determined by imposing that q(wi , wj ) = 0 for any i > j ≥ 2.
Note that q(wi , wj ) = q̃(wi , wj ) for any i > j ≥ 2, and that this process gives that
λi,j ≥ 0 for any 2 ≤ i < j ≤ n and q(vi , wj ) ≤ 0 for any 1 ≤ i < j ≤ n. Now
take w1 = v1 + λ1,3 w3 + · · · + λ1,n wn , and determine the coefficients by imposing that
q(w1 , wi ) = 0 for any i ≥ 3. Then q(w1 , w2 ) = q(v1 , w2 ) ≤ q(v1 , v2 ) < 0, q̃(w1 , w2 ) =
q̃(v1 , w2 ) ≤ q̃(v1 , v2 ) ≤ 0, and 0 ≥ q̃(w1 , w2 ) > q(w1 , w2 ).
In the basis $\{w_1,\dots,w_n\}$, the coordinate matrices of $q$ and $\tilde q$ present the form
$$\begin{pmatrix} \alpha_1 & \beta & 0 & \cdots & 0\\ \beta & \alpha_2 & 0 & \cdots & 0\\ 0 & 0 & \alpha_3 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & \alpha_n\end{pmatrix}\quad\text{and}\quad\begin{pmatrix} \alpha_1 & \tilde\beta & 0 & \cdots & 0\\ \tilde\beta & \alpha_2 & 0 & \cdots & 0\\ 0 & 0 & \alpha_3 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & \alpha_n\end{pmatrix}$$
with $0 \ge \tilde\beta > \beta$. Since $q$ is positive definite, $\alpha_i \ge 0$ for any $i$ and $\alpha_1\alpha_2 - \beta^2 > 0$. Hence
$\alpha_1\alpha_2 - \tilde\beta^2 > \alpha_1\alpha_2 - \beta^2 > 0$ and the result follows.
Note that by suppressing a line connecting nodes i and j in a Coxeter type graph,
with associated quadratic form q, the quadratic form q̃ associated to the new graph
obtained differs only in that 0 > q̃(ei , ej ) > q(ei , ej ). Hence the previous Lemma imme-
diately implies the following result:
6.7 Corollary. If some lines connecting two nodes on a Coxeter type graph with positive
definite quadratic form are suppressed, then the new graph obtained is a new Coxeter
type graph with positive definite quadratic form.
Let us compute now the matrices associated to some Coxeter type graphs, as well
as their determinants.
An (n ≥ 1): ◦—◦— ··· —◦—◦. Here the associated matrix is
$$M_{A_n} = \begin{pmatrix} 2 & -1 & 0 & \cdots & 0 & 0\\ -1 & 2 & -1 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 2 & -1\\ 0 & 0 & 0 & \cdots & -1 & 2\end{pmatrix}$$
whose determinant can be computed recursively by expanding along the first row:
$\det M_{A_n} = 2\det M_{A_{n-1}} - \det M_{A_{n-2}}$, obtaining that $\det M_{A_n} = n + 1$ for any $n \ge 1$.
Bn = Cn (n ≥ 2): ◦—◦— ··· —◦═◦ (a double link at one end). Here
$$M_{B_n} = \begin{pmatrix} 2 & -1 & 0 & \cdots & 0 & 0\\ -1 & 2 & -1 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 2 & -\sqrt2\\ 0 & 0 & 0 & \cdots & -\sqrt2 & 2\end{pmatrix}$$
and, by expanding along the last row, $\det M_{B_n} = 2\det M_{A_{n-1}} - 2\det M_{A_{n-2}}$, so
that $\det M_{B_n} = 2$.
Dn (n ≥ 4): a chain of n − 2 nodes with two further nodes attached to its last node. The associated matrix is
$$M_{D_n} = \begin{pmatrix} 2 & -1 & 0 & \cdots & 0 & 0 & 0\\ -1 & 2 & -1 & \cdots & 0 & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 2 & -1 & -1\\ 0 & 0 & 0 & \cdots & -1 & 2 & 0\\ 0 & 0 & 0 & \cdots & -1 & 0 & 2\end{pmatrix}$$
so that $\det M_{D_4} = 4 = \det M_{D_5}$ and, by expanding along the first row, $\det M_{D_n} = 2\det M_{D_{n-1}} - \det M_{D_{n-2}}$. Hence $\det M_{D_n} = 4$ for any $n \ge 4$.
E6 : a chain of five nodes with one further node attached to the middle one. Here $\det M_{E_6} = 2\det M_{D_5} - \det M_{A_4} = 8 - 5 = 3$ (expansion along the row corresponding to the leftmost node).
E7 : a chain of six nodes with one further node attached to the third one. Here $\det M_{E_7} = 2\det M_{E_6} - \det M_{D_5} = 6 - 4 = 2$.
E8 : a chain of seven nodes with one further node attached to the third one. Here $\det M_{E_8} = 2\det M_{E_7} - \det M_{E_6} = 4 - 3 = 1$.
F4 ◦—◦═◦—◦. Here $\det M_{F_4} = 2\det M_{B_3} - \det M_{A_2} = 4 - 3 = 1$.
G2 ◦≡◦. Here $\det M_{G_2} = \det\begin{pmatrix} 2 & -\sqrt3\\ -\sqrt3 & 2\end{pmatrix} = 1$.
Ãn (a cycle of n + 1 nodes, n ≥ 2). Then
$$M_{\tilde A_n} = \begin{pmatrix} 2 & -1 & 0 & \cdots & 0 & -1\\ -1 & 2 & -1 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 2 & -1\\ -1 & 0 & 0 & \cdots & -1 & 2\end{pmatrix}$$
so the sum of the rows is the zero row. Hence $\det M_{\tilde A_n} = 0$.
B̃n (n + 1 nodes, n ≥ 3): a double link at one end and a fork of two nodes at the other. Let us number the nodes so that
the two fork nodes are nodes 1 and 2, and node 3 is connected to both of them.
Then we may expand $\det M_{\tilde B_n} = 2\det M_{B_n} - \det M_{A_1}\det M_{B_{n-1}} = 4 - 4 = 0$. (For
n = 3, $\det M_{\tilde B_3} = 2\det M_{B_2} - \det M_{A_1}^2 = 4 - 4 = 0$.)
D̃n (n + 1 nodes, n ≥ 4): a chain with a fork of two nodes at each end. Here
$$\det M_{\tilde D_n} = \begin{cases} 2\det M_{D_4} - \det M_{A_1}^3 = 8 - 8 = 0, & \text{if } n = 4,\\ 2\det M_{D_5} - \det M_{A_1}\det M_{A_3} = 8 - 8 = 0, & \text{if } n = 5,\\ 2\det M_{D_n} - \det M_{A_1}\det M_{D_{n-2}} = 8 - 8 = 0, & \text{otherwise.}\end{cases}$$
Ẽ6 : a chain of five nodes with a chain of two further nodes attached to its middle node. Here $\det M_{\tilde E_6} = 2\det M_{E_6} - \det M_{A_5} = 6 - 6 = 0$.
Ẽ7 : a chain of seven nodes with one further node attached to the middle one. Here $\det M_{\tilde E_7} = 2\det M_{E_7} - \det M_{D_6} = 4 - 4 = 0$.
Ẽ8 : a chain of eight nodes with one further node attached to the third one. Here $\det M_{\tilde E_8} = 2\det M_{E_8} - \det M_{E_7} = 2 - 2 = 0$.
F̃4 ◦—◦—◦═◦—◦ (five nodes). Here $\det M_{\tilde F_4} = 2\det M_{F_4} - \det M_{B_3} = 2 - 2 = 0$.
Now, if G is a connected Coxeter graph and we suppress some of its nodes (and
the lines connecting them), a new Coxeter type graph with positive definite associated
quadratic form is obtained. The same happens, because of the previous Lemma 6.6, if
only some lines are suppressed. The new graphs thus obtained will be called subgraphs.
• If G contains a cycle, then it has a subgraph (isomorphic to) Ãn , and this is a
contradiction since det MÃn = 0, so its quadratic form is not positive definite.
• If G contains a node which is connected to four different nodes, then it contains a
subgraph of type D̃4 , a contradiction.
• If G contains a couple of nodes (called ‘triple nodes’) connected to three other
nodes, then it contains a subgraph of type D̃n , a contradiction again.
• If G contains two couples of nodes connected by at least two lines, then it contains
a subgraph of type C̃n , which is impossible.
• If G contains a triple node and two nodes connected by at least two lines, then it
contains a subgraph of type B̃n .
• If G contains a 'triple link', then either it is isomorphic to G2 or it contains a subgraph
of type G̃2 ; this latter possibility gives a contradiction.
• If G contains a 'double link' which is not at an end of the graph, then either G is
isomorphic to F4 or it contains a subgraph of type F̃4 , which is impossible.
• If G contains a ‘double link’ at one extreme, then the Coxeter graph is Bn = Cn .
• Finally, if G contains only simple links, then it is either An or it contains a unique
triple node. Hence it has the form:
a tree consisting of three chains, of lengths p, q and r, attached to a unique triple node.
Therefore, the only possible connected Coxeter graphs are those in Theorem 6.5.
What remains is to show that for each Dynkin diagram (A)–(G), there
indeed exists an irreducible root system with this Dynkin diagram.
For types (A)–(D) we will prove a stronger statement, since we will show that there
are simple Lie algebras, over an algebraically closed field of characteristic 0, such that
the Dynkin diagrams of their root systems relative to a Cartan subalgebra and a set
of simple roots are precisely the Dynkin diagrams of types (A)–(D).
(An ) Let L = sln+1 (k) be the Lie algebra of (n + 1) × (n + 1) trace zero matrices. Let H
be the subspace of diagonal matrices in L, which is an abelian subalgebra, and let
εi : H → k be the linear form such that εi (diag(α1 , . . . , αn+1 )) = αi , i = 1, . . . , n + 1.
Then ε1 + · · · + εn+1 = 0. Moreover,
$$\text{(6.4)}\qquad L = H \oplus \Bigl(\bigoplus_{1\le i\ne j\le n+1} kE_{ij}\Bigr),$$
where Eij is the matrix with a 1 in the (i, j) entry, and 0's elsewhere. Since
[h, Eij ] = (εi − εj )(h)Eij for any i ≠ j, it follows that H is toral and a Cartan
subalgebra of L. It also follows easily that L is simple (using that any ideal is
invariant under the adjoint action of H) and that the set of roots of L relative to
H is
$$\Phi = \{\epsilon_i - \epsilon_j : 1 \le i \ne j \le n+1\}.$$
The restriction of the Killing form to H is determined by
$$\text{(6.5)}\qquad \kappa(h, h) = \sum_{1\le i\ne j\le n+1}(\alpha_i - \alpha_j)^2 = 2\sum_{1\le i<j\le n+1}(\alpha_i^2 + \alpha_j^2 - 2\alpha_i\alpha_j) = 2(n+1)\sum_{1\le i\le n+1}\alpha_i^2 = 2(n+1)\operatorname{trace}(h^2)$$
for h = diag(α1 , . . . , αn+1 ) ∈ H (using that α1 + · · · + αn+1 = 0). Hence
$$\bigl(\epsilon_i - \epsilon_j\,\big|\,\epsilon_h - \epsilon_k\bigr) = (\epsilon_i - \epsilon_j)\bigl(t_{\epsilon_h - \epsilon_k}\bigr) = \frac{1}{2(n+1)}\bigl(\delta_{ih} - \delta_{jh} - \delta_{ik} + \delta_{jk}\bigr),$$
where δij is the Kronecker symbol. Thus we get the euclidean vector space E =
R ⊗Q QΦ and can take the vector ν = nε1 + (n − 2)ε2 + · · · + (−n)εn+1 = n(ε1 −
εn+1 ) + (n − 2)(ε2 − εn ) + · · · ∈ E, which satisfies (ν | εi − εj ) > 0 if and only if i < j.
For this ν we obtain the set of positive roots Φ+ = {εi − εj : 1 ≤ i < j ≤ n + 1} and
the system of simple roots ∆ = {ε1 − ε2 , ε2 − ε3 , . . . , εn − εn+1 }. The corresponding
Dynkin diagram is (An ).
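A quick numerical sanity check of (6.5) (an added sketch; the rank and the random entries are arbitrary): the matrices E_ij are eigenvectors of ad h, so κ(h, h) reduces to the sum of the squared root values, which indeed equals 2(n + 1) trace(h²).

```python
import numpy as np

n = 3                                   # work in sl_{n+1} = sl_4
rng = np.random.default_rng(3)
alphas = rng.standard_normal(n + 1)
alphas -= alphas.mean()                 # make h trace zero
h = np.diag(alphas)

# E_ij (i != j) is an eigenvector of ad h with eigenvalue alpha_i - alpha_j
i, j = 0, 2
E = np.zeros((n + 1, n + 1)); E[i, j] = 1.0
assert np.allclose(h @ E - E @ h, (alphas[i] - alphas[j]) * E)

# Hence kappa(h, h) = trace((ad h)^2) = sum_{i != j} (alpha_i - alpha_j)^2,
# which equals 2(n+1) trace(h^2) as in (6.5).
kappa = sum((alphas[p] - alphas[q]) ** 2
            for p in range(n + 1) for q in range(n + 1) if p != q)
assert np.isclose(kappa, 2 * (n + 1) * np.trace(h @ h))
```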
(Bn ) Let
$$L = \mathfrak{so}_{2n+1}(k) = \Bigl\{X\in\mathfrak{gl}_{2n+1}(k) : X^t\begin{pmatrix}1&0&0\\0&0&I_n\\0&I_n&0\end{pmatrix} + \begin{pmatrix}1&0&0\\0&0&I_n\\0&I_n&0\end{pmatrix}X = 0\Bigr\}$$
$$\text{(6.6)}\qquad\ \ = \left\{\begin{pmatrix}0 & -b^t & -a^t\\ a & A & B\\ b & C & -A^t\end{pmatrix} : a, b\in\operatorname{Mat}_{n\times 1}(k),\ A, B, C\in\operatorname{Mat}_n(k),\ B^t = -B,\ C^t = -C\right\}$$
where In denotes the identity n × n matrix. Number the rows and columns of
these matrices as 0, 1, . . . , n, 1̄, . . . , n̄ and consider the subalgebra H consisting
again of the diagonal matrices on L: H = {diag(0, α1 , . . . , αn , −α1 , . . . , −αn ) : α1 , . . . , αn ∈ k}.
As in the previous case, H is a Cartan subalgebra and the restriction of the Killing form to it satisfies κ(h, h) = (2n − 1) trace(h²).
Therefore, $t_{\epsilon_i} = \frac{1}{2(2n-1)}(E_{ii} - E_{\bar i\bar i})$ and $(\epsilon_i|\epsilon_j) = \epsilon_i(t_{\epsilon_j}) = \frac{1}{2(2n-1)}\delta_{ij}$. We can take
the element ν = nε1 + (n − 1)ε2 + · · · + εn , whose inner product with any root is
never 0 and gives Φ+ = {εi , εi ± εj : 1 ≤ i < j ≤ n} and system of simple roots
∆ = {ε1 − ε2 , ε2 − ε3 , . . . , εn−1 − εn , εn }. The associated Dynkin diagram is (Bn ).
6.8 Exercise. Prove that so3 (k) is isomorphic to sl2 (k). (k being algebraically
closed.)
(Cn ) Let
$$L = \mathfrak{sp}_{2n}(k) = \Bigl\{X\in\mathfrak{gl}_{2n}(k) : X^t\begin{pmatrix}0&I_n\\-I_n&0\end{pmatrix} + \begin{pmatrix}0&I_n\\-I_n&0\end{pmatrix}X = 0\Bigr\}$$
$$\text{(6.8)}\qquad\ \ = \left\{\begin{pmatrix}A & B\\ C & -A^t\end{pmatrix} : A, B, C\in\operatorname{Mat}_n(k),\ B^t = B,\ C^t = C\right\},$$
where n ≥ 2 (for n = 1 we get sp2 (k) = sl2 (k)). Number the rows and columns
as 1, . . . , n, 1̄, . . . , n̄. As before, the subspace H of diagonal matrices is a Cartan
subalgebra, with set of roots Φ = {±2εi (1 ≤ i ≤ n), ±εi ± εj (1 ≤ i < j ≤ n)}. Here
κ(h, h) = 2(n + 1) trace(h²) for h ∈ H, $t_{\epsilon_i} = \frac{1}{4(n+1)}(E_{ii} - E_{\bar i\bar i})$ and $(\epsilon_i|\epsilon_j) = \frac{1}{4(n+1)}\delta_{ij}$. Besides, we can take ν = nε1 + (n − 1)ε2 + · · · + εn ,
which gives Φ+ = {2εi , εi ± εj : 1 ≤ i < j ≤ n} and ∆ = {ε1 − ε2 , . . . , εn−1 − εn , 2εn },
whose associated Dynkin diagram is (Cn ).
(Dn ) Let
$$L = \mathfrak{so}_{2n}(k) = \Bigl\{X\in\mathfrak{gl}_{2n}(k) : X^t\begin{pmatrix}0&I_n\\I_n&0\end{pmatrix} + \begin{pmatrix}0&I_n\\I_n&0\end{pmatrix}X = 0\Bigr\}$$
$$\text{(6.10)}\qquad\ \ = \left\{\begin{pmatrix}A & B\\ C & -A^t\end{pmatrix} : A, B, C\in\operatorname{Mat}_n(k),\ B^t = -B,\ C^t = -C\right\}.$$
As before, the subspace H of diagonal matrices is a Cartan subalgebra; here
$t_{\epsilon_i} = \frac{1}{4(n-1)}(E_{ii} - E_{\bar i\bar i})$ and $(\epsilon_i|\epsilon_j) = \frac{1}{4(n-1)}\delta_{ij}$. Also, we can take ν = nε1 + (n − 1)ε2 +
· · · + εn , which gives Φ+ = {εi ± εj : 1 ≤ i < j ≤ n} and ∆ = {ε1 − ε2 , . . . , εn−1 −
εn , εn−1 + εn }, whose associated Dynkin diagram is (Dn ).
The remaining Dynkin diagrams correspond to the so called exceptional simple Lie
algebras, whose description is more involved. Hence, we will proceed in a different way:
(E8 ) Let E = R⁸ with the canonical inner product ( · | · ) and canonical orthonormal
basis {e1 , . . . , e8 }. Take $e_0 = \frac12(e_1 + \cdots + e_8)$ and $Q = \{m_0e_0 + \sum_{i=1}^8 m_ie_i : m_i \in \mathbb Z\ \forall i,\ \sum_{i=1}^8 m_i \in 2\mathbb Z\}$, which is an additive subgroup of R⁸. Consider the set
Φ = {v ∈ Q : (v|v) = 2}.
$$\Delta = \bigl\{\alpha_1 = \tfrac12(e_1 - e_2 - e_3 - e_4 - e_5 - e_6 - e_7 + e_8),\ \alpha_2 = e_1 + e_2,\ \alpha_3 = e_2 - e_1,\ \alpha_4 = e_3 - e_2,\ \alpha_5 = e_4 - e_3,\ \alpha_6 = e_5 - e_4,\ \alpha_7 = e_6 - e_5,\ \alpha_8 = e_7 - e_6\bigr\}$$
with associated Dynkin diagram of type (E8 ): the chain α1 —α3 —α4 —α5 —α6 —α7 —α8 with α2 attached to α4 .
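Enumerating the elements of Q of norm 2 directly from this definition is a finite computation; the added sketch below (exact arithmetic with Fractions; the membership test just solves v = m0·e0 + Σ mi·ei for m0 ∈ {0, 1}) finds the well-known count of 240 roots of E8.

```python
from fractions import Fraction
from itertools import product

half = Fraction(1, 2)
coords = [Fraction(k, 2) for k in range(-2, 3)]   # each coordinate lies in {-1, -1/2, 0, 1/2, 1}

def in_Q(v):
    # v = m0*e0 + sum m_i e_i with the m_i integers and sum(m_i) even;
    # either all coordinates are integers (take m0 = 0) or all lie in 1/2 + Z (take m0 = 1).
    if all(c.denominator == 1 for c in v):
        m = list(v)
    elif all((c - half).denominator == 1 for c in v):
        m = [c - half for c in v]
    else:
        return False
    return int(sum(m)) % 2 == 0

roots = [v for v in product(coords, repeat=8)
         if sum(c * c for c in v) == 2 and in_Q(v)]
print(len(roots))   # 240
```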
(E7 ) and (E6 ) These are obtained as the ‘root subsystems’ of (E8 ) generated by ∆ \
{α8 } and ∆ \ {α7 , α8 } above.
(F4 ) Here consider the euclidean vector space E = R⁴, $e_0 = \frac12(e_1 + e_2 + e_3 + e_4)$,
$Q = \{m_0e_0 + \sum_{i=1}^4 m_ie_i : m_i \in \mathbb Z\}$, and Φ = {v ∈ Q : (v|v) = 1 or (v|v) = 2}.
This finishes the classification of the connected Dynkin diagrams. To obtain from
this classification a classification of the root systems, it is enough to check that any root
system is determined by its Dynkin diagram.
6.9 Definition. Let Φi be a root system in the euclidean space Ei , i = 1, 2, and let
ϕ : E1 → E2 be a linear map. Then ϕ is said to be a root system isomorphism between
Φ1 and Φ2 if ϕ(Φ1 ) = Φ2 and for any α, β ∈ Φ1 , hϕ(α)|ϕ(β)i = hα|βi.
6.10 Exercise. Prove that if ϕ is a root system isomorphism between the irreducible
root systems Φ1 and Φ2 , then ϕ is a similarity of multiplier $\frac{(\varphi(\alpha)|\varphi(\alpha))}{(\alpha|\alpha)}$ for a fixed
α ∈ Φ1 .
The next result is already known for roots that appear inside the semisimple Lie
algebras over algebraically closed fields of characteristic 0, because of the representation
theory of sl2 (k).
6.11 Lemma. Let Φ be a root system, α, β ∈ Φ two roots such that β 6= ±α, let
r = max{i ∈ Z≥0 : β − iα ∈ Φ} and q = max{i ∈ Z≥0 : β + iα ∈ Φ}. Then hβ|αi = r − q,
r + q ≤ 3 and all the elements in the chain β − rα, β − (r − 1)α, . . . , β, . . . , β + qα belong
to Φ (this is called the α-chain of β).
6.12 Theorem. Each Dynkin diagram determines a unique (up to isomorphism) root
system.
Proof. First note that it is enough to assume that the Dynkin diagram is connected,
and we will do so.
Let ∆ be the set of nodes of the Dynkin diagram and fix arbitrarily the length of a
‘short node’. Then the diagram determines the inner product on E = RΦ = R∆. This
is better seen with an example. Take, for instance the Dynkin diagram (F4 ), so we have
∆ = {α1 , α2 , α3 , α4 }, with
α1 α2 α3 α4
◦ ◦>◦ ◦
Fix, for simplicity, (α3 |α3 ) = 2 = (α4 |α4 ). Then
2(α3 |α4 )
• −1 = hα3 |α4 i = , so (α3 |α4 ) = −1,
(α4 |α4 )
§ 7. CLASSIFICATION OF THE SEMISIMPLE LIE ALGEBRAS 51
2(α2 |α3 )
• −2 = hα2 |α3 i = , so (α2 |α3 ) = −2.
(α3 |α3 )
2(α3 |α2 )
• −1 = hα3 |α2 i = , so (α2 |α2 ) = 4 = (α1 |α1 ).
(α2 |α2 )
• −1 = hα1 |α2 i, so (α1 |α2 ) = −1.
6.13 Remark. Actually, the proof of this Theorem gives an algorithm to obtain a root
system Φ, starting with its Dynkin diagram.
6.14 Exercise. Use this algorithm to obtain the root system associated to the Dynkin
diagram (G2 ).
6.15 Exercise. Let Φ a root system and let ∆ = {α1 , . . . , αn } be a system of simple
roots of Φ. Let α = m1 α1 +· · ·+mn αn be a positive root ofmaximal height and consider
∆1 = {αi : mi 6= 0} and ∆2 = ∆ \ ∆1 . Prove that ∆1 |∆2 = 0.
In particular, if Φ is irreducible this shows that α “involves” all the simple roots (∆ =
∆1 ).
Let L = H ⊕ ⊕α∈Φ Lα be the root space decomposition of a semisimple Lie algebra
over k, relative to a Cartan subalgebra H. We want to prove that the multiplication in
L is determined by Φ.
For any α ∈ Φ, there are elements xα ∈ Lα , yα ∈ L−α such that [xα , yα ] = hα ,
with α(hα ) = 2. Besides, Lα = kxα , L−α = kyα and Sα = Lα ⊕ L−α ⊕ [Lα , L−α ] =
kxα ⊕ kyα ⊕ khα is a subalgebra isomorphic to sl2 (k). Also, for any β ∈ Φ \ {±α}, recall
that the α-chain of β consists of roots β − rα, . . . , β, . . . , β + qα, where hβ|αi = r − q.
7.1 Lemma. Under the hypotheses above, let α, β ∈ Φ with α + β ∈ Φ, then [Lα , Lβ ] =
Lα+β . Moreover, for any x ∈ Lβ ,
(
[yα , [xα , x]] = q(r + 1)x,
[xα , [yα , x]] = r(q + 1)x.
Proof. This is a straightforward consequence of the representation theory of sl2 (k), since
⊕qi=−r Lβ+iα is a module for Sα ∼ = sl2 (k). Hence, there are elements vi ∈ Lβ+(q−i)α , i =
0, . . . , r +q, such that [yα , vi ] = vi+1 , [xα , vi ] = i(r +q +1−i)vi−1 , with v−1 = vr+q+1 = 0
(see the proof of Theorem 3.2); whence the result.
7.2 Lemma. For any α ∈ Φ+ , let J = Jα = (j1 , . . . , jr ) be another sequence such that
α = αj1 + · · · + αjr , and let xJ = ad xjr · · · ad xj2 xj1 ) and yJ = ad yjr · · · ad yj2 yj1 ).
Then there are rational numbers q, q 0 ∈ Q, determined by Φ, such that xJ = qxα ,
yJ = q 0 yα .
Proof. Since xJ ∈ Lα , the previous Lemma shows that xJ = q1 [xir , [yir , xJ ]], for some
q1 ∈ Q which depends on Φ. Let s be the largest integer with js = ir , then
7.3 Proposition. The product of any two elements in B is a rational multiple of another
element of B, determined by Φ, with the exception of the products [xα , yα ], which are
linear combinations of the hi ’s, with rational coefficients determined by Φ.
§ 7. CLASSIFICATION OF THE SEMISIMPLE LIE ALGEBRAS 53
Proof. First note that [hi , hj ] = 0, [hi , xα ] = α(hi )xα = hα|αi ixα and [hi , yα ] =
−hα|αi iyα , are all determined by Φ.
Consider now α, β ∈ Φ+ , and the corresponding fixed sequences Iα = (i1 , . . . , ir ),
Iβ = (j1 , . . . , js ).
To deal with the product [xα , xβ ], let us argue by induction on r. If r = 1, [xα , xβ ] =
0 if α + β 6∈ Φ, while [xα , xβ ] = qxα+β for some q ∈ Q determined by Φ by the
previous Lemma. On the other hand, if r > 1 and Iα0 = (i1 , . . . , ir−1 ), then [xα , xβ ] =
[[xir , xIα0 ], xβ ] = [xir , [xIα0 , xβ ]] − [xIα0 , [xir , xβ ]] and now the induction hypothesis and the
previous Lemma yield the result. The same arguments apply to products [yα , yβ ].
Finally, we will argue by induction on r too to deal with the product [xα , yβ ]. If
r = 1 and α = αi , then [xα , yβ ] = 0 if 0 6= β − α 6∈ Φ, [xα , yβ ] = hi if α = β,
while if β − α = γ ∈ Φ, then yβ = q[yi , yγ ] for some q ∈ Q determined by Φ, and
[xα , yβ ] = q[xi , [yi , yγ ]] = qq 0 yγ , determined by Φ. On the other hand, if r > 1 then, as
before, [xα , yβ ] = [xir , [xIα0 , yβ ]]−[xIα0 , [xir , yβ ]] and the induction hypothesis applies.
What remains to be done is, on one hand, to show that for each of the irreducible root
systems E6 , E7 , E8 , F4 , G2 there is a simple Lie algebra L over k and a Cartan subalgebra
H such that the corresponding root system is of this type. Since we have constructed
explicitly these root systems, the dimension of such an L must be |Φ| + rank(Φ), so
dimk L = 78, 133, 248, 52 and 14 respectively. Later on, some explicit constructions of
these algebras will be given.
On the other hand, given a simple Lie algebra L over k and two Cartan subalgebras
H1 and H2 , it must be shown that the corresponding root systems Φ1 and Φ2 are
isomorphic. The next Theorem solves this question:
7.4 Theorem. Let L be one of the Lie algebras sln (k) (n ≥ 2), son (k) (n ≥ 3), or
sp2n (k) (n ≥ 1), and let H be any Cartan subalgebra of L. Then there is an element
g of the matrix group GLn (k), On (k) or Sp2n (k) respectively, such that gHg −1 is the
subspace of diagonal matrices in L. In particular, for any two Cartan subalgebras of L,
there is an automorphism ϕ ∈ Aut(L) such that ϕ(H1 ) = H2 .
The last assertion is valid too for the simple Lie algebras containing a Cartan sub-
algebra such that the associated root system is exceptional.
Proof. For the first part, let V be the ‘natural’ module for L (V = k n (column vectors)
for sln (k) or son (k), and V = k 2n for sp2n (k)). Since H is toral and abelian, the elements
of H form a commuting space of diagonalizable endomorphisms of V . Therefore there is
a simultaneous diagonalization: V = ⊕λ∈H ∗ Vλ , where Vλ = {v ∈ V : h.v = λ(h)v ∀h ∈
H}.
If L = sln (k), then this means that there is an element g ∈ GLn (k) such that
gHg −1 ⊆ {diagonal matrices}. Now, the map x 7→ gxg −1 is an automorphism of L and
hence gHg −1 is a Cartan subalgebra too, in particular it is a maximal toral subalgebra.
Since the set of diagonal matrices in L is a Cartan subalgebra too, we conclude by
maximality that gHg −1 coincides with the space of diagonal matrices in L.
If L = son (k) or L = sp2n (k), there is a nondegenerate symmetric or skew symmetric
bilinear form b : V × V → k such that (by its own definition) L = {x ∈ gl(V ) :
b(x.v, w) + b(v, x.w) = 0 ∀v, w ∈ V }. But then, for any h ∈ H, λ, ν ∈ H ∗ and v ∈ Vλ ,
w ∈ Vµ , 0 = b(h.v, w) + b(v, h.w) = λ(h) + µ(h) b(v, w). Hence we conclude that
54 CHAPTER 2. LIE ALGEBRAS
b(Vλ , Vµ ) = 0 unless λ = −µ. This implies easily the existence of a basis of V consisting
of common eigenvectors for H in which the coordinate matrix of b is either
1 0 0
0 0 In , 0 In 0 In
or
−In 0 In 0
0 In 0
according to L being so2n+1 (k), sp2n (k) or so2n (k). Therefore, there is a g ∈ SO2n+1 (k),
Sp2n (k) or SO2n (k) (respectively) such that gHg −1 is contained in the space of diagonal
matrices of L. As before, we conclude that gHg −1 fills this space.
Finally, let L be a simple Lie algebra with a Cartan subalgebra H such that the
associated root system Φ is exceptional. Let H 0 be another Cartan subalgebra and Φ0
the associated root system. If Φ0 were classical, then Proposition 7.3 would show that L
is isomorphic to one of the simple classical Lie algebras, and by the first part of the proof,
there would exist an automorphism of L taking H 0 to H, so that Φ would be classical too,
a contradiction. Hence Φ0 is exceptional, and hence the fact that dimk L = |Φ|+rank(Φ),
and the same for Φ0 , shows that Φ and Φ0 are isomorphic. But by Proposition 7.3 again,
we can choose bases {h1 , . . . , hn , xα , yα : α ∈ Φ} and {h01 , . . . , h0n , x0α , yα0 : α ∈ Φ0 } with
the same multiplication table. Therefore, there is an automorphism ϕ of L such that
ϕ(hi ) = h0i , ϕ(xα ) = x0α and ϕ(yα ) = yα0 , for any i = 1, . . . , n and α ∈ Φ. In particular,
ϕ(H) = H 0 .
7.5 Remark. There is a more general classical result which asserts that if H1 and H2
are any two Cartan subalgebras of an arbitrary Lie algebra over k, then there is an
automorphism ϕ, in the subgroup of the automorphism group generated by {exp ad x :
x ∈ L, ad x nilpotent} such that ϕ(H1 ) = H2 . For an elementary (not easy!) proof, you
may consult the article by A.A. George Michael: On the conjugacy theorem of Cartan
subalgebras, Hiroshima Math. J. 32 (2002), 155-163.
The dimension of any Cartan subalgebra is called the rank of the Lie algebra.
Summarizing all the work done so far, and assuming the existence of the exceptional
simple Lie algebras, the following result has been proved:
7.6 Theorem. Any simple Lie algebra over k is isomorphic to a unique algebra in the
following list:
7.7 Remark. There are the following isomorphisms among different Lie algebras:
so3 (k) ∼
= sp2 (k) = sl2 (k), so4 (k) ∼
= sl2 (k) ⊕ sl2 (k), sp4 (k) ∼
= so5 (k), so6 (k) ∼
= sl4 (k).
Proof. This can be checked by computing the root systems associated to the natural
Cartan subalgebras. If the root systems are isomorphic, then so are the Lie algebras.
Alternatively, note that the Killing form on the three dimensional simple Lie alge-
bra sl2 (k) is symmetric and nondegenerate, hence the orthogonal Lie algebra so3 (k) ∼
=
so sl2 (k), κ , which has dimension 3 and contains the subalgebra ad sl2 (k) ∼= sl2 (k),
which is three dimensional too. Hence so3 (k) ∼
= sl2 (k).
§ 8. EXCEPTIONAL LIE ALGEBRAS 55
Now consider V = Mat2 (k), which is endowed with the quadratic form det and
its associated symmetric bilinear form b(x, y) = 12 det(x + y) − det(x) − det(y) =
− 21 trace(xy) − trace(x) trace(y) . Then we get the one-to-one Lie algebra homomor-
phism sl2 (k) ⊕ sl2 (k) → so(V, b) ∼
= so4 (k), (a, b) 7→ ϕa,b , where ϕa,b (x) = ax − xb. By
dimension count, this is an isomorphism.
Next, consider the vector space V = k 4 . The determinant provides a linear iso-
morphism det : Λ4 V ∼ = k, which induces a symmetric nondegenerate bilinear map
b : Λ2 V × Λ2 V → k. The Lie algebra sl(V ) acts on Λ2 (V ), which gives an embed-
ding sl4 (k) ∼
= sl(V ) ,→ so Λ2 V, b ∼
= so6 (k). By dimension count, these Lie alge-
bras are isomorphic. Finally, consider a nondegenerate skew-symmetric bilinear form
c on V . Then c may be considered as a linear map c : Λ2 V → k and the dimension
of K = ker c is 5. The embedding sl(V ) ,→ so(Λ2 V, b) restricts to an isomorphism
sp4 (k) ∼
= sp(V, c) ∼
= so(K, b) ∼
= so5 (k).
0 −2y t −2xt
L = x a ly : a ∈ sl3 (k), x, y ∈ k 3
y lx −at
0 −2y t −2xt
For any a ∈ sl3 (k), and x, y ∈ k 3 , let M(a,x,y) denote the matrix x a ly .
y lx −at
In particular we get:
Let us proceed now to give a construction, due to Freudenthal, of the simple Lie
algebra of type E8 . To do so, let V be a vector space of dimension 9 and V ∗ its dual.
Consider a nonzero alternating multilinear map det : V 9 → k (the election of det to
name this map is natural), which induces an isomorphism Λ9 V ∼ = k, and hence another
isomorphism Λ9 V ∗ ∼ = Λ 9 V )∗ ∼= k. Take a basis {e1 , . . . , e9 } of V with det(e1 , . . . , e9 ) =
1, and consider its dual basis {ε1 , . . . , ε9 } (so, under the previous isomorphisms, ε1 ∧
. . . ∧ ε9 ∈ Λ9 V ∗ corresponds to 1 ∈ k too).
Consider now the simple Lie algebra of type A8 , S = sl(V ) ∼ = sl9 (k), which acts
∗
naturally on V . Then V is a module too for S with the action given by x.ϕ(v) = −ϕ(x.v)
for any x ∈ S, v ∈ V and ϕ ∈ V ∗ . Consider W = Λ3 V , which is a module too under the
action given by x.(v1 ∧ v2 ∧ v3 ) = (x.v1 ) ∧ v2 ∧ v3 + v1 ∧ (x.v2 ) ∧ v3 + v1 ∧ v2 ∧ (x.v3 ) for any
x ∈ S and v1 , v2 , v3 ∈ V . The dual space (up to isomorphism) W ∗ = Λ3 V ∗ is likewise a
module for S. Here (ϕ1 ∧ ϕ2 ∧ ϕ3 )(v1 ∧ v2 ∧ v3 ) = det ϕi (vj ) for any ϕ1 , ϕ2 , ϕ3 ∈ V ∗
and v1 , v2 , v3 ∈ V .
The multilinear map det induces a multilinear alternating map T : W × W × W → k,
such that
T v1 ∧ v2 ∧ v3 , v4 ∧ v5 ∧ v6 , v7 ∧ v8 ∧ v9 ) = det(v1 , . . . , v9 ),
for any vi ’s in V . In the same vein we get the multilinear alternating map T ∗ : W ∗ ×W ∗ ×
W ∗ → k. These maps induce, in turn, bilinear maps W ×W → W ∗ , (w1 , w2 ) 7→ w1 w2 ∈
W ∗ , with (w1 w2 )(w) = T (w1 , w2 , w), and W ∗ × W ∗ → W , (ψ1 , ψ2 ) 7→ ψ1 ψ2 ∈ W ,
with (ψ1 ψ2 )(ψ) = T ∗ (ψ1 , ψ2 , ψ), for any w1 , w2 , w ∈ W and ψ1 , ψ2 , ψ ∈ W ∗ , and where
natural identifications have been used, like (W ∗ )∗ ∼ = W.
Take now the bilinear map Λ3 V × Λ3 V ∗ → sl(V ): (w, ψ) 7→ w ∗ ψ, given by
(v1 ∧ v2 ∧ v3 ) ∗ (ϕ1 ∧ ϕ2 ∧ ϕ3 )
1 X 1
(−1)σ (−1)τ ϕσ(1) (vτ (1) )ϕσ(2) (vτ (2) )vτ (3) ⊗ ϕσ(3) − det ϕi (vj ) 1V ,
=
2 3
σ,τ ∈S3
§ 8. EXCEPTIONAL LIE ALGEBRAS 57
where (−1)σ denotes the signature of the permutation σ ∈ S3 , v ⊗ ϕ denotes the endo-
morphism u 7→ ϕ(u)v, and 1V denotes the identity map on V . Then for any w ∈ Λ3 V ,
ψ ∈ Λ3 V ∗ and x ∈ sl(V ), the following equation holds:
trace (w ∗ ψ)x = ψ(x.w).
(It is enough to check this for basic elements eJ = ej1 ∧ ej2 ∧ ej3 , where J = (j1 , j2 , j3 )
and j1 < j2 < j3 , in W and the elements in the dual basis of W ∗ : εJ = εj1 ∧ εj2 ∧ εj3 .)
Note that this equation can be used as the definition of w ∗ ψ.
Now consider the vector space L = sl(V ) ⊕ W ⊕ W ∗ with the Lie bracket given, for
any x, y ∈ sl(V ), w, w1 , w2 ∈ W and ψ, ψ1 , ψ2 ∈ W ∗ by:
A lengthy computation with basic elements, shows that L is indeed a Lie algebra.
9
Its dimension is dimk L = 80 + 2 3 = 80 + 2 × 84 = 244.
Let H be the Cartan subalgebra of sl(V ) consisting of the trace zero endomorphisms
with a diagonal coordinate matrix in our basis {e1 , . . . , e9 }, and let δi : H → k be
the linear form such that (identifying endomorphisms with their coordinate matrices)
δi diag(α1 , . . . , α9 ) = αi . Then δ1 + · · · + δ9 = 0, H is toral in L and there is a root
decomposition
L = H ⊕ ⊕α∈Φ Lα ,
where
Φ = {δi − δj : i 6= j} ∪ {±(δi + δj + δk ) : i < j < k}.
Here Lδi −δj = kEij ⊆ sl(V ) (Eij denotes the endomorphism whose coordinate matrix
has (i, j)-entry 1 and 0’s elsewhere), Lδi +δj +δk = k(ei ∧ ej ∧ ek ) ⊆ W and L−(δi +δj +δk ) =
k(εi ∧ εj ∧ eεk ) ⊆ W ∗ . Using that sl(V ) is simple, the same argument in the proof of
Theorem 8.2 proves that L is simple:
Proof. We have shown that L is simple of rank 8. The classical Lie algebras of rank 8,
up to isomorphism, are sl9 (k), so17 (k), sp16 (k) and so16 (k), which have dimensions 80,
156, 156 and 120 respectively. Hence L is not isomorphic to any of them and hence it is
of type E8 .
Take now the simple Lie algebra L of type E8 and its generators {hi , xi , yi : i =
1, . . . , 8} as in the paragraph previous to Lemma 7.2, the indexing given by the ordering
of the simple roots given in the next diagram:
α1 α3 α4 α5 α6 α7 α8
◦ ◦ ◦ ◦ ◦ ◦ ◦
◦ α2
58 CHAPTER 2. LIE ALGEBRAS
Let κ be the Killing form of L. Then consider the subalgebra L̂ generated by {hi , xi , yi :
i = 1, . . . , 7} and its subalgebra Ĥ = ⊕7i=1 khi . Since H is toral in L, so is Ĥ in L̂ and
L̂ = Ĥ ⊕ ⊕α∈Φ∩(Zα1 +···Zα7 ) Lα .
Finally, the existence of a simple Lie algebra of type F4 will be deduced from that
of E6 . Let now L̃ be the simple Lie algebra of type E6 considered above, with canonical
generators {hi , xi , yi : i = 1, . . . , 6}. Since the multiplication in L̃ is determined by the
Dynkin diagram, there is an automorphism ϕ of L̃ such that
In particular, ϕ2 is the identity, so L̃ = L̃0̄ ⊕ L̃1̄ , with L̃0̄ = {z ∈ L̃ : ϕ(z) = z}, while
L̃1̄ = {z ∈ L̃ : ϕ(z) = −z}, and it is clear that L̃0̄ is a subalgebra of L̃, [L̃0̄ ,0 L̃1̄ ] ⊆ L̃1̄ ,
0 0 0
[L̃1̄ , L̃1̄ ] ⊆ L̃0̄ . For any z ∈ L̃0̄ and z ∈ L̃1̄ , κ(z, z ) = κ ϕ(z), ϕ(z ) = κ(z, −z ), where κ
denotes the Killing form of L̃. Hence κ(L̃0̄ , L̃1̄ ) = 0 and, thus, the restriction of κ to L̃0̄ is
nondegenerate. This means that the adjoint map gives a representation ad : L̃0̄ → gl(L̃)
with nondegenerate trace form. As before, this gives L̃0̄ = Z(L̃0̄ ) ⊕ [L̃0̄ , L̃0̄ ], and [L̃0̄ , L̃0̄ ]
is semisimple.
Consider the following elements of L̃0̄ :
Note that [x̃i , ỹi ] = h̃i for any i = 1, 2, 3, 4. The element h̃ = 10h̃1 + 19h̃2 + 27h̃3 + 14h̃4
satisfies
α1 (h̃) = α6 (h̃) = 20 − 19 = 1,
α3 (h̃) = α5 (h̃) = 38 − 10 − 27 = 1,
α4 (h̃) = 54 − 38 − 14 = 2,
α2 (h̃) = 28 − 27 = 1.
§ 8. EXCEPTIONAL LIE ALGEBRAS 59
Thus α(h̃) > 0 for any α ∈ Φ+ , where Φ is the root system of L̃. In particular, α(h̃) 6= 0
for any α ∈ Φ.
Note that H̃ = ⊕6i=1 khi is a Cartan subalgebra of L̃. Besides, ϕ(H̃) = H̃ and hence
H̃ = H̃ 0̄ ⊕ H̃ 1̄ , with H̃ 0̄ = H̃ ∩ L̃0̄ = ⊕4i=1 k h̃i and H̃ 1̄ = H̃ ∩ L̃1̄ = k(h1 −h6 )⊕k(h3 −h5 ).
Also, for any α ∈ Φ, xα + ϕ(xα ) ∈ L̃0̄ , and this vector is a common eigenvector for H̃ 0̄
with eigenvalue α|H̃ , which is not zero since α(h̃) 6= 0 for any α ∈ Φ. Hence there is a
0̄
root space decomposition
X
L̃0̄ = H̃ 0̄ ⊕ k(xα + ϕ(xα ))
α∈Φ
and it follows that Z(L̃0̄ ) ⊆ CL̃ (H̃ 0̄ ) ∩ L0̄ = H̃ ∩ L0̄ = H̃ 0̄ ⊆ [L̃0̄ , L̃0̄ ]. We conclude that
Z(L̃0̄ ) = 0, so L̃0̄ is semisimple, and H̃ 0̄ is a Cartan subalgebra of L̃0̄ .
The root system Φ̃ of L̃0̄ , relative to H̃ 0̄ , satisfies that Φ̃ ⊆ {α̃ = α|H̃ : α ∈ Φ}.
0̄
Also α̃i = αi |H̃ ∈ Φ̃, with x̃i ∈ (L̃0̄ )α̃i and ỹi ∈ (L̃0̄ )−α̃i for any i = 1, 2, 3, 4. Moreover,
0̄
[x̃i , ỹi ] = h̃i and α̃i (h̃i ) = 2 for any i. Besides, Φ̃ = Φ̃+ ∪ Φ̃− , with Φ̃+ = {α̃ ∈ Φ̃ : α̃(h̃) >
0} ⊆ {α̃ : α ∈ Φ+ } (and similarly with Φ̃− ). We conclude that ∆ ˜ = {α̃1 , α̃2 , α̃3 , α̃4 }
is a system of simple roots of L̃0̄ . We can compute the associated Cartan matrix. For
instance,
(
α̃2 (h̃1 )x̃2 = hα̃2 |α̃1 ix̃2
[h̃1 , x̃2 ] =
[h1 + h6 , x3 + x5 ] = α3 (h1 + h6 )x3 + α5 (h1 + h6 )x5 = −(x3 + x5 ) = −x̃2 ,
(
α̃3 (h̃2 )x̃3 = hα̃3 |α̃2 ix̃3
[h̃2 , x̃3 ] =
[h3 + h5 , x4 ] = α4 (h3 + h5 )x4 = −2x4 = −2x̃3 ,
which shows that hα̃2 |α̃1 i = −1 and hα̃3 |α̃2 i = −2. In this way we can compute the
whole Cartan matrix, which turns out to be the Cartan matrix of type F4 :
2 −1 0 0
−1 2 −1 0
0 −2 2 −1
0 0 −1 2
Unless otherwise stated, the following assumptions will be kept throughout the chapter:
This chapter is devoted to the study of the finite dimensional representations of such
an algebra L. By Weyl’s theorem (Chapter 2, 2.5), any representation is completely
reducible, so the attention is focused on the irreducible representations.
§ 1. Preliminaries
Let ρ : L → gl(V ) be a finite dimensional representation of the Lie algebra L. Since the
Cartan subalgebra H is toral, V decomposes as
V = ⊕µ∈H ∗ Vµ ,
61
62 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
1.2 Properties of P (V ).
(i) For any α ∈ Φ and µ ∈ P (V ), Lα .Vµ ⊆ Vα+µ .
2(µ|α)
(ii) For any µ ∈ P (V ) and α ∈ Φ, hµ|αi := (α|α) is an integer.
Proof. Let Sα = Lα ⊕ L−α ⊕ [Lα , L−α ], which is isomorphic to sl2 (k) and take
elements xα ∈ Lα and yα ∈ L−α such that [xα , yα ] = hα . Then W = ⊕m∈Z Vµ+mα
is an Sα -submodule of V . Hence the eigenvalues of the action of hα on W form an
unbroken chain of integers:
with (µ − rα)(hα ) = −(µ + qα)(hα ). But µ(hα ) = hµ|αi and α(hα ) = 2. Hence,
µ(hα ) = hµ|αi = r − q ∈ Z.
(iii) P (V ) is W-invariant.
At this point it is useful to note that det An = n+1, det Bn = det Cn = 2, det Dn = 4,
det E6 = 3, det E7 = 2 and det E8 = det F4 = det G2 = 1.
1.3 Definition.
• ΛR = Z∆ = ZΦ is called the root lattice of L.
• For any i = 1, . . . , n, let λi ∈ H ∗ such that hλi |αj i = δij for any j = 1, . . . , n.
Then λi ∈ Λ+ +
W , ΛW = Zλ1 + . . . + Zλn , and ΛW = Z≥0 λ1 + · · · + Z≥0 λn . The
weights λ1 , . . . , λn are called the fundamental dominant weights.
and the first summand is in Z by the induction hypothesis, and so is the second since
λ ∈ ΛW and hΦ|Φi ⊆ Z.
(ii) The module M is said to be a highest weight module if it contains a highest weight
vector that generates M as a module for L.
(iv) (Uniqueness) For any λ ∈ Λ+ W there is, up to isomorphism, at most one finite
dimensional irreducible L-module whose highest weight is λ.
Proof. (i) Let l ∈ QΦ such that (l|α) > 0 for any α ∈ ∆ (for instance, one can take
(l|α) = 1 for any α ∈ ∆), and let λ ∈ P (V ) such that (l|λ) is maximum. Then for any
α ∈ Φ+ , λ + α 6∈ P (V ), so that Lα .Vλ = 0. Hence L+ .Vλ = 0 and any 0 6= v ∈ Vλ is a
highest weight vector.
64 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
(ii) The subspace W is invariant under the action of L− and the action of H (since
it is spanned by common eigenvectors for H). Therefore, since L+ is generated by
{xα : α ∈ ∆}, it is enough to check that W is invariant under the action of ρ(xα ), for
α ∈ ∆. But xα .v = 0 (v is a highest weight vector) and for any α, β ∈ ∆ and w ∈ W ,
xα .(yβ .w) = [xα , yβ ].w − yβ .(xα .w), and [xα , yβ ] either is 0 or
belongs to H. An easy
induction on r argument shows that xα . ρ(yα1 ) · · · ρ(yαir )(v) ∈ W , as required.
Therefore, W is an L-submodule and P (W ) ⊆ {λ − αi1 − · · · − αir : r ≥ 0, 1 ≤
i1 , . . . , ir ≤ n}. (Note that up to now, the finite dimensionality of V has played no role.)
Moreover, Vλ ∩W = kv and if W is the direct sum of two submodules W = W 0 ⊕W 00 ,
then Wλ = kv = Wλ0 ⊕ Wλ00 . Hence either v ∈ Wλ0 or v ∈ Wλ00 . Since W is generated by
v, we conclude that either W = W 0 or W = W 00 . Now, by finite dimensionality, Weyl’s
Theorem (Chapter 2, 2.5) implies that W is irreducible.
Besides, since for any α ∈ Φ+ we have λ + α 6∈ P (W ), hλ|αi = r − q = r ≥ 0. This
shows that λ ∈ Λ+ W , completing thus the proof of (i).
(iii) If V is irreducible, then V = W , Vλ = kvλ and for any µ ∈ P (V ) \ {λ} there is
an r ≥ 0 and 1 ≤ i1 , . . . , ir ≤ n such that µ = λ − αi1 − · · · − αir . Hence (l|µ) < (l|λ).
Therefore, the highest weight is the only weight with maximum value of (l|λ).
(iv) If V 1 and V 2 are two irreducible highest weight modules with highest weight λ and
v 1 ∈ Vλ1 , v 2 ∈ Vλ2 are two highest weightP vectors, then w = (v 1 , v 2 ) is a highest weight
1 2 ∞ P
vector in V ⊕ V , and hence W = kw + r=1 1≤i1 ,...,ir ≤n k ρ(yα1 ) · · · ρ(yαir )(w) is a
submodule of V 1 ⊕ V 2 . Let π i : V 1 ⊕ V 2 → V i denote the natural projection (i = 1, 2).
Then v i ∈ π i (W ), so π i (W ) 6= 0 and, since both W and V i are irreducible by item (ii),
it follows that π i |W : W → V i is an isomorphism (i = 1, 2). Hence both V 1 and V 2 are
isomorphic to W .
there is a highest weight vector of weight λ (the basic tensor obtained with the highest
weight vectors of each copy of V (λi )), By item (ii) above this highest weight vector
generates an irreducible L-submodule of highest weight λ. Hence it is enough to deal
with the fundamental dominant weights and this can be done “ad hoc”. A more abstract
proof will be given here.
ΛR = Z∆ = ZΦ,
§ 2. PROPERTIES OF WEIGHTS AND THE WEYL GROUP 65
Λ+
W = {λ ∈ ΛW : hλ|αi ≥ 0 ∀α ∈ ∆} = Z≥0 λ1 + · · · + Z≥0 λn (the set of dominant
weights),
2.1 Properties.
−
Proof. Let s be the largest index (1− ≤ s < t − 1)+ with σβs ◦ . .+. ◦ σβt−1 (βt ) ∈ Φ .
Thus σβs σβs+1 ◦. . .◦σβt−1 (βt ) ∈ Φ . But σβs Φ \{βs } = Φ \{βs } (Chapter 2,
Proposition 6.1), so σβs+1 ◦ . . . ◦ σβt−1 (βt ) = βs and, using the argument in the
proof of (i),
σβs+1 ◦ . . . ◦ σβt−1 ◦ σβt ◦ σβt−1 ◦ . . . ◦ σβs+1 = σσβs+1 ◦...◦σβt−1 (βt ) = σβs ,
whence
σβs ◦ σβs+1 ◦ . . . ◦ σβt−1 ◦ σβt = σβs+1 ◦ . . . ◦ σβt−1 .
(iii) Given any σ ∈ W, item (i) implies that there are β1 , . . . , βt ∈ ∆ such that σ =
σβ1 ◦ . . . ◦ σβt . This expression is called reduced if t is minimum. (For σ = id,
t = 0.) By the previous item, if the expression is reduced σ(βt ) ∈ Φ− . In particular,
for any id 6= σ ∈ W, σ(∆) 6= ∆. Therefore, because of Chapter 2, Proposition 6.1,
W acts simply transitively on the systems of simple roots.
(iv) Let σ = σi1 ◦ · · · ◦ σit be a reduced expression. Write l(σ) = t. Also let n(σ) =
|{α ∈ Φ+ : σ(α) ∈ Φ− }|. Then l(σ) = n(σ).
66 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
(v) There is a unique element σ0 ∈ W such that σ0 (∆) = −∆. Moreover, σ02 = id and
l(σ0 ) = |Φ+ |.
(vii) For any µ ∈ ΛW , there is a unique λ ∈ Λ+ W ∩ Wµ. That is, for any µ ∈ ΛW , its
orbit under the action of W intersects Λ+
W in exactly one weight.
Proof. For λ ∈ Λ+
W , let Πλ = ∪µ∈Λ+ Wµ.
W
µ≤λ
If Π is saturated with highest weight λ, and ν ∈ Π, there is a σ ∈ W such that
σ(ν) ∈ Λ+ +
W . But Π is W-invariant, so µ = σ(ν) ∈ ΛW ∩ Π, and hence ν ∈ Wµ,
+
with µ ∈ ΛW and µ ≤ λ. Therefore, Π ⊆ Πλ . To check that Π = Πλ itP is enough
to check that any µ ∈ ΛW with µ < λ, belongs to Π. But, if µ = µ + ni=1 mi αi
+ 0
is any weight in Π with mi ∈ Z≥0 for any i, and µ0 6= µ (that is µ0 > µ), then
0 < (µ0 − µ|µ0 − µ), so there is an index j such that mj > 0 and (µ0 − µ|αj ) > 0.
Now, since µ ∈ Λ+ 0 00
W , hµ|αj i ≥ 0, so hµ |αj i > 0, and since Π is saturated, µ =
0 0
µ − αj ∈ Π. Starting with µ = λ and proceeding in this way, after a finite number
of steps we obtain that µ ∈ Π.
Conversely, we have to prove that for any λ ∈ Λ+W , Πλ is saturated. By its very
definition, Πλ is W-invariant. Let µ ∈ Πλ , α ∈ Φ and i ∈ Z between 0 and
hµ|αi. It has to be proven that µ − iα ∈ Πλ . Take σ ∈ W such that σ(µ) ∈ Λ+ W.
Since hσ(µ)|σ(α)i = hµ|αi, we may assume that µ ∈ Λ+ W . Also, changing if
necessary α by −α, we may assume that α ∈ Φ+ . Besides, with m = hµ|αi,
σα (µ − iα) = µ − (m − i)α, so it is enough to assume that 0 < i ≤ b m2 c. Then
hµ − iα|αi = m − 2i ≥ 0.
If hµ−iα|αi > 0 and σ ∈ W satisfies σ(µ−iα) ∈ Λ+W , then 0 < hσ(µ−iα)|σ(α)i, so
+
σ(α) ∈ Φ and σ(µ − iα) = σ(µ) − iσ(α) < σ(µ) ≤ µ, since µ is dominant. Hence
σ(µ − iα) ∈ Πλ and so does µ − iα. On the other hand, if m is even and i = m 2,
+ +
then hµ − iα|αi = 0. Take again a σ ∈ W such that σ(µ − iα) ∈ ΛW . If σ(α) ∈ Φ ,
the same argument applies and σ(µ − iα) < σ(µ) ≤ µ. But if σ(α) ∈ Φ− , take
τ = σ ◦ σα , then τ (µ − iα) = σ(µ − iα) ∈ Λ+ +
W and τ (α) = σ(−α) ∈ Φ , so again
the same argument applies.
(x) Let ρ = 12 α∈Φ+ α be the Weyl vector (see Chapter 2, 6.1), and let λ ∈ Λ+
P
W and
µ ∈ Wλ. Then (µ + ρ|µ + ρ) ≤ (λ + ρ|λ + ρ), and they are equal if and only if
µ = λ. The same happens for any µ ∈ Λ+ W with µ ≤ λ. Hence, in particular,
(µ + ρ|µ + ρ) < (λ + ρ|λ + ρ) for any µ ∈ Πλ \ {λ}.
68 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
Proof. Since σi (ρ) = ρ − αi for any i = 1, . . . , n, it follows that hρ|αi i = 1 for any
i, so ρ = λ1 + · · · + λn ∈ Λ+
W . Let µ ∈ Wλ \ {λ} and let σ ∈ W such that µ = σ(λ).
Then,
But σ(λ) < λ (item (viii)) and ρ is strictly dominant, so (ρ|λ − σ(λ)) > 0, and the
first assertion follows.
Now, if µ ∈ Λ+
W with µ ≤ λ, then
Later on, it will be proven that if V is any irreducible finite dimensional module over
L, then its set of weights P (V ) is a saturated set of weights.
T (V ) = k ⊕ V ⊕ (V ⊗k V ) ⊕ · · · ⊕ V ⊗n ⊕ · · · ,
(v1 ⊗ · · · ⊗ vn )(w1 ⊗ · · · ⊗ wm ) = v1 ⊗ · · · ⊗ vn ⊗ w1 ⊗ · · · ⊗ wm .
x ⊗ y − y ⊗ x − [x, y] ∈ L ⊕ (L ⊗ L) ⊆ T (L),
U (L) = T (L)/I
3.1 Universal property. Given a unital associative algebra A over k, let A− be the Lie
algebra defined on A by means of [x, y] = xy − yx, for any x, y ∈ A. Then for any Lie
algebra homomorphism ϕ : L → A− , there is a unique homomorphism of unital algebras
φ : U (L) → A such that the following diagram is commutative
§ 3. UNIVERSAL ENVELOPING ALGEBRA 69
ι - U (L)
L
@
@
ϕ@ φ
R
@ ?
A
Remark. The universal enveloping algebra makes sense for any Lie algebra, not just for
the semisimple Lie algebras over algebraically closed fields of characteristic 0 considered
in this chapter.
with the understanding that the empty product equals 1, is a basis of U (L).
Proof. It is clear that these monomials span U (L), so we must show that they are
linearly independent.
Given a monomial xi1 ⊗ · · · ⊗ xin in T (L) define its index as the number of pairs
(j, k), with 1 ≤ j < k ≤ n, such that ij > ik . Therefore, we must prove that the image
in U (L) of the monomials of index 0 are linearly independent.
Since the monomials form a basis of T (L), a linear map T (L) → T (L) is determined
by the images of the monomials. Also, T (L) is the direct sum of the subspaces T (L)n
spanned by the monomials of degree n (n ≥ 0). Define a linear map ϕ : T (L) → T (L)
as follows:
ϕ(1) = 1, ϕ(xi ) = xi ∀i ∈ I,
ϕ(xi1 ⊗ · · · ⊗ xin ) = xi1 ⊗ · · · ⊗ xin if n ≥ 2 and the index is 0,
Ln−1with n, s ≥ 2, assuming ϕ has been defined for monomials of degree < n (hence in
and,
r=0 T (L)r ), and for monomials of degree n and index < s, define
if the index of xi1 ⊗· · ·⊗xin is s and j is the lowest index such that ij > ij+1 . (Note that
the index of xi1 ⊗ · · · ⊗ xij+1 ⊗ xij ⊗ · · · ⊗ xin is s − 1 and xi1 ⊗ · · · ⊗ [xij , xij+1 ] ⊗ · · · ⊗ xin ∈
T (L)n−1 , so the right hand side of (3.2) is well defined.)
Let us prove that ϕ satisfies the condition in (3.2) for any n ≥ 2, any monomial
xi1 ⊗ · · · ⊗ xin , and any index 1 ≤ j ≤ n − 1 with ij > ij+1 . (If this is true then, by
anticommutativity, (3.2) is satisfied for any monomial of degree n ≥ 2 and any index
1 ≤ j ≤ n − 1.)
This is trivial if the index of xi1 ⊗ · · · ⊗ xin is 1. In particular for n = 2. Assume
this is true for degree < n and for degree n and index < s, with n ≥ 3 and s ≥ 2. If
the index of xi1 ⊗ · · · ⊗ xin is s, let j be the lowest index with ij > ij+1 and let j 0 be
another index with ij 0 > ij 0 +1 .
70 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
ϕ(xi1 ⊗ · · · ⊗ xin )
= ϕ(xi1 ⊗ · · · ⊗ xij+1 ⊗ xij ⊗ · · · ⊗ xin ) + ϕ(xi1 ⊗ · · · ⊗ [xij , xij+1 ] ⊗ · · · ⊗ xin ),
= ϕ(xi1 ⊗ · · · ⊗ xij+1 ⊗ xij ⊗ · · · ⊗ xij 0 +1 ⊗ xij 0 ⊗ · · · ⊗ xin )
+ ϕ(xi1 ⊗ · · · ⊗ xij+1 ⊗ xij ⊗ · · · ⊗ [xij 0 , xij 0 +1 ] ⊗ · · · ⊗ xin )
+ ϕ(xi1 ⊗ · · · ⊗ [xij , xij+1 ] ⊗ · · · ⊗ xij 0 +1 ⊗ xij 0 ⊗ · · · ⊗ xin )
+ ϕ(xi1 ⊗ · · · ⊗ [xij , xij+1 ] ⊗ · · · ⊗ [xij 0 , xij 0 +1 ] ⊗ · · · ⊗ xin )
= ϕ(xi1 ⊗ · · · ⊗ xij 0 +1 ⊗ xij 0 ⊗ · · · ⊗ xin ) + ϕ(xi1 ⊗ · · · ⊗ [xij 0 , xij 0 +1 ] ⊗ · · · ⊗ xin )
The first equality works by definition of ϕ and the second and third because the result
is assumed to be valid for degree n and index < s and for degree < n.
Finally, if j + 1 = j 0 nothing is lost in the argument if we assume j = 1, j 0 = 2, and
n = 3. Write xi1 = x, xi2 = y, and xi3 = z. Hence we have:
by definition of ϕ and because (3.2) is valid for lower index. But since ϕ satisfies (3.2)
in degree < n,
because [[y, z], x] + [y, [x, z]] + [[x, y], z] = [[y, z], x] + [[z, x], y] + [[x, y], z] = 0. Thus,
ϕ(x ⊗ y ⊗ z)
= ϕ(z ⊗ y ⊗ x) + ϕ(z ⊗ [x, y]) + ϕ([x, z] ⊗ y) + ϕ(x ⊗ [y, z])
= ϕ(z ⊗ x ⊗ y) + ϕ([x, z] ⊗ y) + ϕ(x ⊗ [y, z])
= ϕ(x ⊗ z ⊗ y) + ϕ(x ⊗ [y, z]),
xi1 ⊗ · · · ⊗ xin − xi1 ⊗ · · · ⊗ xij+1 ⊗ xij ⊗ · · · ⊗ xin − xi1 ⊗ · · · ⊗ [xij , xij+1 ] ⊗ · · · ⊗ xin ,
it follows that ϕ(I) = 0. On the other hand, ϕ is the identity on the span of the
monomials of index 0, so the linear span of these monomials intersects I trivially, and
hence it maps bijectively on U (L), as required.
ι
If S is a subalgebra of a Lie algebra L, then the inclusion map S ,→ L → − U (L)
extends to a homomorphism U (S) → U (L), which is one-to-one by Theorem 3.2 (as any
ordered basis of S can be extended to an ordered basis of L). In this way, U (S) will be
identified to a subalgebra of U (L).
Moreover, if L = S ⊕ T for subalgebras S and T , then the union of an ordered basis
of S and an ordered basis of T becomes an ordered basis of L by imposing that the
elements of S are lower than the elements in T . Then Theorem 3.2 implies that the
linear map U (S) ⊗k U (T ) → U (L), x ⊗ y 7→ xy, is an isomorphism of vector spaces.
§ 4. Irreducible representations
By the universal property of of U (L), any representation φ : L → gl(V ) induces a
representation of U (L): φ̃ : U (L) → Endk (V ). Therefore a module for L is the same
thing as a left module for the associative algebra U (L).
Now, given a linear form λ ∈ H ∗ , consider:
• J(λ) = α∈Φ+ U (L)xα + ni=1 U (L)(hi − λ(hi )1), which is a left ideal of U (L),
P P
where hi = hαi for any i = 1, . . . , n.
Theorem 3.2 implies that J(λ) 6= U (L). Actually, L = L− ⊕ B, where B = H ⊕ L+
(B is called a BorelP (L) is linearly isomorphic to U (L− ) ⊗k U (B).
subalgebra), so UP
˜
Then with J(λ) = α∈Φ+ U (B)xα + ni=1 U (B)(hi −λ(hi )1) (a left ideal of U (B)),
˜
we get J(λ) = U (L− )J(λ). Now the Lie algebra homomorphism ρ : B → k such
that ρ(xα ) = 0, for any α ∈ Φ+ , and ρ(hi ) = λ(hi ), for any i = 1, . . . , n, extends
to a homomorphism of unital algebras ρ̃ : U (B) → k, and J(λ) ˜ ⊆ ker ρ̃. Hence
˜
J(λ) 6= U (B) and, therefore, J(λ) 6= U (L).
• M (λ) = U (L)/J(λ), which is called the associated Verma module. (It is a left
module for U (L), hence a module for L.)
dim M (λ)λ−αi1 −···−αir ≤ |{(βj1 , . . . , βjr ) ∈ ∆r : βj1 + · · · + βjr = αi1 + · · · + αir }| ≤ r!,
Proof. The vector vλ = mλ + K(λ) is a highest weight vector of V (λ) of weight λ, and
hence by Proposition 1.6, if dim V (λ) is finite, then λ ∈ Λ+ W.
Conversely, assume that λ ∈ Λ+ W . Let x i = x αi , yi = yαi and hi = hαi , i =
1, . . . , n, be the standard generators of L. Denote by φ : L → gl V (λ) the associated
representation. For any i = 1, . . . , n, mi = hλ|αi i ∈ Z≥0 , because λ is dominant. Several
steps will be followed now:
Proof. Let ui = φ(yi )mi +1 (vλ ) = yimi +1 vλ (as usual we denote by juxtaposition the
action of an associative algebra, in this case U (L), on a left module, here V (λ)).
For any j 6= i, [xj , yi ] = 0, so xj ui = yimi +1 (xj vλ ) = 0. By induction, it is checked
that in U (L), xi yim+1 = yim+1 xi + (m + 1)yim hi − m(m + 1)yim for any m ∈ Z≥0 .
Hence
2. Let Si = Lαi ⊕ L−αi ⊕ [Lαi , L−αi ] = kxi + kyi + khi , which is a subalgebra of L
isomorphic to sl2 (k). Then V (λ) is a sum of finite dimensional Si -submodules.
4. For any µ ∈ P V (λ) ⊆ {λ−αi1 −· · ·−αir : r ≥ 0, 1 ≤ i1 , . . . , ir ≤ n} ⊆ ΛW , there
is a σ ∈ W such that σ(µ) ∈ Λ+
W . Hence, by the previous item, σ(µ) ∈ P V (λ) ,
so σ(µ) ≤ λ. Therefore, P V (λ) ⊆ ∪µ∈Λ+ Wµ. Hence P V (λ) is finite, and
W
µ≤λ
since all the weight spaces of V (λ) are finite dimensional, we conclude that V (λ)
is finite dimensional.
Λ+
W → {isomorphism classes of finite dimensional irreducible L-modules}
λ 7→ the class of V (λ),
is a bijection.
and, finally,
τα φ(h) = exp ad φ(xα ) h − α(h)hα − α(h)xα
= φ h − α(h)hα − α(h)xα + [xα , h − α(h)hα − α(h)xα ]
= φ h − α(h)hα − α(h)xα − α(h)xα + 2α(h)xα
= φ h − α(h)hα ,
ηα−1 (v) = σα (µ)(h)ηα−1 (v) for any h ∈ H, so ηα−1 (v) ∈ V (λ)σα (µ) and
That is, φ(h)
ηα−1 V (λ)µ ⊆ V (λ)σα (µ) . But also, ηα−1 V (λ)σα (µ) ⊆ V (λ)σα2 (µ) = V (λ)µ . Therefore,
ηα V (λ)µ = V (λ)σα (µ) and both weight spaces have the same dimension.
(Note
that the sum above is finite since there are only finitely many weights in
P V (λ) . Also, starting with mλ = 1, and using Proposition 4.3, this formula allows
the recursive computation of all the multiplicities.)
Proof. Let φ : L → gl V (λ) be the associated representation
and denote also by φ the
representation of U (L), φ : U (L) → Endk V (λ) . Let {a1 , . . . , am } and {b1 , . . . , bm } be
dual bases of L relative to the Killing form (that is, κ(ai , bj ) = δij for any i, j). Then
j
for any z ∈ L, [ai , z] = m
P Pm i
j=1 αi aj for any i and [bj , z] = i=1 βj bi for any j. Hence,
inside U (L),
m m n
αij + βji aj bi ,
X X X
[ai bi , z] = [ai , z]bi + ai [bi , z] =
i=1 i=1 i,j=1
but
0 = κ([ai , z], bj ) + κ(ai , [bj , z]) = αij + βji ,
so [ m
P Pm
i=1 ai bi , L] = 0. Therefore, the element c = i=1 ai bi is a central element in U (L),
which is called the universal Casimir element (recall that a Casimir element was used
§ 5. FREUDENTHAL’S MULTIPLICITY FORMULA 75
in the proof of Weyl’s Theorem, Chapter 2, 2.5). By the well-known Schur’s Lemma,
φ(c) is a scalar.
Take a basis {g1 , . . . , gn } of H with κ(gi , gj ) = δij for any i, j. For any µ ∈ H ∗ , let
tµ ∈ H, such that κ(tµ , . ) = µ. Then tµ = r1 g1 + · · · + rn gn , with ri = κ(tµ , gi ) = µ(gi )
for any i. Hence,
Xn Xn
(µ|µ) = µ(tµ ) = ri µ(gi ) = µ(gi )2 .
i=1 i=1
For any α ∈ Φ+ ,take xα ∈ Lα and x−α ∈ L−α such that κ(xα , x−α ) = 1 (so that
[xα , x−α ] = tα ). Then the element
n
X X n
X X X
c= gi2 + (xα x−α + x−α xα ) = gi2 + tα + 2 x−α xα
i=1 α∈Φ+ i=1 α∈Φ+ α∈Φ+
(The argument is repeated until Vµ+jα = 0 for large enough j.) Therefore,
∞
X X
(λ|λ + 2ρ)mµ = (µ|µ + 2ρ)mµ + 2 (µ + jα|α)mµ+jα ,
α∈Φ+ j=1
5.2 Remark. Freudenthal’s multiplicity formula remains valid if the inner product is
scaled by a nonzero factor.
76 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
5.3 Example. Let L be the simple Lie algebra of type G2 and write ∆ = {α, β}.
2 −1
The Cartan matrix is −3 2 , so we may scale the inner product so that (α|α) = 2,
(β|β) = 6 and (α|β) = −3. The set of positive roots is (check it!):
Φ+ = {α, β, α + β, 2α + β, 3α + β, 3α + 2β}.
.. ..
.. ...
.
................................................. ..
.. ..
.. ... λ ...
...........
.......... ..
..
.. ..
.. ...
. .. .. . .. .. . ....... . ... . .. ..
.. ..
.. ...
. ..
.. ... ....
. . ..
.. ...
. ..
.. .. .. ..
..
..
.
.. ..
. .
.. ..
.
..
..
λ2◦
. . . . .
..
.
.. ... O .
....
................................................................
. .. .
.....
.......... .. .
..
O .. .
....
.
....... .. ....
.. ...
.
.
.
.
. ..
.. . .. .. ...
.. .
.
. .
. ... .....
.
.............. ... .
2λ1....
.. .
......... ...
.. . ..
. ..
..
.. ..
.. ...
. .. . . ..
.. ... .... ....... ....
.. ..
.. ...
. ..
..
. .. . . .. . . . . . . . ..
.
. . . . ... .. ...
. .. ... ..
.. .
◦
..........
................................................................................
.
.. ... .
.. .......................... .
. ....
. ..
.
.
.
. ... ......
. ... .......... ..
.
.
◦ .
....
. .. .
....
.. ...
.
.
.
.
. ..
.. .. .
.
β
. ... .........
.. ....... .
....
. .. ...
.. ... .. ...
.
.
.
.
λ1
.
.
. ... ......... ....
.. . .. .. . .
... .... ..
..
.. ..
.. ...
. .. ....
.. ... ............. ... .............
. .. . ..
.. ...
. ..
.. ...
. ..
..
. . . . . .. . . . . . . . . ..
. .. .. .. .. ....... . ..... .. .. .. .. ..
. .. . . .
.
..
...
O
.. ... ....
.................................................................................................
.. ... F .
.
.. ...
.
.
.
..
......................................................
.
.. ..
.. .. .. O ...
.. ...
..
..
..
..
.. ...
.
. ..
.. . ..
.
. ..
.. . ..
.
. ..
..
α .. .
.
. .. .
.. .. .
.
. ..
.. .
.
.
..
.. .. ..
..
. ..
.. ...
. ..
.. ...
. ..
..
. ..
..
.
.. ... ..
.. ... .. .. .. ..
..
.. ...
..
.. ...
.. .. .... .... .... .... ....
.
..
..
.
.. ..
. .◦
..................................................................................
..
.. ..
. ..
.. ..
. ◦ ..
.. ..
. ..
..
.. .. ... .. ... .. ... .. ... ..
.. .. .. .. .. .. .. .. .. ..
.. ... .. ... .. ... .. ... .. ...
.. .. .. .. .. .. .. .. .. ..
.. .. .. .. .. .. .. .. .. ..
.
..
..
O .
...
..................................................................
.. ...
. ..
◦ .
...
.. ...
. ..
O .
...
.. ...
. ..
..
..
...
.. . . . .
.. . .. ..
. . .. ..
. ... ..
. ...
.. . .. . .. . .. .
.. ... .. ... .. ... .. ...
.. .. .. .. .. .. .. ..
..................................................
. ... ... ...
Then
{µ ∈ Λ+
W : µ ≤ λ} = {λ, 2λ1 , λ2 , λ1 , 0},
and we conclude that m2λ1 = 28 14 = 2. Thus the multiplicity of the weight spaces, whose
weight is conjugated to 2λ1 is 2. (These are the weights marked with a O in Figure 5.1.)
In the same vein,
∞
X X
(56 − 38)mλ2 = 2 (λ2 + jγ|γ)mλ2 +jγ
γ∈Φ+ j=1
= 2 (λ2 + α|α)mλ2 +α + (λ2 + 2α|α)mλ2 +2α
+ (λ2 + α + β|α + β)mλ2 +α+β + (λ2 + 2α + β|2α + β)mλ2 +2α+β
= 2 (4α + 2β|α)2 + (5α + 2β|α) + (4α + 3β|α + β) + (5α + 3β|2α + β)
= 2 (8 − 6)2 + (10 − 6) + (8 − 12 − 9 + 18) + (20 − 15 − 18 + 18) = 36,
so mλ1 = 4. Finally,
∞
X X ∞
X X
(56 − 14)m0 = 2 (jγ|γ)mjγ = 2 (γ|γ)jmjγ
γ∈Φ+ j=1 γ∈Φ+ j=1
= 2 2(4 + 2 · 2) + 6 · 2 + 2(4 + 2 · 2) + 2(4 + 2 · 2) + 6 · 2 + 6 · 2 = 168,
so m0 = 4.
78 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
Taking into account the sizes of the orbits, we get also that
In the computations above, we made use of the symmetry given by the Weyl group.
This can be improved.
Proof. ⊕j∈Z V (λ)µ+jα is a module for Sα (notation as in the proof of Proposition 4.3).
But Sα = [Sα , Sα ], since it is simple, hence the trace of the action of any of its elements
is 0. In particular,
X
0 = trace⊕j∈Z V (λ)µ+jα φ(tα ) = (µ + jα|α)mµ+jα .
j∈Z
Now, Freudenthal’s
P formula can be changed slightly using the previous Lemma and
the fact that 2ρ = α∈Φ+ α.
∞
XX
(λ|λ + 2ρ)mµ = (µ + jα|α)mµ+jα + (µ|µ)mµ .
α∈Φ j=1
We may change j = 1 for j = 0 in the sum above, since (µ|α)mµ + (µ| − α)mµ = 0
for any α ∈ Φ.
Now, if λ ∈ Λ+ +
W , µ ∈ P V (λ) ∩ ΛW and σ ∈ Wµ (the stabilizer of µ, which
is generated by the σi ’s with (µ|αi ) = 0 by 2.1), then for any α ∈ Φ and j ∈ Z,
mµ+jα = mµ+jσ(α) .
Let I be any subset of {1, . . . , n} and consider
ΦI = Φ ∩ ⊕i∈I Zαi (a root system in ⊕i∈I Rαi !)
WI , the subgroup of W generated by σi , i ∈ I,
WI− , the group generated by WI and −id.
(Moody-Patera) Let λ ∈ Λ+ +
5.6 Proposition. W and µ ∈ P V (λ) ∩ ΛW . Consider
I = i ∈ {1, . . . , n} : (µ|αi ) = 0 and the orbits O1 , . . . Or of the action of WI− on Φ.
Now, substitute this in the formula in Corollary 5.5 to get the result.
5.7 Example. In the previous Example, for µ = 0, WI = W and there are two orbits:
the orbit of λ1 (the short roots) and the orbit of λ2 (the long roots), both of size 6.
Hence,
(λ + ρ|λ + ρ) − (ρ|ρ) m0 = (56 − 14)m0
= 6 (λ1 |λ1 )mλ1 + (2λ1 |λ1 )m2λ1 + (λ2 |λ2 )mλ2
= 6 2 · 4 + 4 · 2 + 6 · 2 = 168,
so again we get m0 = 4.
A : RΛW → RΛW
X
p 7→ (−1)σ σ · p.
σ∈W
Then,
(iii) The alternating elements are precisely the linear combinations of the elements
A(eµ ), for strictly dominant µ (that is, hµ|αi > 0 for any α ∈ Φ+ ). These form a
basis of the subspace of alternating elements.
1 P
6.1 Lemma. Let ρ = 2 α∈Φ+ α be the Weyl vector, and consider the element
Y Y
q = e−ρ eα − 1 = eρ 1 − e−α
α∈Φ+ α∈Φ+
Proof. For any simple root γ ∈ ∆, σγ Φ+ \ {γ} = Φ+ \ {γ} (Proposition 6.1). Hence
σγ (ρ) = ρ − γ and
Y
σγ (q) = eρ−γ (1 − eγ ) 1 − e−α
α∈Φ+ \{γ}
Y
= eρ e−γ 1 − e−α = −q.
−1
α∈Φ+ \{γ}
§ 6. CHARACTERS. WEYL’S FORMULAE 81
Thus, q is alternating.
But, µ
P by its own definition, q is a real linear combination of elements e , with µ =
ρ − α∈Φ+ α α ≤ ρ (where α is either 0 or 1). Hence
1 X
q= A(q) = cµ A(eµ ),
|W|
µ∈Λ+ W
µ strictly dominant
hρ − µ|αi i = 1 − hµ|αi i ≤ 0,
so µ = ρ. We conclude that q = cA(eρ ) for some scalar c, but the definition of q shows
that
q = eρ + a linear combination of terms eν , with ν < ρ,
so c = 1 and q = A(eρ ).
Consider the euclidean vector space E = R⊗Q QΦ, and the RΛW -module RΛW ⊗R E.
Extend the inner product ( . | . ) on E to a RΛW -bilinear map
RΛW ⊗R E × RΛW ⊗R E → RΛW ,
χλ A(eρ ) = A(eλ+ρ ).
Now, Y Y
(eα − 1) = (eα − 1)(e−α − 1) = q 2
α∈Φ α∈Φ+
That is,
(λ|λ + 2ρ)χλ q = ∆(χλ q) − χλ ∆(q).
σ σ(ρ)
P
But q = σ∈W (−1) e by the previous Lemma, so
X
σ(ρ)|σ(ρ) (−1)σ eσ(ρ) = (ρ|ρ)q,
∆(q) =
σ∈W
so
Now, χλ q is a linear combination of some eµ+σ(ρ) ’s, with µ ∈ P V (λ) and σ ∈ W, and
equals (λ + ρ|λ + ρ) because of (6.4). This implies (Properties 2.1) that σ −1 (µ) = λ, or
µ = σ(λ) and, hence, χλ q is a linear combination of {eσ(λ+ρ) : σ ∈ W}.
Since χλ is symmetric, and q is alternating, χλ q is alternating. Also, σ(λ + ρ) is
strictly dominant if and only if σ = id. Hence χλ q is a scalar multiple of A(eλ+ρ ), and
its coefficient of eλ+ρ is 1. Hence, χλ q = A(eλ+ρ ), as required.
Y (α|λ + ρ) Y hλ + ρ|αi
dimk V (λ) = = .
+
(α|ρ) +
hρ|αi
α∈Φ α∈Φ
Proof. Let R[[t]] be the ring of formal power series on the variable t, and for any ν ∈ ΛW
consider the homomorphism of real algebras given by:
ζν : RΛW −→ R[[t]]
∞
µ
X 1 s
e →
7 exp (µ|ν)t = (µ|ν)t .
s!
s=0
For any µ, ν ∈ ΛW ,
X
ζν A(eµ ) = (−1)σ exp (σ(µ)|ν)t
σ∈W
X
(−1)σ exp (µ|σ −1 (ν))t
=
σ∈W
= ζµ A(eν ) .
α∈Φ+
Y
= exp (−ρ|µ)t exp (α|µ)t − 1
α∈Φ+
Y
1
− exp − 12 (α|µ)t .
= exp 2 (α|µ)t
α∈Φ+
Hence, Y
1
− exp − 12 (α|ρ)t ,
ζρ χλ q) = ζρ (χλ ) exp 2 (α|ρ)t
α∈Φ+
84 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
while Y
ζρ A(eλ+ρ ) = 1
+ ρ)t − exp − 12 (α|λ + ρ)t .
exp 2 (α|λ
α∈Φ+
With N = |Φ+ |,
Y Y
exp 12 (α|µ)t − exp − 12 (α|µ)t = (α|µ) tN + higher degree terms,
α∈Φ+ α∈Φ+
Two more formulae to compute multiplicities will be given. First, for any µ ∈ ΛW
consider the integer:
|Φ+ |
X
p(µ) = (rα )α∈Φ+ ∈ Z≥0 : µ = rα α .
α∈Φ+
in the natural completion of RΛW (which is naturally isomorphic to the ring of formal
Laurent series R[[X1±1 , . . . , Xn±1 ]]). Thus,
X Y
p(µ)eµ 1 − eα = 1.
µ∈ΛW α∈Φ+
Let θ : RΛW → RΛW be the automorphism given by θ(eµ ) = e−µ for Q any µ ∈ ΛW . If this
ρ ) = eρ −α =
is applied to Weyl’s character formula (recall that q = A(e α∈Φ+ 1 − e
X Y X
mµ e−µ e−ρ (1 − eα ) = (−1)σ e−σ(λ+ρ) .
µ∈ΛW α∈Φ+ σ∈W
6.10 Example. Consider again the simple Lie algebra of type G2 , and λ = ρ = λ1 + λ2 .
For the rotations 1 6= σ ∈ W one checks that (see Figure 5.1):
ρ − σ(ρ) = α + 2β, 6α + 2β, 10α + 2β, 9α + 4β, 4α + β,
while for the symmetries in W,
ρ − σ(ρ) = α, β, 4α + β, 9α + 6β, 10α + 5β, 6α + 2β.
Starting with mλ = 1 we obtain,
m2λ1 = m2λ1 +α + m2λ1 +β = mλ + mλ = 2,
since both 2λ1 + α and 2λ1 + β are conjugated, under the action of W, to λ. In the
same spirit, one can compute:
mλ2 = mλ2 +α = m2λ1 = 2,
mλ1 = mλ1 +α + mλ1 +β = mλ2 + m2λ1 = 4,
m0 = mα + mβ − mα+2β − m4α+β = mλ1 + mλ2 − mλ − mλ = 4.
P
Proof. From χλ0 χλ00 = λ∈Λ+ nλ χλ , we get
W
X
χλ0 χλ00 A(eρ ) = nλ χλ A(eρ ) ,
λ∈Λ+
W
The coefficient of eλ+ρ on the right hand side of (7.5) is nλ , since in each orbit W(λ + ρ)
there is a unique dominant weight, namely λ + ρ.
On the other hand, by Kostant’s formula, the left hand side of (7.5) becomes:
X X X 00
(−1)σ p σ(λ0 + ρ) − µ − ρ eµ (−1)τ eτ (λ +ρ)
µ∈ΛW σ∈W τ ∈W
00
X X
(−1)στ p σ(λ0 + ρ) − µ − ρ eµ+τ (λ +ρ) .
=
µ∈ΛW σ,τ ∈W
σ,τ ∈W
as required.
Proof.
X
(−1)στ p σ(λ0 + ρ) + τ (λ00 + ρ) − (λ + 2ρ)
nλ = (Steinberg)
σ,τ ∈W
X X
= (−1)τ (−1)σ p σ(λ0 + ρ) − (λ + ρ − τ (λ00 + ρ) + ρ)
τ ∈W σ∈W
X
= (−1)τ m0λ+ρ−τ (λ00 +ρ) (Kostant).
τ ∈W
88 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
To give a last formula to compute χλ0 χλ00 some more notation is needed. First, by
A(eλ+ρ )
Weyl’s character formula 6.3, for any λ ∈ Λ+ W , χλ = A(eρ ) . Let us extend this, by
defining χλ for any λ ∈ ΛW by means of this formula. For any weight µ ∈ ΛW , recall
that Wµ denotes the stabilizer of µ in W: Wµ = {σ ∈ W : σ(µ) = µ}. If this stabilizer
is trivial, then there is a unique σ ∈ W such that σ(µ) ∈ Λ+W (Properties 2.1). Consider
then (
0 if Wµ 6= 1
s(µ) =
(−1) σ if Wµ = 1, and σ(µ) ∈ Λ+ W.
Denote also by {µ} the unique dominant weight which is conjugate to µ. Let σ ∈ W
such that σ(µ) = {µ}. If {µ} is strictly dominant, then A(eµ ) = (−1)σ A(e{µ} ) =
s(µ)A(e{µ} ), otherwise there is an i = 1, . . . , n such that σi σ(µ) = σ(µ) (Properties
2.1) so σ −1 σi σ ∈ Wµ and s(µ) = 0; also A(e{µ} ) = A(σi · e{µ} ) = −A(e{µ} ) = 0 and
A(eµ ) = 0 too. Hence A(eµ ) = s(µ)A(e{µ} ) for any µ ∈ ΛW . Therefore, for any λ ∈ ΛW ,
A(eλ+ρ ) = s(λ + ρ)A(e{λ+ρ} ), and
χλ = s(λ + ρ)χ{λ+ρ}−ρ .
By Properties 2.1, if ν ∈ ΛW and s(ν) 6= 0, then {ν} is strictly dominant, and hence
{ν} − ρ ∈ Λ+ 00
W , so all the weights {µ + λ + ρ} − ρ that appear with nonzero coefficient
on the right hand side of the last formula are dominant.
00
χλ0 χλ00 A(eρ ) = χλ0 A(eλ +ρ )
X X 00
= m0µ eµ (−1)σ eσ(λ +ρ)
µ∈P 0 σ∈W
X X 00
= (−1) σ
m0µ eσ(µ) eσ(λ +ρ) (as m0µ = m0σ(µ) ∀σ ∈ W)
σ∈W µ∈P 0
00 +ρ)
X X
= m0µ (−1)σ eσ(µ+λ
µ∈P 0 σ∈W
00 +ρ
X
m0µ A eµ+λ
= .
µ∈P 0
Proof. Because of (7.6), if V (λ) is isomorphic to a submodule of V (λ0 ) ⊗k V (λ00 ) there
is a µ ∈ P V (λ0 ) such that {µ + λ00 + ρ} = λ + ρ. Take µ ∈ P V (λ0 ) and σ ∈ W such
that σ(µ + λ00 + ρ) = λ + ρ and σ has minimal length. It is enough to prove that l(σ) = 0.
If l(σ) = t ≥ 1, let σ = σβ1 ◦ · · · ◦ σβt be a reduced expression. Then σ(βt ) ∈ Φ− by
Properties 2.1 and
0 ≥ λ + ρ|σ(βt ) = σ −1 (λ + ρ)|βt
since λ + ρ and λ00 + ρ are dominant. Hence 0 ≥ hµ + λ00 + ρ|βt i ≥ hµ|βt i and µ̂ =
µ − hµ + λ00 + ρ|βt iβt ∈ P V (λ0 ) . Therefore,
7.5 Example. As usual, let L be the simple Lie algebra of type G2 . Let us decompose
V (λ1 ) ⊗k V (λ2 ) using Klymik’s formula.
Recall that λ1 = 2α + β, λ2 = 3α + 2β, so α = 2λ1 − λ2 and β = −3λ1 + 2λ2 . Scaling
so that (α|α) = 2, one gets (λ1 |α) = 1, (λ2 |β) = 3, and (λ1 |β) = 0 = (λ2 |α).
Also, P V (λ1 ) = Wλ1 ∪ W0 = {0, ±α, ±(α + β), ±(2α + β)} (the short roots and
0). The multiplicity of any short root equals the multiplicity of λ1 , which is 1.
Freudenthal’s formula gives
∞
X X X
(λ1 + ρ|λ1 + ρ) − (ρ|ρ) m0 = 2 (jγ|γ)mjγ = 2 (γ|γ) = 12,
γ∈Φ+ j=1 γ∈Φ+
γ short
since mλ1 = 1, so mγ = 1 for any short γ, as all of them are conjugate. But (λ1 + ρ|λ1 +
ρ) − (ρ|ρ) = (λ1 |λ1 + 2ρ) = (λ1 |3λ1 + 2λ2 ) = (3λ1 + 2λ2 |2α + β) = 12. Thus, m0 = 1.
Hence all the weights of V (λ1 ) have multiplicity 1, and Klymik’s formula gives then
X
χλ1 χλ2 = s(µ + λ2 + ρ)χ{µ+λ2 +ρ}−ρ .
µ∈P (V (λ1 ))
Let us compute the contribution to this sum of each µ ∈ P V (λ1 ) :
• −(2α + β) + λ2 + ρ = λ2 is stabilized by σα .
With some insight, we could have proceeded in a different way. First, the multiplicity
of the highest possible weight λ0 + λ00 in V (λ0 ) ⊗k V (λ00 ) is always 1, so V (λ0 + λ00 ) always
appears in V (λ0 ) ⊗k V (λ00 ) with multiplicity 1.
In the example above, if µ ∈ P V (λ1 ) and µ + λ2 ∈ Λ+ W , then µ + λ2 ∈ {λ1 +
λ2 , 2λ1 , λ1 , λ2 }. Hence,
V (λ1 ) ⊗k V (λ2 ) ∼
= V (λ1 + λ2 ) ⊕ pV (2λ1 ) ⊕ qV (λ1 ) ⊕ rV (λ2 ),
and dimk V (λ1 ) ⊗k V (λ2 ) = 7 × 14 = 98, dimk V (λ1 + λ2 ) = dimk V (ρ) = 26 = 64.
Weyl’s dimension formula gives
Y (2λ1 + ρ|γ) 3 · 3 · 6 · 9 · 12 · 15
dimk V (2λ1 ) = = = 27.
+
(ρ|γ) 1·3·4·5·6·9
γ∈Φ
Let L be a simple real Lie algebra. By Schur’s Lemma, the centralizer algebra EndL (L)
is a real division algebra, but for any α, β ∈ EndL (L) and x, y ∈ L,
αβ [x, y] = α x, βy] = [αx, βy] = β [αx, y] = βα [x, y] ,
and, since L = [L, L], it follows that EndL (L) is commutative. Hence EndL (L) is
(isomorphic to) either R or C.
In the latter case, L is then just a complex simple Lie algebra, but considered as a
real Lie algebra.
In the first case, EndL (L) = R, so EndLC (LC ) = C, where LC = C ⊗R L = L ⊕ iL.
Besides, LC is semisimple because its Killing form is the extension of the Killing form of
L, and hence it is nondegenerate. Moreover, if LC is the direct sum of two proper ideals
LC = L1 ⊕ L2 , then C = EndLC (LC ) ⊇ EndLC (L1 ) ⊕ EndLC (L2 ), which has dimension at
least 2 over C, a contradiction. Hence LC is simple. In this case, L is said to be central
simple and a real form of LC . (More generally, a simple Lie algebra over a field k is
said to be central simple, if its scalar extension k̄ ⊗k L is a simple Lie algebra over k̄, an
algebraic closure of k.)
Consider the natural antilinear automorphism σ of LC = C ⊗R L = L ⊕ iL given by
σ = − ⊗ id (α 7→ ᾱ is the standard conjugation in C). That is,
σ : LC → LC
x + iy 7→ x − iy.
§ 1. Real forms
1.1 Definition. Let L be a real semisimple Lie algebra.
91
92 APPENDIX A. SIMPLE REAL LIE ALGEBRAS
1.2 Proposition. The Killing form of any compact Lie algebra is negative definite.
Proof. Let κ be the Killing form of the compact Lie algebra L with dimR L = n. For any
0 6= x ∈ L, let λ1 , . . . , λn ∈ C be the eigenvalues of adx (possibly repeated). If for some
j = 1, . . . , n, λj ∈ R \ {0}, then there exists a 0 6= y ∈ L such that [x, y] = λj y. Then the
subalgebra T = Rx + Ry is solvable and y ∈ [T, T ]. By Lie’s Theorem (Chapter 2, 1.9)
ady is nilpotent, so κ(y, y) = 0, a contradiction with κ being definite. Thus, λj 6∈ R \ {0}
for any j = 1, . . . , n. Now, if λj = α + iβ with β 6= 0, then λ̄j = α − iβ is an eigenvalue
of adx too, and hence there are elements y, z ∈ L, not both 0, such that [x, y] = αy + βz
and [x, z] = −βy + αz. Then
[x, [y, z]] = [[x, y], z] + [y, [x, z]] = 2α[y, z].
The previous argument shows that either α = 0 or [y, z] = 0. In the latter case T =
Rx + Ry + Rz is a solvable Lie algebra with 0 6= αy + βz ∈ [T, T ]. But this gives again
a contradiction.
Therefore, λ1 , . . . , λn ∈ Ri and κ(x, x) = nj=1 λ2j ≤ 0.
P
1.3 Theorem. Any complex semisimple Lie algebra contains both a split and a compact
real forms.
• κ xα + ω(xα ), xα + ω(xα ) = 2κ xα , ω(xα ) < 0, by the previous argument,
• κ i(xα − ω(xα )), i(xα − ω(xα )) = 2κ xα , ω(xα ) < 0, and
• κ xα + ω(xα ), i(xα − ω(xα )) = iκ xα + ω(xα ), xα − ω(xα ) = 0.
Hence the Killing form of K, which is obtained by restriction of κ, is negative definite,
and hence K is compact.
1.4 Remark. The signature of the Killing form of the split form L above is rank L,
while for the compact form K is − dim K.
1.5 Definition. Let S be a complex semisimple Lie algebra and let σ1 , σ2 be the con-
jugations associated to two real forms. Then:
• σ1 and σ2 are said to be equivalent if the corresponding real forms S σ1 and S σ2
are isomorphic.
1.6 Proposition. Let S be a complex semisimple Lie algebra and let σ1 , σ2 be the
conjugations associated to two real forms. Then:
(i) σ1 and σ2 are equivalent if and only if there is an automorphism ϕ ∈ AutC S such
that σ2 = ϕσ1 ϕ−1 .
94 APPENDIX A. SIMPLE REAL LIE ALGEBRAS
hσ2 : S × S −→ C
(u, v) 7→ −κ(u, σ2 (v))
is hermitian, since κ σ2 (u), σ2 (v) = κ(u, v) for any u, v, because σ2 is an antilinear
automorphism, and it is also positive definite since the restriction of hσ2 to S σ2 × S σ2
equals −κ|S σ2 ×S σ2 , which is positive definite, since S σ2 is compact. Therefore, for any
σ1
x ∈ S− , 0 ≥ κ(x, x) = hσ2 (x, x) ≥ 0, so κ(x, x) = 0, and x = 0, since S σ1 is compact.
σ1
Hence S− = 0 and id = θ|S σ1 , so θ = id as S = S σ1 ⊕ iS σ1 and σ1 = σ2 .
1.7 Theorem. Let S be a complex semisimple Lie algebra, and let σ and τ be two
conjugations, with τ being compact. Then there is an automorphism ϕ ∈ AutC S such
that σ and ϕτ ϕ−1 (which is compact too) are compatible. Moreover, ϕ can be taken of
the form exp(i adu ) with u ∈ K = S τ .
hτ : S × S −→ C
(x, y) 7→ −κ x, τ (y)
j
For any r, s = 1, . . . , N , [xr , xs ] = N
P
j=1 crs xj for suitable structure constants. With
µj = |λj |2 = λ2j for any j = 1, . . . , N , and since θ2 is an automorphism, we get µr µs cjrs =
µj cjrs for any r, s, j = 1, . . . , N , and hence (either cjrs = 0 or µr µs = µj ) for any t ∈ R,
µtr µts cjrs = µtj cjrs , which shows that, for any t ∈ R,
t t
ϕt = diag(µ1 , . . . , µN ) = exp diag 2t log|λ1 |, . . . , 2t log|λN |
is an automorphism of S.
On the other hand, τ θ = τ στ = θ−1 τ , so τ ϕ1 = τ θ2 = θ−2 τ = ϕ−1 τ . This shows that
τ diag(µ1 , . . . , µN ) = diag(µ−1 −1
1 , . . . , µN )τ and, as before, this shows that τ ϕt = ϕ−t τ
for any t ∈ R. Let τ = ϕt τ ϕ−t . We will look for a value of t that makes στ 0 = τ 0 σ.
0
But,
1.8 Remark. Under the conditions of the proof above, for any ψ ∈ AutR S such that
ψσ = σψ and ψτ = τ ψ, one has ψθ = θψ and hence (working with the real basis
{x1 , ix1 , . . . , xN , ixN }) one checks that ψϕt = ϕt ψ for any t ∈ R so, in particular, ψϕ =
ϕψ. That is, the automorphism ϕ commutes with any real automorphism commuting
with σ and τ .
1.9 Corollary. Let S be a complex semisimple Lie algebra and let σ, τ be two compact
conjugations. Then σ and τ are equivalent. That is, up to isomorphism, S has a unique
compact form.
Proof. By Theorem 1.7, there is an automorphism ϕ such that σ and ϕτ ϕ−1 are com-
patible (and compact!). By Proposition 1.6, σ = ϕτ ϕ−1 .
96 APPENDIX A. SIMPLE REAL LIE ALGEBRAS
1.10 Theorem. Let S be a complex semisimple Lie algebra, θ and involutive automor-
phism of S and τ a compact conjugation. Then there is an automorphism ϕ ∈ AutC S
such that θ commutes with ϕτ ϕ−1 . Moreover, ϕ can be taken of the form exp(i adu ) with
u ∈ K = S τ . In particular, there is a compact form, namely ϕ(K), which is invariant
under θ.
where it has been used that, since (θτ )2 commutes with θτ , so does ϕt for any t. Hence
θτ 0 = τ 0 θ if and only if t = 41 .
The rest follows as in the proof of Theorem 1.7.
Now, a map can be defined for any complex semisimple Lie algebra S:
( ) ( )
Isomorphism classes of Conjugation classes in AutC S
Ψ: −→
real forms of S of involutive automorphisms
[σ] 7→ [στ ]
σ2 = θ2 τ = ϕθ1 ϕ−1 τ
= ϕθ1 (ϕ−1 τ ϕ)ϕ−1 = ϕθ1 γτ γ −1 ϕ−1
= ϕγθ1 τ (ϕγ)−1
= (ϕγ)σ1 (ϕγ)−1 .
1.12 Remark.
(i) The proof of Theorem 1.3 shows that Ψ([‘split form’]) = [ω] (ω(xj ) = −yj , ω(yj ) =
−xj for any j). Trivially, Ψ([‘compact form’]) = [id].
Since κ|K 0̄ and κ|K 1̄ are negative definite, the signature of κL is dimR K 1̄ −
dimR K 0̄ = dimC S 1̄ − dimC S 0̄ , where S 0̄ = {x ∈ S : θ(x) = x} and S 1̄ =
{x ∈ S : θ(x) = −x}.
The decomposition L = K 0̄ ⊕ iK 1̄ is called a Cartan decomposition of L.
98 APPENDIX A. SIMPLE REAL LIE ALGEBRAS
(iii) To determine the real simple Lie algebras it is enough then to classify the involutive
automorphisms of the simple complex Lie algebras, up to conjugation. This will
be done over arbitrary algebraically closed fields of characteristic 0 by a process
based on the paper by A.W. Knapp: “A quick proof of the classification of simple
real Lie algebras”, Proc. Amer. Math. Soc. 124 (1996), no. 10, 3257–3259.
§ 2. Involutive automorphisms
Let k be an algebraically closed field of characteristic 0, and let L be a semisimple Lie
algebra over k, H a fixed Cartan subalgebra of L, Φ the corresponding root system and
∆ = {α1 , . . . , αn } a system of simple roots. Let x1 , . . . , xn , y1 , . . . , yn be the canonical
generators that are being used throughout.
(i) For any subset J ⊆ {1, . . . , n}, there is a unique involutive automorphism θJ of L
such that
    θJ(xi) = xi,  θJ(yi) = yi,    if i ∉ J,
    θJ(xi) = −xi, θJ(yi) = −yi,   if i ∈ J.
We will say that θJ corresponds to the Dynkin diagram of (Φ, ∆), where the nodes
corresponding to the roots αi , i ∈ J, are shaded.
(ii) Also, if ω is an ‘involutive automorphism’ of the Dynkin diagram of (Φ, ∆), that
is, a bijection among the nodes of the diagram that respects the Cartan integers,
and if J is a subset of {1, . . . , n} consisting of nodes fixed by ω, then there is a unique involutive automorphism θω,J of L given by
    θω,J(xi) = xω(i),  θω,J(yi) = yω(i),    if i ∉ J,
    θω,J(xi) = −xi,   θω,J(yi) = −yi,      if i ∈ J.
We will say that θω,J corresponds to the Dynkin diagram of (Φ, ∆) with the nodes
in J shaded and where ω is indicated by arrows, like the following examples:
[Sample Vogan diagrams: shaded nodes are drawn as •, and the diagram automorphism, if any, is indicated by arrows.]
These diagrams, where some nodes are shaded and a possible involutive diagram au-
tomorphism is specified by arrows, are called Vogan diagrams (see A.W. Knapp: Lie
groups beyond an Introduction, Birkhäuser, Boston 1996).
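As a small computational aside (my own sketch, not part of the notes), the action of θω,J on the canonical generators can be read off directly from a Vogan diagram, encoded by the permutation ω of the nodes and the set J of ω-fixed shaded nodes:

```python
# Symbolic action of theta_{omega,J} on the canonical generators x_1,...,x_n, y_1,...,y_n.
# A generator is a pair ('x', i) or ('y', i); the image is returned as (sign, generator).

def theta(omega, J, gen):
    kind, i = gen
    if i in J:                       # shaded node: x_i -> -x_i, y_i -> -y_i
        return (-1, (kind, i))
    return (1, (kind, omega[i]))     # unshaded node: x_i -> x_{omega(i)}, y_i -> y_{omega(i)}

def theta_squared(omega, J, gen):
    s1, g1 = theta(omega, J, gen)
    s2, g2 = theta(omega, J, g1)
    return (s1 * s2, g2)

# Example (hypothetical data): type A_3 with omega = (1 3) and J = {2}.
omega = {1: 3, 2: 2, 3: 1}
J = {2}
gens = [(k, i) for k in ('x', 'y') for i in (1, 2, 3)]
for g in gens:
    print(g, '->', theta(omega, J, g))
# theta is involutive: applying it twice returns every generator with sign +1.
assert all(theta_squared(omega, J, g) == (1, g) for g in gens)
```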
In these tables, one has to note that for the orthogonal Lie algebras of small dimension over an algebraically closed field of characteristic 0, one has the isomorphisms so3 ≅ A1, so4 ≅ A1 × A1 and so6 ≅ A3. Also, Z denotes a one-dimensional Lie algebra, and so_{r,s}(R) denotes the orthogonal Lie algebra of a nondegenerate quadratic form of dimension r + s and signature r − s. Besides, so*_{2n}(R) denotes the Lie algebra of the skew matrices relative to a nondegenerate antihermitian form on a vector space over the quaternions: so*_{2n}(R) = {x ∈ Matn(H) : x^t h + h x̄ = 0}, where h = diag(i, . . . , i). In the same vein, spn(H) = {x ∈ Matn(H) : x^t + x̄ = 0}, while sp_{r,s}(H) = {x ∈ Mat_{r+s}(H) : x^t h + h x̄ = 0}, where h = diag(1, . . . , 1, −1, . . . , −1) (r 1's and s −1's). Finally, an expression like E8,−24 denotes a real form of E8 such that the signature of its Killing form is −24.
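To make the notation concrete, here is a small numerical sketch (mine, not from the notes) checking that the matrices x with x^t g + g x = 0, for g the diagonal matrix of a form of signature (r, s), are closed under the commutator; the quaternionic algebras sp_{r,s}(H) and so*_{2n}(R) can be checked analogously once a matrix model of H is fixed.

```python
# so_{r,s}(R) as matrices x with x^t g + g x = 0, g = diag(1,...,1,-1,...,-1):
# random elements and their bracket stay inside the algebra.
import numpy as np

r, s = 2, 1
g = np.diag([1.0] * r + [-1.0] * s)

def in_so_rs(x, tol=1e-12):
    return np.allclose(x.T @ g + g @ x, 0, atol=tol)

rng = np.random.default_rng(0)
def random_element():
    a = rng.standard_normal((r + s, r + s))
    a = a - a.T                       # skew-symmetric
    return np.linalg.inv(g) @ a       # then x = g^{-1} a satisfies x^t g + g x = 0

x, y = random_element(), random_element()
assert in_so_rs(x) and in_so_rs(y)
assert in_so_rs(x @ y - y @ x)        # closed under the bracket
print("so_{2,1}(R): sample elements and their bracket satisfy x^t g + g x = 0")
```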
Proof. Let L be a simple Lie algebra over k and let ϕ ∈ Aut L be an involutive automor-
phism. Then L = S ⊕ T , with S = {x ∈ L : ϕ(x) = x} and T = {x ∈ L : ϕ(x) = −x}.
The subspaces S and T are orthogonal relative to the Killing form (since the Killing
form κ is invariant under ϕ).
(i) There exists a Cartan subalgebra H of L which contains a Cartan subalgebra of S
and is invariant under ϕ:
In fact, the adjoint representation ad : S → gl(L) has a nondegenerate trace form, so
S = Z(S) ⊕ [S, S] and [S, S] is semisimple (Chapter 2, 2.2). Besides, for any x ∈ Z(S),
x = xs + xn with xs , xn ∈ NL (T ) ∩ CL (S) = S ∩ CL (S) = Z(S) (as the normalizer
NL (T ) is invariant under ϕ and NL (T ) ∩ T is an ideal of L and hence trivial). Besides,
κ(xn, S) = 0, so xn = 0. Hence Z(S) is a toral subalgebra, and there is a Cartan subalgebra HS of S with HS = Z(S) ⊕ (HS ∩ [S, S]). Then HS is toral on L. Let
H = CL (HS ) = HS ⊕ HT , where HT = CL (HS ) ∩ T . Then [H, H] = [HT , HT ] ⊆ S.
Hence [[H, H], H] = 0, so H is a nilpotent subalgebra. Thus, [H, H] acts both nilpotently
and semisimply on L. Therefore, [H, H] = 0 and H is a Cartan subalgebra of L, since
for any x ∈ HT , xn ∈ H, κ(xn , H) = 0 and, as H is the zero weight space relative to
HS , the restriction of κ to H is nondegenerate, hence xn = 0 and H is toral.
(ii) Fix one such Cartan subalgebra H and let Φ be the associated set of roots. Then
ϕ induces a linear map ϕ∗ : H ∗ → H ∗ , α 7→ ᾱ = α ◦ ϕ|H . Since ϕ is an automorphism,
ϕ(Lα) = Lᾱ for any α ∈ Φ, so Φ̄ = Φ. Besides, for any α ∈ Φ and any h ∈ H, ᾱ(h) = α(ϕ(h)) = κ(tα, ϕ(h)) = κ(ϕ(tα), h), so ϕ(tα) = tᾱ for any α ∈ Φ. This shows that Σ_{α∈Φ} Qtα is invariant under ϕ.
(iii) Consider the subsets ΦS = {α ∈ Φ : Lα ⊆ S} and ΦT = {α ∈ Φ : Lα ⊆ T }. Then
ΦS ∪ ΦT = {α ∈ Φ : ᾱ = α}:
Actually, [HT , S] ⊆ T and [HT , T ] ⊆ S, so for any α ∈ ΦS ∪ ΦT , α(HT ) = 0 and
α = ᾱ. Conversely, if α(HT ) = 0, then Lα = (Lα ∩ S) ⊕ (Lα ∩ T ) and, since dim Lα = 1,
either Lα ⊆ S or Lα ⊆ T .
(iv) The rational vector space Ě = Σ_{α∈Φ} Qtα is invariant under ϕ and κ|Ě is positive definite (taking values on Q). Hence Ě = ĚS ⊥ ĚT, where ĚS = Ě ∩ S and ĚT = Ě ∩ T. Also, Φ ⊆ E = Σ_{α∈Φ} Qα = ES ⊕ ET, where ES = {α ∈ E : α(HT) = 0} and ET = {α ∈ E : α(HS) = 0}, with ES and ET orthogonal relative to the positive definite symmetric bilinear form ( . | . ) induced by κ. Moreover, Φ ∩ ET = ∅:
In fact, if α ∈ Φ and α(HS ) = 0, then for any x = xS + xT ∈ Lα (xS ∈ S, xT ∈ T ),
and any h ∈ HS , [h, xS + xT ] = α(h)(xS + xT ) = 0. Hence xS ∈ CS (HS ) = HS and
[H, xS ] = 0. Now, for any h ∈ HT , α(h)(xS + xT ) = [h, xS + xT ] = [h, xT ] ∈ S. Hence
xT = 0 = xS , a contradiction.
(v) There is a system of simple roots ∆ such that ∆̄ = ∆:
Vogan diagram — fixed subalgebra — real form:

Dn (ϕ = id):
    no node shaded:                              Dn                        so2n(R)
    node p shaded (1 ≤ p ≤ ⌊n/2⌋):               so2(n−p) × so2p           so2(n−p),2p(R)
    one of the two fork nodes shaded (n > 4):    An−1 × Z                  so*2n(R)
Dn (arrow exchanging the two fork nodes):
    no node shaded:                              Bn−1                      so2n−1,1(R)
    node p shaded (1 ≤ p ≤ ⌊(n−1)/2⌋):           so2n−2p−1 × so2p+1        so2n−2p−1,2p+1(R)
E6 (with the nontrivial diagram automorphism):
    no node shaded:                              F4                        E6,−26
    the branch node shaded:                      C4                        E6,6
E7:
    no node shaded:                              E7                        E7,−133
    one end node shaded:                         E6 × Z                    E7,−25
    the other end node shaded:                   D6 × A1                   E7,−5
    the branch node shaded:                      A7                        E7,7
E8:
    no node shaded:                              E8                        E8,−248
    one end node shaded:                         E7 × A1                   E8,−24
    the other end node shaded:                   D8                        E8,8
F4 (◦ ◦>◦ ◦):
    no node shaded:                              F4                        F4,−52
    the last node shaded (◦ ◦>◦ •):              B4                        F4,−20
    the first node shaded (• ◦>◦ ◦):             C3 × A1                   F4,4
G2 (◦<◦):
    no node shaded:                              G2                        G2,−14
    the second node shaded (◦<•):                A1 × A1                   G2,2
with ᾱ′i = α′i for i = 1, . . . , s′, and ᾱ′_{s′+2j−1} = α′_{s′+2j} for j = 1, . . . , r′. Let {µ′i}_{i=1}^{s′+2r′} be the dual basis to ∆′. Since (µ|α) ≥ 0 for any α ∈ Φ+ (µ is then said to be dominant), and µ̄ = µ,
    µ = Σ_{i=1}^{s′} mi µ′i + Σ_{j=1}^{r′} m_{s′+j} (µ′_{s′+2j−1} + µ′_{s′+2j}),
and if, for some j = 1, . . . , r′,
    (µ − 1/2(µ′_{s′+2j−1} + µ′_{s′+2j}) | µ′_{s′+2j−1} + µ′_{s′+2j}) > 0,
we would have
    (µ − (µ′_{s′+2j−1} + µ′_{s′+2j}) | µ − (µ′_{s′+2j−1} + µ′_{s′+2j})) < (µ|µ),
a contradiction with the minimality of µ, since µ − (µ′_{s′+2j−1} + µ′_{s′+2j}) ∈ Λ, because for any α ∈ ΦT, (µ′_{s′+2j−1}|α) = (µ̄′_{s′+2j−1}|ᾱ) = (µ′_{s′+2j}|α).
(2.1)    µ = µ′i,   (µ − µ′j | µ′j) ≤ 0 for any j ≠ i, j = 1, . . . , s′,   and
         (µ − 1/2(µ′_{s′+2j−1} + µ′_{s′+2j}) | µ′_{s′+2j−1} + µ′_{s′+2j}) ≤ 0 for any j = 1, . . . , r′.
(The last condition in (2.1) does not appear in Knapp’s article.)
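For the case-by-case verification that follows, the first family of conditions in (2.1) (the case ω = id, so that s′ = n and r′ = 0) is easy to test numerically: compute the dual basis {µ′i} from the simple roots and check the sign of (µ′i − µ′j | µ′j). The following helper is a sketch of mine (not from the notes) and assumes the simple roots are given as a square, invertible matrix of coordinates.

```python
# Test of the first family of conditions in (2.1) for a candidate shaded node, case omega = id.
import numpy as np

def dual_basis(simple_roots):
    """Rows mu'_i with (mu'_i | alpha_j) = delta_ij, for a square matrix of simple roots."""
    a = np.asarray(simple_roots, dtype=float)
    return np.linalg.solve(a, np.eye(len(a))).T

def satisfies_21(simple_roots, i, tol=1e-12):
    mu = dual_basis(simple_roots)
    return all((mu[i] - mu[j]) @ mu[j] <= tol for j in range(len(mu)) if j != i)

# Example: C_n with alpha_j = eps_j - eps_{j+1} (j < n) and alpha_n = 2 eps_n.
n = 6
eps = np.eye(n)
roots_C = np.vstack([eps[:-1] - eps[1:], 2 * eps[-1]])
print([i + 1 for i in range(n) if satisfies_21(roots_C, i)])   # -> [1, 2, 3, 6]
```

For C6 the surviving candidates are the nodes p ≤ n/2 together with the last node, in agreement with the Cn case treated first below.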
(x) Looking at Tables A.1 and A.2, it remains to check that, for the Vogan diagrams associated to the Lie algebras of type Cn, Dn, E6, E7, E8, F4 or G2, whenever there is a shaded node it satisfies the requirements in Tables A.1 and A.2. This can be deduced case by case from (2.1):
• For Cn: for any i = 1, . . . , n − 1,
    (µ′i − µ′n | µ′n) = 1/2(i − (n − i)) = 1/2(2i − n),
so (2.1) is satisfied if and only if i ≤ n/2.
• For Dn, with Dynkin diagram α1 − α2 − · · · − α_{n−2}, and both α_{n−1} and αn attached to α_{n−2}: here either ϕ* is the identity, or ᾱ_{n−1} = αn (and ᾱi = αi for i ≤ n − 2). Also, αi = εi − ε_{i+1} for i = 1, . . . , n − 1 and αn = ε_{n−1} + εn where, up to a scalar, (εi|εj) = δij. Hence µ′i = ε1 + · · · + εi for i = 1, . . . , n − 2, µ′_{n−1} = 1/2(ε1 + · · · + ε_{n−1} − εn) and µ′n = 1/2(ε1 + · · · + ε_{n−1} + εn). For any i = 1, . . . , n − 2,
    (µ′i − µ′n | µ′n) = 1/4(2i − n),    (µ′i − 1/2(µ′_{n−1} + µ′n) | µ′_{n−1} + µ′n) = 1/2(2i − (n − 1)),
so (2.1) is satisfied if and only if i ≤ n/2 when ϕ* = id, and if and only if i ≤ (n − 1)/2 in the twisted case.
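These two values are easy to confirm numerically; the following sketch (mine, with the normalization (εi|εj) = δij and a sample value of n) does so.

```python
# Numerical check of the two D_n inner products displayed above.
import numpy as np

n = 7
eps = np.eye(n)
mu = {i: eps[:i].sum(axis=0) for i in range(1, n - 1)}            # mu'_i = eps_1 + ... + eps_i
mu[n - 1] = (eps[:n - 1].sum(axis=0) - eps[n - 1]) / 2
mu[n] = (eps[:n - 1].sum(axis=0) + eps[n - 1]) / 2

for i in range(1, n - 1):
    v1 = (mu[i] - mu[n]) @ mu[n]
    v2 = (mu[i] - (mu[n - 1] + mu[n]) / 2) @ (mu[n - 1] + mu[n])
    assert np.isclose(v1, (2 * i - n) / 4)
    assert np.isclose(v2, (2 * i - (n - 1)) / 2)
print("D_n formulas verified for n =", n)
```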
• For E8,
    α1 = 1/2(ε1 − ε2 − · · · − ε7 + ε8),   α2 = ε1 + ε2,   αi = ε_{i−1} − ε_{i−2}, i = 3, . . . , 8,
for an orthonormal basis (up to a scaling of the inner product) {εi : i = 1, . . . , 8}. Hence
    µ′1 = 2ε8,
    µ′2 = 1/2(ε1 + · · · + ε7 + 5ε8),
    µ′3 = 1/2(−ε1 + ε2 + · · · + ε7 + 7ε8),
    µ′i = ε_{i−1} + · · · + ε7 + (9 − i)ε8,   i = 4, . . . , 8.
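Since the dual basis is characterized by (µ′i | αj) = δij, the formulas above can be cross-checked by solving a linear system; a short sketch of mine (spot-checking a few of the µ′i) follows.

```python
# Cross-check of the E_8 dual basis: solve (mu'_i | alpha_j) = delta_ij for the simple roots above.
import numpy as np

eps = np.eye(8)
alpha = np.zeros((8, 8))
alpha[0] = (eps[0] - eps[1:7].sum(axis=0) + eps[7]) / 2           # alpha_1
alpha[1] = eps[0] + eps[1]                                        # alpha_2
for i in range(3, 9):                                             # alpha_i = eps_{i-1} - eps_{i-2}
    alpha[i - 1] = eps[i - 2] - eps[i - 3]

mu = np.linalg.solve(alpha, np.eye(8)).T    # row i is mu'_{i+1}

assert np.allclose(mu[0], 2 * eps[7])                              # mu'_1 = 2 eps_8
assert np.allclose(mu[1], (eps[:7].sum(axis=0) + 5 * eps[7]) / 2)  # mu'_2
assert np.allclose(mu[3], eps[2:7].sum(axis=0) + 5 * eps[7])       # mu'_4 (i = 4)
print("E_8 dual basis formulas check out")
```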
For any i = 2, . . . , 6,
• For E7 (with the εi as in the E8 case):
    µ′1 = ε8 − ε7,
    µ′2 = 1/2(ε1 + · · · + ε6 + 2(ε8 − ε7)),
    µ′3 = 1/2(−ε1 + ε2 + · · · + ε6 + 3(ε8 − ε7)),
    µ′4 = ε3 + · · · + ε6 + 2(ε8 − ε7),
    µ′5 = ε4 + ε5 + ε6 + 3/2(ε8 − ε7),
    µ′6 = ε5 + ε6 + (ε8 − ε7),
    µ′7 = ε6 + 1/2(ε8 − ε7).
• For E6 (again with the εi as in the E8 case):
    µ′1 = 2/3(ε8 − ε7 − ε6),
    µ′2 = 1/2(ε1 + · · · + ε5 + (ε8 − ε7 − ε6)),
    µ′3 = 1/2(−ε1 + ε2 + · · · + ε5) + 5/6(ε8 − ε7 − ε6),
    µ′4 = ε3 + ε4 + ε5 + (ε8 − ε7 − ε6),
    µ′5 = ε4 + ε5 + 2/3(ε8 − ε7 − ε6),
    µ′6 = ε5 + 1/3(ε8 − ε7 − ε6).
Moreover,
    (µ′3 − µ′1 | µ′1) > 0,   (µ′5 − µ′6 | µ′6) > 0,   (µ′4 − µ′2 | µ′2) > 0.
• For F4: here
    α1 = ε2 − ε3,   α2 = ε3 − ε4,   α3 = ε4,   α4 = 1/2(ε1 − ε2 − ε3 − ε4),
and
    µ′1 = ε1 + ε2,   µ′2 = 2ε1 + ε2 + ε3,   µ′3 = 3ε1 + ε2 + ε3 + ε4,   µ′4 = 2ε1,
so
    (µ′2 − µ′1 | µ′1) > 0,   (µ′3 − µ′4 | µ′4) > 0.
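Again, a two-line numerical confirmation of these inequalities (mine, with (εi|εj) = δij):

```python
# The two F_4 inequalities above, checked numerically.
import numpy as np

e = np.eye(4)
mu1, mu2, mu3, mu4 = e[0] + e[1], 2 * e[0] + e[1] + e[2], 3 * e[0] + e[1] + e[2] + e[3], 2 * e[0]
assert (mu2 - mu1) @ mu1 > 0    # equals 1
assert (mu3 - mu4) @ mu4 > 0    # equals 2
print("F_4: the second and third nodes cannot be the shaded one")
```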
(xi) The assertions in the third column of Tables A.1 and A.2 follow by straightforward computations, similar to the ones used for the description of the exceptional simple Lie algebra of type F4 in Chapter 2, § 8. (Some more information will be given in the next section.) The involutive automorphisms that appear in these tables for each type are pairwise nonconjugate, since their fixed subalgebras are not isomorphic.
(see Equation (6.5) in Chapter 2). Let τ be the associated compact conjugation:
τ (x) = −x̄t . For any Vogan diagram, we must find an involutive automorphism ϕ ∈
AutC sln+1 (C) associated to it and that commutes with τ . Then σ = ϕτ is the conjuga-
tion associated to the corresponding real form.
(b) Let ap = diag(1, . . . , 1, −1, . . . , −1) be the diagonal matrix with p 1's and (n + 1 − p) −1's. Then ap² = I_{n+1} (the identity matrix). The involutive automorphism ϕp : x ↦ ap x ap = ap x ap^{−1} of sl_{n+1}(C) commutes with τ, and its fixed subalgebra is formed by the block diagonal matrices with two blocks of size p and n + 1 − p, so S^{ϕp} ≅ slp(C) ⊕ sl_{n+1−p}(C) ⊕ Z, where Z is a one-dimensional center. Moreover, the usual Cartan subalgebra H of the diagonal matrices in S (see Equation (6.4) in Chapter 2) contains a Cartan subalgebra of the fixed part and is invariant under ϕp. Here xi = E_{i,i+1} (the matrix with a 1 in the (i, i + 1) position and 0's elsewhere). Then ϕp(xp) = −xp, while ϕp(xj) = xj for j ≠ p, so the associated Vogan diagram is
    ◦ ◦ ◦ • ◦ ◦ ◦    (the node in position p shaded)
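The stated properties of ϕp are easy to verify numerically. The following sketch (mine; the choice n = 4, p = 2 is only for illustration) checks that ϕp is an involutive automorphism of sl_{n+1}(C) commuting with τ, and that its fixed points have vanishing off-diagonal blocks.

```python
# Numerical check of phi_p: x -> a_p x a_p on sl_{n+1}(C), and of its compatibility with tau.
import numpy as np

n, p = 4, 2
a_p = np.diag([1.0] * p + [-1.0] * (n + 1 - p))

phi_p = lambda x: a_p @ x @ a_p          # a_p is its own inverse
tau = lambda x: -x.conj().T              # the compact conjugation
bracket = lambda u, v: u @ v - v @ u

rng = np.random.default_rng(1)
def random_sl():
    z = rng.standard_normal((n + 1, n + 1)) + 1j * rng.standard_normal((n + 1, n + 1))
    return z - np.trace(z) / (n + 1) * np.eye(n + 1)     # traceless, i.e. in sl_{n+1}(C)

x, y = random_sl(), random_sl()
assert np.allclose(phi_p(phi_p(x)), x)                                   # involutive
assert np.allclose(phi_p(bracket(x, y)), bracket(phi_p(x), phi_p(y)))    # automorphism
assert np.allclose(phi_p(tau(x)), tau(phi_p(x)))                         # commutes with tau
fixed = (x + phi_p(x)) / 2               # projection onto the fixed subalgebra
assert np.allclose(fixed[:p, p:], 0) and np.allclose(fixed[p:, :p], 0)   # block diagonal
print("phi_p checks passed")
```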
Now, with σp = ϕp τ , the associated real form is
which is invariant under ϕ∗ and shows that the associated Vogan diagram is
    ◦ ◦ ◦ ◦ ◦ ◦    (no node shaded; the arrows pair the nodes symmetric with respect to the middle of the diagram)
(d) In the same vein, with n = 2r − 1, consider the symmetric matrix d = [ 0  Ir ; Ir  0 ] and the involutive automorphism ϕd : x ↦ −d x^t d = −d x^t d^{−1}. Here the fixed subalgebra is so_{2r}(C), and ϕd*(εi) = −ε_{r+i} for i = 1, . . . , r. A suitable system of simple roots ∆′ can be chosen for which the associated Vogan diagram is
    ◦ ◦ ◦ • ◦ ◦ ◦    (arrows as in the previous diagram; the middle node shaded)
(e) Finally, with n = 2r − 1, consider the skew-symmetric matrix c = [ 0  Ir ; −Ir  0 ] and the involutive automorphism ϕc : x ↦ c x^t c = −c x^t c^{−1}. Here the fixed subalgebra is sp_{2r}(C), and ϕc*(εi) = −ε_{r+i} for i = 1, . . . , r. The same ∆′ of the previous item works here, but ϕc(E_{1,r+1}) = E_{1,r+1}, which shows that the associated Vogan diagram is
    ◦ ◦ ◦ ◦ ◦ ◦ ◦    (arrows as before; no node shaded)
With σc = ϕc τ , one gets the real form