SYMMETRIES, PARTICLES, AND FIELDS
These notes are for use with the Cambridge University Part III lecture course Symmetries, Particles, and
Fields, given in the Department of Applied Mathematics and Theoretical Physics in Michaelmas Term
2022. They draw on the notes of previous lecturers, in particular Ben Allanach.
Introduction
Discrete groups
Representations
Relativistic symmetries
Bibliography
Introduction
A group G is a set of elements
G = { g1 , g2 , . . . , gk , . . . } . (2.1)
The group operation · must satisfy the following axioms.
Closure:
∀ g1 , g2 ∈ G , g2 · g1 ∈ G . (2.2)
Associativity:
g3 · ( g2 · g1 ) = ( g3 · g2 ) · g1 (2.3)
∀ g1 , g2 , g3 ∈ G.
Identity: there exists an element e ∈ G such that
e · g = g · e = g . (2.4)
Inverses: for every g ∈ G there exists an element g⁻¹ ∈ G such that
g · g⁻¹ = e = g⁻¹ · g . (2.5)
One can also show, for example by contradiction, that for every g the inverse g⁻¹ is unique.
Note that we have not insisted that the group operation be commutative. If the property
g2 · g1 = g1 · g2 (2.6)
holds for all g1 , g2 ∈ G, the group is said to be abelian.
Examples
Some examples of infinite groups
The composition of two symmetry transformations g1 and g2, written
g2 ◦ g1 = g2 g1 (2.7)
and read right-to-left, also leaves the physical system invariant. The
collection of all symmetry transformations { g1 , g2 , . . . , gk , . . .} =
G forms a group under composition. The group axioms are all satisfied: closure holds by construction; composition of these transformations is associative; the identity is the “do nothing” transformation; and we assume that any transformation can be undone, so the corresponding group element has its inverse in G.
Let’s focus on the symmetries of a two-sided equilateral triangle. The triangle is invariant under rotation by an angle 2π/3 and under reflection about any of the three lines connecting a vertex to the triangle’s centre (see Fig. 2.1). The symmetry group is the dihedral group D3 .
Figure 2.1: Actions of D3 on an equilateral triangle.
Let us denote the anticlockwise rotation by 2π/3 by r, and a reflection about the vertical axis by m. The key relation between the two generators is m r = r⁻¹ m. In fact, the dihedral group describing the symmetries of an n-gon is the generalization
Dn = ⟨ r, m | rⁿ = m² = e ; m r = r⁻¹ m ⟩ . (2.9)
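As a quick aside, the relations in (2.9) can be checked numerically for D3; below is a minimal sketch using numpy, realizing r and m as 2 × 2 rotation and reflection matrices (this concrete realization is an illustrative choice, not part of the notes).

    import numpy as np

    # Generators of D3 acting on the plane: r = rotation by 2*pi/3,
    # m = reflection about the vertical axis.
    c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
    r = np.array([[c, -s], [s, c]])
    m = np.array([[-1.0, 0.0], [0.0, 1.0]])
    e = np.eye(2)

    # Defining relations: r^3 = m^2 = e and m r = r^{-1} m.
    assert np.allclose(np.linalg.matrix_power(r, 3), e)
    assert np.allclose(m @ m, e)
    assert np.allclose(m @ r, np.linalg.inv(r) @ m)

    # The six elements {e, r, r^2, m, rm, r^2 m} close under composition.
    elements = [e, r, r @ r, m, r @ m, r @ r @ m]
    for a in elements:
        for b in elements:
            assert any(np.allclose(a @ b, g) for g in elements)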
In this section, let us recall some concepts which will be useful later.
gH = { gh | h ∈ H } . (2.14)
Hg = { hg | h ∈ H } . (2.15)
It is a theorem that, for all g ∈ G, the left and right cosets are
equal if and only if H is normal:
gH = Hg , ∀ g ∈ G ⇐⇒ H ⊴ G . (2.16)
where we have noted that {em1 , em2 , em3 } is the same set as {rm1 , rm2 , rm3 } and as {r²m1 , r²m2 , r²m3 }. Let us label the two cosets
{ I, a } ≅ ⟨ a | a² = e ⟩ = C2 (2.20)
Cn = ⟨ a | aⁿ = e ⟩ . (2.21)
G = { gi } with i = 1, 2, . . . , | G | . (3.1)
3.1 Manifolds
Figure: two overlapping charts, with coordinate maps ϕα and ϕβ defined on patches containing 𝒪α ∩ 𝒪β , and the transition function ϕβ ∘ ϕα⁻¹ : ϕα(𝒪α ∩ 𝒪β ) → ϕβ(𝒪α ∩ 𝒪β ) between the images 𝒰α and 𝒰β .
φ1 : (S1 − P) → (0, 2π ) , φ1 = θ1
φ2 : (S1 − Q) → (−π, π ) , φ2 = θ2 . (3.2)
Just like discrete groups, Lie groups can have subgroups. If the
subgroup is continuous, and the underlying manifold is a submani-
fold of the original, then the subgroup is a Lie subgroup.
Matrix groups
Most of what we will discuss in this course concerns matrix Lie groups: those Lie groups whose elements are square, invertible (obviously!) matrices, and whose group operation is matrix multiplication.
The n-dimensional general linear group, GL(n, F ) is the set of
invertible n × n matrices over a field F. This is a subset of the set
of all n × n matrices over F, Matn ( F ). We shall normally take F to be the real numbers R or the complex numbers C. The dimension of GL(n, R) is n², since the entries of the square matrix are n² independent real numbers. The real dimension of GL(n, C) is 2n²; this is what we will usually mean when we write about dimensionality. Sometimes one might read that the complex dimension of GL(n, C) is n², in which case the author is counting the number of complex parameters needed to specify a group element.
Let us introduce the most important subgroups of GL(n, R) and
GL(n, C):
Pf A = (1/(2ⁿ n!)) ε_{i1 i2 ... i2n} A_{i1 i2} · · · A_{i2n−1 i2n} , (3.11)
where ε is the 2n-dimensional antisymmetric symbol with ε_{1...2n} = 1.⁷
⁷ Note that (Pf A)² = det A.
The crucial step is to show
( R~v2 , R~v1 ) := ~v2ᵀ Rᵀ R ~v1 = ( ~v2 , ~v1 ) . (3.15)
Similarly, for vectors ~v1 and ~v2 in Cⁿ, the inner product ⟨~v2 |~v1 ⟩ := ~v2† ~v1 is preserved under U(n) transformations.
Let us consider the Lie group SO(2), the group of rotations of
vectors in R2 :
SO(2) = { R(θ) = ( cos θ , −sin θ ; sin θ , cos θ ) | θ ∈ [0, 2π) } . (3.16)
The group properties are easily checked. Closure requires that the product of two rotations is another rotation; indeed R(θ2)R(θ1) = R(θ1 + θ2) (with the angle understood mod 2π), which can be shown explicitly, and the product depends smoothly on θ1 and θ2. The identity corresponds to R(0), and R(θ)⁻¹ = R(2π − θ). Of course associativity follows from that of matrix multiplication.
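A quick numerical check of the composition rule and the inverse (a minimal sketch using numpy, not a substitute for the explicit trigonometric proof):

    import numpy as np

    def R(theta):
        """SO(2) rotation matrix of eq. (3.16)."""
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    t1, t2 = 0.7, 2.1
    assert np.allclose(R(t2) @ R(t1), R(t1 + t2))                # composition adds angles
    assert np.allclose(np.linalg.inv(R(t1)), R(2 * np.pi - t1))  # R(theta)^{-1} = R(2*pi - theta)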
Rotations of vectors in R3 are described by the Lie group SO(3).
Three-dimensional rotations are specified by a unit vector ~n ∈ S2
corresponding to the axis of rotation and an angle θ; therefore,
dim SO(3) = 3. Note that a rotation of angle θ ∈ [−π, 0] about
~n is equivalent to a rotation of angle −θ about −~n, so it suffices
to restrict θ to [0, π ] with ~n ∈ S² . We can depict the manifold
of SO(3) as a ball of radius π in R3 , each point θ~n corresponding
to an angle and direction. Antipodal points on the surface of the
ball are identified with each other: for any ~n rotations clockwise
and anticlockwise by an angle π are equivalent, so we must have
π~n = −π~n.
Later we will show that elements of SO(3) can be written as
Note that the manifolds of SO(2) and SO(3) are compact, that is,
they have finite volume.
These matrices act on vectors in R^{n+m} and preserve the scalar product ~v1ᵀ η ~v2 for vectors ~v1 and ~v2 in the vector space. The introduction of the signs in η does not change the dimensionality of the group manifold compared to the corresponding orthogonal group, so dim O(n, m) = dim O(n + m).
Examples familiar from the theory of special relativity are
SO(1, 1) = { ( cosh ψ , sinh ψ ; sinh ψ , cosh ψ ) | ψ ∈ R } (3.20)
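As a sketch of the defining property (assuming η = diag(1, −1) for this 2-dimensional case), one can verify numerically that these matrices satisfy ΛᵀηΛ = η and that rapidities add under composition:

    import numpy as np

    def Lambda(psi):
        """SO(1,1) element of eq. (3.20), parametrized by the rapidity psi."""
        return np.array([[np.cosh(psi), np.sinh(psi)],
                         [np.sinh(psi), np.cosh(psi)]])

    eta = np.diag([1.0, -1.0])
    L1, L2 = Lambda(0.3), Lambda(1.2)
    assert np.allclose(L1.T @ eta @ L1, eta)      # preserves the indefinite scalar product
    assert np.allclose(L2 @ L1, Lambda(1.5))      # rapidities add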
g( x ) ∈ G , with x := ( x1 , x2 , . . . , x n ) ∈ Rn , (3.21)
zr = ϕr ( x, y) (3.23)
ϕr ( x̄, x ) = 0 = ϕr ( x, x̄ ) . (3.25)
Associativity,
implies
ordinates of p.
Consider a differentiable function f ( x1 , . . . , x n ) (abbreviated f ( x )
from now on) on M, or at least in some neighbourhood around p.
Along the curve there is a differentiable function h(λ) such that
dh/dλ = (dxⁱ/dλ) (∂ f /∂xⁱ) , (3.29)
where we sum repeated indices. This should be true for any func-
tions f and h, so we can identify
d/dλ = (dxⁱ/dλ) ∂/∂xⁱ . (3.30)
We will see that d/dλ is a vector in a tangent space defined in
relation to the point p in M, denoted Tp (M). The set {∂/∂xⁱ } forms a basis for the tangent space.
Consider another curve through p, say x̃ (µ), in which case we
derive a differential operator
d/dµ = (dx̃ⁱ/dµ) ∂/∂xⁱ . (3.31)
Let us define Lie algebras first and later make the connection with Lie groups. Recall that a vector space V over a field F (usually F = R or C for us) is a set equipped with addition and scalar multiplication obeying the following properties. For X, Y ∈ V and α, β ∈ F:
α( X + Y ) = αX + αY (3.34)
(α + β) X = αX + βX (3.35)
(αβ) X = α( βX ) (3.36)
1X = X (scalar identity) (3.37)
~0 + X = X (vector identity) (3.38)
1. Antisymmetry:
[ X, Y ] = −[ Y, X ] . (3.39)
2. Jacobi identity
3. Linearity. For α, β ∈ F,
[ X, Y ] := X ∗ Y − Y ∗ X , (3.42)
[ Ta , Tb ] = f c ab Tc . (3.43)
f e ad f d bc + f e cd f d ab + f e bd f d ca = 0 . (3.44)
[ X, Y ] = X a Y b f c ab Tc . (3.45)
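As a concrete check of (3.44), here is a minimal numerical sketch using the structure constants f^c_{ab} = ε_abc, which we will meet for su(2) in (5.3):

    import numpy as np

    # Structure constants f^c_{ab} = eps_{abc}, stored as f[a, b, c].
    f = np.zeros((3, 3, 3))
    for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        f[a, b, c], f[b, a, c] = 1.0, -1.0

    # Jacobi identity (3.44): f^e_{ad} f^d_{bc} + f^e_{cd} f^d_{ab} + f^e_{bd} f^d_{ca} = 0.
    jacobi = (np.einsum('ade,bcd->abce', f, f)
              + np.einsum('cde,abd->abce', f, f)
              + np.einsum('bde,cad->abce', f, f))
    assert np.allclose(jacobi, 0.0)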
Illustrative examples
Recall that SO(2) is 1-dimensional, so we have a single coordi-
nate θ. Define a “curve” θ (λ) in this 1-dimensional space such that
θ (0) = 0. Then the “curve” in SO(2) is
g(λ) = ( cos θ(λ) , −sin θ(λ) ; sin θ(λ) , cos θ(λ) ) , (3.46)
0 = d/dλ [ Mᵀ(λ) M(λ) ]
  = (dMᵀ/dλ) M + Mᵀ (dM/dλ)
  = Ṁᵀ M + Mᵀ Ṁ , (3.50)
n(n − 1)/2 ≤ dim L(SO(n)) ≤ dim Skewn = n(n − 1)/2 . (3.54)
By the “sandwich theorem” dim L( G ) = dim Skewn , so L( G ) =
Skewn . (However, see § 3.6 for a more satisfying proof.)
Note that L(O(n)) = L(SO(n)), since a matrix R ∈ O(n) which is in a neighbourhood of the identity also has det R = 1. The matri-
ces in O(n) with determinant −1 correspond to a disconnected part
of the group manifold.
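A numerical sketch of this, anticipating the exponential map (3.81): a random antisymmetric matrix exponentiates into SO(n).

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n))
    X = A - A.T           # antisymmetric: an element of L(SO(n)) = Skewn
    M = expm(X)           # exponentiate into the group

    assert np.allclose(M.T @ M, np.eye(n))        # orthogonal
    assert np.isclose(np.linalg.det(M), 1.0)      # and in the identity component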
Turning to the unitary groups SU (n), let M (λ) be a curve in
SU (n) with M (0) = I. For small λ let us write
I = M† M (3.56)
  = I + λ( X + X† ) + O(λ²) , (3.57)
we have
1 = det M (3.59)
  = 1 + λ Tr X + O(λ²) . (3.60)
For U (n), the determinant can be any phase eiθ , so X need not be
traceless and
L(U(n)) = { X ∈ Matn (C) | X† = −X } . (3.62)
ġ3 |0 = (d/dt) [ g2(µ̃t) g1(λ̃t) ] |0
      = [ λ̃ g2 ġ1 + µ̃ ġ2 g1 ] |0
      = λ̃ X1 + µ̃ X2 (3.67)
Note that the steps above could equally well have been done taking
µ̄ = −1, which would result in a minus sign in front of the t² term.
Thus we can reparametrize this curve as
h̃(s) = e + s[ X1 , X2 ] + . . . . (3.71)
g ( t0 + ε ) = g ( t0 ) h p ( ε ) . (3.73)
We can identify X p with g(t0 )−1 ġ(t0 ). In other words, g(t0 )−1 ġ(t0 ) ∈
L( G ) for any t0 . By a similar argument ġ(t0 ) g(t0 )−1 ∈ L( G ). This
g X ( t2 ) g X ( t1 ) = g X ( t2 + t1 ) = g X ( t1 + t2 ) = g X ( t1 ) g X ( t2 ) . (3.80)
X ↦ exp X . (3.81)
where
Z = X + Y + (t/2) [ X, Y ] + (t²/12) ( [ X, [ X, Y ]] + [ Y, [ Y, X ]] ) + O(t³) . (3.83)
The proof of this proceeds order-by-order in t and quickly becomes
tedious. This universal formula (3.82)–(3.83) shows that the group
structure of G near the identity e can be determined by the algebra
L ( G ).
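One can also check (3.82)–(3.83) numerically; the sketch below (with randomly chosen matrices, purely for illustration) confirms that with the truncated Z the products exp(tX) exp(tY) and exp(tZ) agree up to O(t⁴), i.e. the neglected terms in tZ start at that order.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)
    X, Y = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
    comm = lambda A, B: A @ B - B @ A

    def bch_error(t):
        """Difference between exp(tX) exp(tY) and exp(tZ) with Z truncated as in (3.83)."""
        Z = (X + Y + (t / 2) * comm(X, Y)
             + (t**2 / 12) * (comm(X, comm(X, Y)) + comm(Y, comm(Y, X))))
        return np.linalg.norm(expm(t * X) @ expm(t * Y) - expm(t * Z))

    # Halving t reduces the error by about 2^4 = 16: the first neglected term is O(t^4).
    print(bch_error(1e-2) / bch_error(5e-3))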
For example consider the Lie group O(n). Let X ∈ L(O(n)) and M = exp X. We want to show that M ∈ O(n):
Mᵀ = (exp X)ᵀ
   = exp Xᵀ
   = exp(−X) = M⁻¹ . (3.84)
Tr X + Tr X T = Tr X − Tr X = 0 (3.85)
while also
Tr X + Tr X T = Tr( X + X T ) = 2 Tr X . (3.86)
γ̇(0) = A (3.90)
D ( g2 g1 ) = D ( g2 ) D ( g1 ) . (4.3)
This property (4.3) implies that the identity of the group maps to
the identity map on V, IN if we have a matrix representation, and
that the inverses map to inverses:
D(e) = IN (4.4)
D(g⁻¹) = D(g)⁻¹ . (4.5)
Note that this is not enough to imply that D is injective, i.e. that D(g1) = D(g2) ⟹ g1 = g2. (The kernel of D consists of the elements of G which map to IN.)
D0 ( g) = 1 , ∀g ∈ G , (4.6)
D f ( g) = g , ∀g ∈ G , (4.7)
Adg X := gXg−1 .
• Group operation:
Adg2 g1 X = ( g2 g1 ) X ( g2 g1 )−1
= g2 g1 Xg1−1 g2−1
= Adg2 Adg1 X . (4.9)
( D (α) f ) ( x ) = f ( x − α) . (4.12)
v ↦ d( X )v . (4.13)
d( X ) = 0 , ∀ X ∈ L( G ) . (4.15)
d f (X) = X , ∀ X ∈ L( G ) . (4.16)
dim d f = n.
The adjoint representation of (any) Lie algebra, adX : L( G ) →
L( G ), is given by
adX Y = [ X, Y ] . (4.17)
g(t) = e + tX + . . . ∈ G, (4.18)
g1 (t) = e + tX1 + t2 W1 + . . .
g2 (t) = e + tX2 + t2 W2 + . . . (4.20)
and two ways of writing the representation of the product g1−1 g2−1 g1 g2 :
D ( g1−1 g2−1 g1 g2 ) = D (e + t2 [ X1 , X2 ] + . . .)
= I + t2 d([ X1 , X2 ]) + . . . (4.22)
as required.
Let us see this for the case of the adjoint representations of ma-
trix Lie groups and algebras. For g ∈ G and X, Y ∈ L( G ),
Adg Y = gYg−1
= ( I + tX )Y ( I − tX ) + . . .
= Y + t[ X, Y ] + . . .
= ( I + tadX + . . .)Y (4.25)
D ( g2 g1 ) = D ( g2 ) D ( g1 ) . (4.26)
D2 ( g) = R D1 ( g) R−1 ∀ g ∈ G
or d2 ( X ) = S d1 ( X ) S−1 ∀ X ∈ L( G ) . (4.27)
d( X )w ∈ W (4.28)
U ⊕ W = { u ⊕ w | u ∈ U, w ∈ W } (4.29)
( u 1 ⊕ w1 ) + ( u 2 ⊕ w2 ) = ( u 1 + u 2 ) ⊕ ( w1 + w2 ) (4.30)
λ(u ⊕ w) = (λu) ⊕ (λw) . (4.31)
( D (α) f )( x ) = f ( x − α) . (4.33)
( D (2πk) f )( x ) = f ( x ) , ∀f . (4.34)
Thus the vector space V can be written as the infinite direct sum
V = . . . ⊕ W₋₁ ⊕ W₀ ⊕ W₁ ⊕ W₂ ⊕ . . . =: ⊕_{n=−∞}^{∞} Wn , (4.38)
where we introduce the symbol for a direct sum in the last step.
Note that each invariant subspace occurs exactly once in V.
Tensor product
Given vector spaces V and W, the tensor product space V ⊗ W is
spanned by vectors of the form v ⊗ w ∈ V ⊗ W, where v and w are
basis elements of V and W, respectively. The tensor product of two
vectors v ∈ V and w ∈ W obeys the following distributive properties
v ⊗ ( λ 1 w1 + λ 2 w2 ) = λ 1 v ⊗ w1 + λ 2 v ⊗ w2
( λ1 v1 + λ2 v2 ) ⊗ w = λ1 v1 ⊗ w + λ2 v2 ⊗ w . (4.39)
λ 1 v 1 ⊗ w1 + λ 2 v 2 ⊗ w2 ∈ V ⊗ W (4.41)
Ta = −(i/2) σa , (5.2)
where the σa are the Pauli matrices.¹⁶
¹⁶ In most physics literature, the convention is to use Hermitian generators ta = −iTa so that exp( Xᵃ Ta ) becomes exp( iXᵃ ta ).
Recall the identity σa σb = I δab + i ε_abc σc . Then
[ Ta , Tb ] = −(1/4) ( σa σb − σb σa )
            = −(1/4) ( i ε_abc − i ε_bac ) σc
            = ε_abc Tc . (5.3)
We can see that the structure constants for su(2) are f^c_{ab} = ε_abc .
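A minimal numerical confirmation of (5.2)–(5.3) with the Pauli matrices (an illustrative sketch):

    import numpy as np

    sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
             np.array([[0, -1j], [1j, 0]], dtype=complex),
             np.array([[1, 0], [0, -1]], dtype=complex)]
    T = [-0.5j * s for s in sigma]        # T_a = -(i/2) sigma_a, eq. (5.2)

    eps = np.zeros((3, 3, 3))
    for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[a, b, c], eps[b, a, c] = 1.0, -1.0

    # [T_a, T_b] = eps_{abc} T_c, eq. (5.3).
    for a in range(3):
        for b in range(3):
            comm = T[a] @ T[b] - T[b] @ T[a]
            assert np.allclose(comm, sum(eps[a, b, c] * T[c] for c in range(3)))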
The elements of the Lie algebra of SO(3) are the 3 × 3 antisym-
metric, real matrices
A basis is
T̃1 = ( 0, 0, 0 ; 0, 0, −1 ; 0, 1, 0 ) ,  T̃2 = ( 0, 0, 1 ; 0, 0, 0 ; −1, 0, 0 ) ,  T̃3 = ( 0, −1, 0 ; 1, 0, 0 ; 0, 0, 0 ) , (5.5)
or, more briefly, ( T̃a )_bc = −ε_abc . After using ε_acd ε_bde = −δab δce + δae δbc in one direction and then the other, we see that [ T̃a , T̃b ] = ε_abc T̃c , the same commutation relations as for su(2).
with ( a0 ,~a) real and a20 + |~a|2 = 1. This manifold is then the unit
sphere in R4 , namely S3 .
Recall that the centre of a group is the set of all x ∈ G such that
xg = gx ∀ g ∈ G . (5.7)
The set of all such cosets forms a quotient group SU (2)/Z2 (under
coset multiplication) whose manifold is S3 , now with antipodes
identified.
We can draw the manifold as the upper half of S3 (i.e. a0 ≥ 0)
with opposite points on the equator identified. This is just a curved
version of the SO(3) manifold. Therefore,
SO(3) ≅ SU (2)/Z2 . (5.9)
ρ( A) = R where R has components Rij = (1/2) Tr(σi A σj A† ) . (5.10)
The map is 2-to-1, since ρ(− A) = ρ( A), and is called a double
covering of SO(3). That is, SU (2) is the double cover of SO(3).
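To make the covering map concrete, here is a minimal numerical sketch of (5.10), checking that ρ(A) lies in SO(3) and that ρ(−A) = ρ(A) for a randomly chosen A ∈ SU(2):

    import numpy as np
    from scipy.linalg import expm

    sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
             np.array([[0, -1j], [1j, 0]], dtype=complex),
             np.array([[1, 0], [0, -1]], dtype=complex)]

    def rho(A):
        """The 2-to-1 map SU(2) -> SO(3) of eq. (5.10)."""
        return np.array([[0.5 * np.trace(sigma[i] @ A @ sigma[j] @ A.conj().T).real
                          for j in range(3)] for i in range(3)])

    # A random SU(2) element: the exponential of a traceless antihermitian matrix.
    rng = np.random.default_rng(2)
    x = rng.standard_normal(3)
    A = expm(-0.5j * sum(x[a] * sigma[a] for a in range(3)))

    R = rho(A)
    assert np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
    assert np.allclose(rho(-A), R)        # A and -A map to the same rotation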
There is a theorem which states that every Lie algebra is the
Lie algebra of exactly one simply-connected Lie group. Any other
Lie group with the same Lie algebra is covered by the simply con-
nected group.
V = { λ a Ta | λ a ∈ R } . (5.11)
VC = { λ a Ta | λ a ∈ C } . (5.12)
d( X + iY ) = d( X ) + id(Y ) (5.13)
d ( X ) = dC ( X ) (5.14)
where X ∈ g ⊂ gC .
A real form of a complex Lie algebra h is a real Lie algebra g
whose complexification is h, i.e. such that
gC = h . (5.15)
su(2)C = { λ a σa | λ a ∈ C } . (5.16)
Note that, while the elements of su(2) are the traceless, antihermi-
tian 2 × 2 matrices, the complexification breaks the antihermiticity
property, extending the algebra to all traceless matrices. This is just
the Lie algebra of SL(2, C). In fact this is true for su(n)C for any n, so we have
su(n)C ≅ sl(n, C) . (5.17)
These satisfy
[ H, E± ] = ±2E±
[ E+ , E− ] = H . (5.19)
ad H E± = ±2E± , (5.20)
ad H H = [ H, H ] = 0 . (5.21)
The process must terminate for some n = N, again since the representation is finite-dimensional. This implies a basis for the irrep
where we have
using (5.29) to obtain the last line. Again, the raising operator just
undoes the lowering operator up to a multiplicative factor. In gen-
eral we will have the form
rn = rn−1 + Λ − 2n + 2 (5.32)
rn = (Λ + 1 − n)n . (5.33)
r N +1 = 0 = (Λ − N )( N + 1) (5.34)
SΛ = { −Λ, −Λ + 2, . . . , Λ − 2, Λ } . (5.35)
We identify
d( H ) = 2J3
d( E± ) = J1 ± iJ2 (5.37)
DΛ ( g) = exp dΛ ( X ) . (5.38)
DΛ (− I ) = exp iπ dΛ ( H ) . (5.41)
where the L^{Λ''}_{Λ,Λ'} are nonnegative integers, viz. multiplicities (the Littlewood–Richardson coefficients), counting the number of times the irrep d_{Λ''} appears in the decomposition of the tensor product representation.
Given bases for the representation spaces VΛ and VΛ0 respec-
tively
Therefore, weights add in the tensor product to give the weight set
of the tensor product representation as
d Λ ⊗ d Λ 0 = d Λ + Λ 0 ⊕ d Λ + Λ 0 −2 ⊕ . . . ⊕ d | Λ − Λ 0 | . (5.51)
S1 = { −1, 1 }
S1,1 = { −2, 0, 0, 2 } . (5.52)
Thus we find
d1 ⊗ d1 = d2 ⊕ d0 (5.54)
or: the combination of two spin-½ states yields spin-1 and spin-0 states. Sometimes equations like (5.54) are written using the representations’ dimensions, e.g. 2 ⊗ 2 = 3 ⊕ 1.
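The weight bookkeeping in (5.51)–(5.52) is easy to automate; here is a minimal sketch that decomposes a product by repeatedly stripping off the highest remaining weight:

    from collections import Counter

    def weights(L):
        """Weight set S_Lambda = {-Lambda, -Lambda + 2, ..., Lambda}, eq. (5.35)."""
        return list(range(-L, L + 1, 2))

    def decompose(L1, L2):
        """Highest weights appearing in d_L1 (x) d_L2, cf. (5.51)."""
        product = Counter(w1 + w2 for w1 in weights(L1) for w2 in weights(L2))
        irreps = []
        while product:
            top = max(product)                  # highest remaining weight
            irreps.append(top)
            product -= Counter(weights(top))    # remove one copy of S_top
        return irreps

    print(decompose(1, 1))   # [2, 0]: d1 (x) d1 = d2 (+) d0, eq. (5.54)
    print(decompose(2, 1))   # [3, 1]: d2 (x) d1 = d3 (+) d1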
We write a 4-vector as xµ = ( x⁰ , { xⁱ } ). The Lorentz group is the set of transformations xµ ↦ x′µ which leave scalar products such as xµ ηµν xν invariant, where we take the Minkowski metric to be
ηµν := diag(1, −1, −1, −1) . (6.1)
x′µ = Λµ ν xν . (6.2)
x µ ηµν x ν = x σ Λµ σ ηµν Λν ρ x ρ
=⇒ ηρσ = Λµ σ ηµν Λν ρ
η = ΛT ηΛ . (6.3)
Taking the ρ = σ = 0 component of (6.3),
Λµ₀ ηµν Λν₀ = η₀₀ = 1
( Λ⁰₀ )² − ∑ᵢ ( Λⁱ₀ )² = 1
( Λ⁰₀ )² ≥ 1 (6.5)
1. Rotations:
Λ_R := ( 1 , 0 ; 0 , R ) with R ∈ SO(3) . (6.7)
2. Lorentz boosts:
Λ_B := ( cosh ψ , −nᵀ sinh ψ ; −n sinh ψ , I + nnᵀ (cosh ψ − 1) ) (6.8)
While the rotations form a subgroup, SO(3) < SO(1, 3)↑ , the
boosts do not. The boosts do not close under composition (at least
in any spacetime with more than 1 spatial dimension).
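Both statements are easy to check numerically; the sketch below (assuming the block form (6.8) and η = diag(1, −1, −1, −1)) verifies the Lorentz condition (6.3) for pure boosts and shows that two boosts along different axes do not compose to a pure boost, since every matrix of the form (6.8) is symmetric while their product is not.

    import numpy as np

    eta = np.diag([1.0, -1.0, -1.0, -1.0])

    def boost(n, psi):
        """Pure boost of eq. (6.8) with rapidity psi along the unit vector n."""
        n = np.asarray(n, dtype=float)
        L = np.eye(4)
        L[0, 0] = np.cosh(psi)
        L[0, 1:] = -n * np.sinh(psi)
        L[1:, 0] = -n * np.sinh(psi)
        L[1:, 1:] = np.eye(3) + np.outer(n, n) * (np.cosh(psi) - 1)
        return L

    Bx, By = boost([1, 0, 0], 0.5), boost([0, 1, 0], 0.8)
    for L in (Bx, By):
        assert np.allclose(L.T @ eta @ L, eta)     # each boost is a Lorentz transformation
    print(np.allclose(By @ Bx, (By @ Bx).T))       # False: the product is not a pure boost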
7.1 Definitions
Here we define a few key terms required in the rest of the chapter.
This is an ideal of g.
• The centre of g is
J (g) := { X ∈ g | [ X, Y ] = 0, ∀Y ∈ g } . (7.2)
i : V×V → F (7.3)
i (v, w) ≠ 0 (7.4)
defining M( X, Y )^e_c := Xᵃ Yᵇ f^d_{bc} f^e_{ad} . Taking the trace gives κ( X, Y ) = Xᵃ Yᵇ κ_ab ∈ F with
κ_ab := f^d_{bc} f^c_{ad} . (7.7)
Note that κba = κ ab . The Killing form κ ( X, Y ) meets all the criteria
for being an inner product on the Lie algebra.
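For su(2), with f^c_{ab} = ε_abc from (5.3), the components (7.7) give κ_ab = −2δ_ab; a minimal numerical sketch:

    import numpy as np

    eps = np.zeros((3, 3, 3))
    for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[a, b, c], eps[b, a, c] = 1.0, -1.0

    # kappa_ab = f^d_{bc} f^c_{ad} with f^c_{ab} = eps_{abc}:
    # sum over c, d of eps_{bcd} eps_{adc}.
    kappa = np.einsum('bcd,adc->ab', eps, eps)
    print(kappa)   # prints -2 times the 3x3 identity: the Killing form of su(2) is negative definite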
The adjoint representation describes how elements Z ∈ g act on g. For X, Y ∈ g
X ↦ X + adZ X
Y ↦ Y + adZ Y . (7.8)
κ ( X, Y ) ↦ κ ( X + adZ X, Y + adZ Y )
≈ κ ( X, Y ) + κ (adZ X, Y ) + κ ( X, adZ Y ) . (7.9)
κ (adZ X, Y ) + κ ( X, adZ Y ) = 0
i.e. κ ([ Z, X ], Y ) + κ ( X, [ Z, Y ]) = 0 . (7.10)
ad[ Z,W ] U = [[ Z, W ], U ]
= −[[W, U ], Z ] − [[U, Z ], W ]
= [ Z, [W, U ]] − [W, [ Z, U ]]
= (adZ adW − adW adZ )U . (7.11)
Thus,
[ X, A] ∈ a . (7.14)
We will show that this implies the Killing form is degenerate. Let
us choose a basis { Ti } for a and extend it to g:
[ Ti , Tj ] = 0 ⟹ f^B_{ij} = 0 , (7.16)
Eigenvectors of ad H
As we did for su(2) in (5.21), we wish to study the eigenvectors of
the adjoint representation of h. For any H, H′ ∈ h
ad_H H′ = [ H, H′ ] = 0 (7.21)
which implies
[ ad_H , ad_H′ ] = 0 . (7.22)
{ Hi | i = 1, . . . , r } (7.23)
ad Hi Eα = [ Hi , Eα ] = αi Eα (7.24)
[ H, Eα ] = ρi [ Hi , Eα ] = ρi αi Eα =: α( H ) Eα (7.26)
α( H + H′ ) Eα = [ H + H′ , Eα ]
             = [ H, Eα ] + [ H′ , Eα ]
             = ( α( H ) + α( H′ ) ) Eα . (7.27)
{ Hi | i = 1, . . . , r } ∪ { Eα | α ∈ Φ } . (7.28)
[ Hi , Hj ] = 0 (7.29)
[ Hi , Eα ] = αi Eα (7.30)
[ Eα , Eβ ] = Nα,β Eα+β if α + β ∈ Φ ; κ( Eα , Eβ ) Hα if α + β = 0 ; 0 otherwise . (7.31)
[ Hα , Eβ ] = (κ −1 )ij α j [ Hi , Eβ ]
= (κ −1 )ij α j β i Eβ
= (α, β) Eβ . (7.32)
[ hα , h β ] = 0 (7.34)
[ hα , eβ ] = ( 2(α, β)/(α, α) ) eβ (7.35)
[ eα , eβ ] = nα,β eα+β if α + β ∈ Φ ; hα if α + β = 0 ; 0 otherwise . (7.36)
Φ = Φ+ ∪ Φ− . (7.37)
α ∈ Φ+ =⇒ −α ∈ Φ− (7.38)
α, β ∈ Φ+ =⇒ α + β ∈ Φ+ . (7.39)
α = (α − β) + β (7.40)
β = ( β − α) + α (7.41)
ℓα,β = 1 − 2(α, β)/(α, α) ∈ Z₊ . (7.42)
Recall the root string is the set
Sα,β = { β + nα | n ∈ Z, n₋ ≤ n ≤ n₊ } (7.43)
with
n₊ + n₋ = − 2(α, β)/(α, α) ∈ Z . (7.44)
From (i) we know β − α is not a root, so n₋ = 0. Then
n₊ = − 2(α, β)/(α, α) ∈ Z≥0 . (7.45)
We add 1 to obtain the length
`α,β = n+ − n− + 1 ∈ Z+ (7.46)
(v) The next three properties show that the simple roots form a basis for h∗_R . Anticipating this, let us denote the set of simple roots as
ΦS = { α^{(i)} | i = 1, . . . , |ΦS | } . (7.47)
α= ∑ a i α (i ) (7.48)
i
(vi) The simple roots are linearly independent. To see this, note that any dual vector λ ∈ h∗_R in the span of the simple roots can be written as
λ = ∑ᵢ cᵢ α^{(i)} , with cᵢ ∈ R , (7.49)
and we must show that
λ = 0 ⟺ cᵢ = 0 ∀i . (7.50)
Define the index sets J± := { i | cᵢ ≷ 0 } (7.51)
and write
λ₊ := ∑_{i∈J+} cᵢ α^{(i)}   and   λ₋ := − ∑_{i∈J−} cᵢ α^{(i)} = ∑_{i∈J−} bᵢ α^{(i)} , (7.52)
so that
λ = λ₊ − λ₋ = ∑_{i∈J+} cᵢ α^{(i)} − ∑_{i∈J−} bᵢ α^{(i)} . (7.53)
Using the inner product on h∗_R we have
(vii) A corollary of (v) and (vi) is that there are r = dim h∗_R simple roots, i.e.
|ΦS | = r , (7.55)
and that they form a basis for h∗_R .
7.7 Classification
The Cartan matrix has matrix elements
A_{ji} = 2( α^{(j)} , α^{(i)} ) / ( α^{(i)} , α^{(i)} ) . (7.56)
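For example, taking two simple roots of equal length at an angle of 120°, e.g. the simple roots of sl(3, C) (the root system A2), reproduces the familiar Cartan matrix; a minimal sketch:

    import numpy as np

    # Two simple roots of equal length at 120 degrees (the A2 root system).
    alpha = [np.array([1.0, 0.0]),
             np.array([-0.5, np.sqrt(3) / 2])]

    # Cartan matrix A_ji = 2 (alpha^(j), alpha^(i)) / (alpha^(i), alpha^(i)), eq. (7.56).
    A = np.array([[2 * alpha[j] @ alpha[i] / (alpha[i] @ alpha[i])
                   for i in range(2)] for j in range(2)])
    print(A)   # prints the Cartan matrix [[2, -1], [-1, 2]]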
[ h α (i ) , h α ( j ) ] = 0
[hα(i) , e±α( j) ] = ± A ji e±α( j)
[eα(i) , e−α( j) ] = δij hi , (7.57)
with no sums over the indices above. The first and second relations are straightforward transcriptions of (7.34) and (7.35). The third one is clear in the case where i = j. For i ≠ j we use the fact that α^{(i)} − α^{(j)} ∉ Φ. From (7.36) we have
The relations (7.57) and (7.59) completely characterize the Lie al-
gebra. It can further be proven that any finite-dimensional, simple,
complex Lie algebra is uniquely determined by its Cartan matrix.
We proceed by classifying the possible Cartan matrices, and
showing how to reconstruct the corresponding Lie algebras.
Bibliography