SEMI-SIMPLE LIE ALGEBRAS AND THEIR REPRESENTATIONS
ABIGAIL WARD
Contents
1. An introduction to Lie algebras
1.1. Lie groups and Lie algebras: some beginning motivation
1.2. Definitions and examples
1.3. Characterizing properties of Lie algebras
1.4. A useful example: sl(2, C)
2. Nilpotent sub-algebras and diagonalization results
3. Weight Spaces and Root spaces
3.1. Weight Spaces
3.2. Root spaces
4. Structure of Root Systems
5. The Universal Enveloping Algebra
6. Relating roots and weights
7. Applying these results: the Lorentz group and its Lie algebra.
Acknowledgments
References
When studying manifolds, we often pass to studying the tangent space, which
is simpler since it is linear; since Lie groups are manifolds, we can apply this
technique as well. The question then arises: if the tangent space is an infinitesimal
approximation to the Lie group at a point, how can we infinitesimally approximate
the group action at this point? The study of Lie algebras gives us an answer to this
question.
The study of matrix groups is well-understood, so a first step in understanding
Lie groups is often representing the group as a group of matrices; for an example
consider the following:
Example 1.1. Recall the circle group, which may be written {e^{iθ} | θ ∈ [0, 2π)}; we know we may also represent this group as the set of matrices
{ ( cos θ  −sin θ ; sin θ  cos θ ) | θ ∈ [0, 2π) }.
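For instance, differentiating the curve θ ↦ ( cos θ  −sin θ ; sin θ  cos θ ) at θ = 0 gives the matrix ( 0  −1 ; 1  0 ), a tangent vector to the circle group at the identity; in this representation the Lie algebra of the circle group is the line spanned by this matrix.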
All the Lie groups that we give as examples in this paper will be finite-dimensional
matrix groups. Thus when motivating the study of Lie algebras, let us assume our
group G is a matrix group. We know we may write the tangent space at the identity to this group as the set of all matrices of the form γ′(0), where γ is a smooth curve in G with γ(0) = I.
1.2. Definitions and examples. A Lie algebra g is a vector space over a field K, equipped with an operation [·, ·] : g × g → g satisfying the following properties for all X, Y, Z ∈ g and a, b ∈ K:
• linearity: [aX + bY, Z] = a[X, Z] + b[Y, Z]
• anti-symmetry: [X, Y ] = −[Y, X]
• the Jacobi identity:[X, [Y, Z]] = [[X, Y ], Z] + [Y, [X, Z]]
Example 1.2. If g is an associative algebra, then g becomes a Lie algebra under the bracket operation defined by the commutator:
[X, Y ] = XY − Y X.
In particular, for a vector space V over K, EndK (V ) becomes a Lie algebra under this operation.
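For concreteness, the Jacobi identity in the form stated above can be verified directly for the commutator bracket using associativity:
[X, [Y, Z]] = XY Z − XZY − Y ZX + ZY X,
[[X, Y ], Z] + [Y, [X, Z]] = (XY Z − Y XZ − ZXY + ZY X) + (Y XZ − Y ZX − XZY + ZXY ) = XY Z − XZY − Y ZX + ZY X,
so the two sides agree.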
As we might expect, a Lie algebra homomorphism is a map between Lie
algebras that respects the vector space and bracket structure: if ϕ : g → g0 is a
Lie algebra homomorphism, then ϕ([X, aY + Z]) = [ϕ(X), aϕ(Y ) + ϕ(Z)] for all
X, Y, Z ∈ g and all a in the underlying field. If a, b ⊂ g are subsets of a Lie algebra g, then we take [a, b] = span{[A, B] | A ∈ a, B ∈ b}. A subalgebra h ⊆ g is a vector subspace of g such that [h, h] ⊆ h; if in addition [g, h] ⊆ h, then h is an ideal. g is said to be Abelian if [g, g] = 0; this definition makes sense since the bracket operation often arises as the commutator of operators¹. We may also define in the usual way a direct sum of Lie algebras.
If g is a Lie algebra over a field K, V is a vector space over the same field, and π : g → EndK V is a Lie algebra homomorphism, then π is a representation of g on V . If π : g → EndK V and π′ : g → EndK V ′ are two representations, these representations are equivalent if there exists an isomorphism E : V → V ′ such that Eπ(X) = π′(X)E for all X ∈ g. An invariant subspace for a representation is a
subspace U such that π(X)U ⊆ U for all X ∈ g.
We can consider the representation of g on itself denoted ad and defined by
ad(X)(Y ) = [X, Y ]; that this is a linear map that respects the bracket structure
follows from the linearity of the bracket operation and the Jacobi identity. Fur-
thermore, for any subalgebra h ⊆ g, the adjoint representation restricted to h is a
representation of h on g.
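Spelling this out: for any Z ∈ g,
ad([X, Y ])(Z) = [[X, Y ], Z] = [X, [Y, Z]] − [Y, [X, Z]] = (ad X ad Y − ad Y ad X)(Z) = [ad X, ad Y ](Z),
so ad([X, Y ]) = [ad X, ad Y ].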
Example 1.3. The Heisenberg Lie algebra is a Lie algebra H over C generated
by elements {P1 , . . . , Pn , Q1 , . . . , Qn , C} satisfying
(1.4) [Pi , Pj ] = [Qi , Qj ] = [Pi , C] = [Qj , C] = [C, C] = 0
(1.5) [Pi , Qj ] = δij C.
(We use C since this denotes the center of the Lie algebra). If we take C = iℏ1,
we may note that this models the quantum mechanical position and momentum
operators, with Pi and Qi being the momentum and position operators of the i-th
coordinate, respectively.
In the case where n = 1, one representation of this is given by
π(aP + bQ + cC) = ( 0  a  c ; 0  0  b ; 0  0  0 ).
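As a quick check that this defines a representation of the relations above: writing Eij for the 3 × 3 matrix with a 1 in entry (i, j) and zeros elsewhere, we have π(P ) = E12 , π(Q) = E23 , and π(C) = E13 , so that
[π(P ), π(Q)] = E12 E23 − E23 E12 = E13 = π(C),
while [π(P ), π(C)] = [π(Q), π(C)] = 0, as required.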
¹Indeed, when studying the universal enveloping algebra of a Lie algebra, we will see that every bracket of two elements can be regarded as a commutator of the two elements.
We omit the proof of these statements. In many cases it is obvious that a Lie
algebra is semi-simple from (b) or (c), and some texts (especially physics texts)
often take (c) as the definition.
1.4. A useful example: sl(2, C). One of the most commonly encountered Lie algebras is
sl(n, C) = {X ∈ gl(n, C) | Tr(X) = 0}.
That this is a Lie algebra follows from the fact that Tr(AB) = Tr(BA) for all matrices A, B, so Tr([A, B]) = 0 for any A, B ∈ gl(n, C).
Remark 1.13. sl(2, C) has a basis given by
h = ( 1  0 ; 0  −1 ),   e = ( 0  1 ; 0  0 ),   f = ( 0  0 ; 1  0 ).
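A direct computation with these matrices gives the relations [h, e] = 2e, [h, f ] = −2f , and [e, f ] = h, which we will meet again when we study root spaces.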
The study of the representations of sl(2, C) has implications in classifying the
structure of semi-simple Lie algebras. While we do not prove the major theorems
about these representations found in e.g. [1], we do present a result that has
important implications in our study. If we let ϕ be a representation of a Lie algebra
g on a vector space V , then ϕ is completely reducible if there exist invariant
subspaces U1 , . . . , Un such that V = U1 ⊕ . . . ⊕ Un and the restriction of ϕ to Ui is
irreducible.
We have the following (which is unproven here).
Theorem 1.14. If ϕ is a representation of a semi-simple Lie algebra on a finite
dimensional vector space, then ϕ is completely reducible; in particular, if ϕ is a
representation of sl(2, C), then ϕ is completely reducible.
We also have the following, which is specific to sl(2, C):
Theorem 1.15. If ϕ is a representation of sl(2, C) on a finite-dimensional complex vector space and h is the basis vector for sl(2, C) above, then ϕ(h) is diagonalizable, with all eigenvalues integers and the multiplicity of each eigenvalue k equal to the multiplicity of −k.
For a proof of these results, see [1].
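For example, take ϕ = ad, the adjoint representation of sl(2, C) on itself: ad(h)e = 2e, ad(h)h = 0, and ad(h)f = −2f , so ad(h) is diagonalizable with eigenvalues 2, 0, −2, and each eigenvalue k occurs with the same multiplicity as −k.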
such that [g, g] ⊆ h. Now, note that [g, h] ⊆ [g, g] ⊆ h, so h is an ideal. As any
subalgebra of a solvable Lie algebra is solvable, we have that h is solvable, and since
it has dimension n − 1, we may apply the induction hypothesis to obtain a vector
v ∈ V such that v is a simultaneous eigenvector for all H ∈ h. We then have for
each H ∈ h, π(H)v = λ(H)v, where λ : h → K is a functional.
Now, choose nonzero X ∈ g not in h, and define recursively:
e−1 = 0, e0 = v, ep = π(X)ep−1
and let E = span{e0 , e1 , . . .}. Note that π(X)E ⊆ E, so we may regard π(X) as
an operator on E; then, since all the eigenvalues of π(X) lie in K, we may choose an eigenvector w ∈ E of π(X).
Denote Ep = span{e0 , e1 , . . . , ep }. We first show that π(H)ep ≡ λ(H)ep mod Ep−1 .
We may proceed by induction on p. We chose e0 = v such that this condition holds.
Now assume the result for p and let H ∈ h. We have
π(H)ep+1 = π(H)π(X)ep
= π([H, X])ep + π(X)π(H)ep .
Now, note that [H, X] ∈ h, so by induction we have π([H, X])ep ≡ λ([H, X])ep mod Ep−1 . We also have that π(H)ep ≡ λ(H)ep mod Ep−1 , so
π(X)π(H)ep ≡ π(X)λ(H)ep mod span{Ep−1 , π(X)Ep−1 }.
But
span{Ep−1 , π(X)Ep−1 } = span{e0 , . . . , ep−1 , π(X)e0 , . . . , π(X)ep−1 } = Ep ,
so we have that
π(H)ep+1 ≡ λ([H, X])ep + λ(H)π(X)ep mod Ep
         ≡ λ(H)π(X)ep mod Ep        (since ep ∈ Ep )
         ≡ λ(H)ep+1 mod Ep ,
and so the result follows by induction.
Now, we may choose n such that {e0 , e1 , . . . , en } form a basis for E. Then as π(H)ep ≡ λ(H)ep mod Ep−1 , relative to this basis, π(H) when considered as an operator on E has the form
( λ(H)   ∗    ⋯    ∗   )
(   0   λ(H)  ⋱    ⋮   )
(   ⋮    ⋱    ⋱    ∗   )
(   0    ⋯    0   λ(H) )
(i.e. is upper triangular with all diagonal entries λ(H).) We see then that Tr(π(H)) =
λ(H) dim E. Applying this result to [H, X], we obtain that
λ([H, X]) dim E = Tr π([H, X]) = Tr[π(H), π(X)] = 0
(where here we have used the fact that Tr[A, B] = 0 for any matrices A and B). Since dim E ≠ 0, we have that λ([H, X]) = 0. We may use this to refine our previous result. Before, we had π(H)ep ≡ λ(H)ep mod Ep−1 ; we wish to refine this
and we may conclude that π(H)ep = λ(H)ep for all p. Then π(H)e = λ(H)e for all e ∈ E, and in particular π(H)w = λ(H)w. Since w is then an eigenvector of π(H) for all H ∈ h, and we chose w to be an eigenvector of π(X), we conclude that w is a simultaneous eigenvector for π(G) for all G ∈ g.
Corollary 2.2. Under the assumptions on g, V, π, and K as above, there exists a
sequence of subspaces
V = V0 ⊇ V1 ⊇ . . . ⊇ Vm = 0
such that each Vi is stable under π(g) and dim Vi /Vi+1 = 1. This means that there
exists a basis with respect to which all the matrices of π(g) are upper triangular.
Proof. Proceed by induction on the dimension m of V . If m = 1, the result is obvious. For m > 1, the above theorem tells us that we may find a simultaneous eigenvector v of π(X) for all X ∈ g. Let U be the subspace spanned by this vector. Then since π(g)U ⊆ U , π induces a representation π̃ on the quotient space V /U , which has dimension strictly less than that of V . By the inductive hypothesis we may thus find a sequence of subspaces
V /U = W0 ⊇ W1 ⊇ . . . ⊇ Wm−1 = 0
such that each Wi is stable under π̃(g) and dim Wi /Wi+1 = 1. Now, consider the corresponding sequence of subspaces in V : for i < m, let Vi be the preimage of Wi under the projection to the quotient space (so that Vm−1 = U ), and set Vm = 0; we obtain the sequence
V = V0 ⊇ V1 ⊇ . . . ⊇ Vm = 0
which satisfies the desired properties. Hence by induction, such a sequence may
always be found.
Now, choose a vector vi ∈ Vi−1 for 1 ≤ i ≤ m such that Kvi + Vi = Vi−1 ; with respect to the basis {v1 , . . . , vm }, all matrices of π(g) are upper triangular.
The following result allows us to relate nilpotency of a Lie algebra to nilpotency of its adjoint representation:
Theorem 2.3. A Lie algebra h is nilpotent if and only if the Lie algebra ad h is nilpotent.
Proof. Let H1 , . . . , Hk+1 ∈ h. We have
[. . . [[Hk+1 , Hk ], Hk−1 ], . . . , H1 ] = ad[. . . [Hk+1 , Hk ], . . . , H2 ](H1 ),
so using the fact that
ad[. . . [Hk+1 , Hk ], . . . , H2 ] = [. . . [ad Hk+1 , ad Hk ], . . . , ad H2 ]
shows us that if h is nilpotent, then the left side is 0 for high enough k, and
conversely, if ad h is nilpotent, then the right side is 0 for high enough k and hence
h is nilpotent.
In the future, these theorems will allow us to put matrices in our Lie algebras
into useful forms.
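For instance, in the Heisenberg Lie algebra of Example 1.3, every bracket lies in the center CC, so all iterated brackets [[X, Y ], Z] vanish and H is nilpotent; correspondingly, (ad X)(ad Y ) = 0 for all X, Y ∈ H, so ad H is nilpotent as well.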
Now, recall that for m ≥ dim V and v ∈ Vα,H , (π(H) − α(H)1)^m v = 0. In the above expression, take n = 2 dim V . Then for v ∈ Vα,H , we have that

(π(H) − α(H)1)^n π(Y )v = π(Y )(π(H) − α(H)1)^n v
    + Σ_{s=0}^{n−1} (π(H) − α(H)1)^{n−s−1} π([H, Y ])(π(H) − α(H)1)^s v
  = Σ_{s=0}^{n/2−1} (π(H) − α(H)1)^{n−s−1} π([H, Y ])(π(H) − α(H)1)^s v

(all other terms vanish). Now, consider each remaining term of the form
(π(H) − α(H)1)^{n−s−1} π([H, Y ])(π(H) − α(H)1)^s v.
Note that (π(H) − α(H)1)^s v ∈ Vα,H , and since [H, Y ] ∈ h^{(m−1)} , we have by the inductive hypothesis that [H, Y ] leaves Vα,H stable, so
π([H, Y ])(π(H) − α(H)1)^s v ∈ Vα,H .
But then since n − s − 1 ≥ dim V , (π(H) − α(H)1)^{n−s−1} acts as 0 on Vα,H , so we obtain that

Σ_{s=0}^{n/2−1} (π(H) − α(H)1)^{n−s−1} π([H, Y ])(π(H) − α(H)1)^s v = 0.
(this direct sum decomposition makes sense since there are only finitely many µ(H1 ) such that Vµ(H1 ),H1 ≠ 0). Now, note that for each µ ∈ h∗ , Vµ(H1 ),H1 is invariant under π(h), so in particular is invariant under H2 ; thus we may further decompose V in the same way:

V = ⊕_{values of µ(H1 )} ⊕_{values of µ(H2 )} ( Vµ(H1 ),H1 ∩ Vµ(H2 ),H2 ),
The above definition of a Cartan subalgebra is hard to work with; the following
characterization is more useful:
Proposition 3.4. If g is a finite-dimensional Lie algebra and h is a nilpotent
subalgebra, then h is a Cartan subalgebra if and only if
h = Ng (h) = {X ∈ g|[X, h] ⊆ h}.
Proof. First note that if h is any nilpotent subalgebra, we have that h ⊆ Ng (h) ⊆ g0 : the first of these inclusions holds because h is a subalgebra, and the second holds because (ad H)^n X = (ad H)^{n−1} [H, X], and (ad H)^{n−1} is 0 on h for large enough n, since h is nilpotent.
Now, assume that h is not a Cartan subalgebra, so g0 ≠ h. Then g0 /h is nonzero, and, as the quotient of a solvable Lie algebra, is solvable. By Lie's theorem, there exists a nonzero X̃ + h ∈ g0 /h that is a simultaneous eigenvector of ad h, and its eigenvalue has to be zero (as otherwise X̃ would not be in g0 ). But then for H ∈ h, [H, X̃] ∈ h, so X̃ is not in h but is in Ng (h). Thus h ≠ Ng (h).
Conversely, note that if h is a Cartan subalgebra, g0 = h, so Ng (h) = h.
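For a concrete instance of this criterion, take g = sl(2, C) and take as our nilpotent subalgebra the span Ch of the basis element h of Remark 1.13. For X = ae + bh + cf we have [h, X] = 2ae − 2cf , which lies in Ch only if a = c = 0; hence Ng (Ch) = Ch, and Proposition 3.4 shows that Ch is a Cartan subalgebra of sl(2, C).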
In particular, semi-simple Lie algebras have easily understood Cartan sub-algebras.
Proposition 3.5. If g is a complex finite-dimensional semi-simple Lie algebra and
h is a Cartan subalgebra then h is Abelian.
Proof. Since h is nilpotent, ad h is solvable and we may apply Lie’s theorem to
obtain a basis for g in which ad h is simultaneously triangular. Now, note that for
triangular matrices, Tr(ABC) = Tr(BAC), so for any H1 , H2 , H ∈ h, we have
B(ad[H1 , H2 ], ad H) = 0
Now, let H ∈ h and X ∈ gα where α is any nonzero weight. By the above proposition, we may find a basis
{Gα1 ,1 , . . . , Gα1 ,n1 , . . . , Gαm ,1 , . . . , Gαm ,nm }
where each set {Gαi ,1 , . . . , Gαi ,ni } is a basis for gαi , and the set of all such Gα,i
forms a basis for g. Now, recall that for any B ∈ gβ ,
ad H(ad XB) = ad H([X, B]) ∈ gα+β .
In this basis, we see that ad H ad X has all zeros on the diagonal, so Tr(ad H ad X) =
0. In particular, for any H1 , H2 ∈ h, Tr(ad[H1 , H2 ] ad X) = 0.
We thus have proven that for any X ∈ gα or X ∈ h, and for any H1 , H2 ∈ h, Tr(ad[H1 , H2 ] ad X) = 0. Since any X ∈ g may be written as a linear combination of such elements and the trace is linear, we obtain that for any H1 , H2 ∈ h and any X ∈ g,
B([H1 , H2 ], X) = 0.
Since g is semi-simple, the Killing form is non-degenerate; hence [H1 , H2 ] = 0 for
any H1 , H2 ∈ h.
Then since adg h is simultaneously diagonalizable, we know we may write g0 as g0 = h ⊕ τ , where [h, τ ] = 0. Now, note that if τ is nonzero, then there exists a nonzero X ∈ τ , and [h + CX, h + CX] = [h, h] + [h, CX] + [CX, h] + [CX, CX] = 0. Hence h + CX is an Abelian subalgebra properly containing h, a contradiction. Hence τ = 0 and h is a Cartan subalgebra.
We will show later that all maximal Abelian sub-algebras are diagonalizable, so
we may leave out that assumption. Many texts take this as the definition of a
Cartan subalgebra.
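For example, in sl(n, C) the subalgebra of traceless diagonal matrices is maximal Abelian, and its adjoint action is diagonalized by the elementary matrices Eij (i ≠ j), since [H, Eij ] = (hi − hj )Eij for H = diag(h1 , . . . , hn ); it is therefore a Cartan subalgebra of sl(n, C).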
Furthermore, we have the following: (for a proof, see [1], pp. 92-93).
Proposition 3.7. Any two Cartan sub-algebras of a finite-dimensional complex
Lie algebra g are conjugate by an inner automorphism of g.
Corollary 3.8. All Cartan sub-algebras h of g have the same dimension; this is
the rank of g.
Now, for each α ∈ ∆ ∪ {0}, consider the action of h on gα . Since h preserves gα and ad h is nilpotent and hence solvable, by Lie's theorem we may find a simultaneous eigenvector for ad h on gα . Let Eα be such an eigenvector; then for all H ∈ h, [H, Eα ] = α(H)Eα .
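In the running example g = sl(2, C) with Cartan subalgebra Ch, the roots are ±α, where α(h) = 2, and we may take Eα = e and E−α = f , since [h, e] = 2e and [h, f ] = −2f .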
We have the following further information about the roots of h:
Proposition 4.2. (a) If α is a root and X ∈ g−α , then [Eα , X] = B(Eα , X)Hα .
(b) If α, β ∈ ∆, then β(Hα ) = qα(Hα ) for some q ∈ Q.
(c) If α ∈ ∆, then α(Hα ) ≠ 0.
Proof. (a) We have that
B([Eα , X], H) = B(X, [H, Eα ])
= α(H)B(X, Eα )
= B(Hα , H)B(Eα , X)
= B(B(Eα , X)Hα , H)
and since B is nondegenerate, we conclude that B(Eα , X)Hα = [Eα , X].
(b) By Proposition 4.1b we can choose X−α ∈ g such that B(Eα , X−α ) ≠ 0; furthermore, any such X−α ∈ g−α . After normalizing, we have that B(Eα , X−α ) = 1. Then by (a), we have that [Eα , X−α ] = Hα .
Now, consider the subspace g′ = ⊕_{n∈Z} gβ+nα , which by Proposition 4.1b
is invariant under ad Hα . Now, consider the trace of ad Hα considered as an
operator on this subspace in two ways. First, note that for any n, ad Hα acts
on gβ+nα with generalized eigenvalue β(Hα ) + nα(Hα ), so we have
Tr(ad Hα ) = Σ_{n∈Z} (β(Hα ) + nα(Hα )) dim gβ+nα .
We also know that g′ is invariant under Eα and X−α , so we regard these as operators on this subspace. Since we have that [Eα , X−α ] = Hα , we then
With this normalization and Proposition 4.2 above, we have the relations
[Hα , Eα ] = α(Hα )Eα ,
[Hα , E−α ] = −α(Hα )E−α ,
[Eα , E−α ] = Hα .
After re-normalization, we obtain that
[H′α , E′α ] = 2E′α ,
[H′α , E′−α ] = −2E′−α ,
[E′α , E′−α ] = H′α .
Then some checking shows that
H′α ↦ ( 1  0 ; 0  −1 ),   E′α ↦ ( 0  1 ; 0  0 ),   E′−α ↦ ( 0  0 ; 1  0 )
defines an isomorphism between the vector space spanned by H′α , E′α , and E′−α and the vector space spanned by these three matrices. This is the vector space
{ ( a  b ; c  −a ) | a, b, c ∈ C } = sl(2, C).
So, we conclude that every semi-simple Lie algebra contains within it embedded
copies of sl(2, C).
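For instance, in sl(3, C) the matrices E12 , E21 , and E11 − E22 satisfy [E11 − E22 , E12 ] = 2E12 , [E11 − E22 , E21 ] = −2E21 , and [E12 , E21 ] = E11 − E22 , so they span a copy of sl(2, C) inside sl(3, C) (the copy attached to the root sending diag(h1 , h2 , h3 ) to h1 − h2 ).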
such that ϕ̃ ◦ i = ϕ; that is, the diagram formed by i, ϕ̃, and U (g) commutes.
A universal enveloping algebra U (g) can be explicitly constructed for any complex Lie algebra g; for the details of this construction, refer to [1, pp. 164–180]. Furthermore, any two universal enveloping algebras of g are the same up to unique isomorphism.
Proposition 5.2. Representations of g on complex vector spaces are in one-to-one
correspondence with left U (g) modules, with the correspondence being π 7→ π̃.
Proof. If π : g → EndC (V ) is a representation of g on V , then by the definition
of U (g), there exists an extension π̃ : U (g) → EndC (V ), and V becomes a left
U (g) module with the left action being defined as uv = π̃(u)v for all u ∈ U (g).
Conversely, if V is a left U (g) module, then we may define, for all X ∈ g and v ∈ V , π(X)v = i(X)v, and since i is a Lie algebra homomorphism, this defines a representation. Since π̃ ◦ i = π, these two operations are inverses, and thus this correspondence is one-to-one.
To proceed, we need an important theorem about the universal enveloping al-
gebra which describes a basis for U (g). We want to describe this basis in terms of
the basis for g, and we may do this as follows:
Theorem 5.3 (Poincaré–Birkhoff–Witt). Let g be a complex Lie algebra, and let {Xi }i∈A be a totally ordered basis for g (note that this basis can be uncountable, although we will always work with A finite). Then the set of all finite monomials
i(Xi1 )^{j1} · · · i(Xik )^{jk}
with i1 < · · · < ik and all jl ≥ 0 forms a basis for U (g).
Corollary 5.4. The canonical map i : g → U (g) is one-to-one.
The proof of the Poincaré–Birkhoff–Witt theorem, while not difficult, is relatively long, and is omitted here; for reference, see [1, pp. 168–171]. This theorem gives us many useful results, among which is the following, which follows almost immediately:
Corollary 5.5. If h is a Lie subalgebra of g, then the associative subalgebra of U (g)
generated by 1 and h is canonically isomorphic to U (h).
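For example, ordering the basis of sl(2, C) as f < h < e, the Poincaré–Birkhoff–Witt theorem says that the monomials f^a h^b e^c with a, b, c ≥ 0 form a basis of U (sl(2, C)); a product written in the 'wrong' order can always be rewritten in this basis using the bracket, e.g. ef = f e + h in U (sl(2, C)).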
We know from the above that Eβk v = 0, that E−βk v ∈ Vλ−βk , and that Hℓ v = λ(Hℓ )v from Proposition X. We conclude that if multiplication by the representation of the monomial gives a result in Vλ then q1 = · · · = qk = p1 = · · · = pk = 0, and hence we obtain a multiple of v. Since we know that U (g)v = V , we conclude that Vλ consists precisely of the span of v, proving (b).
Furthermore, we know that if the monomial acts nontrivially on v, then the result lands in the weight space with weight
λ − Σ_{i=1}^{k} pi βi .
Again, since U (g)v = V , we obtain that these are all the weight spaces there are; in other words, every weight is of the form λ − Σ_{i=1}^{k} pi βi , and (d) is proved. Since λ is the only weight in ∆ that can have this property, and this is independent of the ordering chosen, (a) is proven.
We can now prove the second half of (c): that if Eα v = 0 for all α ∈ ∆+ , then v ∈ Vλ . Assume not, so there exists some such v ∉ Vλ . Without loss of generality, we may assume that v has no component in Vλ . Let λ′ be the largest weight such that v has a nonzero component in Vλ′ and let v ′ be this component. Then since Eα v = 0 for all α, and in particular the component of Eα v in Vα+λ′ is 0, we must have Eα v ′ = 0 for all α (as Eα (v − v ′ ) will always have 0 component in Vα+λ′ ). We also have that ϕ(h)v ′ ⊆ span{v ′ }. Then applying a general monomial to v ′ and using
the result that U (g)v ′ = V , we conclude that
V = span{ ϕ(E−β1 )^{p1} · · · ϕ(E−βk )^{pk} C v ′ },
since any monomial containing a factor ϕ(Eβi ) annihilates v ′ and the ϕ(H) factors contribute only the scalar C. But this is impossible, since ϕ(E−β1 )^{p1} · · · ϕ(E−βk )^{pk} C v ′ ∈ Vλ′ − Σi pi βi , so we would obtain that V is contained within only weight spaces with weight less than or equal to λ′ , a contradiction. Hence if Eα v = 0 for all α ∈ ∆+ , then v ∈ Vλ , and this
property characterizes Vλ .
Now, we may prove that λ is dominant, i.e. that ⟨λ, α⟩ ≥ 0 for all α ∈ ∆+ . For α ∈ ∆+ , form Hα , Eα , and E−α , normalized to obey equations X. As we have seen,
these vectors span an isomorphic copy of sl(2, C) that we label slα . For v ≠ 0 in Vλ , the subspace of V spanned by all monomials
7. Applying these results: the Lorentz group and its Lie algebra.
In this section we apply the results proven above to sketch out how the study
of the Lie algebra of the Lorentz group gives information in theoretical physics.
For the sake of brevity, we assume the reader has some familiarity with theoretical
physics and in particular the Lorentz group and Einstein summation convention;
we also leave out many computational details that are easy to check but lengthy
to write out. For much more detail on the applications of Lie algebras to particle
physics, the reader is referred to [2].
Briefly, the Lorentz group is the group of symmetries acting on events in Minkowski space that leaves the relativistic interval between two events invariant. The characterizing condition states that for two events x and y, and for all Λ in the Lorentz group,
gαν (Λx)^α (Λy)^ν = gαβ x^α y^β .
(where gαν is the Minkowski metric). If we use the matrix representation where
G = ( 1  0  0  0 ; 0  −1  0  0 ; 0  0  −1  0 ; 0  0  0  −1 ),
and we consider only transformations that preserve orientation and the direction of time, we obtain the group SO(1, 3), which can be represented as the matrices Λ satisfying the equation
Λ^T GΛ = G.
This is a quadratic equation in the entries of Λ that defines the manifold SO(1, 3). Identifying the Lie algebra with the tangent space so(1, 3) at the identity by differentiating this condition, we obtain that for X ∈ so(1, 3) we must have
X^T G + GX = 0,
so so(1, 3) is the space of matrices that are skew-symmetric with respect to G (which is clearly a Lie algebra, as the commutator of two such matrices again satisfies this condition). Note that so(1, 3) has dimension 6, since any element is characterized by its entries above the diagonal.
Our study of this Lie algebra takes us through the study of a related Lie algebra, su(2) (denoted as such since it is the Lie algebra of the group SU (2)). Consider the real Lie algebra of dimension three with the following basis:
τ1 = 1/2 ( 0  −i ; −i  0 ),   τ2 = 1/2 ( 0  −1 ; 1  0 ),   τ3 = 1/2 ( −i  0 ; 0  i ).
We compute, for instance, [τ1 , τ2 ] = τ3 ,
and so on; we obtain that with respect to the basis {τ1 , τ2 , τ3 }, the adjoint representation has matrices
ad(τ1 ) = ( 0  0  0 ; 0  0  −1 ; 0  1  0 ),   ad(τ2 ) = ( 0  0  1 ; 0  0  0 ; −1  0  0 ),   ad(τ3 ) = ( 0  −1  0 ; 1  0  0 ; 0  0  0 ).
Consider now the complexification of the aforementioned Lie algebras, i.e., the set of matrices considered as a complex vector space. Ticciati identifies the complexification of so(1, 3) with C ⊗ so(1, 3) and the complexification of su(2) with C ⊗ su(2); we follow Knapp in denoting these instead as so(1, 3)C and su(2)C . This is useful because it allows us to apply our above results with roots and weights, which all depended on having a complex Lie algebra. Note that su(2) is the Lie algebra of traceless anti-Hermitian 2 × 2 matrices, and su(2)C is exactly sl(2, C).
Consider the matrices

X1 = ( 0  0  0  0 ; 0  0  0  0 ; 0  0  0  −1 ; 0  0  1  0 ),
X2 = ( 0  0  0  0 ; 0  0  0  1 ; 0  0  0  0 ; 0  −1  0  0 ),
X3 = ( 0  0  0  0 ; 0  0  −1  0 ; 0  1  0  0 ; 0  0  0  0 ),

B1 = ( 0  1  0  0 ; 1  0  0  0 ; 0  0  0  0 ; 0  0  0  0 ),
B2 = ( 0  0  1  0 ; 0  0  0  0 ; 1  0  0  0 ; 0  0  0  0 ),
B3 = ( 0  0  0  1 ; 0  0  0  0 ; 0  0  0  0 ; 1  0  0  0 ),
and the basis of so(1, 3)C given by Ti = 1/2(Xi + iBi ), T̄i = 1/2(Xi − iBi ). Computations will show that the maps Ti ↦ τi and T̄i ↦ τi are isomorphisms onto su(2)C , so we conclude that so(1, 3)C ≅ su(2)C ⊕ su(2)C . Thus studying the representations of su(2)C will give us information on the representations of so(1, 3)C .
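As a quick numerical sanity check of these claims (this snippet is our own illustration and is not part of the original text; it assumes Python with numpy, the matrices as reconstructed above, the convention that the 0th coordinate is time, and a small helper E(i, j) of ours that builds the elementary matrix with a 1 in entry (i, j)):

import numpy as np

G = np.diag([1.0, -1.0, -1.0, -1.0])

def E(i, j):
    # elementary 4x4 matrix with a 1 in entry (i, j)
    m = np.zeros((4, 4), dtype=complex)
    m[i, j] = 1.0
    return m

X = [E(3, 2) - E(2, 3), E(1, 3) - E(3, 1), E(2, 1) - E(1, 2)]  # rotations X1, X2, X3
B = [E(0, 1) + E(1, 0), E(0, 2) + E(2, 0), E(0, 3) + E(3, 0)]  # boosts B1, B2, B3

def comm(a, b):
    return a @ b - b @ a

for M in X + B:                       # every generator lies in so(1, 3)
    assert np.allclose(M.T @ G + G @ M, 0)

T    = [(X[i] + 1j * B[i]) / 2 for i in range(3)]   # the T_i
Tbar = [(X[i] - 1j * B[i]) / 2 for i in range(3)]   # the conjugate copy

for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    assert np.allclose(comm(T[i], T[j]), T[k])       # [T_i, T_j] = T_k, as for the tau_i
    assert np.allclose(comm(Tbar[i], Tbar[j]), Tbar[k])
for i in range(3):
    for j in range(3):
        assert np.allclose(comm(T[i], Tbar[j]), 0)   # the two su(2) copies commute
print("so(1,3) checks passed")

The assertions check the defining relation X^T G + GX = 0 for each generator, that the Ti and the T̄i each satisfy the same bracket relations as the τi , and that the two copies commute, which is the content of the decomposition so(1, 3)C ≅ su(2)C ⊕ su(2)C .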
To study the representations of su(2)C , we first choose a Cartan subalgebra. Since su(2)C is semi-simple and we cannot choose two linearly independent elements of su(2)C that commute with each other, the subspace generated by any element is a maximal Abelian subalgebra and hence a Cartan subalgebra. For notational convenience, denote ρj = iτj for j = 1, 2, 3; then, following the usual convention in theoretical physics (where our z axis is always the axis of choice), we choose
ρ3 = 1/2 ( 1  0 ; 0  −1 )
as our basis for this algebra. We then wish to find operators X such that
[ρ3 , X] = λX,
where such X will be in our root spaces. Computation will show that for operators of the form R = ρ1 + iρ2 , λ = 1, and for operators of the form L = ρ1 − iρ2 , λ = −1. Dimension considerations and Proposition 3.2 then show that we may write
su(2)C = Cρ3 ⊕ CL ⊕ CR;
this is one root space decomposition of su(2)C .
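Explicitly, with these conventions R = ( 0  1 ; 0  0 ) and L = ( 0  0 ; 1  0 ), and one checks directly that [ρ3 , R] = R, [ρ3 , L] = −L, and [R, L] = 2ρ3 ; that is, 2ρ3 , R, and L play exactly the roles of h, e, and f from Remark 1.13.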
We may now apply the Theorem of Highest Weight to note that for any dominant, algebraically integral linear functional λ on Cρ3 , it makes sense to talk about the irreducible representation Dλ with highest weight λ. Linear functionals on Cρ3 are determined by the value to which they send ρ3 ; the condition that they are dominant amounts to the condition that this value be greater than or equal to zero, and the condition that they are algebraically integral amounts to the condition that this value be n/2 for some integer n ≥ 0. Furthermore, given that our roots here are just α and −α, we see that the weights of this representation are of the form λ − n for
0 ≤ n ≤ 2λ, and the operator L acts as a lowering operator. The reader familiar
with quantum mechanics will recognize this as precisely the conditions describing
the spin of a particular particle.
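For example, the choice λ(ρ3 ) = 1/2 gives the two-dimensional representation D1/2 with weights 1/2 and −1/2 (the defining representation of su(2)C on C2 , describing a spin-1/2 particle), while λ(ρ3 ) = 1 gives the three-dimensional spin-1 representation, which is the adjoint representation.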
We can therefore see how the choice of a particular representation of this Lie
algebra can be useful in describing the actual physical system that the Lie algebra
acts upon; furthermore, we may choose the highest weight as appropriate to the
physical system at hand. The above shows briefly how the representations of su(2)C
can be useful in describing physical systems; in more complicated systems described
by SO(1, 3), similar methods apply.
Acknowledgments
It is a pleasure to thank my mentor, Jonathan Gleason, for his guidance and
prompt and thorough answers to all of my questions. I would also like to thank
Wouter van Limbeek for his idea for this project and suggestions of reading material.
References
[1] Anthony W. Knapp. Lie Groups: Beyond an Introduction. Birkhäuser, 1996.
[2] R. Ticciati. Quantum Field Theory for Mathematicians. Cambridge University Press, 1999.