Phy 422 Texv 2
Lecturer: Das*
Last updated: 2020/02/17, 10:33:13
Contents (partial)
6 Lie Groups
6.1 Geometry of SU(2)
6.2 Lie Algebra
6.3 Certain aspects of SO(3)
6.4 SU(2) and SO(3)
* Please send corrections to didas at iitk.ac.in
1 Group Theory Basics
Introduction
Symmetries play a very important role in dictating physics. They give strong constraints on the observables of our theory, starting from the question of deciding what the good observables are: for instance, in a gauge theory the meaningful quantities are gauge-invariant objects. Symmetries can be analyzed nicely in the path-integral formalism and give rise to Ward identities, which are constraints respected by the observables. Sometimes symmetries go beyond the Lagrangian framework and allow direct computation of observables; this is the case, for instance, in a conformal field theory.
Symmetries have a group structure. Once we understand symmetries via the theory of groups, we can compute physics unavailable to us via perturbation theory. Group transformations can be represented as linear operations on vector spaces, and this leads naturally to finding representations of the group. Much of group theory concerns classifying all possible representations.
Axioms
A group G is a set with a binary operation that assigns to every ordered pair (g1, g2) of its elements a third element g3, usually written as g3 = g1 g2. This binary operation (the product) follows the rules:
1. Associativity: (g1 g2) g3 = g1 (g2 g3).
2. Existence of an identity: there is an element e ∈ G such that e g = g e = g for all g ∈ G.
3. Existence of inverses: for every g ∈ G there is an element g⁻¹ ∈ G with g g⁻¹ = g⁻¹ g = e.
There are many corollaries that follow from the above three axioms; for example, the identity e is unique, and the inverse of each element is unique.
Two elements are said to commute if g1 g2 = g2 g1. If this is true ∀ gi ∈ G then the group is abelian, else it is non-abelian.¹ If G contains a finite number of elements, their number being the order of G, denoted |G|, then it is called a finite group.
Some examples
1. The integers Z under addition. This is an infinite group. (What is the identity?)
Another example is the group of Lorentz boosts in one spatial dimension. Here φ is the boost angle (rapidity). One can verify that the composition of two transformations adds up the boost angles.
[End of Lecture 1]
¹ There may be composition rules which are commutative but not associative, e.g., the arithmetic mean (example suggested by Radhika Prasad).
Some basic properties
1. Subgroups: A subset of a group (let us call it H) that itself forms a group. {e} and G itself are the trivial subgroups; others are proper subgroups. For a finite group, |G| is divisible by |H|: writing |G| = n and |H| = m, one has n = mk for some positive integer k. This is Lagrange's theorem. (proved)
An example of a direct product group is Z2 ⊗ Z2 = {(1, 1), (1, −1), (−1, 1), (−1, −1)}. Note that this is different from the group Z4.
       e      g1      ...   gi      ...   gn
  e    e      g1      ...   gi      ...   gn
  g1   g1     g1²     ...   g1 gi   ...   g1 gn
  ...  ...    ...     ...   ...     ...   ...
  gj   gj     ...     ...   gj gi   ...   gj gn
  ...  ...    ...     ...   ...     ...   ...
  gn   gn     ...     ...   gn gi   ...   gn²
The multiplication table obeys the "once and only once" theorem: every group element appears exactly once in each row and in each column. This can be proven by contradiction: if we assume gj gk = gj gl with gk ≠ gl, multiplying by gj⁻¹ gives the contradiction gk = gl. This simple yet very useful theorem immediately tells us that, for any function f on the group,
\[
\sum_{g} f(g) = \sum_{g'} f(g g') = \sum_{g'} f(g' g) .
\]
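As a quick illustration, here is a minimal Python sketch (with Z6 under addition mod 6 as a stand-in group and an arbitrary test function f) checking that the sum over the group is unchanged by relabelling g → g0 g or g → g g0:

# Check the rearrangement consequence: sum_g f(g) = sum_g f(g0*g) = sum_g f(g*g0).
# Example group: Z_6 under addition mod 6 (a hypothetical test case).
n = 6
group = list(range(n))                 # elements 0,...,5
compose = lambda a, b: (a + b) % n

def f(g):                              # arbitrary test function on the group
    return g ** 2 + 1

total = sum(f(g) for g in group)
for g0 in group:
    assert sum(f(compose(g0, g)) for g in group) == total
    assert sum(f(compose(g, g0)) for g in group) == total
print("rearrangement identity verified for Z_6:", total)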
Example: In the rotation group SO(3), the conjugacy classes are the sets of rotations by the same angle but about different axes.
Permutation groups
The action of this group is to permute a set of objects. Since the once-and-only-once rule must be satisfied, one can show easily that every finite group of order n is isomorphic to some subgroup of the permutation group Sn.
The permutation group of n objects, Sn, has order n!. If we order G = {e, g1, g2, . . . } and then multiply from the left by g ∈ G, the ordered list {g, gg1, gg2, . . . } is a permutation of G. Thus any finite group G is (isomorphic to) a subgroup of the permutation group S|G|. This is Cayley's theorem.
The r.h.s. defines the cycle notation. Any permutation with the same pattern of cycle lengths, (∗ ∗ ∗)(∗∗)(∗∗)(∗), is in the same conjugacy class as π1.
Problem
If ϕ : F → G is a group homomorphism, and if we define Ker(ϕ) as the set of
elements in F that map to eG , then show that Ker(ϕ) is a normal subgroup
of F .
By definition, every hi ∈ Ker(ϕ) satisfies ϕ(hi) = eG. For any g ∈ F consider hl = g hi g⁻¹. Then
\[
\varphi(h_l) = \varphi(g)\,\varphi(h_i)\,\varphi(g)^{-1} = \varphi(g)\, e_G\, \varphi(g)^{-1} = e_G ,
\]
so hl ∈ Ker(ϕ). Thus Ker(ϕ) is closed under conjugation, i.e., it is a normal subgroup. (proved)
[End of Lecture 3]
7. Cosets: Given H ⊆ G with elements H = {h1, h2, . . . } and g ∈ G, the left coset is gH = {gh1, gh2, . . . }. If two cosets intersect, then they coincide. The group can therefore be written as a disjoint union of cosets,
\[
G = g_1 H + g_2 H + \cdots .
\]
A small numerical check of this partition (and of Lagrange's theorem) is sketched below.
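A minimal sketch, using S3 represented by permutation tuples (a choice made here purely for illustration): the left cosets of a subgroup H all have size |H| and partition G, so |H| divides |G|.

from itertools import permutations

# S3 as tuples: p maps i -> p[i]. Composition: (p*q)[i] = p[q[i]].
G = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))

# A subgroup H: the cyclic subgroup generated by the 3-cycle (1, 2, 0).
c = (1, 2, 0)
H = {(0, 1, 2), c, compose(c, c)}

# Left cosets gH.
cosets = {frozenset(compose(g, h) for h in H) for g in G}
assert all(len(C) == len(H) for C in cosets)       # all cosets have size |H|
assert sum(len(C) for C in cosets) == len(G)       # disjoint cosets cover G
print("number of cosets:", len(cosets), "=", len(G), "//", len(H))   # Lagrange: 6 / 3 = 2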
Coxeter groups
This group is defined by the relations: each generator satisfies gi² = 1, and (gi gj)^{nij} = 1 for nij ≥ 2. One can show that nij = nji. Starting from gj² = 1,
\[
g_j (g_i g_j)^{n_{ij}} g_j = 1 , \qquad
g_j \underbrace{(g_i g_j)(g_i g_j)\cdots(g_i g_j)}_{n_{ij}\ \text{factors}}\, g_j = 1 , \qquad
(g_j g_i)^{n_{ij}} = 1 .
\]
2 Prelude to Lie groups
In contrast to finite groups, one can have continuous groups with infinitely many elements. Here we focus on the rotation group, whose elements denote rotations in space by angles. In two dimensions the group is SO(2) and acts via
\[
\begin{pmatrix} x' \\ y' \end{pmatrix} =
\begin{pmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix} . \tag{2.1}
\]
The matrix is denoted by R(φ). In addition to the length of a vector, a rotation also leaves the angle between two vectors invariant. Let us denote the vectors by ~u, ~v. Under the rotation, ~u → ~u' = R(φ)~u and similarly ~v → ~v' = R(φ)~v. Since |~u|, |~v|, as well as the angle between ~u and ~v stay invariant, this implies ~u · ~v = ~u' · ~v' = (R(φ)~u)ᵀ(R(φ)~v) = ~uᵀ R(φ)ᵀ R(φ) ~v. Thus the condition on the transformation is
\[
R(\phi)^T R(\phi) = \mathbb{1} .
\]
This condition is not satisfied only by rotations but also, for instance, by reflections. Real matrices satisfying this relation are called orthogonal matrices. We can check it explicitly for the matrix in (2.1) by matrix multiplication:
\[
\begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix}
\begin{pmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} .
\]
Next, using det AB = det A det B and det A = det Aᵀ, we can conclude that
\[
(\det R)^2 = 1 .
\]
Thus the determinant of an orthogonal matrix is ±1. Orthogonal matrices also contain reflections. For instance, reflecting the x axis corresponds to the orthogonal matrix
\[
\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} ,
\]
which has determinant −1. We can select out rotations from the set of orthogonal matrices by demanding the determinant to be +1. Matrices with unit determinant are called special.
Rotating infinitesimally
A key insight of Lie was that the essential aspects of the rotation group can be captured by focussing on infinitesimal rotations. Thus we write a small rotation through the small angle θ as an expansion,
\[
R(\theta) = I + A + O(\theta^2) .
\]
The orthogonality condition RᵀR = I then requires, at first order in θ,
\[
A = -A^T ,
\]
i.e., A is antisymmetric. In two dimensions the only possibility (up to normalization) is A = θ J with
\[
J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} .
\]
Thus near-identity rotations look like
\[
R(\theta) = I + \theta J + O(\theta^2) \simeq \begin{pmatrix} 1 & \theta \\ -\theta & 1 \end{pmatrix} . \tag{2.3}
\]
To get to a finite transformation, we may apply the above matrix operation many, many times. Let the small angle be θ = φ/N with N → ∞. We want to compute the repeated action of R(θ) and see whether, after N → ∞ applications, we recover R(φ) of (2.1). We have:
\[
\lim_{N\to\infty} \big(R(\phi/N)\big)^N
= \lim_{N\to\infty}\Big(1 + \frac{\phi J}{N}\Big)^{N}
= e^{\phi J} = \sum_{n=0}^{\infty} \frac{\phi^n J^n}{n!}
= \Big(\sum_{m=0}^{\infty} (-1)^m \frac{\phi^{2m}}{(2m)!}\Big) I
+ \Big(\sum_{m=0}^{\infty} (-1)^m \frac{\phi^{2m+1}}{(2m+1)!}\Big) J
= \cos\phi\, I + \sin\phi\, J
= \begin{pmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{pmatrix} = R(\phi) . \tag{2.4}
\]
In the third equality above we used J² = −I; therefore all even (= 2m) powers are just (−1)^m I while odd (= 2m + 1) powers are (−1)^m J.
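A quick numerical sketch of (2.4) (using numpy; the value of N and the tolerance are arbitrary choices): repeated application of the near-identity rotation converges to R(φ), and J² = −I.

import numpy as np

phi = 0.7
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
R_exact = np.array([[np.cos(phi), np.sin(phi)], [-np.sin(phi), np.cos(phi)]])

# (1 + phi J / N)^N for large N approaches exp(phi J) = R(phi).
N = 100000
R_limit = np.linalg.matrix_power(np.eye(2) + phi * J / N, N)

print(np.allclose(R_limit, R_exact, atol=1e-4))                  # True
print(np.allclose(np.linalg.matrix_power(J, 2), -np.eye(2)))     # J^2 = -I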
[End of Lecture 4]
SO(3)
For SO(3), the rotations are in 3 dimensions. For 3 × 3 rotation matrices, one may choose the following generators (real antisymmetric matrices times −i, which makes them Hermitian):
\[
J_1 = -i\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}, \qquad
J_2 = -i\begin{pmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \tag{2.5}
\]
\[
J_3 = -i\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}. \tag{2.6}
\]
Thus, analogous to R = e^{φJ} for SO(2), in this case one can write a group element as
\[
R(\vec\theta\,) = e^{\,i \sum_{i=1}^{3} \theta_i J_i} . \tag{2.7}
\]
Note that here we have chosen the generators to be Hermitian, which is why there is a factor of i in the exponent. The algebra of these generators is
\[
[J_i , J_j ] = i\,\epsilon_{ijk}\, J_k . \tag{2.8}
\]
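As a numerical sketch (using numpy and scipy, which are not part of the lecture), one can verify that the generators (2.5), (2.6) obey (2.8) and that exponentiating them as in (2.7) produces real orthogonal matrices of unit determinant:

import numpy as np
from scipy.linalg import expm

J1 = -1j * np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0]])
J2 = -1j * np.array([[0, 0, -1], [0, 0, 0], [1, 0, 0]])
J3 = -1j * np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]])
J = [J1, J2, J3]

eps = np.zeros((3, 3, 3))                       # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

# [J_i, J_j] = i eps_{ijk} J_k
for i in range(3):
    for j in range(3):
        comm = J[i] @ J[j] - J[j] @ J[i]
        assert np.allclose(comm, 1j * sum(eps[i, j, k] * J[k] for k in range(3)))

# R(theta) = exp(i theta . J) is real, orthogonal, with det = +1
theta = np.array([0.3, -1.1, 0.5])
R = expm(1j * sum(t * Jk for t, Jk in zip(theta, J)))
assert np.allclose(R.imag, 0)
R = R.real
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
print("SO(3) checks passed")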
The identity element is the identity matrix, and inverses are simply matrix inverses. We already encountered some matrix representations with these properties in (2.4) and (2.7). For finite groups, the multiplication table automatically provides a representation called the regular representation. In order to write down the matrices of this representation from the multiplication table, first order the group elements as gi for i = 1, 2, . . . , N(G),² and represent every gi as a column vector with all zeros except a 1 at the ith entry. The matrix D(gk) in the regular representation is then read off from the multiplication table as D(gk)ij = 1 if the entry in the gk-th row and gi-th column of the multiplication table is gj, and zero otherwise. Thus the regular representation consists of N(G)-dimensional square matrices. The trivial representation is furnished by just the number 1, i.e., D(g) = 1 ∀g ∈ G.
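A minimal sketch of the construction, for the hypothetical example of Z4 under addition mod 4 (the index convention used here, D(k)|gi⟩ = |k gi⟩, may differ from the one in the text by a transpose; for an abelian group the two coincide):

import numpy as np

n = 4                                  # the group Z_4 = {0,1,2,3} under addition mod 4
mult = lambda a, b: (a + b) % n        # multiplication table entry for (row a, column b)

def D_regular(k):
    """Regular-representation matrix of element k: D(k)|g_i> = |k g_i>."""
    D = np.zeros((n, n))
    for i in range(n):
        D[mult(k, i), i] = 1.0         # column i has a 1 in the row labelled by k*g_i
    return D

# Homomorphism property D(a) D(b) = D(ab) for all pairs.
for a in range(n):
    for b in range(n):
        assert np.allclose(D_regular(a) @ D_regular(b), D_regular(mult(a, b)))

# The regular character: non-zero only on the identity, where it equals N(G).
print([np.trace(D_regular(k)) for k in range(n)])    # [4.0, 0.0, 0.0, 0.0]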
A given group may have many representations; however, only some of them are irreducible. An irreducible representation (written as irrep for short) is a representation which does not have any non-trivial invariant subspace.
A reducible representation may be obtained by stacking up irreps D^i(g) in block-diagonal form,
\[
D(g) = \begin{pmatrix} D^1(g) & & \\ & D^2(g) & \\ & & D^3(g) \end{pmatrix} ,
\]
with all off-diagonal blocks vanishing.
Character
Given a matrix representation D(g), the trace is an important quantity; in particular, it depends on the representation used but not on the choice of basis (as we show below). The trace defines the character of the representation:
χ(g) = Tr D(g). (3.11)
A part of the utility of the character is inherited from the cyclicity property
of the trace. Suppose we transform our basis, then
Under this similarity transformation, non-zero entries of the D(g) matrix
may become zero and vice-versa. These two representations should still be
equivalent to each other, since we just changed the basis. The trace is pre-
served under similarity transformations, since
Thus, given two non-equivalent representations, the traces will turn out to be different. Another property of the character is that it is the same for group elements belonging to the same conjugacy class. This is because, if g and g' are in the same conjugacy class, then ∃ g'' such that g' = g'' g g''⁻¹. Thus,
[End of Lecture 5]
[End of Lecture 6]
3.1.1 Consequences of the orthogonality theorem
We start with the orthogonality relation
\[
\sum_{g} D^{\dagger r}(g)^{i}{}_{j}\, D^{s}(g)^{k}{}_{l} = \frac{N(G)}{d_r}\, \delta^{rs}\, \delta^{i}_{l}\, \delta^{k}_{j} . \tag{3.15}
\]
Since the character is a function only of the conjugacy class, the above can be simplified further to
\[
\sum_{c} n(c)\, \chi^{*r}(c)\, \chi^{s}(c) = N(G)\, \delta^{rs} . \tag{3.17}
\]
Here the sum is over all conjugacy classes c, and n(c) is the number of elements in the conjugacy class c.
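As a numerical sketch, one can check (3.17) against the well-known character table of S3 (classes: identity, three transpositions, two 3-cycles; irreps: trivial, sign, and the 2-dimensional one), and also use the projection of (3.23) below to decompose the regular character:

import numpy as np

# S3: class sizes n(c) and character table chi^r(c) (rows = irreps, columns = classes).
n_c = np.array([1, 3, 2])               # e, transpositions, 3-cycles
chi = np.array([
    [1,  1,  1],                        # trivial
    [1, -1,  1],                        # sign
    [2,  0, -1],                        # 2-dimensional irrep
])
N_G = 6

# Sum_c n(c) chi^{*r}(c) chi^{s}(c) = N(G) delta^{rs}   (eq. 3.17)
gram = np.einsum('c,rc,sc->rs', n_c, chi.conj(), chi)
assert np.allclose(gram, N_G * np.eye(3))

# Decompose the regular representation character chi_reg = (6, 0, 0).
chi_reg = np.array([6, 0, 0])
n_r = np.einsum('c,rc,c->r', n_c, chi.conj(), chi_reg) / N_G
print(n_r)    # [1. 1. 2.] = the dimensions d_r, as in (3.25)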
Viewing the characters as vectors in the space of conjugacy classes, we have an orthogonality of vectors indexed by the irrep labels r, s. This is possible only if the number of irreps does not exceed the number of classes, N_R ≤ N_c (3.18). Following a similar logic, and this time viewing the characters as vectors in the space of irreducible representations, we have an orthogonality indexed by conjugacy classes; this implies N_c ≤ N_R (3.20). The only way the two inequalities (3.18), (3.20) can be satisfied is when
\[
N_R = N_c .
\]
Thus the number of conjugacy classes also gives us the number of irreducible representations.
Now suppose we are given the character of some matrix representation D which may in principle be reducible. Let us assume that D contains the irreducible representation D^r a number nr of times. In this case we can write
\[
\chi(c) = \sum_{r} n_r\, \chi^{r}(c) . \tag{3.22}
\]
Plugging this into the l.h.s. of (3.17) we get
\[
\sum_{c} n(c)\, \chi^{*r}(c)\, \chi(c)
= \sum_{c,s} n(c)\, \chi^{*r}(c)\, n_s\, \chi^{s}(c)
= N(G) \sum_{s} n_s\, \delta^{rs}
= N(G)\, n_r . \tag{3.23}
\]
Thus the quantity
\[
\frac{n(c)}{N(G)}\, \chi^{*r}(c)
\]
projects any reducible character onto the irrep-r subspace: upon summing over the classes it gives the number of times the irrep r is contained in D.
If we plug (3.22) into \(\sum_c n(c)\,\chi(c)^*\chi(c)\) we get
\[
\sum_{c} n(c)\, \chi(c)^* \chi(c)
= \sum_{c,r,s} n(c)\, n_r n_s\, \chi^{*r}(c)\, \chi^{s}(c)
= N(G) \sum_{r,s} \delta^{rs} n_r n_s
= N(G) \sum_{r} n_r^2 . \tag{3.24}
\]
For the regular representation the only non-zero character is that of the identity conjugacy class, and it equals the dimension of the representation, which in this case is just the order of the group, N(G).³ Hence from (3.23) we get
\[
\chi^{*r}(e)\, N(G) = N(G)\, n_r .
\]
But χ^r(e) = dr, the dimension of the irrep r. Therefore we obtain that in the regular representation,
\[
d_r = n_r . \tag{3.25}
\]
³ i.e., χ_regular(e) = N(G).
The regular representation is thus highly reducible; moreover, the irrep r is contained in it exactly dr times! Evaluating (3.24) for the regular representation we get
\[
N(G)^2 = N(G) \sum_{r} n_r^2 , \qquad \text{or} \qquad \sum_{r} n_r^2 = \sum_{r} d_r^2 = N(G) . \tag{3.26}
\]
As an example take S5; the identity permutation has the cycle structure (1)(2)(3)(4)(5). The number 5 can be partitioned into positive integers in 7 different ways,⁴ which corresponds to Nc = 7 conjugacy classes, which are also the possible cycle structures.⁵ Since Nc = NR, these are also in correspondence with the number of irreducible representations. The Young diagrams can be used to calculate the dimensions of the irreducible representations using the hook length formula.
⁴ In Problem Set 2 you have seen p(5) = 7.
⁵ The cycle structure is invariant under conjugation – see here.
• For each box of the diagram, starting from the right end of its row, pass a line through the boxes from right to left until the chosen box is reached.
• Then take a 90° left turn (downwards) and exit the figure. The path traced out has the shape of a hook.
• Count the number of boxes the hook passes through: this is the hook length of the box. Let the product of all the hook lengths be called H.
• The dimension of the corresponding irreducible representation of Sn is given via
\[
d_r = \frac{n!}{H} .
\]
• Finally, an example for the (∗ ∗ ∗)(∗∗) cycle structure of S5 (the partition 3 + 2): the coloured lines in the corresponding diagram show the non-trivial hooks, and we calculate
\[
d = \frac{5!}{4 \times 3 \times 2} = 5 .
\]
Calculating similarly for all seven diagrams we obtain d1 = 1, d2 = 4, d3 = 5, d4 = 6, d5 = 5, d6 = 4, d7 = 1. We can immediately verify (3.27):
\[
n! = \sum_{r} d_r^2 : \qquad 5! = 120 = 1^2 + 4^2 + 5^2 + 6^2 + 5^2 + 4^2 + 1^2 .
\]
A short computational check is sketched below.
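A minimal sketch of the hook length formula in Python (the partitions of 5 are entered by hand as lists of row lengths):

from math import factorial

def hook_dim(partition, n):
    """Dimension of the S_n irrep labelled by a partition (list of row lengths)."""
    cols = [sum(1 for row in partition if row > j) for j in range(partition[0])]
    H = 1
    for i, row in enumerate(partition):
        for j in range(row):
            arm = row - j - 1                 # boxes to the right in the same row
            leg = cols[j] - i - 1             # boxes below in the same column
            H *= arm + leg + 1                # hook length of box (i, j)
    return factorial(n) // H

partitions_of_5 = [[5], [4, 1], [3, 2], [3, 1, 1], [2, 2, 1], [2, 1, 1, 1], [1, 1, 1, 1, 1]]
dims = [hook_dim(p, 5) for p in partitions_of_5]
print(dims)                                   # [1, 4, 5, 6, 5, 4, 1]
print(sum(d * d for d in dims), "=", factorial(5))   # 120 = 120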
For the use of Young tableaux for SU (N ) see §8. [End of Lecture 8]
6 Lie Groups
Continuous groups have infinitely many group elements. However, to parametrize them one can often use a finite number of real quantities. A convenient way to describe the transformations enacted by the group is via matrix representations, where the finite number of real quantities enter as variables. Here we shall look at some important examples and count the number of these variables.
Since we are concerned with Lie groups, we will be looking close to the identity; thus the matrix group is expanded around the identity matrix, D(g) = 1 + X. As we shall see in the next lecture, the matrices X are the generators, which close into a Lie algebra and can be exponentiated to generate finite group transformations in the neighbourhood of the identity.
The number of generators is the same as the number of real variables describing the group. Each variable can be thought of as the strength of the transformation away from the identity, in the direction of the corresponding generator.
• The group GL(n, C) is the group of invertible n × n complex matrices. Since every entry of the matrix is an independent complex number, one has a total of 2n² real variables.
• The group SL(n, C) is the above with the restriction that the determinant is 1.
For the generators X of SL(n, C), expanding the determinant to first order gives det(1 + X) ≈ 1 + Σi λi = 1 + tr X, where the λi's are the eigenvalues of the generator matrix X. Therefore, demanding det(1 + X) = 1 sets tr X = 0.
• Sp(2n, R): The real symplectic group consists of 2n × 2n matrices S which satisfy
\[
S^T \omega S = \omega , \tag{6.28}
\]
where
\[
\omega = \begin{pmatrix} 0_{n\times n} & -I_{n\times n} \\ I_{n\times n} & 0_{n\times n} \end{pmatrix} .
\]
Note that even for Sp(2n, C), when the entries are complex, the defining relation is (6.28), with a transpose (not a Hermitian conjugate). The generators therefore satisfy
\[
X^T \omega = -\omega X . \tag{6.29}
\]
Parametrizing the generator in terms of n × n blocks,
\[
X = \begin{pmatrix} a_{n\times n} & b_{n\times n} \\ c_{n\times n} & d_{n\times n} \end{pmatrix} ,
\]
the condition (6.29) gives the constraints
\[
d = -a^T , \qquad b = b^T , \qquad c = c^T .
\]
Thus we have one arbitrary real matrix and two symmetric real matrices, so the total number of generators is n² + 2 × n(n + 1)/2 = n(2n + 1). A quick numerical check of the constraint and the counting is sketched below.
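A rough numerical sketch (for n = 2, with randomly chosen blocks): a generator built from an arbitrary a and symmetric b, c satisfies (6.29), and the count of independent parameters is n(2n + 1) = 10.

import numpy as np

n = 2
rng = np.random.default_rng(0)

a = rng.standard_normal((n, n))                        # arbitrary real n x n block
b = rng.standard_normal((n, n)); b = (b + b.T) / 2     # symmetric
c = rng.standard_normal((n, n)); c = (c + c.T) / 2     # symmetric

X = np.block([[a, b], [c, -a.T]])
omega = np.block([[np.zeros((n, n)), -np.eye(n)], [np.eye(n), np.zeros((n, n))]])

assert np.allclose(X.T @ omega, -omega @ X)            # the condition (6.29)

count = n * n + 2 * (n * (n + 1) // 2)                 # a is free; b and c are symmetric
print(count, "=", n * (2 * n + 1))                     # 10 = 10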
Equation (6.31) describes a three-dimensional sphere S³ of unit radius. The coordinates on S³ may be taken to be x¹, x², x³, with x⁰ determined via (6.31) as
\[
x^0 = \sqrt{\,1 - \sum_{j=1}^{3} (x^j)^2\,} . \tag{6.33}
\]
With this we can rewrite
\[
A = x^0 I_{2\times 2} + i x^1 \sigma^1 + i x^2 \sigma^2 + i x^3 \sigma^3 .
\]
Away from identity, there are 3 independent directions along the 3 generators.
These near identity elements along the three directions are :
Thus the change in the parameters x^µ (µ ∈ {0, 1, 2, 3}), under the action of (1 + iσ³), can be written as
\[
\delta \begin{pmatrix} x^0 \\ x^1 \\ x^2 \\ x^3 \end{pmatrix}
= \begin{pmatrix} -x^3 \\ -x^2 \\ x^1 \\ x^0 \end{pmatrix} . \tag{6.36}
\]
Similarly,
\[
L_1 = x^0 \frac{\partial}{\partial x^1} - x^3 \frac{\partial}{\partial x^2} + x^2 \frac{\partial}{\partial x^3} , \qquad
L_2 = x^3 \frac{\partial}{\partial x^1} + x^0 \frac{\partial}{\partial x^2} - x^1 \frac{\partial}{\partial x^3} . \tag{6.38}
\]
Now, using (6.33),
\[
\frac{\partial x^0}{\partial x^i} = -\frac{x^i}{x^0} , \qquad i = 1, 2, 3 .
\]
With the above, we can explicitly check the commutators of the Li. The result coincides with the algebra of the iσ^i's, as it should, since both are representations of the generators of SU(2).
[End of Lecture 10]
D(0) = 1.
(note we interchangeably use 1 for the identity matrix I). Expanding infinitesimally around α = 0,
D(dα) = 1 + idαa Xa .
To reach a finite transformation, we keep acting with the infinitesimal transformation many times:
\[
D(\alpha) = \lim_{k\to\infty}\Big(1 + i\,\frac{\alpha_a X_a}{k}\Big)^{k} = e^{\,i \alpha_a X_a} .
\]
Note that we have also used dα_a = α_a /k. This is called the exponential mapping: from a generator we can get to finite group transformations (at least those connected to the identity element). This implies that we can focus our study on the generators, which is convenient since the generators form a vector space.
Now we want to see how the closure property of the group G enforces the Lie algebra of the generators X. Closure implies
\[
e^{i\alpha_a X_a}\, e^{i\beta_b X_b} = e^{i\delta_c X_c} .
\]
This is the same as
\[
i\delta_a X_a = \log\!\big(1 + \big(e^{i\alpha_a X_a} e^{i\beta_b X_b} - 1\big)\big) = \log(1 + K) \simeq K - \tfrac{1}{2} K^2 .
\]
We have just added and subtracted 1 within the parentheses on the r.h.s. Next we expand the exponentials in K = e^{iα_a X_a} e^{iβ_b X_b} − 1 and keep terms up to quadratic order in the α_a, β_a's. Finally we get
\[
[\alpha_a X_a , \beta_b X_b ] = 2i(\alpha_c + \beta_c - \delta_c)\, X_c \equiv i\gamma_c X_c .
\]
Next, with the choice γ_c = α_a β_b f_{abc}, we can rewrite the above as
\[
[X_a , X_b ] = i f_{abc}\, X_c .
\]
Clearly the l.h.s. is antisymmetric in a, b; thus
\[
f_{abc} = -f_{bac} .
\]
From the hermiticity of the X's (which holds for a unitary representation),
\[
[X_a , X_b ]^{\dagger} = -i f^{*}_{abc} X_c = [X_b , X_a ] = i f_{bac} X_c = -i f_{abc} X_c .
\]
This shows that the f_{abc} are real.
Also, one can go on to show the Jacobi identity,
\[
[X_a , [X_b , X_c ]] + [X_b , [X_c , X_a ]] + [X_c , [X_a , X_b ]] = 0 .
\]
[End of Lecture 11]
Adjoint Representation
It turns out that
\[
(X_a)_{bc} = -i f_{abc} \tag{6.40}
\]
furnishes a representation of the generators. From the Jacobi identity one can show that the matrices (6.40) satisfy the same algebra, [X_a, X_b] = i f_{abc} X_c, and hence the group closure property holds in the adjoint representation.
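A minimal sketch for su(2), where f_abc = ε_abc: the matrices defined by (6.40) indeed close under the same algebra.

import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

# Adjoint representation of su(2): (X_a)_{bc} = -i f_{abc} with f_{abc} = eps_{abc}.
X = [-1j * eps[a] for a in range(3)]

for a in range(3):
    for b in range(3):
        comm = X[a] @ X[b] - X[b] @ X[a]
        target = 1j * sum(eps[a, b, c] * X[c] for c in range(3))
        assert np.allclose(comm, target)
print("adjoint representation closes under the su(2) algebra")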
Here cm, bm are as-yet-unfixed normalizations. The generators J± act as raising (lowering) operators. Since the generators Ji are hermitian, we have (J±)† = J∓. Since ⟨m + 1|J+|m⟩ = c_{m+1} and ⟨m|J−|m + 1⟩ = b_m, hermiticity implies
\[
b_{m-1} = c^{*}_{m} ,
\]
and thereby
\[
J_- |m\rangle = c^{*}_m\, |m - 1\rangle .
\]
Thus we also have
\[
J_+ J_- |m\rangle = |c_m|^2\, |m\rangle , \qquad
J_- J_+ |m\rangle = |c_{m+1}|^2\, |m\rangle . \tag{6.44}
\]
Since we are after finite-dimensional irreps of SO(3), we need the tower of states to terminate somewhere, so that it cannot be raised further. This defines the highest weight state. Let it have Jz eigenvalue j, i.e., Jz|j⟩ = j|j⟩. Since this is the highest weight,
\[
\langle j| J_- J_+ |j\rangle = 0 .
\]
However, using (6.41) this is also the same as
\[
\langle j| J_+ J_- - 2 J_z |j\rangle = |c_j|^2 - 2j ,
\]
where we also used (6.44). Thus we immediately have
\[
|c_j|^2 = 2j . \tag{6.45}
\]
To fix the rest of the normalizations, we use again the commutator [J+, J−] of (6.41) along with (6.44):
\[
\langle m|[J_+ , J_-]|m\rangle = \langle m|J_+ J_- - J_- J_+|m\rangle = |c_m|^2 - |c_{m+1}|^2 = \langle m|2 J_z|m\rangle = 2m ,
\]
\[
\therefore\ |c_n|^2 = |c_{n+1}|^2 + 2n . \tag{6.46}
\]
Now we can use this recursion relation repeatedly to determine |c_{j−1}|², |c_{j−2}|², . . . , |c_{j−s}|² from the highest-weight normalization (6.45):
\[
|c_{j-1}|^2 = |c_j|^2 + 2(j - 1) = 2(2j - 1) , \qquad
|c_{j-2}|^2 = |c_{j-1}|^2 + 2(j - 2) = 2\big(3j - (1 + 2)\big) . \tag{6.47}
\]
Identifying the pattern, in the general case we have
\[
|c_{j-s}|^2 = 2\Big((s + 1)\, j - \sum_{k=1}^{s} k\Big) = (s + 1)(2j - s) . \tag{6.48}
\]
This tells us that we hit zero after going down s = 2j steps, i.e., |c_{−j}|² = 0. Thus we have 2j + 1 states in total.
The way a group element g ∈ G acts on the product states is the following:
\[
\big(D^{(r)}(g) \otimes D^{(s)}(g)\big)\big(|j^{(r)}\rangle \otimes |j^{(s)}\rangle\big)
= D^{(r)}(g)|j^{(r)}\rangle \otimes D^{(s)}(g)|j^{(s)}\rangle .
\]
In terms of generators, D^{(r)} = 1 + iθ_a X_a^{(r)}; plugging this in, a generator acts as
\[
X_a\big(|j\rangle \otimes |j'\rangle\big) = \big(X_a |j\rangle\big) \otimes |j'\rangle + |j\rangle \otimes \big(X_a |j'\rangle\big) . \tag{6.50}
\]
Thus the Jz eigenvalue of the state |j, m; j', m'⟩ is m + m', which at most can be j + j'. Next, J− can act on the top state to produce |j, m − 1; j', m'⟩ and |j, m; j', m' − 1⟩, both of whose Jz eigenvalues are m + m' − 1, which can at most be j + j' − 1. One combination belongs to the j + j' representation, while the orthogonal combination is the highest weight state of the j + j' − 1 representation. Inductively, therefore,
\[
j \otimes j' = (j + j') \oplus (j + j' - 1) \oplus \cdots \oplus |j - j'| . \tag{6.51}
\]
One can check that all the states are accounted for. In both sides there are a
total of (2j + 1)(2j 0 + 1) states. In the following we look at an example with
j = j' = 1/2. The highest weight state in the spectrum is |1/2, 1/2; 1/2, 1/2⟩. This has Jz value 1, and since it is the highest weight, it is |1, 1⟩. Acting with J− on this gives, using (6.49), J−|1, 1⟩ = √2 |1, 0⟩. This should be the same as J−|1/2, 1/2; 1/2, 1/2⟩ = |1/2, −1/2; 1/2, 1/2⟩ + |1/2, 1/2; 1/2, −1/2⟩. Thus we identify
\[
|1, 0\rangle = \frac{1}{\sqrt{2}}\big(|1/2, -1/2; 1/2, 1/2\rangle + |1/2, 1/2; 1/2, -1/2\rangle\big) . \tag{6.52}
\]
Applying J− once again to both sides of the above equation gives
\[
|1, -1\rangle = |1/2, -1/2; 1/2, -1/2\rangle . \tag{6.53}
\]
This is the lowest weight state. The only other missing state in the decomposition is the singlet |0⟩. One can easily convince oneself that this is the state
\[
|0\rangle = \frac{1}{\sqrt{2}}\big(|1/2, -1/2; 1/2, 1/2\rangle - |1/2, 1/2; 1/2, -1/2\rangle\big) . \tag{6.54}
\]
Note that the overall sign of the singlet is undetermined. The coefficients relating one basis of irreps to the other are known as the Clebsch-Gordan coefficients:
\[
|J, M\rangle = \sum_{m=-j}^{j} \sum_{m'=-j'}^{j'} |j, m; j', m'\rangle \langle j, m; j', m'|J, M\rangle . \tag{6.55}
\]
Here on the r.h.s. one has inserted the identity using completeness, and the Clebsch-Gordan coefficients are the overlaps ⟨j, m; j', m'|J, M⟩.
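A small numerical sketch of the 1/2 ⊗ 1/2 example (in the convention [J+, J−] = 2Jz of (6.41), with Ji = σi/2): diagonalizing the total J² on the product space exhibits the triplet and the singlet of (6.52)-(6.54).

import numpy as np

# Spin-1/2 matrices, J_i = sigma_i / 2.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def total(op):
    # op acting on the first factor plus op acting on the second, as in (6.50)
    return np.kron(op, I2) + np.kron(I2, op)

Jx, Jy, Jz = total(sx), total(sy), total(sz)
J2 = (Jx @ Jx + Jy @ Jy + Jz @ Jz).real      # J^2 is real in this basis

vals, vecs = np.linalg.eigh(J2)
print(np.round(vals, 6))     # [0. 2. 2. 2.] -> j(j+1) with j = 0 once, j = 1 thrice
singlet = vecs[:, 0]         # eigenvector with J^2 = 0, basis order |++>, |+->, |-+>, |-->
print(np.round(singlet, 3))  # proportional to (|+-> - |-+>)/sqrt(2), cf. (6.54)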
6.4 SU(2) and SO(3)
Reminder: SO(3) generates rotations of 3-dimensional vectors (§SO(3)), acting as x'_i = R_{ij} x_j. Here the matrix R is made up of real entries, with the properties RRᵀ = 1 and det R = 1. An element of SU(2), on the other hand, can be represented by the 2 × 2 complex matrix
\[
u = \begin{pmatrix} a & b \\ -b^* & a^* \end{pmatrix} . \tag{6.57}
\]
This has 4 real parameters (a, b are complex numbers). The property uu† = 1 must also be satisfied; this holds provided
\[
|a|^2 + |b|^2 = 1 , \tag{6.58}
\]
which is also the condition for det u = 1. Now consider the matrix
\[
h = \sigma^i x_i . \tag{6.59}
\]
Explicitly,
\[
h = \begin{pmatrix} x_3 & x_1 - i x_2 \\ x_1 + i x_2 & -x_3 \end{pmatrix} . \tag{6.60}
\]
Now enact an SU(2) transformation on it,
\[
h \to h' = u h u^{\dagger} \tag{6.61}
\]
\[
= \begin{pmatrix} a & b \\ -b^* & a^* \end{pmatrix}
\begin{pmatrix} x_3 & x_1 - i x_2 \\ x_1 + i x_2 & -x_3 \end{pmatrix}
\begin{pmatrix} a^* & -b \\ b^* & a \end{pmatrix}
= \begin{pmatrix} x'_3 & x'_1 - i x'_2 \\ x'_1 + i x'_2 & -x'_3 \end{pmatrix} . \tag{6.62}
\]
From explicit matrix multiplication one can read off the transformed x'_i 's:
\[
\begin{aligned}
x'_1 &= \Big(\tfrac{1}{2}(a^2 - b^2) + c.c.\Big) x_1 + \Big(\tfrac{i}{2}\big((a^*)^2 + (b^*)^2\big) + c.c.\Big) x_2 + \big(-ab + c.c.\big) x_3 ,\\
x'_2 &= \Big(\tfrac{i}{2}\big((b^*)^2 - (a^*)^2\big) + c.c.\Big) x_1 + \Big(\tfrac{1}{2}(a^2 + b^2) + c.c.\Big) x_2 + \big(i a^* b^* + c.c.\big) x_3 ,\\
x'_3 &= \big(a^* b + c.c.\big) x_1 + \big(i a^* b + c.c.\big) x_2 + \big(|a|^2 - |b|^2\big) x_3 .
\end{aligned} \tag{6.63}
\]
One can check that this transformation has the properties of an SO(3) transformation. Constructing the R matrix from the above,
\[
R = \begin{pmatrix}
\tfrac{1}{2}(a^2 - b^2) + c.c. & \tfrac{i}{2}\big((a^*)^2 + (b^*)^2\big) + c.c. & -ab + c.c. \\[2pt]
\tfrac{i}{2}\big((b^*)^2 - (a^*)^2\big) + c.c. & \tfrac{1}{2}(a^2 + b^2) + c.c. & i a^* b^* + c.c. \\[2pt]
a^* b + c.c. & i a^* b + c.c. & |a|^2 - |b|^2
\end{pmatrix} . \tag{6.64}
\]
All the entries of this matrix are real. Furthermore,
\[
R R^T = \big(|a|^2 + |b|^2\big)^2
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} , \tag{6.65}
\]
which is just the identity matrix, due to (6.58). Similarly,
\[
\det R = \big(|a|^2 + |b|^2\big)^3 = 1 .
\]
Thus R as defined in (6.64) represents an SO(3) rotation. Hence from the representation of SU(2) we can write down the R matrix of SO(3). Going back to (6.59) and (6.61) and writing things in components (with summation implicit),
\[
h' = \sigma_j x'_j = u\, \sigma_i x_i\, u^{\dagger} . \tag{6.66}
\]
Now, using the property of the Pauli matrices
\[
\mathrm{Tr}\,(\sigma_j \sigma_k) = 2\delta_{jk} ,
\]
we can solve for \(\vec{x}\,'\). First multiply both sides of (6.66) by σ_k and then take the trace:
\[
\mathrm{Tr}\,(\sigma_k h') = \mathrm{Tr}\,(\sigma_k \sigma_j)\, x'_j = 2 x'_k = \mathrm{Tr}\,\big(\sigma_k\, u \sigma_i u^{\dagger}\big)\, x_i ,
\qquad \therefore\quad x'_k = \frac{1}{2}\, \mathrm{Tr}\,\big(\sigma_k\, u \sigma_i u^{\dagger}\big)\, x_i . \tag{6.67}
\]
Thus
\[
R_{ji} = \frac{1}{2}\, \mathrm{Tr}\,\big(\sigma_j\, u \sigma_i u^{\dagger}\big) .
\]
Note from the above that both u and −u give the same matrix element R_{ji}. Thus SU(2) is not just SO(3); rather, it covers SO(3) twice. This identification of ±u is expressed in the isomorphism
\[
SO(3) \simeq SU(2)/\mathbb{Z}_2 .
\]
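A numerical sketch of the double cover: pick a random SU(2) element u, build R_{ji} = ½ Tr(σ_j u σ_i u†), and check that R is orthogonal with det 1 and that −u gives the same R.

import numpy as np

sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

def R_from_u(u):
    R = np.empty((3, 3))
    for j in range(3):
        for i in range(3):
            R[j, i] = 0.5 * np.trace(sigma[j] @ u @ sigma[i] @ u.conj().T).real
    return R

# A random SU(2) element u = [[a, b], [-b*, a*]] with |a|^2 + |b|^2 = 1.
rng = np.random.default_rng(1)
z = rng.standard_normal(4)
z /= np.linalg.norm(z)
a, b = z[0] + 1j * z[1], z[2] + 1j * z[3]
u = np.array([[a, b], [-b.conjugate(), a.conjugate()]])

R = R_from_u(u)
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
assert np.allclose(R_from_u(-u), R)      # u and -u map to the same rotation
print("SU(2) double-covers SO(3): checks passed")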
The upshot of this discussion is that the irreps of SO(3) are also irreps of SU(2). The generators of SU(2) also satisfy the same Lie algebra as (2.8). However, we will follow standard SU(2) conventions and define the raising and lowering operators a bit differently:
\[
J_{\pm} = \frac{J_1 \pm i J_2}{\sqrt{2}} . \tag{6.68}
\]
In comparison with (6.41) there is an extra prefactor of 1/√2, due to which (6.49) is also modified to
\[
J_+ |m\rangle = \frac{\sqrt{(j + 1 + m)(j - m)}}{\sqrt{2}}\, |m + 1\rangle , \qquad
J_- |m\rangle = \frac{\sqrt{(j + 1 - m)(j + m)}}{\sqrt{2}}\, |m - 1\rangle . \tag{6.69}
\]
And (6.41) is modified to
\[
[J_3 , J_{\pm}] = \pm J_{\pm} , \qquad [J_+ , J_-] = J_3 . \tag{6.70}
\]
For a larger algebra one chooses a maximal set of mutually commuting Hermitian generators Hi (the Cartan generators) and diagonalizes them simultaneously on the states, Hi|µ⟩ = µi|µ⟩. Here the eigenvalues µi are called the weights, and the vector (µ1, µ2, . . . , µm) is called the weight vector.
Adjoint Representations
Recall the definition of the adjoint representation from §AdjointRep. In this case the states themselves can be labelled by the generators Xa as |Xa⟩. One can also have a linear combination of generators as a state: |αXa + βXb⟩. The inner product can be defined as
\[
\langle X_a | X_b \rangle = \frac{1}{\lambda}\, \mathrm{Tr}\big(X_a^{\dagger} X_b\big) .
\]
In the above, λ is just k_D as in (7.71) for the adjoint representation. Let us next see how generators act on these states:
\[
X_a |X_b\rangle = |X_c\rangle \langle X_c | X_a | X_b \rangle = |X_c\rangle\, [T_a]_{cb} = i f_{abc}\, |X_c\rangle = |[X_a , X_b ]\rangle . \tag{7.73}
\]
Though we do not prove it here, it can be shown that the label α is unique
for a generator, i.e., there cannot be a β 6= α such that, Eα and Eβ are the
same generator. This is called the uniqueness theorem for the roots. Since
the states, |Eα i are eigenstates of the Hermitian operators, Hi , we can and
will choose a basis such that these are normalized, i.e.,
\[
\langle E_{\beta} | E_{\alpha} \rangle = \frac{1}{\lambda}\, \mathrm{Tr}\,\big(E_{-\beta}\, E_{\alpha}\big) = \delta_{\alpha\beta} . \tag{7.77}
\]
Note that we also have ⟨Hi|Hj⟩ = δij.
Next we show that (7.74) implies that the generators E±α act as raising and lowering operators (similar to J± for SU(2)). In the first equality of that computation one uses the algebra (7.74). This also implies that the state Eα|E−α⟩ has weight zero; thus it must be a linear combination of the Cartan generators, Eα|E−α⟩ = βi|Hi⟩. One then shows that βi = αi (in the third and the fifth equalities one uses the cyclicity property of the trace), i.e., [Eα, E−α] = α · H. Next we can define
\[
E^{\pm} = \frac{1}{|\alpha|}\, E_{\pm\alpha} , \qquad E_3 = \frac{\alpha \cdot H}{\alpha^2} . \tag{7.83}
\]
Then using (7.74), one can derive:
The above is exactly the algebra of SU(2) (6.70); thus we see that, given the Cartan subalgebra and a particular generator Eα, we can construct an SU(2) subalgebra of the group G. We shall call this algebra SU(2)α.
[End of Lecture 15]
Acting on a state in representation D,
\[
E_3 |\mu, x, D\rangle = \frac{\alpha \cdot \mu}{\alpha^2}\, |\mu, x, D\rangle . \tag{7.85}
\]
Since this is an SU(2) E3 eigenvalue, it must be either an integer or a half-integer; therefore
\[
\frac{2\,\alpha \cdot \mu}{\alpha^2} \in \mathbb{Z} . \tag{7.86}
\]
Now let (E⁺)^p |µ, x, D⟩ be the highest weight state, which means that there is some integer p for which
\[
(E^+)^{p+1} |\mu, x, D\rangle = 0 .
\]
The weight of this highest weight state can be evaluated directly:
\[
E_3 (E^+)^p |\mu, x, D\rangle = \Big(\frac{\alpha \cdot \mu}{\alpha^2} + p\Big) (E^+)^p |\mu, x, D\rangle , \tag{7.87}
\]
since
\[
E_3 (E^+)^p = E_3 E^+ (E^+)^{p-1} = E^+ E_3 (E^+)^{p-1} + (E^+)^p
= (E^+)^2 E_3 (E^+)^{p-2} + 2 (E^+)^p = \cdots = (E^+)^p E_3 + p\, (E^+)^p . \tag{7.88}
\]
Thus
\[
[E_3 , (E^+)^p] = p\, (E^+)^p ,
\]
and hence, calling the highest E3 eigenvalue of this SU(2)α representation j,
\[
\frac{\alpha \cdot \mu}{\alpha^2} + p = j . \tag{7.89}
\]
Similarly we construct a lowest weight state,
\[
(E^-)^q |\mu, x, D\rangle ,
\]
whose E3 eigenvalue gives
\[
\frac{\alpha \cdot \mu}{\alpha^2} - q = -j . \tag{7.90}
\]
We can now add (7.89) and (7.90) and obtain
\[
\frac{\alpha \cdot \mu}{\alpha^2} = -\frac{1}{2}\,(p - q) . \tag{7.91}
\]
Now consider two generators Eα and Eβ. Each has its own SU(2). Consider first SU(2)α. Applying (7.91) to the state |µ, x, D⟩ = |Eβ⟩ we have
\[
\frac{\alpha \cdot \beta}{\alpha^2} = -\frac{1}{2}\,(p - q) . \tag{7.92}
\]
Similarly, for the SU(2)β algebra with the state |Eα⟩ we have
\[
\frac{\beta \cdot \alpha}{\beta^2} = -\frac{1}{2}\,(p' - q') . \tag{7.93}
\]
Multiplying (7.92) and (7.93),
\[
\frac{(\beta \cdot \alpha)^2}{\beta^2 \alpha^2} = \frac{1}{4}\,(p - q)(p' - q') = \cos^2\theta_{\alpha\beta} . \tag{7.94}
\]
On the l.h.s. appears the cosine squared of the angle θαβ between the root vectors α and β. Thus the product (p − q)(p' − q') in the middle equality must be non-negative (since a cosine squared is non-negative), must be an integer (since each of p, q, p', q' is an integer), and cannot exceed 4. The possible cases are:
(p − q)(p' − q')    θαβ
0                   90°
1                   60° or 120°
2                   45° or 135°
3                   30° or 150°
4                   0° or 180°
In the above, the last case is trivial: the 0° case is ruled out by the uniqueness theorem for the roots, while the 180° case simply corresponds to the generator E−α.
7.1 SU(3)
Let us look at the group SU(3) and see how the above features are realized. For SU(3) there are 8 generators, which are traceless hermitian 3 × 3 matrices. The standard basis for them is a generalization of the Pauli matrices, called the Gell-Mann matrices in the literature. This also furnishes the fundamental representation of SU(3):
\[
\lambda_1 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad
\lambda_2 = \begin{pmatrix} 0 & -i & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad
\lambda_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix},
\]
\[
\lambda_4 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \quad
\lambda_5 = \begin{pmatrix} 0 & 0 & -i \\ 0 & 0 & 0 \\ i & 0 & 0 \end{pmatrix}, \quad
\lambda_6 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \tag{7.95}
\]
\[
\lambda_7 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -i \\ 0 & i & 0 \end{pmatrix}, \quad
\lambda_8 = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -2 \end{pmatrix}.
\]
Figure 1: Weight diagram for SU (3) fundamental, also called 3 due to its dimensionality.
Figure 1 shows the weight diagram for SU(3) in the fundamental representation. The other 6 generators generate movements between the points in this weight lattice. [End of Lecture 16]
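As a small sketch (assuming the standard normalization Ta = λa/2 for the generators, so that H1 = T3 and H2 = T8 as identified below), one can compute the three weights of the fundamental directly from the diagonal Cartan generators; they reproduce the triangle of Fig. 1:

import numpy as np

s3 = np.sqrt(3)
lam3 = np.diag([1.0, -1.0, 0.0])
lam8 = np.diag([1.0, 1.0, -2.0]) / s3

H1, H2 = lam3 / 2, lam8 / 2          # Cartan generators T3, T8

# The weights of the fundamental are the simultaneous eigenvalues on the basis states.
weights = [(H1[i, i], H2[i, i]) for i in range(3)]
for w in weights:
    print(np.round(w, 4))
# (0.5, 0.2887), (-0.5, 0.2887), (0.0, -0.5774),
# i.e. (+-1/2, 1/(2 sqrt(3))) and (0, -1/sqrt(3))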
One can thus determine the 6 roots:
\[
E_{\pm 1, 0} = \frac{1}{\sqrt{2}}\,(T_1 \pm i T_2) , \qquad
E_{\pm\frac{1}{2}, \pm\frac{\sqrt{3}}{2}} = \frac{1}{\sqrt{2}}\,(T_4 \pm i T_5) , \qquad
E_{\mp\frac{1}{2}, \pm\frac{\sqrt{3}}{2}} = \frac{1}{\sqrt{2}}\,(T_6 \pm i T_7) . \tag{7.101}
\]
To find the combinations of the generators appearing above we can act by
the Cartans and fix the coefficients. For instance, to find E±1,0 , by definition
we need to have :
Figure 2: Weight diagram for the SU(3) adjoint representation, 8 (axes H1, H2).
Let us see how (7.101) satisfies the above requirements. We have already identified H1 = T3 and H2 = T8. Thus,
\[
\begin{aligned}
H_1 |E_{\pm 1, 0}\rangle &= |[H_1 , E_{\pm 1, 0}]\rangle
= \Big|\Big[T_3 , \tfrac{1}{\sqrt{2}}(T_1 \pm i T_2)\Big]\Big\rangle \\
&= \tfrac{1}{\sqrt{2}}\,|[T_3 , T_1 ]\rangle \pm \tfrac{i}{\sqrt{2}}\,|[T_3 , T_2 ]\rangle
= \tfrac{1}{\sqrt{2}}\,|i f_{31j} T_j \rangle \pm \tfrac{i}{\sqrt{2}}\,|i f_{32j} T_j \rangle \\
&= \tfrac{1}{\sqrt{2}}\,|i f_{312} T_2 \rangle \pm \tfrac{i}{\sqrt{2}}\,|i f_{321} T_1 \rangle
= \tfrac{i}{\sqrt{2}}\,|T_2 \rangle \pm \tfrac{1}{\sqrt{2}}\,|T_1 \rangle
= \pm \tfrac{1}{\sqrt{2}}\big(|T_1 \rangle \pm i |T_2 \rangle\big) \\
&= \pm 1\, |E_{\pm 1, 0}\rangle . \tag{7.104}
\end{aligned}
\]
Clearly, since the individual states |Ta⟩ are orthonormalized, the overall normalization is 1/√2. The weight diagram for the adjoint representation (which is 8-dimensional and also called 8) has the shape of a hexagon, along with two weights at the origin corresponding to the Cartans. See Fig. 2.
Figure 3: Weight diagram for SU(3) anti-fundamental, also called 3̄ due to its dimensionality and conjugation.
Product representations
The weight diagrams can also be drawn for product representations, and one can then use them to decompose a product into the irreducible representations it contains. To see how this is done, we first look at SU(2), where we know the decomposition rule (6.51). Let us consider the example
\[
\tfrac{1}{2} \otimes \tfrac{1}{2} .
\]
In the case of SU(2) there is only one axis for the Cartan generator, which we choose to be J3. The first step is to list all the possible weights contained in the product representation. For the above example the weights are the J3 eigenvalues of the four states |1/2, ±1/2; 1/2, ±1/2⟩. The J3 values simply add up and give us three possible values −1, 0, 1. However, there are two zeros, corresponding to the addition of 1/2 and −1/2, which happens twice. Thus we have
Figure 4: The weight diagram indicates 1/2 ⊗ 1/2 = 1 ⊕ 0.
the following weight diagram, as given in the l.h.s. of Fig. 4. The next step is to realize that in the weight diagrams of the irreducible representations of SU(2) none of the weights are degenerate: in the spin-j irrep of SU(2) the possible J3 eigenvalues −j, −j + 1, . . . , j − 1, j each occur only once. Thus the diagram decomposes into the two diagrams shown on the r.h.s. of Fig. 4. These are nothing but the weight diagrams of j = 1 and j = 0. Thus we have derived
\[
\tfrac{1}{2} \otimes \tfrac{1}{2} = 1 \oplus 0 .
\]
For SU(3) it is a bit more tricky, since in the irreps of this group weights can have degenerate values. For example, the adjoint weight diagram (Fig. 2) has a double degeneracy at the origin, reflecting the rank-2 nature of SU(3). Let us look at the following example for SU(3):
\[
3 \otimes \bar{3} .
\]
Following the first step, let us list all the possible weights in this product representation. We simply need to add each of the weights given in (7.100) to each of the weights in (7.105). We obtain the set
\[
(0, 0),\ (-1, 0),\ (1, 0),\ \Big(-\tfrac{1}{2}, \tfrac{\sqrt{3}}{2}\Big),\ \Big(\tfrac{1}{2}, \tfrac{\sqrt{3}}{2}\Big),\ \Big(-\tfrac{1}{2}, -\tfrac{\sqrt{3}}{2}\Big),\ \Big(\tfrac{1}{2}, -\tfrac{\sqrt{3}}{2}\Big) .
\]
In the above, (0, 0) appears thrice and all the others appear once. This gives the weight diagram shown on the l.h.s. of Fig. 5, from which we can immediately see that it is built out of Fig. 2 and the singlet, which is the trivial 1-dimensional representation (just one weight at the origin). Thus we conclude:
\[
3 \otimes \bar{3} = 8 \oplus 1 . \tag{7.106}
\]
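As a sketch of the weight-addition step above, the following adds every weight of 3 to every weight of 3̄ (taken as the negatives of the weights of 3, in the normalization of the earlier sketch) and counts multiplicities:

import numpy as np
from collections import Counter

s3 = np.sqrt(3)
w3 = [(0.5, 1 / (2 * s3)), (-0.5, 1 / (2 * s3)), (0.0, -1 / s3)]    # weights of 3
w3bar = [(-a, -b) for (a, b) in w3]                                  # weights of 3-bar

sums = Counter()
for (a1, b1) in w3:
    for (a2, b2) in w3bar:
        sums[(round(a1 + a2, 6), round(b1 + b2, 6))] += 1

for weight, mult in sorted(sums.items()):
    print(weight, mult)
# (0, 0) appears 3 times; (+-1, 0) and (+-1/2, +-sqrt(3)/2) appear once each:
# the hexagon plus doubly degenerate origin (the 8), plus the singlet at the origin.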
[End of Lecture 17]
Figure 5: Weight diagrams for 3 ⊗ 3̄ (l.h.s.), decomposing into 8 ⊕ 1.

Young diagrams for SU(N) consist of boxes arranged in left-justified rows of non-increasing length.
• The boxes can be associated with numbers that can be used to calculate the dimension of the irrep. The numbers are assigned as follows:
  – The box at the top-left corner gets the number N. Numbers increase by one along a row: the boxes in the top row to the right of the left-most one get the numbers N + 1, N + 2, . . . .
  – Numbers decrease by one down a column: in the left-most column, the boxes below the N box get the numbers N − 1, N − 2, . . . .

  N      N+1    N+2    ...
  N−1    N      N+1    ...
  ...    ...    ...
The dimension of the corresponding SU(N) irrep is then given by
\[
d_r = \frac{\text{Factors}}{\text{Hooks}} .
\]
Here the numerator is the product of all the entries in the boxes of the diagram, and the denominator is the product of all the hook lengths. For example, for the irrep given by the Young diagram with rows of lengths (2, 2, 1) we find
\[
d_r = \frac{N^2 (N + 1)(N - 1)(N - 2)}{4 \times 3 \times 2} ,
\]
so for N = 3, i.e., SU(3), we obtain d_r = 3; after deleting the complete column of three boxes, this is just the Young diagram of the anti-fundamental 3̄ = 3∗.
As further examples, written as decompositions of Young diagrams:
\[
3 \otimes \bar{3} = 8 \oplus 1 , \qquad 3 \otimes 3 = 6 \oplus \bar{3} .
\]
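The factors-over-hooks rule is easy to automate. Below is a minimal sketch (with each Young diagram entered as its list of row lengths) reproducing the SU(3) dimensions quoted above:

def su_n_dim(partition, N):
    """Dimension of the SU(N) irrep with the given Young diagram (list of row lengths)."""
    cols = [sum(1 for row in partition if row > j) for j in range(partition[0])]
    factors, hooks = 1, 1
    for i, row in enumerate(partition):
        for j in range(row):
            factors *= N + j - i                              # the number written in box (i, j)
            hooks *= (row - j - 1) + (cols[j] - i - 1) + 1    # hook length of box (i, j)
    return factors // hooks

N = 3
print(su_n_dim([1], N))        # 3   (fundamental)
print(su_n_dim([1, 1], N))     # 3   (anti-fundamental 3-bar)
print(su_n_dim([2], N))        # 6
print(su_n_dim([2, 1], N))     # 8   (adjoint)
print(su_n_dim([2, 2, 1], N))  # 3   (the example above)
# consistent with 3 x 3 = 6 + 3-bar and 3 x 3-bar = 8 + 1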