
Group representation theory, Lecture Notes

Travis Schedler
January 12, 2021

Contents
1 References
  1.1 Limitations on scope of the exam and course

2 Introduction and fundamentals
  2.1 Groups
  2.2 Representations: informal definition
  2.3 Motivation and History
  2.4 Representations: formal definition
  2.5 Going back from matrices to vector spaces
  2.6 Representations from group actions
  2.7 The regular representation
  2.8 Subrepresentations
  2.9 Direct sums
  2.10 Maschke’s Theorem
  2.11 Schur’s Lemma
  2.12 Representations of abelian groups
  2.13 One-dimensional representations and abelianisation
    2.13.1 Recollections on commutator subgroups and abelianisation
    2.13.2 Back to one-dimensional representations
  2.14 Homomorphisms of representations and representations of homomorphisms
  2.15 The decomposition of a representation
  2.16 The decomposition of the regular representation
  2.17 Examples: S3, dihedral groups and S4
  2.18 The number of irreducible representations
  2.19 Duals and tensor products
    2.19.1 Definition of the tensor product of vector spaces
    2.19.2 Natural isomorphisms involving tensor products
    2.19.3 Tensor products of representations
  2.20 External tensor product
  2.21 Summary of section

3 Character theory
  3.1 Traces and definitions
  3.2 Characters are class functions and other basic properties
  3.3 Characters of direct sums and tensor products
  3.4 Inner product of functions and characters
  3.5 Dimension of homomorphisms via characters
    3.5.1 Applications of Theorem 3.5.1
    3.5.2 Proof of Theorem 3.5.1
  3.6 Character tables
    3.6.1 Definition and examples
    3.6.2 Row and column orthogonality; unitarity
    3.6.3 More examples
  3.7 Kernels of representations and normal subgroups
  3.8 Automorphisms [mostly non-examinable]

4 Algebras and modules
  4.1 Definitions and examples
  4.2 Modules
  4.3 Group algebras as direct sums of matrix algebras
  4.4 Representations of matrix algebras
  4.5 Semisimple algebras
  4.6 Character theory
  4.7 The center
  4.8 Projection operators
  4.9 General algebras: linear independence of characters

1 References
These notes are for courses taught by the author at Imperial College. They follow somewhat the previous iterations by Bellovin (http://wwwf.imperial.ac.uk/~rbellovi/teaching/m3p12.html), Newton (https://nms.kcl.ac.uk/james.newton/M3P12/notes.pdf), Segal (http://www.homepages.ucl.ac.uk/~ucaheps/papers/Group%20Representation%20theory%202014.pdf), and Towers (https://sites.google.com/site/matthewtowers/m3p12).
I will mostly base the course on the above notes and will not myself look much at
textbooks. That said, it may be useful for you to read textbooks to find additional exercises
as well as further details and alternative points of view. The main textbook for the course
is:

• G. James and M. W. Liebeck, Representations and characters of groups.

Other recommended texts include (listed also on the library page for the course):

• P. Etingof et al., Introduction to representation theory. This is also available online at http://math.mit.edu/~etingof/replect.pdf. Note that the approach is more advanced and ring-theoretic than the one we will use, but nonetheless we will discuss some of the results in Sections 1–3 and maybe touch (non-examinably) on the definition of categories (Section 6).

• J.-P. Serre, Linear representations of finite groups. Only Chapters 1, 2, 3, 5, and 6 are relevant (and maybe a bit of Chapters 7 and 8).

• M. Artin, Algebra. Mainly Chapter 9 is relevant (you should also be familiar with most of Chapters 1–4 and some of Chapter 7 already). You should also be familiar with some of Chapter 10 (be comfortable with rings; no algebraic geometry is needed).

• W. Fulton and J. Harris, Representation theory: a first course. Only Sections 1–3 are relevant; Sections 4 and 5 might be interesting as examples, but we won’t discuss them in the course (beyond a brief mention perhaps).

• J. L. Alperin, Local representation theory. The first two chapters are introductory and relevant to the course; the rest is irrelevant for us, but interesting if you are curious what happens over fields of positive characteristic.

1.1 Limitations on scope of the exam and course


In this course we will exclusively use the complex field in all assessed work and in all material you are responsible for on the exam. You may likewise restrict to the case where the representations are finite-dimensional complex vector spaces. A few results are stated in greater generality, but you don’t need to remember the details of what does and does not extend to the infinite-dimensional case: if you don’t remember, simply say in your solutions (e.g., when recalling a definition or theorem) that you take the vector spaces involved to be finite-dimensional. Therefore, for the purposes of following the course and preparing for the exam, you may restrict your attention to finite-dimensional complex vector spaces if you prefer.
Additionally, we will almost exclusively be dealing with representations of finite groups:
that is, the group G that we take representations of will almost always be finite. You
won’t need to learn a lot about what happens outside of this case, although a few results are
stated in greater generality (because the hypothesis that G is finite is not needed) and I have
provided a few examples where G is infinite that I think you can and should understand.
Any assessed work (including the exam) will keep the use of representations of infinite groups
to a minimum.
In these notes I have included several non-examinable remarks. The main purpose of
these is to make sure you have some exposure to more general situations than the ones we
are primarily concerned with in this course: typically the case where the field is not C,
or where the vector spaces (representations) are infinite-dimensional. You should glance at
these to have an idea of what happens, but only read them in more detail if you are curious.

I stress that there is no obligation to look at these at all, and I will not expect you to
understand or remember these.

2 Introduction and fundamentals


2.1 Groups
Definition 2.1.1. A group is a set G together with an associative multiplication map G ×
G → G (written g · h) such that there is an identity element e ∈ G (i.e., e · g = g · e = g for all
g ∈ G) and, for every element g ∈ G, an inverse element g −1 satisfying g · g −1 = e = g −1 · g.

A group G is called finite if G is a finite set.

2.2 Representations: informal definition


Informally speaking, a representation of a group G is a way of writing the group elements as square matrices of the same size (multiplicatively, and assigning to e ∈ G the identity matrix). The dimension of the representation is the size (number of rows = number of columns) of the matrices. Before getting to the formal definitions, let us consider a few examples. Given g ∈ G, let ρ(g) denote the corresponding matrix.

Example 2.2.1. If G is any group, we can consider the 1-dimensional representation ρ(g) =
(1) for all g ∈ G. This is called the trivial representation.

Example 2.2.2. Let ζ be any n-th root of unity (e.g., ζ = e^{2πi/n} = cos(2π/n) + i sin(2π/n), or more generally ζ = e^{2πik/n} for any k ∈ {0, 1, . . . , n−1}). If G = Cn = {1, g, g², . . . , g^{n−1}} is the cyclic group of size n, then we can consider the 1-dimensional representation ρ(g^m) = (ζ^m). Notice that for ζ = 1 we recover the trivial representation.

Example 2.2.3. Let G = Sn . Consider the n-dimensional representation where ρ(g) is the
corresponding n × n permutation matrix.

Example 2.2.4. Let G = Sn and consider the one-dimensional representation where ρ(g) =
(sign(g)), the sign of the permutation g.

Remark 2.2.5. The previous two examples are related: applying the determinant to the
permutation matrix recovers the sign of the permutation.
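As a quick sanity check, here is a minimal numerical sketch (Python with numpy; permutations are written as 0-indexed tuples, a convention chosen just for the sketch) verifying both that σ ↦ ρ(σ) is multiplicative and that det(ρ(σ)) recovers sign(σ) for S3:

```python
# Illustrative sketch (not from the notes): permutation matrices multiply
# like permutations, and the determinant recovers the sign (Remark 2.2.5).
import itertools
import numpy as np

def perm_matrix(sigma):
    """Matrix of rho(sigma) with rho(sigma) e_i = e_{sigma(i)};
    sigma is a tuple with sigma[i] = image of i (0-indexed)."""
    n = len(sigma)
    P = np.zeros((n, n))
    for i in range(n):
        P[sigma[i], i] = 1
    return P

def compose(sigma, tau):
    # (sigma . tau)(i) = sigma(tau(i))
    return tuple(sigma[tau[i]] for i in range(len(sigma)))

n = 3
for sigma in itertools.permutations(range(n)):
    for tau in itertools.permutations(range(n)):
        # multiplicativity: rho(sigma tau) = rho(sigma) rho(tau)
        assert np.allclose(perm_matrix(compose(sigma, tau)),
                           perm_matrix(sigma) @ perm_matrix(tau))
    # det of the permutation matrix is +1 or -1, the sign of sigma
    print(sigma, int(round(np.linalg.det(perm_matrix(sigma)))))
```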

Finally one geometric example:

Example 2.2.6. Let G = Dn, the dihedral group of size 2n. [Caution: in algebra this group is often denoted D2n! We decided to use the geometric Dn notation since Cn and Dn are naturally subgroups of Sn; note that the subscript need not give the order of the group, as |Sn| = n! ≠ n.] We have a 2-dimensional representation where ρ(g) = A is the matrix such that the map v ↦ Av, v ∈ R², is the associated reflection or rotation in R². Therefore, for g a (counterclockwise) rotation by θ,

ρ(g) = \begin{pmatrix} \cos θ & −\sin θ \\ \sin θ & \cos θ \end{pmatrix}. For g a reflection about the line which makes angle θ with the x-axis, ρ(g) = \begin{pmatrix} \cos 2θ & \sin 2θ \\ \sin 2θ & −\cos 2θ \end{pmatrix}. For example, if n = 4, so |G| = 8, we can list all eight elements g ∈ G and their matrices ρ(g):

• g = 1: ρ(1) = I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}

• g = (counterclockwise) rotation by π/2: ρ(g) = \begin{pmatrix} 0 & −1 \\ 1 & 0 \end{pmatrix}

• g = rotation by π: ρ(g) = −I = \begin{pmatrix} −1 & 0 \\ 0 & −1 \end{pmatrix}

• g = rotation by 3π/2: ρ(g) = \begin{pmatrix} 0 & 1 \\ −1 & 0 \end{pmatrix}

• g = reflection about the x-axis: ρ(g) = \begin{pmatrix} 1 & 0 \\ 0 & −1 \end{pmatrix}

• g = reflection about the line x = y: ρ(g) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}

• g = reflection about the y-axis: ρ(g) = \begin{pmatrix} −1 & 0 \\ 0 & 1 \end{pmatrix}

• g = reflection about the line x = −y: ρ(g) = \begin{pmatrix} 0 & −1 \\ −1 & 0 \end{pmatrix}
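The eight matrices above are easy to generate by machine from the rotation and reflection formulas. The following short sketch (Python; an illustration added here, exact only up to floating point) builds them for n = 4 and checks that the set is closed under multiplication, as it must be for the image of a homomorphism:

```python
# Illustrative sketch: the eight matrices of Example 2.2.6 for n = 4,
# checked to be closed under multiplication.
import numpy as np

def rot(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def refl(theta):  # reflection about the line at angle theta to the x-axis
    return np.array([[np.cos(2*theta),  np.sin(2*theta)],
                     [np.sin(2*theta), -np.cos(2*theta)]])

n = 4
D4 = [rot(2*np.pi*k/n) for k in range(n)] + \
     [refl(np.pi*k/n) for k in range(n)]

def contains(mats, A):
    return any(np.allclose(A, B) for B in mats)

assert all(contains(D4, A @ B) for A in D4 for B in D4)
print("the eight D4 matrices are closed under multiplication")
```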

2.3 Motivation and History


Group representation theory was born in the work of Frobenius in 1896, triggered by a letter from Dedekind containing the following observation (which I take from Etingof et al.): let the elements of a finite group G be variables x_1, . . . , x_n, and consider the determinant of the multiplication table, a polynomial of degree n. Dedekind observed that this polynomial factors into irreducible polynomials, each of whose multiplicity equals its degree. The history of group representation theory is explained in a book by Curtis, Pioneers of representation theory.
This theory appears all over the place, even before its origin in 1896:

• In its origin, group theory appears as symmetries. This dates at least to Felix Klein’s 1872 Erlangen program characterising geometries (e.g., Euclidean, hyperbolic, spherical, and projective) by their symmetry groups. These include rotation and reflection matrices as above.

• In 1904, William Burnside famously used representation theory to prove his theorem that any finite group of order p^a q^b, for p, q prime numbers and a, b ≥ 1, is not simple, i.e., there always exists a proper nontrivial normal subgroup. (By induction on a and b this implies that these groups are solvable.) This is in the course textbook of James and Liebeck, and it requires not much more than what is done in this course (we could try to get to it at least in non-assessed coursework).

• In number theory it is of crucial importance to study representations of the absolute Galois groups of finite extension fields of Q. The case of one-dimensional representations is called (abelian) class field theory, and was a top achievement of algebraic number theory of the 20th century. A generalisation to higher-dimensional representations was formulated by Robert Langlands in the late 1960s, and is called the (classical) Langlands correspondence. In the two-dimensional case, for certain representations coming from elliptic curves, this statement becomes the Taniyama–Shimura conjecture, which is now a theorem, and which implies Fermat’s Last Theorem (Wiles and Taylor–Wiles proved the latter in 1995 by proving the necessary cases of the conjecture, and in 2001 the full Taniyama–Shimura conjecture was proved by Breuil–Conrad–Diamond–Taylor). In general the classical Langlands correspondence remains a wide-open conjecture.

• Partly motivated by an attempt to break through its difficulty, the Langlands correspondence has a (complex) geometric analogue, thinking of the Galois group as automorphisms of a geometric covering (of a complex curve = Riemann surface), where representations yield local systems. One obtains the “geometric Langlands correspondence” (the first version was proved in the one-dimensional case by Deligne, in the two-dimensional case by Drinfeld in 1983, stated for dimensions ≥ 3 by Laumon in 1987, and proved in 2001 and 2004 by Frenkel–Gaitsgory–Vilonen and Gaitsgory). There are by now many deeper and more general statements, many of which are proved.

• Representation theory appears in chemistry (going back at least to the early 20th century). For instance, G is the symmetry group of a molecule (rotational and reflectional symmetries) and ρ(g) = A is the 3 × 3 matrix such that v ↦ Av is the rotation or reflection.

• In quantum mechanics, the spherical symmetry of atoms gives rise to orbitals and is responsible for the discrete (“quantised”) energy levels, momenta, and spin of electrons. By the Pauli exclusion principle (formulated by Wolfgang Pauli in 1925) this is responsible for the structure of electron shells and ultimately all of chemistry.

• More generally, given any differential equations that have a symmetry group G, the solutions to the system of equations form a representation of G. Applied to the Schrödinger equation this recovers the previous example in quantum mechanics.

• The study of quantum chromodynamics (quarks and gluons and their strong nuclear interactions) is heavily based on the representation theory of certain basic Lie groups. For example, Gell-Mann famously proposed in 1961 the “Eightfold Way” (of physics, not the Buddhist notion after which it was named), which states that the up, down, and strange quarks form a basis for the three-dimensional representation of SU(3) (the “special unitary group” of matrices A such that det(A) = 1 and A^{−1} = Ā^t). This explained why the particles appear in a way matching precisely this group’s representation theory.

2.4 Representations: formal definition


Now we give the main definition of the course. We will need to work with vector spaces and
linear algebra: so you are assumed to be familiar with this.
Given a vector space V , let GL(V ) denote the group of invertible linear transformations
from V to itself (“GL” stands for “General Linear”). If V has a basis v1 , . . . , vn , then in
terms of the basis we can identify GL(V ) with the more familiar group GLn of invertible
n × n matrices: this follows by the correspondence between linear maps V → V and n × n
matrices (which you should know).

Definition 2.4.1. A representation of a group G is a pair (V, ρ) where V is a vector space


and ρ : G → GL(V ) is a group homomorphism.

Remark 2.4.2. We can also think of (V, ρ) as a pair of V and a map G × V → V, g · v :=


ρ(g)(v). Then, the axioms that G × V → V should satisfy are: (1) it is a group action (the
action is associative and e · v = v for all v ∈ V ), and (2) the map V → V , v 7→ g · v is linear
for all g ∈ G. That is, a representation is nothing but a linear action of G on V .

You may have noticed that whenever a new notion is introduced in mathematics, there is
a corresponding notion of the important functions, or “morphisms,” between such objects.
Then, the “isomorphisms” are the invertible morphisms. For example:
Objects Morphisms Isomorphisms
Groups Homomorphisms Isomorphisms
Vector spaces Linear maps Isomorphisms
Topological spaces Continuous maps Homeomorphisms
Rings Ring homomorphisms Ring isomorphisms
Group representations ? ?
The question marks are the following:

Definition 2.4.3. A homomorphism of representations T : (V, ρ) → (V′, ρ′) is a linear map T : V → V′ such that

T ∘ ρ(g) = ρ′(g) ∘ T   (2.4.4)

for all g ∈ G. T is an isomorphism if T is invertible. Two representations are isomorphic if there exists an isomorphism between them.

As is the case in many places in mathematics, condition (2.4.4) can equivalently be expressed by requiring that the following diagram commute, i.e., all paths with the same endpoints give the same function (in this case, both compositions from the top left corner to the bottom right are equal):

\begin{CD}
V @>{\rho(g)}>> V \\
@V{T}VV @VV{T}V \\
V' @>{\rho'(g)}>> V'
\end{CD}   (2.4.5)
Exercise 2.4.6. (i) If T is a homomorphism of representations and T is invertible as a linear
transformation then the inverse T −1 is also a homomorphism of representations. (ii) Using
linear algebra and (i), prove that the following are equivalent: (a) T is an isomorphism of
representations; (b) T is a bijective homomorphism of representations.
In this course we will always take the underlying field of the vector space to be C, the
field of complex numbers, and will not further mention it. However, much of what we say
will not require this, and in your future lives you may be interested in more general fields
(e.g., the real field in geometry, and finite extensions of Q as well as finite and p-adic fields
in number theory). Moreover, we will exclusively be concerned with the case where G is a
finite group. Thus the subject of this course is properly complex representations of finite
groups, and especially studying them up to isomorphism.
Suppose that V is n-dimensional, and pick a basis B = (v_1, . . . , v_n) of V. As above, we can identify GL(V) with GL_n(C), the group of invertible n × n matrices with complex coefficients. Precisely, we have the map T ↦ [T]_B = [T]_{B,B}, where [T]_B is the matrix of T in the basis B (and more generally [T]_{B,B′} is the matrix of T in the pair of bases B, B′). Thus Definition 2.4.1 becomes more concrete:
Definition 2.4.7. A (complex) n-dimensional representation of G is a homomorphism ρ :
G → GLn (C).
Although equivalent to Definition 2.4.1, Definition 2.4.7 is inferior since the equivalence
requires choosing a basis of V . Let us define some notation for the result of such a choice:
Definition 2.4.8. Given (V, ρ) with dim V = n and a basis B of V, let ρ_B : G → GL_n(C) be the map

ρ_B(g) = [ρ(g)]_B.   (2.4.9)
Different choices of basis give rise to different homomorphisms to GLn (C). This follows
directly from the change-of-basis formula from linear algebra. Below we will need the fol-
lowing basic formula: Let W1 , W2 , and W3 be vector spaces with bases B1 , B2 and B3 . Then
given linear maps T : W1 → W2 and S : W2 → W3 ,
[S ◦ T ]B1 ,B3 = [S]B2 ,B3 [T ]B1 ,B2 . (2.4.10)
Now let B = (v_1, . . . , v_n) and B′ = (v′_1, . . . , v′_n) be two bases of V. They are related by some invertible matrix P ∈ GL_n(C). Precisely,

P = [I]_{B′,B},   (2.4.11)

or equivalently

(v_1, v_2, . . . , v_n) P = (v′_1, v′_2, . . . , v′_n).   (2.4.12)

Exercise 2.4.13. Verify that (2.4.11) and (2.4.12) are equivalent.
Solution (omitted from lecture): Note that (2.4.12) means that, for P = (p_{ij}), we have v′_i = \sum_j v_j P_{ji}. We claim that, for v = \sum_i a_i v_i in terms of B, then in terms of B′ we have v = \sum_i a′_i v′_i, where

(a′_1, a′_2, . . . , a′_n)^t = P^{−1} (a_1, . . . , a_n)^t.   (2.4.14)

To see that this follows from (2.4.12), we can do the computation

v = (v_1, . . . , v_n) · (a_1, . . . , a_n) = (v_1, . . . , v_n)(a_1, . . . , a_n)^t = (v′_1, . . . , v′_n) P^{−1} (a_1, . . . , a_n)^t,   (2.4.15)

using the dot product of vectors in the second expression. We conclude from (2.4.14) that P^{−1} = [I]_{B,B′}. Similarly, P = [I]_{B′,B}.
Lemma 2.4.16. The representations ρ_B and ρ_{B′} are related by simultaneous conjugation:

ρ_{B′}(g) = P^{−1} ρ_B(g) P.   (2.4.17)

Proof. By (2.4.10): ρ_{B′}(g) = [ρ(g)]_{B′,B′} = [I]_{B,B′} [ρ(g)]_{B,B} [I]_{B′,B} = P^{−1} ρ_B(g) P.
This motivates:
Definition 2.4.18. Two n-dimensional representations ρ, ρ′ : G → GL_n(C) are equivalent if there is an invertible matrix P ∈ GL_n(C) such that ρ′(g) = P^{−1} ρ(g) P for all g ∈ G.
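To see equivalence in action, here is a small numerical sketch (Python; the representation of C4 by powers of a quarter-turn rotation and the matrix P are choices made for the illustration): conjugating every ρ(g) by one fixed invertible P again yields a homomorphism.

```python
# Illustrative sketch: rho'(g) = P^{-1} rho(g) P is again a representation.
import numpy as np

rho_g = np.array([[0., -1.], [1., 0.]])          # rotation by pi/2
rho   = [np.linalg.matrix_power(rho_g, k) for k in range(4)]  # rep of C_4

P     = np.array([[2., 1.], [1., 1.]])           # an arbitrary invertible P
Pinv  = np.linalg.inv(P)
rho2  = [Pinv @ A @ P for A in rho]              # the conjugated family

# rho' is again a homomorphism: rho'(g^j) rho'(g^k) = rho'(g^{j+k})
for j in range(4):
    for k in range(4):
        assert np.allclose(rho2[j] @ rho2[k], rho2[(j + k) % 4])
print("the conjugated family is again a representation of C_4")
```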
Proposition 2.4.19. Two finite-dimensional representations (V, ρ) and (V′, ρ′) are isomorphic if and only if dim V = dim V′ and, for any (single) choices of bases of V and V′, the resulting (dim V)-dimensional representations are equivalent.
Here (V, ρ) is called finite-dimensional if V is finite-dimensional.
Proof. First, it is obvious that the two representations can be isomorphic only if they have
the same dimension. Now the statement follows from the next important lemma.
Lemma 2.4.20. Let (V, ρ) and (V′, ρ′) be two representations. Let B = (v_1, . . . , v_n) and B′ = (v′_1, . . . , v′_n) be bases of V and V′, respectively. Then a linear map T : V → V′ is a homomorphism of representations if and only if

[T]_{B,B′} ρ_B(g) = ρ′_{B′}(g) [T]_{B,B′}   (2.4.21)

for all g ∈ G. It is an isomorphism if and only if [T]_{B,B′} is additionally an invertible matrix, so that it conjugates ρ_B(g) to ρ′_{B′}(g).
Proof. This follows immediately from the linear algebra rule for matrices in terms of bases
(2.4.10).
Therefore, Definitions 2.4.1 and 2.4.7 are not so different if we look only at representations
up to isomorphism in the first case and up to equivalence in the second case. This is what
we are mostly interested in in this course.
Generalising Example 2.2.2, we can understand all representations of cyclic groups:

Exercise 2.4.22. Let G = hgi be a cyclic group. (i) Prove that a representation (V, ρ) of
G is equivalent to a pair (V, T ) of a vector space V and a linear transformation T ∈ GL(V )
such that, if g has finite order m ≥ 1, then T m = I. The equivalence should be given by
T = ρ(g). (ii) Prove that, if G = {1}, then representations are equivalent to vector spaces.

2.5 Going back from matrices to vector spaces


We just discussed how to go from vector spaces (Definition 2.4.1) to matrices (Definition
2.4.7). It is useful to go back, especially in order to work with the examples from 2.2 via
vector spaces. This is actually easier than the other way: given an n × n matrix A, let Ã : C^n → C^n be the linear map given by Ã(e_i) = A e_i. The map A ↦ Ã is an isomorphism GL_n(C) → GL(C^n). (There is no danger in thinking of GL_n(C) and GL(C^n) as the same in this way, but I will use the tildes to avoid possible confusion.)

Example 2.5.1. Let ρ : G → GL_n(C) be a group homomorphism. Then we can define the corresponding homomorphism ρ̃ : G → GL(C^n), given by ρ̃(g) = \widetilde{ρ(g)} for all g ∈ G. Then (C^n, ρ̃) is a group representation in the sense of Definition 2.4.1.

We apply this to Example 2.2.3 and it rewrites in the following subtly different way:

Example 2.5.2. We take G = Sn and ρ : Sn → GL_n(C) assigning to a permutation its permutation matrix. Then we get (C^n, ρ̃), where ρ̃(σ)(e_i) = e_{σ(i)}.

2.6 Representations from group actions


The next construction can be thought of as a wide generalisation of Example 2.5.2:
Definition 2.6.1. If X is a finite set, define the complex vector space C[X] of linear combinations of the elements of X:

C[X] := { \sum_{x∈X} a_x x | a_x ∈ C }.   (2.6.2)

Here, addition and scalar multiplication are done coefficientwise:

λ \sum_{x∈X} a_x x + μ \sum_{x∈X} b_x x := \sum_{x∈X} (λ a_x + μ b_x) x,   (2.6.3)

for all λ, μ, a_x, b_x ∈ C.

For ease of notation, we can drop the summands where a_x = 0. For instance, we write, for X = {x_1, x_2, x_3},

2x_1 + 0x_2 + 3x_3 = 2x_1 + 3x_3.   (2.6.4)
Example 2.6.5. Let X be a finite set and G × X → X a group action, (g, x) ↦ g · x. Then we can consider the representation (C[X], ρ) given by ρ(g)(x) = g · x for all g ∈ G and x ∈ X, linearly extended to all of C[X]: ρ(g)(\sum_{x∈X} a_x x) = \sum_{x∈X} a_x (g · x).

Remark 2.6.6 (Non-examinable). The above can be done also if X is infinite, although
then we have to modify the definition of C[X] to include only linear combinations of finitely
many elements of X (or equivalently, all but finitely many of the ax have to be zero). We
will not need to use this and this remark is non-examinable.

2.7 The regular representation


The most important representation obtained as before is the regular representation:
Example 2.7.1. Let G be a finite group. Let X be the set X := G together with the action
G × X → X given by left multiplication, g · x = gx. Then the resulting representation
(C[X], ρ) is called the regular representation.
When there is no confusion, we will drop the X and simply write (C[G], ρ) in this case.
Remark 2.7.2. Actually, C[G] as above also admits a ring structure with the same multi-
plication g · h = gh for g, h ∈ G, extended linearly. This will become important later and we
will call it the group algebra or group ring. Caution that you should still think of the ring
C[G] and the representation C[G] as different types of objects, although the underlying set
is the same.
Example 2.7.3. Let G = Cn be the cyclic group. Then the regular representation (C[G], ρ), in terms of its basis B = (1, g, g², . . . , g^{n−1}), has the following form:

ρ_B(g) = P_{(1,2,...,n)} = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix},   (2.7.4)

and ρ_B(g^m) = P_{(1,2,...,n)^m} = P_{(1,2,...,n)}^m. In other words, ρ_B(g^m) is the permutation matrix such that the ones appear in the entries (i + m, i), with i + m taken modulo n.

For example, when n = 3, we have

ρ_{(1,g,g²)}(1) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, ρ_{(1,g,g²)}(g) = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, ρ_{(1,g,g²)}(g²) = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}.   (2.7.5)
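The matrices (2.7.4) and (2.7.5) are easy to generate by machine. The following sketch (Python; the helper name regular_rep_cyclic is chosen here for the illustration, not notation from the text) reproduces the n = 3 matrices above:

```python
# Illustrative sketch: the regular representation of C_n in the basis
# (1, g, ..., g^{n-1}) consists of the powers of one cyclic permutation
# matrix, as in (2.7.4).
import numpy as np

def regular_rep_cyclic(n):
    """Return [rho(1), rho(g), ..., rho(g^{n-1})] for G = C_n."""
    P = np.zeros((n, n))
    for i in range(n):
        P[(i + 1) % n, i] = 1      # g sends the basis vector g^i to g^{i+1}
    return [np.linalg.matrix_power(P, m) for m in range(n)]

for M in regular_rep_cyclic(3):
    print(M.astype(int))            # reproduces the matrices in (2.7.5)
```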
Exercise 2.7.6. Let (V, ρV ) be a representation of G. Show that, for every vector v ∈ V ,
there is a unique homomorphism of representations C[G] → V sending e ∈ G ⊆ C[G] to
v ∈ V . Conversely, show that all homomorphisms are of this form. Letting HomG (C[G], V )
denote the set of homomorphisms of representations C[G] → V , deduce that there is a

bijection of sets φ : HomG (C[G], V ) → V , φ(T ) = T (e). Moreover, prove that this is linear,
in the sense that φ(aS + bT ) = aφ(S) + bφ(T ) for all a, b ∈ C and all homomorphisms of
representations S, T : C[G] → V . (In fact, as we will explain in multiple ways in Section
2.14, HomG (C[G], V ) is a vector space under addition and scalar multiplication. Thus, we
proved that φ is an isomorphism of vector spaces.)

2.8 Subrepresentations
Definition 2.8.1. A subrepresentation of a representation (V, ρ) is a vector subspace W ⊆ V
such that ρ(g)(W ) ⊆ W for all g ∈ G. Such a W is also called a G-invariant subspace.
Thus, (W, ρ|W ) is a representation, where by definition ρ|W : G → GL(W ) is defined by
ρ|W (g) := ρ(g)|W (Caution: we are not restricting the domain of ρ, but rather the
domain of ρ(g) for all g ∈ G.) We call the subrepresentation proper if W 6= V and nonzero
if W 6= {0}.
Exercise 2.8.2. For W finite-dimensional, show that ρ(g)(W ) ⊆ W implies ρ(g)(W ) = W .
Harder: Show that this is not true if W is infinite-dimensional, although it is still true that
ρ(g)(W ) ⊆ W for all g ∈ G implies ρ(g)(W ) = W for all g ∈ G. So we could have stated
the definition using an equality.
Remark 2.8.3. Caution: probably the terminology “proper nontrivial” exists in the litera-
ture instead of “proper nonzero” and I might even say this by mistake, but I think it could be
confusing since the trivial representation (or a trivial representation) is not the same thing
as the zero representation. I will try to avoid it.
Definition 2.8.4. A representation (V, ρ) is irreducible (or simple) if it is nonzero and there
does not exist any proper nonzero subrepresentation of V . It is reducible if it has a proper
nonzero subrepresentation.
Remark 2.8.5. Caution that the nonzero condition here is not redundant: it will be con-
venient for us not to consider the zero representation to be irreducible. For one thing, we
will want to say how many irreducibles we need to build a general representation, so we
obviously don’t want to count zero. It’s like not including one as a prime number.
On the other hand, neither should we call zero a reducible representation: just as the integer 1 is neither prime nor composite, we call the zero representation neither reducible nor irreducible.
We will only be interested in finite-dimensional representations in this course. It turns
out that, for finite groups G (the main case of interest in this course), all irreducible repre-
sentations are automatically finite-dimensional:
Proposition 2.8.6. If G is a finite group and (V, ρ) an irreducible representation, then V
is finite-dimensional.
Proof. Let v ∈ V be a nonzero vector. Let W be the span of {ρ(g)(v) | g ∈ G}. This is a subrepresentation of V, since ρ(h)(\sum_{g∈G} a_g ρ(g)v) = \sum_{g∈G} a_g ρ(hg)v. But W is a span of finitely many vectors and hence finite-dimensional. Since v is nonzero, W is nonzero. By irreducibility, V = W, which is finite-dimensional.
Remark 2.8.7 (Non-examinable). For general infinite groups G, there can be infinite-
dimensional irreducible representations (for example, V could be an infinite-dimensional
vector space and G = GL(V ), under which V is an irreducible representation). For an
example where G is abelian, see Remark 2.12.5.

Exercise 2.8.8. (i) Every one-dimensional representation is irreducible. In particular this
includes the sign representation of Sn . (ii) The two-dimensional representations of Dn for
n ≥ 3 explained in Example 2.2.6 are irreducible. (iii) The permutation representation of
Sn is not irreducible for n ≥ 2.
Solution to (ii): This will be for problem class.
Solution to (iii): There is a nonzero subrepresentation {(a, a, . . . , a) | a ∈ C} ⊆ C^n (let us write C^n as n-tuples now, where to translate to column vectors we take the transpose). That is, ρ̃(σ)(a, a, . . . , a) = (a, a, . . . , a) for all σ ∈ Sn, since multiplying a permutation matrix by a column vector whose entries are all equal does not do anything.
Another subrepresentation of the permutation representation is {(a1 , . . . , an ) | a1 + · · · +
an = 0} ⊆ Cn , called the reflection representation:
Definition 2.8.9. The reflection representation of Sn is the subrepresentation {(a_1, . . . , a_n) | a_1 + · · · + a_n = 0} ⊆ C^n of the permutation representation.
Remark 2.8.10 (Non-examinable). This is called the reflection representation because of
its appearance associated to the special linear group SLn (C) of n×n matrices of determinant
one: it is the representation of the Weyl group on the Lie algebra of the maximal torus (see
an appropriate textbook on Lie groups or Lie algebras for details).
We obtain that, not only is the permutation representation (C^n, ρ̃) not irreducible, but it is a direct sum of a trivial representation {(a, a, . . . , a)} and the reflection representation.
Proposition 2.8.11. If T : (V, ρ) → (V 0 , ρ0 ) is a homomorphism of representations, then
ker(T ) and im(T ) are subrepresentations of V and V 0 , respectively.
Proof. If T (v) = 0 then T (g·v) = g·T (v) = 0 for all g ∈ G, thus ker(T ) is a subrepresentation
of V . Also, for all v ∈ V , g · T (v) = T (gv) ∈ im(T ), so im(T ) is a subrepresentation of
V 0.
This is related to quotient groups and vector spaces. We have the following definition
parallel to that of subrepresentations:
Definition 2.8.12. A quotient representation of a representation (V, ρV ) is one of the form
(V /W, ρV /W ) for W ⊆ V a subrepresentation and ρV /W (g)(v + W ) := ρ(g)(v) + W .
Another way to think of this is via the following analogue of the first isomorphism theo-
rem:
Proposition 2.8.13. If T : (V, ρ) → (V′, ρ′) is a homomorphism of representations, then for W := ker T and W′ := im T, we have an isomorphism T̄ : (V/W, ρ_{V/W}) → (W′, ρ′|_{W′}), given by T̄(v + W) = T(v).

Proof. By linear algebra T̄ : V/W → W′ is an isomorphism. We only need to observe that it is G-linear:

T̄ ∘ ρ_{V/W}(g)(v + W) = T ∘ ρ(g)(v) = ρ′(g) ∘ T(v) = ρ′(g) ∘ T̄(v + W).   (2.8.14)

To apply the above, let us recall the following standard definition from linear algebra.
First let us fix notation:

Definition 2.8.15. For V and W (complex) vector spaces, by Hom(V, W ) and End(V ) =
Hom(V, V ) we always mean the vector space of linear maps (not the set of all group homo-
morphisms: only the linear ones!).

Caution: We will sometimes also use Hom to denote group homomorphisms, but never
when the inputs are vector spaces (unless specified).
Now we can recall the definition of projection operators.

Lemma 2.8.16. The following are equivalent for T ∈ End(V ): (i) T 2 = T ; (ii) For all
w ∈ im(T ), we have T (w) = w. In the case these assumptions hold, we have a direct sum
decomposition V = ker(T ) ⊕ im(T ).

Proof. (i) implies (ii): For w ∈ im(T), write w = T(v). Then T(w) = T²(v) = T(v) = w.
(ii) implies (i): applying (ii) to T(v) ∈ im(T) gives T(T(v)) = T(v), i.e., T² = T.
For the last statement, first we show that ker(T ) ∩ im(T ) = {0}. If w = T (v) ∈ im(T )
and T (w) = 0, then T 2 (v) = 0, so T (v) = 0 as well. But then w = 0. Next we show that
V = ker(T ) + im(T ). To see this, v = T (v) + (v − T (v)) and T (v − T (v)) = T (v) − T 2 (v) =
0.

Definition 2.8.17. Let T ∈ End(V). Then T is a projection (operator) if either of the two equivalent conditions of Lemma 2.8.16 holds.
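A tiny numerical illustration of Lemma 2.8.16 (Python; the particular matrix T is an arbitrary choice for the sketch, a non-orthogonal projection onto the x-axis):

```python
# Illustrative sketch: for a projection T (T^2 = T), every v splits as
# T(v) + (v - T(v)) with T(v) in im(T) and v - T(v) in ker(T).
import numpy as np

T = np.array([[1., 1.],
              [0., 0.]])            # T^2 = T: projection onto the x-axis
assert np.allclose(T @ T, T)        # condition (i) of the lemma

v = np.array([3., 5.])
w, u = T @ v, v - T @ v             # w in im(T), u in ker(T)
assert np.allclose(T @ u, 0) and np.allclose(T @ w, w)
print("v =", v, "=", w, "+", u)
```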

For convenience let us give an adjective for the property of being a homomorphism of
representations:

Definition 2.8.18. A linear map T : V → W is called G-linear if it is a homomorphism of


representations.

This is also suggestive, since the property is saying in terms of the G-action that T (g·v) =
g · T (v), which looks like linearity over G.

Corollary 2.8.19. Suppose that T : V → V is a G-linear projection operator. Then we get


that (V, ρ) is a direct sum of two subrepresentations, ker(T ) and im(T ).

In the next subsection, we will formalise the phenomenon in the preceding examples into
the notion of decomposability.

2.9 Direct sums


We first recall the relevant notion from linear algebra. If V and W are two vector spaces,
then V ⊕ W is the Cartesian product V ⊕ W = V × W equipped with the addition and scalar
multiplication, a(v, w) = (av, aw) and (v1 , w1 ) + (v2 , w2 ) = (v1 + v2 , w1 + w2 ). This is called
the external direct sum. If there is any confusion, let us denote this operation by ⊕ext .

There is another important standard notation which conflicts with this: if V is a vector
space and V1 , V2 two vector subspaces, such that V1 ∩V2 = {0} and V1 +V2 = V then we write
V = V1 ⊕ V2 . This is called the internal direct sum. To avoid confusion with external direct
sum, let us denote it for a moment as ⊕int . [Generally, if V1 , V2 ⊆ V , then the operation
V1 ⊕int V2 makes sense if and only if V1 ∩ V2 = {0}, and in this case V1 ⊕int V2 = V1 + V2 , with
the notation of direct sum indicating that V1 ∩ V2 = {0}.]
There should be no confusion between the two notations because, if V = V1 ⊕ V2 , then
the sum is the internal direct sum if and only if V1 and V2 are subspaces of V .

Remark 2.9.1 (Omit from lecture, non-examinable). The relation between the two notions
is the following: If V1 , V2 ⊆ V are two subspaces, then there is a map V1 ⊕ext V2 → V defined
by (v1 , v2 ) 7→ v1 + v2 . It is an isomorphism if and only if V = V1 ⊕int V2 .
Thus when V = V_1 ⊕_int V_2 we have V_1 ⊕_ext V_2 ≅ V = V_1 ⊕_int V_2. We conclude that V is an internal direct sum of its subspaces V_1, V_2 if and only if every vector v ∈ V is uniquely expressible as a sum of vectors in V_1 and V_2, i.e., v = v_1 + v_2 for unique v_1 ∈ V_1 and v_2 ∈ V_2.

The two notions both carry over to representations: we just need to require that V, V1 ,
and V2 are all representations of G (and in the internal case, that V1 and V2 are subrepre-
sentations):

Definition 2.9.2. Given two representations (V_1, ρ_1) and (V_2, ρ_2), the external direct sum is the representation (V_1, ρ_1) ⊕_ext (V_2, ρ_2) := (V, ρ), where V = V_1 ⊕_ext V_2 and ρ(g)(v_1, v_2) = (ρ_1(g)(v_1), ρ_2(g)(v_2)).

Definition 2.9.3. Given a representation (V, ρ) and subrepresentations V1 , V2 ⊆ V , we say


that (V, ρ) is the internal direct sum of (V1 , ρ|V1 ) and (V2 , ρ|V2 ) if V = V1 ⊕int V2 .

Definition 2.9.4. A nonzero representation is decomposable if it is a direct sum of two


proper nonzero subrepresentations. Otherwise, it is indecomposable.

Remark 2.9.5. As in Remark 2.8.5, we exclude the zero representation from being irre-
ducible or indecomposable, for the same reason as before. The zero representation is neither
decomposable nor indecomposable.

Definition 2.9.6. A representation is semisimple or completely reducible if it is an (internal)


direct sum of irreducible representations.

Example 2.9.7. Let V be an indecomposable representation. Then by definition, we see that V is semisimple if and only if it is actually irreducible. Indeed, if V = \bigoplus_{i=1}^m V_i is a direct sum expression with the V_i irreducible, then by indecomposability, m = 1 and V = V_1 is irreducible.

Remark 2.9.8. As we will see shortly, by Maschke’s theorem, when G is finite and V finite-
dimensional (working over C as always), then V is always semisimple. Hence in the case of
main interest, indecomposable is equivalent to irreducible. As the next example shows, this
is not true without these assumptions.

Example 2.9.9. Let G = Z and consider the two-dimensional representation given by ρ(x) = \begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix}. This is reducible since the x-axis is a subrepresentation. However, this is the only proper nonzero subrepresentation: any such subrepresentation is a line fixed by all of these matrices, i.e., an eigenline. But the only eigenline of \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} is the x-axis. This shows that there cannot be a decomposition into irreducible subrepresentations, so ρ is indecomposable.

From now on, we will not include the notation “ext” and “int” in our direct sums, because
which type of sum will be clear from context, and anyway the distinction does not change
much. Here is another exercise to make this clear.

Exercise 2.9.10 (Non-examinable). Show that V ⊕ext W = (V ⊕ext 0) ⊕int (0 ⊕ext W ).

2.10 Maschke’s Theorem


In this section, the assumptions that we are working over C and that G is finite finally become very important.

Definition 2.10.1. Let (V, ρ) be a representation of G and W ⊆ V a subrepresentation. A


complementary subrepresentation is a subrepresentation U ⊆ V such that V = W ⊕ U .

Theorem 2.10.2. (Maschke’s Theorem) Let (V, ρ) be a finite-dimensional representation of


a finite group G. Let W ⊆ V be any subrepresentation. Then there exists a complementary
subrepresentation U ⊆ V .

Proof. The idea of the proof is to construct a homomorphism T : V → V of representations


which is a projection operator with image W , and set U = ker(T ). Then we apply Corollary
2.8.19.
We begin by observing that, by linear algebra, there exists a complementary subspace
U ⊆ V , i.e., a vector subspace such that V = U ⊕ W . (Recall the proof: extend a basis
(v1 , . . . , vm ) of W to a basis (v1 , . . . , vn ) of V , and set U to be the span of vm+1 , . . . , vn .) As
a result, every vector v ∈ V is uniquely expressible as a sum v = w + u for w ∈ W and
u ∈ U.
Let T : V → V be the map defined by T (w + u) = w for w ∈ W and u ∈ U . Then it
is immediate that T 2 = T , i.e., T is a projection. By definition im(T ) = W . If T were a
homomorphism of representations, we would be done by Corollary 2.8.19. However, it need
not be.
The idea of the proof is to average T over the group G, in order to turn T into a homomorphism of representations. Namely, define

T̃ := |G|^{−1} \sum_{g∈G} ρ(g) ∘ T ∘ ρ(g)^{−1}.   (2.10.3)

Let us check that the result is a homomorphism of representations:

T̃(ρ(h)v) = |G|^{−1} \sum_{g∈G} ρ(g) ∘ T ∘ ρ(g)^{−1} ρ(h) v = |G|^{−1} \sum_{g′ = h^{−1}g} ρ(h) ρ(g′) ∘ T ∘ ρ(g′)^{−1} v = ρ(h) ∘ T̃(v).   (2.10.4)

It remains to check that T̃ is still a projection with image W. We first show that T̃(w) = w for all w ∈ W:

T̃(w) = |G|^{−1} \sum_{g∈G} ρ(g) ∘ T(ρ(g)^{−1}(w)) = |G|^{−1} \sum_{g∈G} ρ(g) ∘ ρ(g)^{−1}(w) = w.   (2.10.5)

The crucial second equality comes from the fact that ρ(g)^{−1}(w) = ρ(g^{−1})(w) ∈ W, and T(w′) = w′ for all w′ ∈ W (in particular for w′ = ρ(g^{−1})(w)).

Since im(T) = W, and ρ(g)(W) ⊆ W for all g ∈ G (because W is a subrepresentation), the definition of T̃ implies that im(T̃) ⊆ W. Since T̃(w) = w for all w ∈ W, we obtain that im(T̃) = W. Therefore T̃ is a projection by Lemma 2.8.16. Now setting U := ker(T̃) we get a complementary subrepresentation to W by Corollary 2.8.19.
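The averaging formula (2.10.3) is easy to test numerically. The following sketch (Python; the choices G = S3 acting by permutation matrices, W = span{(1,1,1)}, and the particular non-G-linear starting projection T are assumptions of the illustration) shows the average becoming a G-linear projection with image W:

```python
# Illustrative sketch of the averaging trick (2.10.3).
import itertools
import numpy as np

def perm_matrix(sigma):
    """Permutation matrix P with P e_i = e_{sigma(i)} (0-indexed)."""
    n = len(sigma)
    P = np.zeros((n, n))
    for i in range(n):
        P[sigma[i], i] = 1
    return P

# G = S_3 acting on C^3 by permutation matrices (an assumed test case)
G = [perm_matrix(s) for s in itertools.permutations(range(3))]

# a projection with image W = span{(1,1,1)}, along span{e_2, e_3};
# it is a projection but NOT G-linear
T = np.array([[1., 0., 0.],
              [1., 0., 0.],
              [1., 0., 0.]])
assert np.allclose(T @ T, T)
assert any(not np.allclose(g @ T, T @ g) for g in G)

# the averaging formula: T~ = |G|^{-1} sum_g rho(g) T rho(g)^{-1}
T_avg = sum(g @ T @ np.linalg.inv(g) for g in G) / len(G)

assert np.allclose(T_avg @ T_avg, T_avg)                  # still a projection
assert all(np.allclose(g @ T_avg, T_avg @ g) for g in G)  # now G-linear
print(T_avg)  # all entries 1/3: the G-linear projection onto W
```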
Remark 2.10.6. Note that there is a clever formula in the proof which turns a general
linear map T : V → V into a homomorphism of representations. In Section 2.14 we will
generalise this.
By induction on the dimension we immediately conclude:
Corollary 2.10.7. If V is a finite-dimensional representation of a finite group, then V is
a direct sum of some irreducible subrepresentations. That is, V is semisimple (completely
reducible).
This immediately implies:
Corollary 2.10.8. If V is a finite-dimensional indecomposable representation of a finite
group, then V is irreducible.
(Note that actually the finite-dimensional hypothesis is not necessary, as every indecomposable representation of a finite group is also irreducible.)
Remark 2.10.9. See Example 2.9.9 above for an example where G is infinite and the
conclusions of the corollaries fail (hence also of the theorem). In fact that example is inde-
composable but not irreducible, so it is not semisimple by Example 2.9.7.
Example 2.10.10. If G = {1} is trivial, then a representation V is the same thing as a vector space. There is only one irreducible representation of G up to isomorphism: the trivial (one-dimensional) representation, which is the same as a one-dimensional vector space. Indeed, every finite-dimensional vector space V is a direct sum of one-dimensional subspaces V = \bigoplus_i V_i, but choosing such a decomposition is almost the same as choosing a basis. Precisely, it is choosing a little bit less than a basis: each V_i is the same information as a vector up to nonzero scaling. So we are choosing a basis up to rescaling each element of the basis.

Here is a notation we will use from now on: Cv (or C · v) denotes the span of the (single)
vector v.

Exercise 2.10.11. Let’s give a similar example where the decomposition is unique. Let
A ∈ GLn (C) be a diagonal matrix with distinct diagonal entries which are m-th roots
of unity (for some m ≥ n of course). Then consider the n-dimensional representation of
Cm = {1, g, . . . , g m−1 } given by (Cn , ρ) with ρ(g) = A. Show that: (i) the irreducible
subrepresentations of (Cn , ρ) are precisely the subspaces Cei ⊆ Cn , i.e., the coordinate axes;
(ii) Cn = Ce1 ⊕ · · · ⊕ Cen is the unique decomposition of Cn into a direct sum of irreducible
subrepresentations.

Example 2.10.12. Here is another example with a unique decomposition. Let G = C3 = {1, g, g²} and (C[G], ρ) be the regular representation. Then there is an obvious subrepresentation, C · v_1 for v_1 := 1 + g + g², since g · v_1 = g · (1 + g + g²) = 1 + g + g² = v_1. This is one-dimensional and hence irreducible. We can generalise this: if ζ is a cube root of unity, let v_ζ := 1 + ζg + ζ²g². Then g · v_ζ = g · (1 + ζg + ζ²g²) = g + ζg² + ζ² · 1 = ζ^{−1} v_ζ. Hence g^i · v_ζ = ζ^{−i} v_ζ, and the subspace C · v_ζ is a one-dimensional subrepresentation. For ζ := e^{2πi/3}, we see that C[G] = C · v_1 ⊕ C · v_ζ ⊕ C · v_{ζ²}.

Exercise 2.10.13. Generalise the previous example: (i) For G = Cn = {1, g, g², . . . , g^{n−1}}, take the regular representation (C[G], ρ). For every n-th root of unity ζ, let v_ζ := \sum_{i=0}^{n−1} ζ^i g^i. Show that g · v_ζ = ζ^{−1} v_ζ, and deduce that, when ζ = e^{2πi/n}, then C[G] = \bigoplus_{i=0}^{n−1} C · v_{ζ^i} is a decomposition into one-dimensional representations. (ii) Now for G an arbitrary finite group, we cannot say as much, but let v_1 := \sum_{g∈G} g ∈ C[G], and show that C · v_1 ⊆ C[G] is always a subrepresentation isomorphic to the trivial representation. Find a complementary subrepresentation.
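For part (i), here is a quick numerical check (Python; n = 5 is an arbitrary choice for the sketch) of the eigenvector property g · v_ζ = ζ^{−1} v_ζ in the regular representation:

```python
# Illustrative sketch: in the regular representation of C_n, each vector
# v_zeta = sum_i zeta^i g^i satisfies g . v_zeta = zeta^{-1} v_zeta.
import numpy as np

n = 5
g = np.zeros((n, n))
for i in range(n):
    g[(i + 1) % n, i] = 1           # regular-representation matrix of g

for k in range(n):
    zeta = np.exp(2j * np.pi * k / n)
    v = np.array([zeta**i for i in range(n)])   # coordinates of v_zeta
    assert np.allclose(g @ v, v / zeta)          # g . v = zeta^{-1} v
print("each v_zeta spans a one-dimensional subrepresentation")
```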

Remark 2.10.14 (Non-examinable). We used that the field was C in order for |G|−1 to
exist. As this is all we needed, we don’t really need the field to be C: any field in which |G|
is invertible (i.e., of characteristic not a factor of |G|) will do.
For an example where the conclusion of Maschke’s theorem fails if |G| is not invertible, let G = C2 = {1, g} and the field also be F_2 (the field of two elements, i.e., Z/2Z). Then the two-dimensional representation defined by ρ(g) = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} has only one one-dimensional subrepresentation, F_2 · e_1, so it is indecomposable but not irreducible (hence not semisimple by Example 2.9.7). This is really similar to Example 2.9.9, where the theorem instead fails because G is infinite (and the field is C).
This motivates the following definition: modular representation theory of a finite group
means the study of representations of the group over a field of characteristic dividing the order
of the group. (For infinite groups, in general “modular representation theory” simply means
working over any field of positive characteristic, especially in characteristics where there exist
non-semisimple finite-dimensional representations, i.e., the conclusion of Maschke’s theorem
fails).

Remark 2.10.15 (Non-examinable). The assumption that V be finite-dimensional is not
really needed for Maschke’s theorem, as long as you know that complementary subspaces
exist in infinite dimensions as well. The key thing is that G is finite (and |G|−1 is in the
ground field).

2.11 Schur’s Lemma


We now give a fundamental result about irreducible representations.

Lemma 2.11.1 (Schur’s Lemma). Let (V, ρV ) and (W, ρW ) be irreducible representations
of a group G.
(i) If T : V → W is a G-linear map, then T is either an isomorphism or the zero map.
(ii) Suppose V is finite-dimensional. If T : V → V is G-linear then T = λI for some
λ ∈ C.

By Proposition 2.8.6, in part (ii), if we assume that G itself is finite then we can drop
the assumption that V is finite-dimensional (it is automatic).

Remark 2.11.2 (Non-examinable). Part (i) and its proof actually hold when C is replaced by a general field. Part (ii) requires the field to be C as we assume (or more generally an algebraically closed field). This is needed in order to have the existence of a (nonzero) eigenvector. (Recall the proof of this fact in linear algebra: take the characteristic polynomial χ_T(x) of T and find a root λ ∈ C, which exists by the fundamental theorem of algebra. Then an eigenvector with eigenvalue λ is any nonzero element of ker(T − λI) ≠ 0. In general, a field with the property that every nonconstant polynomial over it has a root is called an algebraically closed field.)

Proof of Lemma 2.11.1. (i) If T : V → W is a homomorphism of representations, then


ker(T ) is a subrepresentation. If T is nonzero, this is not all of V , so by irreducibility,
ker(T) = 0. That is, T is injective. Since V is nonzero, this means that im(T) is a nonzero
subrepresentation of W . By irreducibility again, im(T ) = W . So T is also surjective, and
hence an isomorphism.
(ii) We know from linear algebra that every linear transformation T ∈ End(V ) has a
nonzero eigenvector: a nonzero vector v such that T v = λv for some λ ∈ C.
We claim that T = λI. Indeed, T − λI : V → V is also a G-linear map, which is not
injective. By (i) T − λI = 0, which implies T = λI as desired.

Example 2.11.3. Let us take G = Sn and (C^n, ρ) to be the permutation representation. Then we can obtain a G-linear map T : C^n → C^n which is not a multiple of the identity: T(a_1, . . . , a_n) = (ā, ā, . . . , ā), where ā = (1/n) \sum_{i=1}^n a_i is the average of a_1, . . . , a_n (this is the projection of C^n onto the trivial subrepresentation), so this representation is not irreducible by (the contrapositive of) Schur’s Lemma.
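A short sketch verifying this numerically (Python; n = 4 is an arbitrary choice): the averaging matrix commutes with every permutation matrix yet is not scalar.

```python
# Illustrative sketch: the averaging map of Example 2.11.3 is G-linear
# but not a scalar multiple of the identity.
import itertools
import numpy as np

n = 4
T = np.full((n, n), 1.0 / n)        # T(a_1,...,a_n) = (abar,...,abar)

def perm_matrix(sigma):
    P = np.zeros((len(sigma), len(sigma)))
    for i, j in enumerate(sigma):
        P[j, i] = 1
    return P

for sigma in itertools.permutations(range(n)):
    assert np.allclose(T @ perm_matrix(sigma), perm_matrix(sigma) @ T)
assert not np.allclose(T, T[0, 0] * np.eye(n))   # T is not scalar
print("T is G-linear but not scalar: C^n is reducible")
```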

Example 2.11.4 (Omit from lecture, non-examinable). If G = Dn is a dihedral group for


n ≥ 3 and (C2 , ρ) is the representation of Example 2.2.6, Schur’s Lemma implies that every

G-linear map T : C2 → C2 is a multiple of the identity. This can be proved parallel to how
we showed that the representation is irreducible, using reflections: if we take any reflection
g ∈ G, then T (gv) = gT (v) shows that, if v is parallel to the axis of reflection, i.e., gv = v,
then T (v) = T (gv) = gT (v), so also T (v) is parallel to the axis of reflection. Thus v is also
an eigenvector of T . But then every vector parallel to a reflection axis is an eigenvector of
T . This implies that T has at least three non-parallel eigenvectors, so T has to be a multiple
of the identity (if it were not, then the eigenspaces of T would have to be one-dimensional
and there could be at most two of these).
Example 2.11.5. Let G = Cn = {1, g, . . . , g^{n−1}} and let (C², ρ̃) be the two-dimensional representation where g^k acts as a rotation in the plane by angle 2πk/n, i.e., the one given by

ρ(g^k) = R_{2πk/n},  R_θ := \begin{pmatrix} \cos θ & −\sin θ \\ \sin θ & \cos θ \end{pmatrix}.   (2.11.6)

Then every rotation T = R_θ is G-linear. For θ ∉ πZ, these are not scalar multiples of the identity. By the contrapositive of Schur’s Lemma, the representation is not irreducible.
Remark 2.11.7 (Non-examinable). We see from this example that Schur’s Lemma (part
(ii)) does not hold when we work over the real field instead of the complex field, since the
representation (R2 , ρ) of Cn given by rotations is actually irreducible, working over R, when
n ≥ 3. Indeed, rotation matrices by θ only have real eigenvectors when θ = mπ for m ∈ Z,
so the rotations by 2πk/n do not have real eigenvectors when n ≥ 3. This means they have
no real one-dimensional subrepresentations.
Exercise 2.11.8. Use Schur’s Lemma to prove the following: let V = V_1 ⊕ · · · ⊕ V_n, where the V_i are irreducible and pairwise nonisomorphic (i.e., V_i ≇ V_j for i ≠ j). Prove that every homomorphism of representations V_i → V is a scalar multiple of the inclusion map. Conclude that the decomposition V = V_1 ⊕ · · · ⊕ V_n is the unique one into irreducible representations.
Conversely, show that, if V_i ≅ V_j for some i ≠ j, then there are infinitely many injective homomorphisms of representations V_i → V. Conclude that the decomposition V = V_1 ⊕ · · · ⊕ V_n is unique if and only if all of the V_i are pairwise nonisomorphic.

2.12 Representations of abelian groups


Thanks to Schur’s Lemma we can understand finite-dimensional irreducible representa-
tions of abelian groups up to isomorphism (and hence, by Maschke’s theorem, also finite-
dimensional representations of finite abelian groups):
Proposition 2.12.1. Let G be an abelian group (not necessarily finite). Then every finite-
dimensional irreducible representation of G is one-dimensional.
By Maschke’s theorem we immediately deduce:
Corollary 2.12.2. Let G be a finite abelian group. Then every finite-dimensional represen-
tation is a direct sum of one-dimensional representations.

The proposition will follow from the following basic observation:

Lemma 2.12.3. Let (V, ρV ) be a representation of a group G. Let z ∈ G be central, i.e.,


zg = gz for all g ∈ G. Then ρV (z) : V → V is G-linear.

Proof. We have ρV (z) ◦ ρV (g) = ρV (zg) = ρV (gz) = ρV (g) ◦ ρV (z).


Proof of Proposition 2.12.1. Assume G is abelian. Let (V, ρV ) be a finite-dimensional irre-
ducible representation. Since every g ∈ G is central, by Lemma 2.12.3, ρV (g) is G-linear. By
Lemma 2.11.1.(ii), ρV (g) is actually a multiple of the identity matrix. Now let v ∈ V be any
nonzero vector. Let V1 := C · v, the span of v, which is one-dimensional. We have that V1 is
a subrepresentation, since ρV (g)(v) is always a multiple of v. By irreducibility, V = V1 .

Remark 2.12.4. [Non-examinable, omit from lectures] The proof actually shows the following characterisation of the center of G: if z ∈ G is central, then ρ_V(z) is a scalar matrix for all irreducible representations V. If G is finite, then the converse is also true: if ρ_V(z) is a scalar matrix for all irreducible representations V, then ρ_V(zg) = ρ_V(gz) for every irreducible representation V and every g ∈ G. By Maschke’s theorem, it then follows that ρ_V(zg) = ρ_V(gz) for all g ∈ G and every finite-dimensional representation V. Now let V = C[G], and then we get zg = gz for all g ∈ G. Thus z is central.

Remark 2.12.5 (Non-examinable). The hypothesis of finite-dimensionality is required, since the proof invokes Schur’s Lemma (Lemma 2.11.1), part (ii). Here is a counterexample without this hypothesis (which also gives a counterexample to Lemma 2.11.1.(ii) when V is infinite-dimensional). Let

V = C(x) = { (a_n x^n + a_{n−1} x^{n−1} + · · · + a_0) / (b_m x^m + b_{m−1} x^{m−1} + · · · + b_0) | m, n ≥ 0 }

be the field of rational fractions in x (some of these expressions are equal: two fractions are equal if they can be simplified to the same fraction by cancelling common factors from the numerator and denominator and deleting 0x^n from the numerator if n > 0; equivalently, f_1/g_1 = f_2/g_2 if and only if f_1 g_2 = f_2 g_1 as polynomials). Let G = C(x)^× := C(x) \ {0} be the multiplicative group of nonzero rational fractions (i.e., those whose numerator is nonzero). Then V is an irreducible infinite-dimensional representation of G under the action ρ(g)(f) = gf, and G is abelian.

Corollary 2.12.6. For G = Cn, there are exactly n irreducible representations up to isomorphism; they are one-dimensional and are the ones given in Example 2.2.2: ρ_ζ(g^i) = (ζ^i), as ζ ranges over all n-th roots of unity.

Proof. Since G is finite, all irreducible representations are finite-dimensional by Proposition

Proof. Since G is finite, all irreducible representations are finite-dimensional by Proposition


2.8.6. By Proposition 2.12.1, all irreducible representations are therefore one-dimensional.
By Proposition 2.4.19, up to isomorphism, they are given by representations G → GL1 (C) =
C× , i.e., multiplicative assignments of G to nonzero complex numbers. But the generator
g ∈ G has to map to an n-th root of unity, call it ζ. Then we obtain the representation ρζ .
Note that two one-dimensional matrix representations are conjugate if and only if they are
identical, since one-by-one matrices commute.

Exercise 2.12.7. Classify all irreducible representations of finite abelian groups. Namely,
from group theory, recall that every finite abelian group is isomorphic to a product G =
Cn1 × · · · × Cnk . Extending the previous corollary, show that the irreducible representations
of G are, up to isomorphism, precisely the one-dimensional representations of the form ρζ1 ,...,ζk
sending the generators gi of Cni to (ζi ) ∈ GL1 (C). Deduce that the number of irreducible
representations up to isomorphism is |G| = n1 · · · nk . We will generalise this statement
to products of nonabelian groups (and higher-dimensional irreducible representations) in
Proposition 2.20.5.
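
(Non-examinable aside.) The counting in Exercise 2.12.7 is easy to illustrate in Python; the sketch below (my own, with the group C2 × C3 × C4 chosen arbitrarily) lists the irreducible representations as tuples of roots of unity, one per cyclic factor, and checks that there are |G| = n1 · · · nk of them.

    import cmath
    from itertools import product

    def irreps_of_abelian(ns):
        # One-dimensional representations of C_{n_1} x ... x C_{n_k}:
        # the i-th generator g_i is sent to an n_i-th root of unity.
        roots = [[cmath.exp(2j * cmath.pi * a / n) for a in range(n)] for n in ns]
        return list(product(*roots))

    ns = (2, 3, 4)
    assert len(irreps_of_abelian(ns)) == 2 * 3 * 4   # = |G|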

We now give some examples of subrepresentations and decompositions of representations of abelian groups.

Example 2.12.8. Let G = {±1} × {±1} be (isomorphic to) the Klein four-group and (C^2, ρ) be the two-dimensional representation defined by

ρ(a, b) = [a 0; 0 b].   (2.12.9)

Then V = C^2 is the direct sum of the subrepresentations C·e1 and C·e2. These two are isomorphic to the one-dimensional representations ρ1, ρ2 : G → C× given by ρ1(a, b) = a and ρ2(a, b) = b, respectively.

Note that, if G is infinite, even if it is abelian, Corollary 2.12.2 does not apply: see, for example, Example 2.9.9, with G = Z. A similar example can be given with G = C: we can use the same formula, ρ(z) = [1 z; 0 1]. Again, the only proper nonzero subrepresentation is C·e1, so the representation is indecomposable but not irreducible, exactly as in Example 2.9.9.

Example 2.12.10. Now let G = C× be the group of nonzero complex numbers under multiplication, and consider the two-dimensional representation (C^2, ρ̃), with

ρ(a + bi) = [a −b; b a].   (2.12.11)

This is easily verified to be a representation:

ρ(a + bi)ρ(c + di) = [a −b; b a][c −d; d c] = [ac − bd, −(ad + bc); ad + bc, ac − bd]
                   = ρ(ac − bd + (ad + bc)i) = ρ((a + bi)(c + di)).   (2.12.12)

See the exercise below for a more conceptual way to verify this.
Since G is infinite we cannot apply Corollary 2.12.2, but we can still apply Proposition 2.12.1, and we know there is at least one one-dimensional subrepresentation. In fact, there are two: V+ := C·(1, i) = {(a, ai) | a ∈ C} and V− := C·(1, −i) = {(a, −ai) | a ∈ C}. This

means that actually C2 = V+ ⊕V− , and we do get a direct sum of irreducible representations.
Concretely, in terms of the bases (1, i) and (1, −i) of V+ and V− , respectively [or any bases
since these are one-dimensional], we have:

(V+, ρ̃|V+) ≅ (C, ρ+),   (V−, ρ̃|V−) ≅ (C, ρ−),   (2.12.13)
ρ+(a + bi) = a − bi,   ρ−(a + bi) = a + bi.   (2.12.14)

This is because ρ(a + bi)(1, i) = (a − bi)(1, i) and ρ(a + bi)(1, −i) = (a + bi)(1, −i).
In other words, ρ+ : C× → GL1(C) = C× is the complex conjugation map, and ρ− : C× → GL1(C) = C× is the identity map.

Explicitly (but equivalently), we can realise this decomposition by conjugating by the change-of-basis matrix P = [1 1; i −i], with inverse P^{−1} = (1/2)[1 −i; 1 i]:

(1/2)[1 −i; 1 i][a −b; b a][1 1; i −i] = [a − bi, 0; 0, a + bi].   (2.12.15)
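
(Non-examinable aside.) The identity (2.12.15) is easy to confirm numerically; here is a short numpy sketch (my own illustration) checking that conjugating ρ(z) by P diagonalises it with eigenvalues z̄ and z.

    import numpy as np

    def rho(z):
        # The 2x2 matrix [a -b; b a] of Example 2.12.10, for z = a + bi.
        a, b = z.real, z.imag
        return np.array([[a, -b], [b, a]])

    P = np.array([[1, 1], [1j, -1j]])            # columns span V_+ and V_-
    P_inv = 0.5 * np.array([[1, -1j], [1, 1j]])  # the inverse used in (2.12.15)

    z = 0.3 + 1.7j
    D = P_inv @ rho(z) @ P
    assert np.allclose(D, np.diag([z.conjugate(), z]))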

Exercise 2.12.16. Here is a more conceptual way to verify that ρ in Example 2.12.10 is a
representation. Show that ρ(a + bi)|R2 : R2 → R2 is the complex multiplication by a + bi
when we identify R2 with C. Using this show that ρ(a+bi)ρ(c+di)|R2 = ρ((a+bi)(c+di))|R2 .
Since R2 spans C2 over the complex numbers, conclude that ρ is a homomorphism.

Example 2.12.17. Restricting Example 2.12.10 from C× to the group of n-th roots of unity, G = {e^{2πim/n}} ⊆ C×, which is isomorphic to Cn (and to Z/nZ), we obtain the representation sending e^{2πim/n} to the rotation matrix by 2πm/n. For ζ = e^{2πi/n}, we can identify G with the cyclic group Cn = {1, g, . . . , g^{n−1}}, via g^k = ζ^k = e^{2πik/n}. Then, the representation of Cn we obtain is the one of Example 2.11.5, which I reprint for convenience:

ρ(g^k) = R_{2πk/n},   R_θ := [cos θ, −sin θ; sin θ, cos θ].

The previous example then decomposes this as a direct sum of two one-dimensional representations. In the form of Example 2.2.2, these are the ones corresponding to ζ^{−1} = ζ̄ and ζ, respectively. Indeed, ρ+(ζ^k) = (ζ^{−1})^k and ρ−(ζ^k) = ζ^k. Explicitly,

(1/2)[1 −i; 1 i][cos θ, −sin θ; sin θ, cos θ][1 1; i −i] = [e^{−iθ}, 0; 0, e^{iθ}],   (2.12.18)

which we apply to the cases θ = 2πk/n.

The above examples illustrate the general phenomenon:

Exercise 2.12.19. Let G be a group and ρ : G → GLn(C) be an n-dimensional representation. Show that all matrices ρ(g) are simultaneously diagonalisable (i.e., there exists a simultaneous eigenbasis for all of them) if and only if (C^n, ρ̃) decomposes as a direct sum of one-dimensional representations. (Hint: an eigenbasis (v1, . . . , vn) corresponds to a decomposition Cv1 ⊕ · · · ⊕ Cvn.)

Remark 2.12.20. By linear algebra, one can directly see that, if G is finite abelian, then
for every representation, the matrices ρ(g) are simultaneously diagonalisable for all g ∈ G.
Thus Exercise 2.12.19 provides a linear-algebraic proof of (and explanation for) Corollary
2.12.2.

2.13 One-dimensional representations and abelianisation


In terms of matrices, a one-dimensional representation is a homomorphism ρ : G → GL1(C) ≅ C× (where C× is the multiplicative group of nonzero complex numbers under multiplication). Note that two distinct such representations are inequivalent, since one-by-one matrices
commute. To avoid confusion, let us call a representation in terms of vector spaces, by Defi-
nition 2.4.7, an abstract representation. By Proposition 2.4.19, all one-dimensional abstract
representations are isomorphic to exactly one one-dimensional matrix representation (as the
latter are all inequivalent). So to find the one-dimensional abstract representations up to
isomorphism is the same as finding all homomorphisms ρ : G → C× , i.e., all one-dimensional
matrix representations.
Since C× is abelian, we can reduce the problem of one-dimensional matrix representations
of G to a canonical abelian quotient of G, called its abelianisation, Gab = G/[G, G]: there
will be a bijection between one-dimensional representations of G and of Gab . Perhaps you
have seen the construction of Gab in group theory, but just in case you haven’t, we will recall
the construction in Section 2.13.1 below.
By Exercise 2.12.7, this implies that, when Gab is finite, the number of one-dimensional
matrix representations of G is precisely the size of this quotient Gab (see Corollary 2.13.13
below).
However, this construction is mostly only of theoretical importance, and is not necessary
in order to compute the one-dimensional representations. Let us give some examples, which
we will then explain from the point of view of abelianisation.
Example 2.13.1. A one-dimensional matrix representation ρ of Sn , n ≥ 2, must send (1, 2)
to ±1, since (1, 2)2 is the identity. However, every transposition (i, j) is conjugate to (1, 2):
(1, i)(2, j)(1, 2)(2, j)(1, i) = (i, j). [More generally, conjugacy classes in Sn just depend on
the cycle decomposition by the formula σ(a1 , a2 , . . . , am )σ −1 = (σ(a1 ), σ(a2 ), . . . , σ(am )) for
all σ ∈ Sn and a1 , . . . , am ∈ {1, . . . , n}.] Therefore, since conjugation of one-by-one matrices
is trivial, we must have ρ(i, j) = ρ(1, 2) = ±1, a fixed number, for all i, j. The two choices of
sign give rise to the trivial and the sign representation. Thus these two are precisely the one-
dimensional (matrix) representations of Sn for n ≥ 2. These are nonisomorphic (since they
are distinct and conjugation is trivial), and every one-dimensional abstract representation is
isomorphic to one of these.
Example 2.13.2. Let G = Dn . Let x ∈ Dn be the counter-clockwise rotation by 2π/n and
y ∈ Dn be a reflection (about the x-axis, say), so that x and y generate Dn , and we have
the relations xn = 1 = y 2 and yxy −1 (= yxy) = x−1 . These relations actually define Dn , i.e.,
a homomorphism ϕ : Dn → H is uniquely determined by a = ϕ(x) and b = ϕ(y) subject to
the conditions an = 1 = b2 and bab−1 = a−1 .

Let ρ be a one-dimensional representation, i.e., a homomorphism ρ : G → C× . Then,
again since C× is abelian (conjugation of one-by-one matrices is trivial), we have

a−1 = ρ(x−1 ) = ρ(yxy −1 ) = ρ(y)ρ(x)ρ(y −1 ) = ρ(x) = a, (2.13.3)

which implies that a = ±1. Also, y^2 = 1 implies that b = ±1 (not necessarily the same value as a = ρ(x)). So there can be at most four one-dimensional representations, depending on
the choice of a, b ∈ {±1}. Moreover, if n is odd, then an = 1 implies that a = 1, not −1.
Conversely any choices of a, b ∈ {±1}, subject to the condition that a = 1 if n is odd, will
satisfy the conditions. So in fact there are two one-dimensional matrix representations when
n is odd, and four one-dimensional matrix representations if n is even. As before, these are
all inequivalent and hence every one-dimensional abstract representation is isomorphic to
exactly one of these.
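
(Non-examinable aside.) The case analysis above amounts to solving the defining relations of Dn among scalars. Here is a hedged Python sketch (my own) that enumerates the candidates a, b ∈ {±1} (the only possibilities, by the argument above) and keeps those satisfying a^n = 1 = b^2 and bab^{−1} = a^{−1}.

    from itertools import product

    def one_dim_reps_of_Dn(n):
        # Keep (a, b) = (rho(x), rho(y)) in {1, -1}^2 satisfying the
        # defining relations of D_n; these classify the one-dim reps.
        reps = []
        for a, b in product([1, -1], repeat=2):
            if a**n == 1 and b**2 == 1 and b * a * b**(-1) == a**(-1):
                reps.append((a, b))
        return reps

    assert len(one_dim_reps_of_Dn(5)) == 2   # n odd
    assert len(one_dim_reps_of_Dn(6)) == 4   # n even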

2.13.1 Recollections on commutator subgroups and abelianisation


Definition 2.13.4. Given two elements g, h ∈ G, the commutator [g, h] is defined as
[g, h] := ghg −1 h−1 . The commutator subgroup [G, G] ⊆ G is the subgroup generated by
all commutators [g, h] = ghg −1 h−1 .
Lemma 2.13.5 (Group theory). The subgroup [G, G] is normal.
Proof. Since [g, h]−1 = [h−1 , g −1 ], an arbitrary element of [G, G] is a product of commutators
(we don’t need inverse commutators). Then the lemma follows from the standard formula
k(gh)k −1 = (kgk −1 )(khk −1 ), since it implies k[g, h]k −1 = [kgk −1 , khk −1 ].
Remark 2.13.6 (Non-examinable). The above lemma is really unnecessary since we could
simply define the commutator subgroup to be the normal subgroup generated by the com-
mutators. The lemma explains though why this is the same as the ordinary subgroup so
generated, which is the usual definition of [G, G].
The subgroup [G, G] is characterised by the following:
Proposition 2.13.7 (Group theory). (i) The quotient G/[G, G] is abelian. (ii) For an
arbitrary normal subgroup N E G, the quotient G/N is abelian if and only if [G, G] ⊆ N .
Proof. Note that (i) is a formal consequence of (ii), so we only prove (ii). This follows
because hg = gh[h−1 , g −1 ], so that ghN = hgN if and only if [h−1 , g −1 ] ∈ N . So all elements
in G/N commute if and only if N contains all commutators, and the statement follows.
Definition 2.13.8. The quotient G/[G, G] is called the abelianisation of G, and denoted
Gab .
A further justification for the term is the following. Let qab : G → Gab be the quotient.
Corollary 2.13.9 (Group theory). If ϕ : G → H is a homomorphism, then the image of ϕ is abelian if and only if [G, G] ⊆ ker(ϕ). Thus, this is true if and only if ϕ factors through the quotient qab, i.e., ϕ = ϕ̄ ∘ qab for some ϕ̄ : Gab → H.

Proof. By the first isomorphism theorem, im(ϕ) ∼ = G/ ker(ϕ). Then the first statement
follows from Proposition 2.13.7. The second statement then follows from the following basic
lemma.

Lemma 2.13.10. Let ϕ : G → H be a homomorphism and K ⊴ G a normal subgroup. Then ϕ factors through q : G → G/K if and only if K ⊆ ker(ϕ).

Proof. If K ⊆ ker(ϕ) then we can define ϕ̄(gK) := ϕ(g), so that ϕ = ϕ̄ ∘ q. Conversely, given ϕ̄ such that ϕ = ϕ̄ ∘ q, for all k ∈ K we have ϕ(k) = ϕ̄(K) = e, and hence K ⊆ ker(ϕ).
In particular, we will often use the following statement.

Corollary 2.13.11. Let G be a group and A an abelian group. Then the map Hom(Gab , A) →
Hom(G, A), given by φ 7→ φ ◦ qab is an isomorphism.

Proof. The map is obviously injective, since qab is surjective. By Corollary 2.13.9, every
homomorphism ϕ ∈ Hom(G, A) is of the form ϕ = φ ◦ qab for some φ.

2.13.2 Back to one-dimensional representations


Since GL1 (C) = C× is abelian, we obtain from Corollary 2.13.11 the following:

Proposition 2.13.12. The one-dimensional matrix representations of G are the same as


the one-dimensional matrix representations of the abelian group Gab .

Applying Exercise 2.12.7 we obtain:

Corollary 2.13.13. If Gab is finite (e.g., if G is finite), then the number of one-dimensional
matrix representations of G is equal to |Gab |.

Example 2.13.14. Suppose that G = [G, G]. This is called a perfect group. Then Gab = {e},
and it follows that there are no nontrivial one-dimensional (matrix) representations of G.

Example 2.13.15. As a special case of the previous example, suppose G is simple and
nonabelian (the only abelian simple groups are the prime order p ones, i.e., Cp ). Since [G, G]
is always a normal subgroup, and only trivial if G is abelian, it follows that G = [G, G]. That
is, every nonabelian simple group is perfect, and hence has no nontrivial one-dimensional
representations.
Example: the alternating group An for n ≥ 5 (you might have seen that this is a simple
group). There are lots of other examples of finite simple groups, and they are all classified:
see https://en.wikipedia.org/wiki/List_of_finite_simple_groups.

The above example includes the “monster” group that I mentioned. We see that, for
perfect groups (such as nonabelian simple groups) it is in general an interesting question to
determine the minimal dimension of a nontrivial irreducible representation (since it cannot
be one).

In the remainder of this section we compute the abelianisations of the symmetric and dihedral groups and recover the classification we already found of their one-dimensional representations. The main point is that (Sn)ab ≅ C2 and

(Dn)ab ≅ { C2,        if n is odd,
           C2 × C2,   if n is even.
Therefore, there are exactly the number of one-dimensional representations we computed
earlier.
Since the explicit verification of this doesn’t contain any new representation theory, we
will skip it in lecture and it is non-examinable.
Example 2.13.16 (Non-examinable, skip in lecture). Let G = Sn . We already saw that
this has precisely two one-dimensional representations. By the corollary, Gab must have size
two. We prove this directly here. We prove more precisely that [G, G] = An , and this is
a subgroup of index two (for more on index two subgroups, if you are curious, see Remark
2.13.22 below).
We have the formula, for i, j, k distinct numbers in {1, . . . , n}:

(i, j)(j, k)(i, j)(j, k) = (i, k, j) (2.13.17)

Now the three-cycles are known to generate An (see Remark 2.13.20 for a proof of this). Hence
An ≤ [G, G]. Conversely, [G, G] ⊆ An since sign([g, h]) = sign(g)sign(h)sign(g −1 )sign(h−1 ) =
1 for all g, h ∈ G (more abstractly, sign : G = Sn → {±1} has abelian image and hence
[G, G] is in the kernel by Corollary 2.13.9).
Example 2.13.18 (Non-examinable, skip in lecture). Now, for G = Dn , by the preceding
results, we get that Gab must have size two if n is odd and size four if n is even. Let us prove
this directly. As before let x be the counter-clockwise rotation by 2π/n and y be a reflection
(say by the x-axis). Then [G, G] contains the element xyx−1 y −1 = x(yx−1 y −1 ) = xx = x2 .
Therefore [G, G] contains the cyclic subgroup Hn defined as follows:

Hn := { {1, x^2, . . . , x^{n−2}} ≅ C_{n/2}, if n is even,
        {1, x, . . . , x^{n−1}} ≅ C_n,       if n is odd.        (2.13.19)

This subgroup is clearly normal. Since |G/Hn | ≤ 4, it follows that G/Hn is abelian, and
hence [G, G] ≤ Hn by Corollary 2.13.9. Therefore [G, G] = Hn . Computing in more detail,
if n is even, Gab = G/Hn ≅ C2 × C2 is isomorphic to the Klein four-group, and if n is odd, Gab ≅ C2.
Remark 2.13.20 (Group theory, non-examinable, skip in lecture). Here is a proof that the
three-cycles generate An . First, note that An is obviously generated by products of two
transpositions, i.e., (i, j)(k, ℓ) for arbitrary i, j, k, ℓ ∈ {1, . . . , n} with i ≠ j and k ≠ ℓ. We can assume that {i, j, k, ℓ} has size ≥ 3 (otherwise the element is the identity). In the case |{i, j, k, ℓ}| = 3, the product is a three-cycle. In the case i, j, k, ℓ are all distinct, the element is generated by three-cycles by the formula

(i, j)(k, ℓ) = (i, j, k)(j, k, ℓ).   (2.13.21)

Remark 2.13.22 (Group theory, non-examinable, skip in lecture). The commutator subgroup is relevant to the study of the subgroups of a group of index two. Every index-two subgroup N ≤ G of a group G contains [G, G], since G/N is abelian; so, by the first isomorphism theorem, index-two subgroups of G are in bijection with index-two subgroups of Gab.
In the case of Sn, since Gab has size two, we conclude that Sn has a unique subgroup of index two, namely [G, G] = An. There is also a direct way to verify this fact: every index-two subgroup contains the squares of all elements, which include the three-cycles, which generate An by Remark 2.13.20 above.
Returning to the dihedral groups, recall that Gab ≅ C2 has a unique index-two subgroup if n is odd, but Gab = C2 × C2 does not when n is even. Hence, when n is odd, Dn has a unique index-two subgroup. This is its commutator subgroup Hn of rotations as above.
However, when n is even, then Dn has multiple index-two subgroups: in addition to the
subgroup of all rotations, we can take two subgroups consisting of half of the rotations (the
ones in Hn by multiples of 4π/n) and half of the reflections (there are two choices here,
any of the two evenly-spaced collections of half of the reflection lines will do). This indeed
corresponds to the three order-two subgroups of the Klein four-group C2 × C2 ∼ = Gab . (This
also gives another proof that Gab is isomorphic to the Klein four-group rather than a cyclic
group of size four, since in the latter case we would have only had one index-two subgroup
of G.)
For general G, when Gab is finite (e.g., if G is finite), there is a unique index-two subgroup
if and only if Gab ∼= Cm × H for m even and |H| odd (we can take m to be a positive power
of two if desired).

2.14 Homomorphisms of representations and representations of


homomorphisms
Observe that if S, T : (V, ρV ) → (W, ρW ) are homomorphisms of representations, so is aS+bT
for any a, b ∈ C. So the set of homomorphisms of representations forms a vector space:

Definition 2.14.1. If V and W are representations of G over F , then HomG (V, W ) is the
vector space of homomorphisms of representations T : V → W . If V = W we denote
EndG (V ) := HomG (V, V ).

This allows us to restate Schur's Lemma as follows: Let V and W be irreducible. Then

dim HomG(V, W) = { 1, if V ≅ W,
                   0, otherwise.      (2.14.2)

Another way to see that HomG (V, W ) is a vector space is given as below. We first consider
the vector space of all linear maps, Hom(V, W ), from V to W . This is a representation of
G:

Definition 2.14.3. Given representations (V, ρV ) and (W, ρW ), define a linear action G ×
Hom(V, W ) → Hom(V, W ), i.e., a homomorphism ρHom(V,W ) : G → GL(Hom(V, W )), by

ρHom(V,W ) (g)(ϕ) = g · ϕ := ρW (g) ◦ ϕ ◦ ρV (g)−1 . (2.14.4)

Exercise 2.14.5. Verify that (2.14.4) really defines a representation (by Remark 2.4.2, verify
it is a linear group action of G).

Solution to exercise: The fact that it is associative follows from

g · (h · ϕ) = ρW(g) ∘ ρW(h) ∘ ϕ ∘ ρV(h)^{−1} ∘ ρV(g)^{−1} = ρW(gh) ∘ ϕ ∘ ρV(h^{−1}) ∘ ρV(g^{−1})
            = ρW(gh) ∘ ϕ ∘ ρV(h^{−1}g^{−1}) = ρW(gh) ∘ ϕ ∘ ρV(gh)^{−1}.   (2.14.6)

The fact that 1 · ϕ = ϕ is immediate. To see that the action is linear, we use the identities
from linear algebra: S ◦ (a1 T1 + a2 T2 ) = a1 S ◦ T1 + a2 S ◦ T2 , and similarly (a1 T1 + a2 T2 ) ◦ S =
a1 (T1 ◦ S) + a2 (T2 ◦ S). Thus S1 ◦ (a1 T1 + a2 T2 ) ◦ S2 = a1 (S1 ◦ T1 ◦ S2 ) + a2 (S1 ◦ T2 ◦ S2 ).
Apply this to S1 = ρW (g) and S2 = ρV (g)−1 .
We will make use of the following definition later on:

Definition 2.14.7. Let (V, ρV) be a representation of G. Define the G-invariant subspace as V^G := {v ∈ V | ρV(g)(v) = v, ∀g ∈ G}.

Exercise 2.14.8. Verify that V G ⊆ V is indeed a linear subspace. You can do this in two
ways: (i) Direct proof; (ii) by proving that V G is the intersection, over all g ∈ G, of the
eigenspace of ρ(g) of eigenvalue one (i.e., ker(ρ(g) − I)).

As a consequence, V^G ⊆ V is in fact a subrepresentation, since V^G is obviously fixed under G (ρV(g)(v) = v for all v ∈ V^G). For every v ∈ V^G, we have that C·v is a subrepresentation isomorphic to the trivial representation, and conversely every such subrepresentation is contained in V^G (i.e., V^G is the sum of all trivial subrepresentations of V).
We now obtain an alternative proof that HomG (V, W ) is a vector space as a consequence
of the following:

Exercise 2.14.9. Show that HomG (V, W ) = Hom(V, W )G .

We can now explain the clever formula in Maschke’s theorem, which turns a projection
into a G-linear projection (i.e., a projection map which is a homomorphism of representa-
tions):

Proposition 2.14.10. Given any representation (V, ρV), there is a G-linear projection S : V → V^G given by the formula

S(v) = |G|^{−1} Σ_{g∈G} ρV(g)(v).   (2.14.11)

Note as in Maschke’s theorem the necessity of being able to invert |G| here (which we
get by working over C).
In the proof of Maschke's theorem we applied this proposition replacing V by Hom(V, V), v by a transformation T : V → V, and S(v) by T̃.
Proof of Proposition 2.14.10. As in the proof of Maschke’s theorem, note that S(v) = v for
all v ∈ V G . Moreover, ρV (g)(S(v)) = S(v) by the same argument as (2.10.5). So the image
of S is exactly V G and S is the identity restricted to V G . By Lemma 2.8.16, we get that S
is a projection map.
To see that S is G-linear, note that again the same argument as in (2.10.5) shows that S(ρV(g)(v)) = S(v). So ρV(g)(S(v)) = S(v) = S(ρV(g)(v)).
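
(Non-examinable aside.) Formula (2.14.11) is concrete enough to test numerically. In the sketch below (my own; the choice of C3 acting on C^3 by cyclic permutation matrices is just for illustration), the average S is the matrix with all entries 1/3, which is indeed the G-linear projection onto the constant vectors, i.e., onto V^G.

    import numpy as np

    # rho(g^k) for C_3 acting on C^3 by cyclically permuting coordinates.
    g = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])
    group = [np.linalg.matrix_power(g, k) for k in range(3)]

    # S = |G|^{-1} sum_g rho(g), as in (2.14.11).
    S = sum(group) / len(group)

    assert np.allclose(S @ S, S)           # S is a projection
    for m in group:
        assert np.allclose(m @ S, S)       # the image of S is G-invariant
        assert np.allclose(S @ m, S)       # S is G-linear for this abelian action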

2.15 The decomposition of a representation


Let G be a finite group and (V, ρ) a finite-dimensional representation. Since V is a direct sum of irreducible representations, up to isomorphism we can group together the isomorphic representations and say that

V ≅ V1^{r1} ⊕ · · · ⊕ Vm^{rm},   (2.15.1)

where (Vi, ρi) are pairwise nonisomorphic irreducible representations and ri ≥ 1. The purpose of this subsection is to explain one way to compute the ri.
Proposition 2.15.2. ri = dim HomG (Vi , V ) = dim HomG (V, Vi ).
Proof. This follows from Schur’s Lemma together with the following basic lemma.
Lemma 2.15.3. For arbitrary representations (V1 , ρ1 ), (V2 , ρ2 ), (W, ρW ) of G, we have linear
isomorphisms

HomG(V1 ⊕ V2, W) ≅ HomG(V1, W) ⊕ HomG(V2, W),   (2.15.4)
HomG(W, V1 ⊕ V2) ≅ HomG(W, V1) ⊕ HomG(W, V2).   (2.15.5)

Proof. Here are the linear maps:

S : HomG (V1 , W ) ⊕ HomG (V2 , W ) → HomG (V1 ⊕ V2 , W ), S(ϕ1 , ϕ2 )(v1 , v2 ) = ϕ1 (v1 ) + ϕ2 (v2 ),
(2.15.6)
T : HomG (W, V1 ) ⊕ HomG (W, V2 ) → HomG (W, V1 ⊕ V2 ), T (ϕ1 , ϕ2 )(w) = (ϕ1 (w), ϕ2 (w)).
(2.15.7)

To verify these are isomorphisms, we explicitly construct their inverses. For k ∈ {1, 2}, let
ik : Vk → (V1 ⊕ V2 ) be the inclusion and let qk : (V1 ⊕ V2 ) → Vk be the projection. Then we
can write:
S −1 (ϕ) = (ϕ ◦ i1 , ϕ ◦ i2 ), T −1 (ϕ) = (q1 ◦ ϕ, q2 ◦ ϕ). (2.15.8)
It is easy to see that S and T are linear and that S −1 and T −1 are inverses (S ◦ S −1 = I and
S −1 ◦ S = I, and similarly for T and T −1 ).

Remark 2.15.9. Taking G = {1}, we get the same statements as above without the G
present (just isomorphisms of ordinary linear Hom spaces). Conversely, given the statements
without the G, e.g., Hom(V1 , W ) ⊕ Hom(V2 , W ) ∼
= Hom(V1 ⊕ V2 , W ), we get the statements
with the G by taking G-invariants: Hom(V1 , W )G ⊕ Hom(V2 , W )G ∼ = Hom(V1 ⊕ V2 , W )G ,
since in general (U ⊕ W )G = U G ⊕ W G . Then recall that Hom(V, W )G ∼ = HomG (V, W )
(Exercise 2.14.9).
Corollary 2.15.10. The decomposition (2.15.1) is unique up to replacing each (Vi , ρi ) by
an isomorphic representation.
Proof. This follows because, up to isomorphism, each irreducible representation V′ must occur exactly r′ = dim HomG(V′, V) times. If r′ > 0, since the Vi are all assumed to be (pairwise) nonisomorphic, there is exactly one i such that V′ ≅ Vi and ri = r′.
In more detail, suppose that V ≅ V1^{r1} ⊕ · · · ⊕ Vm^{rm} ≅ W1^{s1} ⊕ · · · ⊕ Wn^{sn} are two decompositions into nonisomorphic irreducible representations. Then
ri = dim HomG(Vi, V) = dim HomG(Vi, ⊕_{j=1}^n Wj^{sj}) = Σ_{j=1}^n sj dim HomG(Vi, Wj).   (2.15.11)

Since ri > 0, there must exist some j such that HomG(Vi, Wj) ≠ 0. By Schur's Lemma, Wj ≅ Vi, and since the Wj are nonisomorphic, this j is unique. Then sj = ri by (2.15.11). Thus we get an injection σ : {1, . . . , m} ↪ {1, . . . , n} such that Vi ≅ Wσ(i) and ri = sσ(i). Swapping the roles of the Vi and Wj we also have an injection τ : {1, . . . , n} → {1, . . . , m} such that Wj ≅ Vτ(j) and sj = rτ(j). It is clear that σ and τ are mutually inverse, so σ is a permutation. Since Vi ≅ Wσ(i) and ri = sσ(i) for all i, we get the desired statement.
This also allows us to formulate a numerical criterion for the decomposition (2.15.1), and in particular for irreducibility:
Proposition 2.15.12. Suppose (V, ρV) decomposes as in (2.15.1). Then

dim EndG(V) = r1^2 + · · · + rm^2.   (2.15.13)
In particular, (V, ρV ) is irreducible if and only if dim EndG (V ) = 1.
We remark that for the last statement, the “only if” is precisely Schur’s Lemma (Lemma
2.11.1), part (ii).
Proof of Proposition 2.15.12. Decompose V as in (2.15.1). Applying Lemma 2.15.3, we get

EndG(V) ≅ HomG(V1^{r1} ⊕ · · · ⊕ Vm^{rm}, V) ≅ ⊕_{i=1}^m HomG(Vi, V)^{ri},   (2.15.14)

and the dimension of the RHS is clearly r1^2 + · · · + rm^2, since dim HomG(Vi, V) = ri by Proposition 2.15.2.
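
(Non-examinable aside.) The criterion of Proposition 2.15.12 can be tested by computing dim EndG(V) as the dimension of the space of matrices commuting with all ρ(g). Here is a hedged numpy sketch (my own) for the permutation representation of S3 on C^3, which decomposes as trivial ⊕ reflection, so the answer should be 1^2 + 1^2 = 2.

    import numpy as np
    from itertools import permutations

    def perm_matrix(p):
        m = np.zeros((3, 3))
        for i, j in enumerate(p):
            m[j, i] = 1.0
        return m

    mats = [perm_matrix(p) for p in permutations(range(3))]

    # A commutes with m iff (m^T kron I - I kron m) vec(A) = 0 (column-major vec),
    # so dim End_G(V) is the nullity of the stacked linear system below.
    system = np.vstack([np.kron(m.T, np.eye(3)) - np.kron(np.eye(3), m) for m in mats])
    nullity = 9 - np.linalg.matrix_rank(system)
    assert nullity == 2   # r_1^2 + r_2^2 = 1 + 1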
Exercise 2.15.15. Suppose that (V, ρV ) is a representation of G with dim EndG (V ) ≤ 3.
Then show that V is a direct sum of (pairwise) nonisomorphic irreducibles, and exactly
dim EndG (V ) of them.

2.16 The decomposition of the regular representation
Again assume G is finite. Recall exercise 2.7.6. By this, we obtain:

dim HomG (C[G], V ) = dim V, (2.16.1)

for every representation (V, ρ). Applying this to irreducible representations, Proposition
2.15.2 implies:

Corollary 2.16.2. There are finitely many nonisomorphic irreducible representations of G. Calling them (V1, ρ1), . . . , (Vm, ρm) up to isomorphism, there exists a G-linear isomorphism

C[G] ≅ V1^{dim V1} ⊕ · · · ⊕ Vm^{dim Vm}.   (2.16.3)

Taking dimensions, we obtain the following important identity:

Corollary 2.16.4. |G| = Σ_i (dim Vi)^2.

Proof. This is because dim(V ⊕ W) = dim V + dim W and dim V^r = r dim V.

Corollary 2.16.5. Unless G is trivial, the dimension of every irreducible representation is less than √|G|.

Proof. For every irreducible representation (V, ρ) other than the trivial representation,

|G| = dim C[G] = dim(C ⊕ V^{dim V} ⊕ · · · ) ≥ dim(C ⊕ V^{dim V}) = 1 + (dim V)^2.   (2.16.6)

2.17 Examples: S3 , dihedral groups and S4


Here we classify irreducible representations of S3 , the dihedral groups, and of S4 up to
isomorphism.
The following result will be important. Recall that the reflection representation of G = Sn
is the representation (V, ρ) with V = {(a1 , . . . , an ) | a1 + · · · + an = 0} ⊆ Cn and ρ
is obtained by restricting the permutation representation. Explicitly, ρ(σ)(a1 , . . . , an ) =
(aσ−1 (1) , . . . , aσ−1 (n) ).

Remark 2.17.1 (Omit from lecture, non-examinable). Caution: the inverses are needed
here since the formula really corresponds to permuting the components according to σ,
i.e., putting the quantity that was before in the i-th entry into the σ(i)-th entry: setting
bi = aσ−1 (i) to be the new entries in the i-th position, then bσ(i) = ai . We can also explicitly
verify that the formula above gives an action of Sn on Cn :

ρ(τ ◦ σ)(a1 , . . . , an ) = ρ(τ )(b1 , . . . , bn ) = (bτ −1 (1) , . . . , bτ −1 (n) ) = (aσ−1 ◦τ −1 (1) , . . . , aσ−1 ◦τ −1 (n)).
(2.17.2)

Proposition 2.17.3. The reflection representation (V, ρ) is irreducible.

Proof. Suppose W ⊆ V is a nonzero subrepresentation and let w = (a1 , . . . , an ) ∈ W be a
nonzero vector. Since a1 + · · · + an = 0, there must be two unequal entries, say ai ≠ aj. Then w′ := w − ρ((i, j))w = (ai − aj)(ei − ej), where ei = (0, . . . , 0, 1, 0, . . . , 0) is the standard basis vector with 1 in the i-th entry. Since W is a subrepresentation, w′ ∈ W. Rescaling, we see ei − ej ∈ W. Also, for any k ≠ ℓ, we have ρ((i, k)(j, ℓ))(ei − ej) = ek − eℓ ∈ W as well.
But these vectors span V , hence V = W .

Remark 2.17.4 (Non-examinable). Note that the proposition above is actually valid over
any field of characteristic not dividing n (since this is enough to guarantee that a1 +· · ·+an =
0 implies that not all of the ai are equal).

Example 2.17.5. We now classify the irreducible representations of S3. By Example 2.13.1, there are exactly two one-dimensional representations: the trivial and the sign representation. We also have, by Proposition 2.17.3, the two-dimensional reflection representation, which is also irreducible. Since 1^2 + 1^2 + 2^2 = 6 = |S3|, these are all of the irreducible representations.

Example 2.17.6. We next classify the irreducible representations of S4. Again, there are two one-dimensional representations. If d1 = 1, d2 = 1, d3, . . . , dr are the dimensions of the irreducible representations, then 1^2 + 1^2 + d3^2 + · · · + dr^2 = 24, so d3^2 + · · · + dr^2 = 22, with all squares appearing being 4, 9, or 16. We can't have 16 appear, since 22 − 16 = 6 is not a sum of these squares. With 4 and 9 we can't have an odd number of 9's appearing (by parity), and also clearly can't have only 4's appearing, so the only possibility is 22 = 4 + 9 + 9. Thus, up to reordering, d3 = 2, d4 = 3, and d5 = 3.
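
(Non-examinable aside.) The small case analysis above can also be done by brute force. Here is a hedged Python sketch (my own) enumerating non-decreasing dimension lists, with exactly two entries equal to 1, whose squares sum to |S4| = 24; the unique answer is (1, 1, 2, 3, 3).

    def dim_lists(total, exact_ones):
        # Non-decreasing lists of dimensions >= 1 with sum of squares
        # equal to `total` and exactly `exact_ones` entries equal to 1.
        results = []
        def search(prefix, remaining, minimum):
            if remaining == 0:
                if prefix.count(1) == exact_ones:
                    results.append(tuple(prefix))
                return
            d = minimum
            while d * d <= remaining:
                search(prefix + [d], remaining - d * d, d)
                d += 1
        search([], total, 1)
        return results

    assert dim_lists(24, 2) == [(1, 1, 2, 3, 3)]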
We can realise these representations. One of the 3-dimensional representations is the
reflection representation, call it (V, ρ), with V = {(a1 , . . . , a4 ) | a1 + · · · + a4 = 0} ⊆ C4 .
Another one can be given by multiplying by the sign: (V, ρ′) with ρ′(σ) = sign(σ)ρ(σ) (each matrix is either like a permutation matrix or like a permutation matrix but with −1's replacing the 1's throughout). This is actually an example of tensor product, as we will see later; in the
next exercise we will define this and show it gives an irreducible representation. Finally,
we need a two-dimensional irreducible representation. We can get it from the reflection
representation of S3 , since there is a surjective homomorphism q : S4 → S3 , given by the
action of S4 by permuting the elements of the conjugacy class {(14)(23), (13)(24), (12)(34)}
(or we can explicitly write the formula, q((12)) = (12), q((23)) = (23), q((34)) = (12) and
the elements (12), (23), (34) generate S4 ). Let (W, ρW ) be the reflection representation of S3 ,
ρW : S3 → GL(W ) with dim W = 2. Then we can take (W, ρW ◦ q), and since (W, ρW ) is
irreducible, so is this one by the next exercise.

Exercise 2.17.7. The above example motivates the following constructions: (i) Let (V, ρ)
be a representation of a group G and θ : G → C× a one-dimensional representation. Show
that (V, ρ0 ) given by ρ0 (g) = θ(g)ρ(g) is also a representation, and that it is irreducible if
and only if (V, ρ) is irreducible. (ii) Let (V, ρ) be a representation of G and ϕ : H → G a
homomorphism of groups. Show that (V, ρ ◦ ϕ) is a representation of H. In the case that
(V, ρ) is irreducible and ϕ is surjective, show that (V, ρ ◦ ϕ) is also irreducible.

 
Example 2.17.8. Let G = C4 = {1, g, g^2, g^3} and ρ(g) = [0 −1; 1 0], the rotation matrix. Then for (i) we can let θ(g) = i ∈ C, and then ρ′(g) = [0 −i; i 0]. For (ii) we can consider the automorphism ϕ : G → G given by ϕ(g) = g^{−1} = g^3, and then (ρ ∘ ϕ)(g) = [0 1; −1 0].

Remark 2.17.9 (Omit from lecture, non-examinable). As a special useful case of part (ii)
of Exercise 2.17.7, we can compose any representation of G by an automorphism to get
another representation, and this preserves irreducibility. Recall from group theory: An
automorphism is inner if it is of the form Adg : G → G, Adg (h) = ghg −1 for some g ∈ G,
and otherwise it is called outer. The notation “Ad” stands for “adjoint”, referring to the
conjugation action (see also Definition 2.18.10 below). As an exercise, you can show that, for
an inner automorphism Adg , we have (V, ρ) ∼ = (V, ρ ◦ Adg ). So the operation of composing
(irreducible) representations by automorphisms to get new (irreducible) representations is
mostly interesting when the automorphism is outer.

Example 2.17.10. We can classify the representations of Dn for low n. For n = 3 we have D6 = S3 (the dihedral group of order 6), so that is already done. Let's try D8, of order 8. By Example 2.13.2, there are four one-dimensional representations, so 8 = 1^2 + 1^2 + 1^2 + 1^2 + 2^2 shows that there can be exactly one two-dimensional irreducible representation, which is then the one of Example 2.2.6.
Next let's look at D10, of order 10. By Example 2.13.2, there are two one-dimensional representations. So the sum of squares of the dimensions of the other irreducible representations is 10 − 2 = 8, which means there are exactly two irreducible two-dimensional representations. One is the one of Example 2.2.6, call it (C^2, ρ), and the other is given by the construction of Exercise 2.17.7.(ii) using the automorphism ϕ : D10 → D10 given by ϕ(x) = x^2, ϕ(y) = y, where x, y are the generators of Example 2.13.2 [i.e., ϕ doubles the rotation angles and doubles the angles that the reflection axes make with the x-axis]. Thus the other one is (C^2, ρ ∘ ϕ).
Let us see that these two-dimensional representations are not isomorphic. Indeed, ρ(x) has eigenvalues e^{±2πi/5}, whereas (ρ ∘ ϕ)(x) = ρ(x^2) has eigenvalues e^{±4πi/5}, and these are not equal. Hence ρ(x) and (ρ ∘ ϕ)(x) are not conjugate in GL(C^2) = GL2(C), so the two representations cannot be isomorphic.
[Omit from lecture, non-examinable: on the other hand, if we had used the automorphism
ψ given by ψ(x) = x−1 , ψ(y) = y, then the representations would have been isomorphic, since
now ρ ◦ ψ(x) = ρ(x)−1 = ρ(x−1 ) = ρ(yxy −1 ). Since it is also true that ρ ◦ ψ(y) = ρ(y) =
ρ(yyy −1 ), we obtain that ρ ◦ ψ(g) = ρ(y) ◦ ρ(g) ◦ ρ(y)−1 for all g, hence ρ(y) : C2 → C2 is an
isomorphism (C2 , ρ) → (C2 , ρ ◦ ψ).]
Note that Exercise 2.17.7.(i) does not work to construct the second irreducible two-dimensional representation. The nontrivial one-dimensional representation is χ : D10 → C×, which sends rotations to 1 and reflections to −1 (χ(x^k) = 1 and χ(x^k y) = −1 for all k). Now, unlike (ρ ∘ ϕ)(x) above, (χ · ρ)(x) = ρ(x) does have the same eigenvalues as ρ(x), so actually the two representations will have to be isomorphic (as there are only two irreducible two-dimensional representations). To prove it explicitly we can use matrices, and it is valid for any Dn: suppose gθ is the rotation by angle

θ and hθ the reflection about the axis making angle θ with the x-axis. Then:

[0 −1; 1 0] ρ(gθ) [0 −1; 1 0]^{−1} = [0 −1; 1 0][cos θ, −sin θ; sin θ, cos θ][0 1; −1 0]
                                   = ρ(gθ) = χ(gθ)ρ(gθ),   (2.17.11)

[0 −1; 1 0] ρ(hθ) [0 −1; 1 0]^{−1} = [0 −1; 1 0][cos 2θ, sin 2θ; sin 2θ, −cos 2θ][0 1; −1 0]
                                   = −ρ(hθ) = χ(hθ)ρ(hθ).   (2.17.12)

[Omit from lecture, non-examinable: In vectors, we explain the above as follows: the map T : C^2 → C^2 given by the matrix T = [0 −1; 1 0] defines an isomorphism (C^2, ρ) → (C^2, ρ′) for ρ′(g) = χ(g)ρ(g). That is, T ∘ ρ = χ·ρ ∘ T. To see this we note that T ∘ ρ(x) = ρ(x) ∘ T, since T is a rotation and commutes with the rotation ρ(x), whereas T ∘ ρ(y) = ρ(y) ∘ T^{−1}, since ρ(y) is a reflection and T a rotation; but T^{−1} = −T, and we conclude T ∘ ρ(y) = −ρ(y) ∘ T = χ(y)·ρ(y) ∘ T. Since x and y generate (or by applying the same argument for arbitrary rotations and reflections), this proves the statement.]
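
(Non-examinable aside.) The matrix identities (2.17.11)–(2.17.12) can be checked numerically; here is a short numpy sketch (my own) with T the rotation by π/2.

    import numpy as np

    def rot(t):    # rotation by angle t
        return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

    def refl(t):   # reflection about the line at angle t to the x-axis
        return np.array([[np.cos(2*t), np.sin(2*t)], [np.sin(2*t), -np.cos(2*t)]])

    T = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation by pi/2
    T_inv = np.linalg.inv(T)
    for t in np.linspace(0.0, 2*np.pi, 7):
        assert np.allclose(T @ rot(t) @ T_inv, rot(t))      # (2.17.11)
        assert np.allclose(T @ refl(t) @ T_inv, -refl(t))   # (2.17.12)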
You will complete the classification of the irreducible representations of Dn for general n in the following exercise.

Exercise 2.17.13. Find all irreducible representations of Dn. Other than the one-dimensional representations (see Example 2.13.2), show that they are all obtained from the one of Example 2.2.6, (C^2, ρ), by the construction of Exercise 2.17.7.(ii). In more detail, consider (C^2, ρ ∘ ϕj) for the endomorphisms ϕj : Dn → Dn, ϕj(x^a y^b) = x^{ja} y^b (caution: these are not automorphisms in general!). Show that they are irreducible for 1 ≤ j < n/2. Moreover, show that they are nonisomorphic for these values of j, by showing that tr(ρ(x^j)) ≠ tr(ρ(x^k)) for 1 ≤ j < k < n/2.

Here is a useful observation: Since the number of one-dimensional representations of a finite group G equals the size of Gab (Corollary 2.13.13), it is in particular a factor of |G|. If G is not itself abelian, this is a proper factor.

Exercise 2.17.14 (For problems class). Similarly classify the irreducible representations of A4. First, using the previous observation, show the possibilities are (a) 1^2 + 1^2 + 1^2 + 3^2 or (b) 1^2 + 1^2 + 1^2 + 1^2 + 2^2 + 2^2. Then use the surjective homomorphism q : S4 → S3, restricted to A4, to get q|A4 : A4 → A3, a surjective homomorphism with abelian image. Using Corollary 2.13.9 show that we are in case (a). Then find the one-dimensional representations. Finally, prove that the restriction of the three-dimensional reflection representation of S4 to A4 is irreducible, completing the classification.

2.18 The number of irreducible representations
Now we are ready to prove a fundamental formula. First some terminology:

Definition 2.18.1. A full set of nonisomorphic irreducible representations (V1, ρ1), . . . , (Vm, ρm) is one such that every irreducible representation is isomorphic to exactly one of the (Vi, ρi).

Theorem 2.18.2. Let G be a finite group and (V1 , ρ1 ), . . . , (Vm , ρm ) a full set of irreducible
representations. Then m equals the number of conjugacy classes of G.

We will prove this theorem as a consequence of a strengthened form of Corollary 2.16.2.


Let us motivate this a bit. The formula of Corollary 2.16.4 gives a suggestive identity of the size of G with the sum of the squares of the dimensions of the irreducible representations V1, . . . , Vm. We can reinterpret (dim Vi)^2 as dim End(Vi). With this in mind, the vector spaces C[G] and ⊕_{i=1}^m End(Vi) have the same dimension, hence are isomorphic. But what we really want is a "canonical" isomorphism between them, i.e., an explicit isomorphism of the form

C[G] → End(V1) ⊕ · · · ⊕ End(Vm),   (2.18.3)

which does not depend on bases (it should be given by a "nice" formula). Having such a canonical isomorphism, we could say that C[G] and ⊕_{i=1}^m End(Vi) have the same personality.
There is indeed an obvious candidate for the map in (2.18.3): for each i we have ρi : G →
GL(Vi ), and we can linearly extend this to ρi : C[G] → End(Vi ), just by
ρi(Σ_g ag g) = Σ_g ag ρi(g) ∈ End(Vi).   (2.18.4)

Putting these together yields a candidate for the desired isomorphism (2.18.3).

Remark 2.18.5. What we have just proposed to do is called categorification in mathematics: namely, replacing an equality of numbers (here, Corollary 2.16.4) by a "canonical" isomorphism of vector spaces, such that the original equality is recovered by taking dimensions of both sides. (Corollary 2.16.2 didn't do this, since it depended on choosing bases, and anyway the RHS were not literally the same.)

How will this imply Theorem 2.18.2? From the RHS of (2.18.3) we will be able to
recover m by taking G-invariants, since Schur’s Lemma says that End(Vi )G = EndG (Vi )
is one-dimensional for all i! That is, the dimension of the G-invariants of the RHS is m.
Caution: this doesn’t immediately solve our problem since the map (2.18.3) is actually not
G-linear, in spite of being canonical. But since it is canonical we will be able to put a new
G-action on C[G] that fixes this, and we will conclude Theorem 2.18.2.
Thus Theorem 2.18.2 will be a consequence of the following strengthening of Corollary
2.16.2 (or “categorification” of Corollary 2.16.4):

Theorem 2.18.6. The map Φ = (ρ1, . . . , ρm) : C[G] → End(V1) ⊕ · · · ⊕ End(Vm) is a linear isomorphism.

Caution: as before, this is not G-linear (yet).
Proof. The map is linear by definition. To check it is an isomorphism, by the dimension
equality Corollary 2.16.4, we only need to check it is injective. Suppose that f ∈ C[G] had the
property that ρi (f ) = 0 for all i. Since the (Vi , ρi ) are a full set of irreducible representations,
this implies that for every irreducible representation (W, ρW ), we have ρW (f ) = 0. But by
Maschke’s Theorem, every finite-dimensional representation of G is a direct sum of irreducible
representations. Hence for every finite-dimensional representation (W, ρW ) we have ρW (f ) =
0. Now set W = C[G]. Then ρC[G] (f ) = 0. Applying this to 1 ∈ G, we get ρC[G] (f )(1) = f ,
by definition. Thus f = 0, so Φ is indeed injective.
We now have to fix the problem that Φ is not G-linear. Let’s see what property it does
satisfy:

Lemma 2.18.7. The map Φ has the property

Φ(ghg^{−1}) = (ρ_{End(V1)}(g) ⊕ · · · ⊕ ρ_{End(Vm)}(g))(Φ(h)).   (2.18.8)

Proof. Equivalently, we need to show that ρi (ghg −1 ) = ρEnd(Vi ) (g)(ρi (h)). By Definition
2.14.3, we have:
ρEnd(Vi ) (g)(f ) = ρi (g) ◦ f ◦ ρi (g)−1 . (2.18.9)
Now plugging in f = ρi (h) and using the fact that ρi is a homomorphism yields the statement.

This motivates the following new representation on the vector space C[G]:

Definition 2.18.10. The adjoint representation (C[G], ρad) is given by

ρad(g)(Σ_{h∈G} ah h) = Σ_{h∈G} ah ghg^{−1}.   (2.18.11)

The word “adjoint” refers to the conjugation action, as in Remark 2.17.9: g · h = Adg (h).

Exercise 2.18.12. Verify that the adjoint representation is a representation. Similarly, show that the right regular representation, ρRreg : G → Aut(C[G]), ρRreg(g)(h) := hg^{−1} (extended linearly), is a representation. Thus there are three different structures of a representation on the vector space C[G]: the (left) regular one, the right regular one, and the adjoint one.

Then Lemma 2.18.7 and Theorem 2.18.6 immediately imply:

Corollary 2.18.13. The isomorphism Φ is G-linear as a map

(C[G], ρad) → ⊕_{i=1}^m End(Vi).   (2.18.14)

Taking G-invariants yields:
Corollary 2.18.15. We have an isomorphism of vector spaces,

(C[G], ρad)^G → ⊕_{i=1}^m EndG(Vi) ≅ C^m.   (2.18.16)

To prove Theorem 2.18.2, we only need one more lemma. Let C1, . . . , C_{m′} be the conjugacy classes of G.

Lemma 2.18.17. The G-invariants (C[G], ρad)^G have a basis f1, . . . , f_{m′} given by:

fi = Σ_{g∈Ci} g.   (2.18.18)

Proof. Note that f ∈ (C[G], ρad)^G if and only if f = gfg^{−1} for every g ∈ G. Now, take an element f = Σ_{h∈G} ah h. Then for a given g ∈ G, we have

gfg^{−1} = Σ_{h∈G} ah ghg^{−1} = Σ_{h∈G} a_{g^{−1}hg} h.

So f = gfg^{−1} for all g ∈ G if and only if ah = a_{g^{−1}hg} for all g, h ∈ G. Equivalently, ah = a_{h′} if h, h′ are in the same conjugacy class. Now for each i let gi ∈ Ci be a representative. We obtain that

f = Σ_{i=1}^{m′} a_{gi} fi.

Thus the elements fi span (C[G], ρad)^G. They are obviously linearly independent.
Now taking dimensions in Corollary 2.18.15, we obtain m′ = m, i.e., the number of conjugacy classes in G equals the number of isomorphism classes of irreducible representations. This completes the proof of Theorem 2.18.2.
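
(Non-examinable aside.) For a concrete check of Theorem 2.18.2: the Python sketch below (my own) computes the conjugacy classes of S3 and confirms there are three, matching the three irreducible representations found in Example 2.17.5.

    from itertools import permutations

    def compose(p, q):
        # (p o q)(i) = p(q(i)); permutations are tuples of images of 0..n-1.
        return tuple(p[q[i]] for i in range(len(p)))

    def inverse(p):
        inv = [0] * len(p)
        for i, j in enumerate(p):
            inv[j] = i
        return tuple(inv)

    S3 = list(permutations(range(3)))
    classes = {frozenset(compose(compose(g, h), inverse(g)) for g in S3) for h in S3}
    assert len(classes) == 3   # = number of irreducibles of S_3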
Remark 2.18.19 (Non-examinable). In fact, both sides of the isomorphism in Theorem 2.18.6 have a multiplication as well, coming from the group multiplication on the LHS and the composition of endomorphisms on the RHS. The map Φ respects these (more formally, this says that Φ is a ring isomorphism). This will be a central observation of the last unit of the course, which we will use to give another proof of the formula for the number of irreducible representations of G.

2.19 Duals and tensor products


Recall that, given a vector space V , the dual vector space is V ∗ := Hom(V, C). By Definition
2.14.3, when (V, ρV ) is a representation of G, we also obtain a representation (V ∗ , ρV ∗ ). Let
us write explicitly the formula: for f ∈ V ∗ , g ∈ G, and v ∈ V :

(ρV ∗ (g)(f ))(v) := f (ρV (g)−1 (v)) = f (g −1 · v). (2.19.1)

Example 2.19.2. Let ρ : G → C× be a one-dimensional representation. Let us compute
its dual. Let V = C. We obtain (V, ρV ) where ρV (g)(v) = ρ(g)v for all g ∈ G and v ∈ V .
Then the dual (V ∗ , ρV ∗ ) satisfies ρV ∗ (g)(f ) = f ◦ ρV (g)−1 = ρ(g)−1 f . That is, dualising
one-dimensional representations inverts ρ(g) for all g.

Remark 2.19.3 (Important). Let us work out the formula in terms of matrices for ρV ∗ (g).
Suppose B = {v1 , . . . , vn } is a basis of V , and let C = {f1 , . . . , fn } be the dual basis, given
by

fi(vj) = δij,   δij = { 1, if i = j,
                        0, if i ≠ j.      (2.19.4)
Here δij is called the Kronecker delta function. We now compute:

ρV ∗ (g)(fi )(vj ) = fi (ρV (g −1 )(vj )). (2.19.5)

The LHS is the (j, i)-th entry of [ρV ∗ (g)]C . The RHS is, on the other hand, the (i, j)-th entry
of [ρV (g −1 )]B = ([ρV (g)]B )−1 . Putting this together we get:

[ρV ∗ (g)]C = (([ρV (g)]B )−1 )t , (2.19.6)

where the superscript t denotes transpose.
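
(Non-examinable aside.) A quick numpy illustration of (2.19.6) (my own sketch): the assignment M ↦ (M^{−1})^t is again multiplicative, since taking the inverse reverses the order of a product and taking the transpose reverses it back; so it really defines a representation on the dual space.

    import numpy as np

    def dual(m):
        # Matrix of the dual representation in the dual basis, as in (2.19.6).
        return np.linalg.inv(m).T

    M = np.array([[1.0, 2.0], [0.0, 1.0]])
    N = np.array([[0.0, -1.0], [1.0, 0.0]])
    assert np.allclose(dual(M @ N), dual(M) @ dual(N))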

Proposition 2.19.7. We have a canonical (and G-linear) inclusion V ↪ (V^∗)^∗ of vector spaces, v ↦ ϕv, ϕv(f) := f(v). This is an isomorphism if V is finite-dimensional.

The key point here in the proposition is not merely that we have isomorphisms but that
there is a nice formula for them not depending on bases. That means that the two isomorphic
quantities “have the same personalities” and we can interchange them, particularly when we
want to be able to keep track of what a representation is on both sides.
Proof of Proposition 2.19.7. The given map is obviously linear. To see it is injective, suppose that v ≠ 0. Then we can extend v to a basis of V, and therefore define a linear map f ∈ V^∗ such that f(v) = 1 (and say f(v′) = 0 for the other elements of the basis). Then ϕv(f) = f(v) ≠ 0, so ϕv ≠ 0. To check G-linearity, we trace through the definitions:

ρ(V ∗ )∗ (g)(ϕv )(f ) = ϕv (ρV ∗ (g)−1 (f )) = ϕv (f ◦ ρV (g)) = f (ρV (g)(v)) = ϕρV (g)(v) (f ).

2.19.1 Definition of the tensor product of vector spaces


Next, given representations (V, ρV ) and (W, ρW ) of G, we would like to define the represen-
tation (V ⊗ W, ρV ⊗W ). To do this we first have to define the tensor product.

Definition 2.19.8. The tensor product V ⊗ W of vector spaces V and W is the quotient of the vector space C[V × W] := {Σ_{i=1}^n ai(vi, wi) | ai ∈ C, vi ∈ V, wi ∈ W} (requiring the pairs (vi, wi) appearing to be distinct), with basis V × W, by the subspace spanned by the elements (called relations):

a(v, w) − (av, w),   a(v, w) − (v, aw),   (v + v′, w) − (v, w) − (v′, w),   (v, w + w′) − (v, w) − (v, w′).   (2.19.9)

The image of (v, w) ∈ C[V × W] under the quotient map C[V × W] ↠ V ⊗ W is denoted v ⊗ w.
Note that the elements v ⊗ w span V ⊗ W , since a(v ⊗ w) = (av) ⊗ w.
Example 2.19.10. Let W = C. Then we have an isomorphism ϕ : V ⊗ C → V given by (v ⊗ a) ↦ av. First let us check that this is well-defined: obviously we have a map ϕ̃ : C[V × C] → V given by (v, a) ↦ av, and we just need to observe that the relations are sent to zero. This is true because (v, a) ↦ av is bilinear; for instance, ϕ̃(a(v, b) + a′(v′, b)) = abv + a′bv′ = ϕ̃(av + a′v′, b).
To see that ϕ is an isomorphism, note first that it is obviously surjective and linear. Let us show it is injective. Suppose some element x = Σ_i ai(vi ⊗ bi) is in the kernel. Applying ϕ we get that Σ_i ai bi vi = 0. But also x = (Σ_i ai bi vi) ⊗ 1, applying linearity. So x = 0 ⊗ 1 = 0.
This definition is rather abstract, but we will give two other definitions in the following
proposition to make things easier to understand.
Proposition 2.19.11. (i) Suppose that (vi )i∈I is a basis of V and (wj )j∈J is a basis of W
(for some sets I and J which index the bases: for this course you are welcome to assume
they are finite). Then (vi ⊗ wj )i∈I,j∈J is a basis for V ⊗ W . In particular, dim(V ⊗ W ) =
(dim V )(dim W ).
(ii) There is a homomorphism

ι : V ∗ ⊗ W → Hom(V, W ), ι(f ⊗ w)(v) = f (v)w. (2.19.12)

If V is finite-dimensional, this homomorphism is an isomorphism.


In this course we will restrict to the case that V and W are finite-dimensional. Therefore,
we can think alternatively of V ⊗ W as a vector space with basis (vi ⊗ wj ) where (vi ) and
(wj ) are (finite) bases of V and W respectively, or we can think of it as Hom(V ∗ , W ).
Proof of Proposition 2.19.11. [non-examinable]
(i) [Prove only finite-dimensional case in lecture:] First we show that the (vi ⊗ wj) span V ⊗ W. We already observed that V ⊗ W is spanned by elements v ⊗ w for arbitrary v ∈ V, w ∈ W. But we can replace v by a linear combination of the vi and w by a linear combination of the wj, and apply the definition of tensor product: (Σ_i ai vi) ⊗ (Σ_j bj wj) = Σ_{i,j} ai bj (vi ⊗ wj), which proves the statement.
Next we prove linear independence. Suppose that z := Σ_{k=1}^n a_{ik,jk} v_{ik} ⊗ w_{jk} is zero, with all of the a_{ik,jk} nonzero and none of the pairs (ik, jk) identical, and assume n is minimal for such a relation. If any of the ik are equal we can combine terms and reduce n, and the same is true for the jk. So assume all the ik are unequal. Let f ∈ V^∗ be an element such that f(v_{i1}) = 1 and f(v_{ik}) = 0 for k > 1. The map F : V ⊗ W → W, F(v ⊗ w) = f(v)w, is a well-defined linear map, since it is well-defined and linear on C[V × W] and sends all of the relations to zero. Then F(z) = Σ_{k=1}^n a_{ik,jk} f(v_{ik}) w_{jk} = a_{i1,j1} w_{j1} ≠ 0, which contradicts z = 0.
(ii) To check this map is linear and well-defined, note first that we have a well-defined linear map ι̃ : C[V^∗ × W] → Hom(V, W), given by ι̃(f, w)(v) = f(v)w for all v ∈ V, w ∈ W, extended linearly to C[V^∗ × W]. We only have to check that the relations defining V^∗ ⊗ W are in the kernel. This is an explicit check:

ι̃(a(f, w))(v) = a f(v)w = (af)(v)w = ι̃((af, w))(v),   (2.19.13)
ι̃((f, w) + (f′, w))(v) = f(v)w + f′(v)w = (f + f′)(v)w = ι̃((f + f′, w))(v).   (2.19.14)

We can similarly handle the other relations.
To prove it is an isomorphism if V is finite-dimensional, we give an explicit inverse in this case. Let (vi) be a basis of V and (fi) the dual basis of V^∗. Then we can define Hom(V, W) → V^∗ ⊗ W by T ↦ Σ_i fi ⊗ T(vi). It is straightforward to verify that this is an inverse.
The main thing is that we need to think of V ⊗ W as a vector space whose elements are linear combinations (or sums) of elements of the form v ⊗ w, v ∈ V, w ∈ W, subject to the defining relations (2.19.9), which in terms of tensor products can be written as the following equalities for all v, v′ ∈ V, w, w′ ∈ W, and a, a′ ∈ C:

(av + a′v′) ⊗ w = a(v ⊗ w) + a′(v′ ⊗ w);   v ⊗ (aw + a′w′) = a(v ⊗ w) + a′(v ⊗ w′).   (2.19.15)
Some of you might like the following more abstract way of thinking about tensor product:
Proposition 2.19.16 (The universal property of the tensor product). Given vector spaces U, V, W, for every bilinear map F : V × W → U, i.e., one satisfying

F(av, w) = aF(v, w) = F(v, aw),   (2.19.17)
F(v + v′, w) = F(v, w) + F(v′, w),   F(v, w + w′) = F(v, w) + F(v, w′),   (2.19.18)

there is a unique homomorphism F̃ : V ⊗ W → U with the property F̃(v ⊗ w) = F(v, w). Conversely, all homomorphisms V ⊗ W → U are of this form.

Proof. Construct the linear map F̃ : C[V × W] → U by linear extension of F. Because F is bilinear, the kernel includes the elements (2.19.9), as in the preceding proof.

2.19.2 Natural isomorphisms involving tensor products


Proposition 2.19.19. (i) There are the following explicit isomorphisms, for U, V , and
W arbitrary vector spaces:

V ⊗ W → W ⊗ V,   (v ⊗ w) ↦ (w ⊗ v);   (2.19.20)
(U ⊗ V) ⊗ W → U ⊗ (V ⊗ W),   (u ⊗ v) ⊗ w ↦ u ⊗ (v ⊗ w);   (2.19.21)
U ⊗ (V ⊕ W) → (U ⊗ V) ⊕ (U ⊗ W),   u ⊗ (v, w) ↦ (u ⊗ v, u ⊗ w),   (2.19.22)
Φ : Hom(U ⊗ V, W) → Hom(U, Hom(V, W)),   Φ(f)(u)(v) = f(u ⊗ v).   (2.19.23)

(ii) For arbitrary V and W , there is an explicit linear injection,

Φ : V ∗ ⊗ W ∗ → (V ⊗ W )∗ , Φ(f ⊗ g)(v ⊗ w) = f (v)g(w). (2.19.24)

This is an isomorphism if V is finite-dimensional.

Proof. (i) The maps are obviously linear. It is easy to construct an explicit formula for their
inverses. This is left as an exercise. For example, the inverse of the third one is given by the
sum of the injective map U ⊗ V ∼ = U ⊗ (V ⊕ {0}) ,→ U ⊗ (V ⊕ W ) and the same map with
V replaced by W .
(ii) This map is obviously linear. To see it is injective, by Proposition 2.19.11.(ii), we have
an injection V ∗ ⊗ W ∗ → Hom(V, W ∗ ), and by (i) we can compose this with Hom(V, W ∗ ) =
Hom(V, Hom(W, C)) ∼ = Hom(V ⊗ W, C) = (V ⊗ W )∗ to get a linear injection. It is easy to
check that the result is the same as the given map Φ. Finally, if V is finite-dimensional, by
Proposition 2.19.11.(ii) again, the injection is an isomorphism.
If we take dimensions of vector spaces, the first three isomorphisms in (i) become the commutativity, associativity, and distributivity rules on N. Namely, for a = dim U, b = dim V, c = dim W, we get:

b · c = c · b, (a · b) · c = a · (b · c), a · (b + c) = a · b + a · c. (2.19.25)

There is a term for this in mathematics: we say that these isomorphisms “categorify” the
identities above.

2.19.3 Tensor products of representations


By the preceding, one way to think about tensor products of representations V and W , if
V is finite-dimensional, is simply as the representation Hom(V ∗ , W ). However it is useful to
write a formula in terms of tensor product symbols:

Definition 2.19.26. The tensor product representation (V ⊗ W, ρ_{V⊗W}) is given by

ρ_{V⊗W} : G → GL(V ⊗ W),   ρ_{V⊗W}(g)(v ⊗ w) = ρV(g)(v) ⊗ ρW(g)(w).   (2.19.27)

(This definition also has the advantage of working even if V is not finite-dimensional,
although we won’t need this.)

Example 2.19.28. Let (V, ρV), (V′, ρV′) be one-dimensional representations. Let ρ, ρ′ : G → C× be defined by the properties ρV(g)(v) = ρ(g)v and ρV′(g)(v′) = ρ′(g)v′ for all g ∈ G, v ∈ V, and v′ ∈ V′. Then (V ⊗ V′, ρ_{V⊗V′}) has the property ρ_{V⊗V′}(g)(v ⊗ v′) = ρ(g)ρ′(g)(v ⊗ v′). That is, the tensor product of the one-dimensional representations ρ, ρ′ : G → C× in terms of matrices is the product,

(ρ ⊗ ρ′)(g) = ρ(g)ρ′(g).   (2.19.29)

Remark 2.19.30. Combining Examples 2.19.28 and 2.19.2, we conclude that the set of homomorphisms ρ : G → C× (matrix one-dimensional representations) forms a group under the tensor product operation (2.19.29), with inversion given by the matrix corresponding to dualisation: the inverse ρ′ to ρ satisfies ρ′(g) = ρ(g)^{−1} for all g ∈ G. Of course we didn't need tensor products to define this. However, the tensor product and dualisation constructions work on vector spaces, not just on matrices, and provide the conceptual explanation for this operation.
Remark 2.19.31 (Important). It is useful to give a formula for the tensor product representation in terms of matrices. Let B = {v1, . . . , vm} be a basis of V and C = {w1, . . . , wn} be a basis of W. Let D := {vi ⊗ wj} be the resulting basis of V ⊗ W, indexed by pairs (i, j). Take g ∈ G and let M = [ρV(g)]B and N = [ρW(g)]C be the matrices of the representations. Then

ρ_{V⊗W}(g)(vk ⊗ wℓ) = ρV(g)(vk) ⊗ ρW(g)(wℓ) = Σ_{i,j} Mik Njℓ (vi ⊗ wj).   (2.19.32)

Thus, as mn × mn matrices A_{ij,kℓ} for 1 ≤ i, k ≤ m and 1 ≤ j, ℓ ≤ n, we have:

([ρ_{V⊗W}(g)]D)_{ij,kℓ} = Mik Njℓ.   (2.19.33)

We can define M ⊗ N as the LHS matrix (i.e., thinking of matrices as linear transformations M : C^m → C^m, N : C^n → C^n by multiplication, then M ⊗ N : (C^m ⊗ C^n) → (C^m ⊗ C^n) is the resulting mn × mn matrix with entries labeled by pairs (a, b) with 1 ≤ a ≤ m and 1 ≤ b ≤ n). It is also called the Kronecker product matrix of M and N. Then the equation above gives the standard formula for this product:

(M ⊗ N)_{ij,kℓ} = Mik Njℓ.   (2.19.34)
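
(Non-examinable aside.) Numerically, the Kronecker product is available as numpy's np.kron, whose row and column ordering matches the pair-indexing above. Here is a hedged sketch (my own) checking the entry formula (2.19.34) and the multiplicativity (M ⊗ N)(M′ ⊗ N′) = (MM′) ⊗ (NN′), which is what makes (2.19.27) a homomorphism.

    import numpy as np

    m, n = 2, 3
    rng = np.random.default_rng(1)
    M, M2 = rng.random((m, m)), rng.random((m, m))
    N, N2 = rng.random((n, n)), rng.random((n, n))

    K = np.kron(M, N)
    # Entry check: (M kron N)_{ij,kl} = M_{ik} N_{jl}, with row index i*n + j.
    for i in range(m):
        for j in range(n):
            for k in range(m):
                for l in range(n):
                    assert np.isclose(K[i * n + j, k * n + l], M[i, k] * N[j, l])

    # Multiplicativity, so g -> rho_V(g) kron rho_W(g) is a representation.
    assert np.allclose(np.kron(M, N) @ np.kron(M2, N2), np.kron(M @ M2, N @ N2))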

Exercise 2.19.35. Show that, under the isomorphism ι : V ⊗ W → Hom(V^∗, W), Definition 2.19.26 agrees with Definition 2.14.3.
In fact, (2.19.27) is easier to deal with than the formula you get from taking two hom spaces. So much so that it is useful to turn the situation around, using Proposition 2.19.11.(ii). Now, instead of using the formula for the representation Hom(V, W), we can feel free to begin with V^∗ and, once we understand its formula, to use V^∗ ⊗ W. Here is an example which shows why this might be useful:
Example 2.19.36. Let (Cθ , θ) be a one-dimensional representation, given by Cθ = C and
θ : G → C× = GL1 (C). Let (W, ρW ) be an arbitrary representation. To compute the
representation on Cθ ⊗ W, first note that Cθ ⊗ W ≅ W as vector spaces by Example 2.19.10.
In terms of this, we have

ρCθ ⊗W (g)(w) = ρCθ ⊗W (g)(1 ⊗ w) = θ(g)1 ⊗ ρW (g)(w) = θ(g)ρW (g)(w). (2.19.37)

So we just multiply ρW by θ: it’s just the construction of Exercise 2.17.7.(i)!

Note, on the other hand, if we want to compute the representation Hom(Cθ, W): this vector space is also isomorphic to W, but the correct representation is given by Hom(Cθ, W) ≅ C_θ^∗ ⊗ W, so that we have to invert the θ(g): we get

ρ_{Hom(Cθ,W)}(g)(w) = θ(g)^{−1} ρW(g)(w).   (2.19.38)
ρHom(Cθ ,W ) (g)(w) = θ(g)−1 ρW (g)(w). (2.19.38)

This, I think, is easier to think about than computing directly from the formula of Definition
2.14.3 for Hom(Cθ , W ) in terms of linear maps Cθ → W .

2.20 External tensor product


An illustration of the power of tensor product is the following:

Definition 2.20.1. Let G and H be groups, (V, ρV) be a representation of G, and (W, ρW) be a representation of H. Form the representation of G × H, called the external tensor product and often denoted V ⊠ W, as follows: as a vector space, V ⊠ W = V ⊗ W is the ordinary tensor product, but with homomorphism given by

ρ_{V⊠W} : G × H → GL(V ⊗ W),   ρ_{V⊠W}(g, h) = ρV(g) ⊗ ρW(h).   (2.20.2)

Exercise 2.20.3. (i) Show that this construction recovers the tensor product of representations in the following sense: for G = H, we have

ρV ⊗W (g) = ρV ⊠W (g, g).    (2.20.4)

(ii) More conceptually, define the diagonal map ∆ : G → G × G by g ↦ (g, g), which is a homomorphism. Then show that ρV ⊗W = ρV ⊠W ◦ ∆. In other words, V ⊗ W is obtained from V ⊠ W via the map ∆ and the construction of Exercise 2.17.7.(ii).
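Here is a brief non-examinable numerical sketch of Definition 2.20.1 in matrix form; the Kronecker product realises ρV (g) ⊗ ρW (h), and the group and matrices chosen are illustrative (NumPy assumed):

```python
import numpy as np

# External tensor product for G = H = C2: rho_{V boxtimes W}(g, h) is the
# Kronecker product of rho_V(g) and rho_W(h). We check the homomorphism
# property on all of C2 x C2.
rhoV = {0: np.eye(2), 1: np.array([[0.0, 1.0], [1.0, 0.0]])}  # swap action
rhoW = {0: np.eye(1), 1: np.array([[-1.0]])}                  # sign character

def rho_box(g, h):
    return np.kron(rhoV[g], rhoW[h])

for g1 in (0, 1):
    for h1 in (0, 1):
        for g2 in (0, 1):
            for h2 in (0, 1):
                lhs = rho_box((g1 + g2) % 2, (h1 + h2) % 2)
                rhs = rho_box(g1, h1) @ rho_box(g2, h2)
                assert np.allclose(lhs, rhs)
print("rho_{V ⊠ W} is a homomorphism on C2 × C2")
```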

Proposition 2.20.5. Let G and H be two finite groups. Then every irreducible representation of G × H is of the form V ⊠ W for (V, ρV ) and (W, ρW ) irreducible representations of G and H, respectively. For V, V ′, W, W ′ irreducible, we have V ′ ⊠ W ′ ≅ V ⊠ W if and only if V ′ ≅ V and W ′ ≅ W .

In fact the statement is true more generally if G and H are not finite, under the assump-
tion only that V and W are finite-dimensional (see Remark 2.20.13 below).
Thus we can list the irreducible representations of G × H from those of G and H by
taking external tensor products. This gives a generalisation of Exercise 2.12.7.
Proof of Proposition 2.20.5. (Maybe omit from the lectures and make non-examinable.) Suppose that (V, ρV ) and (W, ρW ) are irreducible representations of G and H, respectively. They are finite-dimensional by Proposition 2.8.6, and hence so is V ⊠ W . Now, we compute EndG×H (V ⊠ W ). Note that

EndG×H (V ⊠ W ) = End(V ⊠ W )^{G×H} = (End(V ⊠ W )^G)^H = EndG (V ⊠ W )^H,    (2.20.6)

so we first compute EndG (V ⊠ W ). There is a map

End(W ) → EndG (V ⊠ W ),    T ↦ I ⊗ T,    (I ⊗ T )(v ⊗ w) = v ⊗ T (w).    (2.20.7)

We claim it is an isomorphism of H-representations. Given the claim we have:

EndG×H (V ⊠ W ) = EndG (V ⊠ W )^H = End(W )^H = EndH (W ) = C,    (2.20.8)

and by Proposition 2.15.12, this implies that V ⊠ W is indeed irreducible.


Let us now prove the claim that (2.20.7) is an isomorphism of H-representations. It is clear that the map is H-linear and injective, so we just have to show it is surjective. The easiest (although not the best) way to do this is by dimension counting (see the remark below for a better proof). Note that, as G-representations, V ⊠ W ≅ V^{dim W}, since if we take any basis w1 , . . . , wm of W , we get V ⊗ W = ⊕_{i=1}^m V ⊗ wi . Then dim EndG (V ⊠ W ) = dim EndG (V^{dim W}) = (dim W )^2 by Proposition 2.15.12 again. As this equals dim End(W ) and the map (2.20.7) is injective and linear, the map is surjective.

Now for the uniqueness, suppose that V ′ ⊠ W ′ ≅ V ⊠ W . By the above, as G-representations, we get (V ′)^{dim W ′} ≅ V^{dim W}, and by Corollary 2.15.10, this implies V ′ ≅ V . Similarly W ′ ≅ W . (For a better proof that doesn’t use bases of W and W ′, see the last paragraph of the following remark.)
Finally, we need to show that all irreducible representations are of the form V ⊠ W for V and W irreducible. We can deduce this by counting (for a better proof that doesn’t count and therefore doesn’t require G to be finite, see Remark 2.20.13). Namely, if V1 , . . . , Vm and W1 , . . . , Wn are the irreducible representations of G and H up to isomorphism (i.e., every irreducible representation is isomorphic to exactly one of these), by Corollary 2.16.4, |G| = ∑_{i=1}^m (dim Vi )^2 and |H| = ∑_{j=1}^n (dim Wj )^2. Also, dim(Vi ⊗ Wj ) = (dim Vi )(dim Wj ) by Proposition 2.19.11.(i). Thus we get

|G × H| = |G||H| = (∑_{i=1}^m (dim Vi )^2)(∑_{j=1}^n (dim Wj )^2) = ∑_{i,j} dim(Vi ⊗ Wj )^2.    (2.20.9)

This implies that the Vi ⊠ Wj must be all the irreducible representations of G × H up to isomorphism.

Remark 2.20.10 (Non-examinable). Here is a better proof of the claim that End(W ) → EndG (V ⊠ W ) is surjective, that doesn’t use bases or dimensions. Let T ∈ EndG (V ⊠ W ). For every w ∈ W and f ∈ W ∗, we can consider the composition

(I ⊗ f ) ◦ T ◦ (I ⊗ w) : V → V ⊗ W → V ⊗ W → V,    (2.20.11)

where the first map has the form (I ⊗ w)(v) = v ⊗ w and the last map has the form (I ⊗ f )(v ⊗ w) = f (w)v. This composition is a G-homomorphism and hence is a multiple of the identity, for every f . So we see that T (v ⊗ w) = v ⊗ w′ for some w′ ∈ W . Let v′ ∈ V be another vector. We claim also that T (v′ ⊗ w) = v′ ⊗ w′. Indeed, since V is irreducible, v′ is in the span of the ρ(g)(v), so we can write v′ = ∑_{g∈G} ag ρ(g)(v) for some ag ∈ C. Then

T ◦ (∑_{g∈G} ag ρ(g) ⊗ I) = (∑_{g∈G} ag ρ(g) ⊗ I) ◦ T,    (2.20.12)

and applying (2.20.12) to v ⊗ w, we get T (v′ ⊗ w) = v′ ⊗ w′, as desired.

This also gives a better proof that V ′ ⊠ W ′ ≅ V ⊠ W implies V ′ ≅ V and W ′ ≅ W . Indeed, the above can be generalised to show that HomG (V ′ ⊠ W ′, V ⊠ W ) = HomG (V ′, V ) ⊗ Hom(W ′, W ), which by Schur’s Lemma is nonzero if and only if V ′ ≅ V as G-representations. Thus if V ′ ⊠ W ′ ≅ V ⊠ W we get V ′ ≅ V . Similarly, W ′ ≅ W .

Remark 2.20.13 (Non-examinable). Actually, Proposition 2.20.5 is valid more generally: we don’t need G and H to be finite, and only need to assume that V and W are finite-dimensional. Together with Remark 2.20.10, the proof of the proposition shows that EndG×H (V ⊠ W ) = C when V and W are finite-dimensional and irreducible. More generally (using the last paragraph of Remark 2.20.10), it shows that HomG×H (V ′ ⊠ W ′, V ⊠ W ) is nonzero if and only if V ′ ≅ V and W ′ ≅ W , in which case it is one-dimensional. So we have the final statement, and only have to prove that V ⊠ W is irreducible when V and W are. We just can’t use Proposition 2.15.12 to finish the proof because that required G × H to be finite. So let’s find another proof of irreducibility.
Given any irreducible representation U of G × H, we claim that there is a surjective homomorphism of representations V ⊠ W → U for some irreducible representation V of G and some irreducible representation W of H. Indeed, let V be any irreducible representation isomorphic to a G-subrepresentation of U . Let W ′ := HomG (V, U ); it is a representation of H via the action of H on U (with trivial action on V ). Then we have a canonical map of G × H-representations, ϕ : V ⊠ W ′ → U , v ⊗ ψ ↦ ψ(v). By choice of V , this is nonzero, and since U is irreducible, it must be surjective. We now show how to replace W ′ by an irreducible representation. Let W ′′ ⊆ W ′ be the largest subspace such that V ⊠ W ′′ is in the kernel of ϕ (we can take the sum of all subspaces whose tensor product with V is in the kernel). Then we get a surjection V ⊠ (W ′/W ′′) → U . Now let W ⊆ W ′/W ′′ be any irreducible subrepresentation (which exists by finite-dimensionality of W ′/W ′′). Then V ⊠ W → U is a nonzero homomorphism, which is therefore surjective by irreducibility of U .
Now let us return to the situation where V and W are irreducible. Suppose that U ⊆ V ⊠ W is an irreducible G × H-subrepresentation. Let V ′ ⊠ W ′ → U be a surjective homomorphism with V ′ and W ′ irreducible. Composing with the inclusion U ⊆ V ⊠ W , we obtain a nonzero homomorphism of G × H-representations, V ′ ⊠ W ′ → V ⊠ W . By the last paragraph of Remark 2.20.10, this implies V ′ ≅ V and W ′ ≅ W , so we may take V ′ = V and W ′ = W to begin with. Now the composition T : V ⊠ W → V ⊠ W is a nonzero homomorphism of representations. Since EndG×H (V ⊠ W ) = C, we conclude that T is a nonzero multiple of the identity. But then T is surjective. Since the image of T is U , this implies U = V ⊠ W , so that V ⊠ W is irreducible as desired.

To prove that all of the irreducible finite-dimensional representations of G × H are of the form V ⊠ W for V, W irreducible, note that in the preceding paragraphs we showed that for every finite-dimensional irreducible representation U , there is a surjective homomorphism of representations V ⊠ W → U for some irreducible V and W . But since V ⊠ W is itself irreducible, we conclude that U ≅ V ⊠ W .
Remark 2.20.14 (Non-examinable). Parallel to Remark 2.12.5, we can get a counterexample to the proposition if we allow V and W to be infinite-dimensional. As in Remark 2.12.5, let V = C(x) and G = C(x)×, and let also W = V . Then V ⊗ W = C(x) ⊗ C(x) is not irreducible, since there is a nonzero non-injective homomorphism of representations ψ : C(x) ⊗ C(x) → C(x) given by multiplying fractions. This is clearly surjective and hence nonzero, but f ⊗ 1 − 1 ⊗ f is in the kernel for all f ∈ C(x). This element f ⊗ 1 − 1 ⊗ f is nonzero if f is nonconstant, since we can also write an injective homomorphism C(x) ⊗ C(x) → C(x, y), the field of fractions of polynomials in two variables x and y, by f (x) ⊗ g(x) ↦ f (x)g(y); then f ⊗ 1 − 1 ⊗ f maps to f (x) − f (y), which is nonzero if f is nonconstant. In fact this injective homomorphism realises C(x) ⊗ C(x) as the subring of C(x, y) consisting of elements of the form ∑_{i=1}^n fi (x)gi (y) for rational fractions fi , gi of one variable. (For an alternative proof that f ⊗ 1 − 1 ⊗ f is nonzero for some f , take the map C(x) ⊗ C(x) → C(x), f ⊗ g ↦ f (x)g(−x), in which case again f ⊗ 1 − 1 ⊗ f ↦ f (x) − f (−x), which is nonzero whenever f is not even; for instance when f = x itself.)

2.21 Summary of section


Here we recap the highlights of this section, in order.
We began by presenting many of the important examples of representations (Section 2.2),
for cyclic, symmetric, and dihedral groups.
We defined representations of a group G in two ways: (a) abstractly, as a pair (V, ρV ) of a
vector space and a homomorphism ρV : G → GL(V ) (Definition 2.4.1); and (b) via matrices,
as a homomorphism ρ : G → GLn (C) for some dimension n (Definition 2.4.7). We showed
how to go between these two definitions (Definition 2.4.8 and Example 2.5.1) and proved
that they give the same isomorphism classes of representations (Proposition 2.4.19). Both
definitions give the same notion of dimension, (ir)reducibility, (in)decomposability, etc.
We defined the key notion of representations from group actions (Definition 2.6.1), and
the important special case of the (left) regular representation (Example 2.7.1).
We proceeded to define subrepresentations (Section 2.8) and direct sums (Section 2.9)
of representations and defined a representation to be reducible or decomposable if it has a
nonzero proper subrepresentation or a nontrivial decomposition as a sum of subrepresen-
tations, respectively. Otherwise the representation is called irreducible or indecomposable,
respectively (Exception: the zero representation is neither reducible nor irreducible, and nei-
ther decomposable nor indecomposable: this is like the number one being neither prime nor
composite). We defined a representation to be semisimple if it is a direct sum of irreducible
subrepresentations (Definition 2.9.6). We observed that an indecomposable representation
is semisimple if and only if it is actually irreducible (Example 2.9.7).

We then proved the first and most important theorem on (complex) representations
of finite groups: Maschke’s Theorem (Theorem 2.10.2), that finite-dimensional such repre-
sentations are always semisimple. In fact the proof of the theorem constructs, for every
subrepresentation of a finite-dimensional representation of a finite group, a complementary
subrepresentation.
Next we proved the second most important result in the subject (but which is also valid
for infinite groups): Schur’s Lemma (Lemma 2.11.1). Part (i) shows that nonzero homo-
morphisms of irreducible representations are isomorphisms (which requires no assumptions
on the field or the dimension and is relatively easy and quick to prove). Part (ii) assumes
that the representations are finite-dimensional and, using that the field is complex, gives us
that there is at most one such isomorphism up to scaling: EndG (V ) = C · I for V a finite-
dimensional irreducible representation. This was also not difficult to prove, using mainly the
fact that square matrices over the complex numbers always have an eigenvalue (or better,
that an endomorphism of a finite-dimensional complex vector space has an eigenvalue), by
taking any root of the characteristic polynomial.
As an application of Schur’s Lemma, we proved that every finite-dimensional irreducible representation of an abelian group is one-dimensional (Proposition 2.12.1). As a special case we
classified these for cyclic (Corollary 2.12.6) and all finite abelian groups (Exercise 2.12.7),
seeing that the number of irreducible representations equals the size of the group.
We next studied one-dimensional representations, showing that one-dimensional represen-
tations of a group G are in explicit bijection with those of the abelianisation, Gab := G/[G, G]
(Definition 2.13.4 and Corollary 2.13.9). We classified one-dimensional representations for
symmetric and dihedral groups (Examples 2.13.1 and 2.13.2). Namely, these are the trivial
and sign representations of the symmetric groups, and two or four examples in the dihedral
cases, depending on whether n is odd or even, respectively (the generating rotation and re-
flection can get sent to ±1, except in the odd case where the rotation must get sent to 1). We
observed that perfect (which includes simple) groups can have no nontrivial one-dimensional
representations (Examples 2.13.14 and 2.13.15).
We then proceeded to define the representation Hom(V, W ) where V and W are them-
selves representations of a group G (Section 2.14). We defined, for every representation V , the
G-invariant subrepresentation V G (Definition 2.14.7), satisfying the property Hom(V, W )G =
HomG (V, W ). We presented an averaging formula for a G-linear projection V → V G , and ex-
plained that the proof of Maschke’s theorem crucially involves the projection Hom(V, W ) →
Hom(V, W )G = HomG (V, W ), in order to turn a linear projection V → W into a G-linear
projection, whose kernel produces the desired complementary subrepresentation.
We then considered decompositions. Assume that the group G is finite. By Maschke’s
theorem, if V is a finite-dimensional representation, there is an isomorphism V ≅ V1^{r1} ⊕ · · · ⊕ Vm^{rm} with the Vi nonisomorphic irreducible representations. We proved that this decomposition is unique up to replacing each Vi by an isomorphic representation (Corollary 2.15.10). Namely, ri = dim HomG (Vi , V ) for all i (Proposition 2.15.2). Moreover we proved the formula dim EndG (V ) = r1^2 + · · · + rm^2. This includes a converse of Schur’s lemma: EndG (V ) is
one-dimensional if and only if V is irreducible.

Applying this to the regular representation, we proved the fundamental formula |G| = ∑_{i=1}^m (dim Vi )^2, where V1 , . . . , Vm are all of the irreducible representations of G up to isomorphism (with Vi ≇ Vj for i ≠ j), and in particular that m is finite (Corollary 2.16.4). We can use this to classify irreducible representations for S3 , S4 , and the dihedral groups (see
Examples 2.17.5, 2.17.6, and 2.17.10). Along the way we proved that the reflection represen-
tation of Sn is always irreducible (Proposition 2.17.3) and gave some general constructions
of representations (Exercise 2.17.7).
Next we proved that the number of irreducible representations of a finite group equals its
number of conjugacy classes (Theorem 2.18.2). The key idea here was to enhance the fundamental formula |G| = ∑_{i=1}^m (dim Vi )^2 to an explicit isomorphism, C[G] ≅ ⊕_{i=1}^m End(Vi ). This is not G-linear at first, but it is if we equip C[G] with the adjoint representation (Definition 2.18.10). Then taking G-invariants and taking dimensions yields the desired formula.
We then defined duals and tensor products of representations (Definition 2.14.3 and
Example 2.19.1; Definitions 2.19.8 and 2.19.26). The tensor product generalizes the con-
struction of Exercise 2.17.7.(i) in the case where one of the representations is one-dimensional. We showed that V ∗ ⊗ W ≅ Hom(V, W ) as representations when V is finite-dimensional.
Finally, generalising this, we briefly introduced the external tensor product of represen-
tations (V, ρV ) and (W, ρW ) of two (distinct) groups G and H: the representation (V ⊗ W, ρV ⊠W ), also denoted V ⊠ W , of the product G × H (Definition 2.20.1). This is useful
because, if we know the irreducible representations of G and of H, then the irreducible repre-
sentations of G × H are precisely the external tensor products of irreducible representations.
Thus the number of irreducible representations of G × H equals the product of the number
for G and H (which also follows from the formula in terms of conjugacy classes). This gen-
eralizes the aforementioned classification (Exercise 2.12.7) of irreducible representations of
finite abelian groups (which are all one-dimensional).

3 Character theory
3.1 Traces and definitions
Character theory is a way to get rid of the vector spaces, and replace them by numerical
information depending only on isomorphism class. The main idea is as follows: Suppose that
two square matrices A and B are conjugate, B = P AP −1 . Then we have the fundamental
identity:
tr(B) = tr(P AP^{−1}) = tr(P^{−1}P A) = tr(A).    (3.1.1)

Here, recall from linear algebra that tr(A) := ∑_i Aii . Also, you should have seen that tr(CD) = tr(DC) for all matrices C and D such that CD (equivalently DC) is square. Indeed tr(CD) = ∑_{i,j} Cij Dji = tr(DC). This verifies (3.1.1). We can then define:

Definition 3.1.2. The character of a matrix representation ρ : G → GLn (C) is the function
χρ : G → C, χρ (g) = tr ρ(g).

We then see that two equivalent representations give the same character (we will record
this observation in Proposition 3.1.9 below).
Here is another useful formula for the trace. Since trace is invariant under conjugation,
and there is some choice B of conjugate matrix which is upper-triangular with the eigenvalues
(with multiplicity) on the diagonal, we also get that, if A has eigenvalues λ1 , . . . , λn with
multiplicity,
tr(A) = λ1 + · · · + λn . (3.1.3)
To extend this to abstract representations, let’s recall how to define the trace of a linear endomorphism:

Definition 3.1.4. If T : V → V is a linear endomorphism of a finite-dimensional vector


space V , then for any basis B of V , we set tr(T ) := tr[T ]B .

For this to make sense we have to verify that it is actually independent of the choice of
B: for any other basis B′, letting P = [I]B′,B be the change of basis matrix (see (2.4.11) and (2.4.12)), we have

tr[T ]B′ = tr([I]B,B′ [T ]B,B [I]B′,B ) = tr(P^{−1}[T ]B P ) = tr[T ]B .    (3.1.5)

As before we can verify that, if T : V → W and S : W → V are linear maps, then

tr(S ◦ T ) = tr(T ◦ S), (3.1.6)

by taking matrices and reducing to the aforementioned identity tr(CD) = tr(DC). Similarly
we deduce that, if T has eigenvalues λ1 , . . . , λn with multiplicity, then tr(T ) = λ1 + · · · + λn .
The latter definition is, in a sense, better, since it doesn’t really require using a basis at all.

Remark 3.1.7 (Non-examinable). It is also possible to define trace of endomorphisms in a


more direct way without using bases. For this recall that End(V ) = Hom(V, V ) ∼ = V∗⊗V,
since V is finite-dimensional (by Proposition 2.19.11.(ii)). It suffices therefore to define trace
on V ∗ ⊗ V . Here we can define it by tr(f ⊗ v) = f (v) for f ∈ V ∗ and v ∈ V , extended
linearly to all of V ∗ ⊗ V .

This motivates the following:

Definition 3.1.8. Given a finite-dimensional representation (V, ρV ) of a group G, the char-


acter is the function χ(V,ρV ) : G → C, given by χ(V,ρV ) (g) = tr ρV (g) for all g. To ease
notation this is also called χV or χρV .

Note that the definition of character only makes sense for finite-dimensional representa-
tions, so whenever we write χV we can assume V is finite-dimensional.

Proposition 3.1.9. Isomorphic abstract representations, or equivalent matrix representa-


tions, have the same character.

Proof. For matrices, if ρ, ρ0 : G → GLn (C) are equivalent, then ρ(g) and ρ0 (g) are conjugate
for all g (in fact, by the same invertible matrix P , but we won’t need this). So the result
follows from (3.1.1). For abstract representations, the result follows from the correspondence
between matrix and abstract representations (alternatively, if T : (V, ρV ) → (W, ρW ) is
an isomorphism, we can directly verify by (3.1.6) that tr ρW (g) = tr(T ◦ ρV (g) ◦ T −1 ) =
tr(T −1 ◦ T ◦ ρV (g)) = tr ρV (g).)
The first main result we will prove is that the converse is also true for finite groups:

Theorem 3.1.10. If G is finite, and (V, ρV ) and (W, ρW ) are two finite-dimensional representations, then (V, ρV ) ≅ (W, ρW ) if and only if χV = χW .
Shortly we will reduce this statement, by Maschke’s theorem, to the case of irreducible
representations, and then we will prove it using Proposition 2.15.12 together with an ex-
plicit formula for dim EndG (V ) in terms of the character χV . But first, we will explain the
fundamental beautiful properties of the character, some of which we will need.
We conclude with some examples.

Example 3.1.11. If (C, ρtriv ) is the trivial representation, then χC (g) = 1 for all g. That
is, χC = 1.

Example 3.1.12. Suppose that (V, ρV ) is a one-dimensional representation. Then ρV (g) =


χV (g)I for every g. This shows that, given V itself, the data of ρV and χV are equivalent.
If we take a matrix one-dimensional representation, ρ : G → C× , then we see that literally
ρ(g) = χρ (g). In particular χρ is multiplicative: χρ (gh) = χρ (g)χρ (h), and similarly χV is
multiplicative for every one-dimensional representation (V, ρV ).

Example 3.1.13. For any (V, ρV ), note that ρV (1) = I so χV (1) = dim V . Hence if V is at least two-dimensional, then χV is not multiplicative: 1^2 = 1 in G, whereas χV (1)^2 = (dim V )^2 equals χV (1) = dim V only if dim V ≤ 1.

Example 3.1.14. We return to Example 2.12.8: G = {±1} × {±1} = C2 × C2 , and ρ is given by ρ(a, b) = ( a 0 ; 0 b ), the diagonal matrix with entries a and b. Then χ(a, b) = a + b, so χ(1, 1) = 2, χ(−1, −1) = −2, and χ(1, −1) = 0 = χ(−1, 1). Here it is clear that χ is not multiplicative.

Example 3.1.15. Let G act on a finite set X and take the associated representation
(C[X], ρ). Using the basis X of C[X] (put in any ordering), [ρ(g)]X is a permutation matrix.
The trace of a permutation matrix Pσ is the number of diagonal entries equal to one, i.e., the number of basis vectors ei such that Pσ ei = ei , or equivalently the number of i such that σ(i) = i. Thus
here we have χρ (g) = tr(ρ(g)) = |{x ∈ X | g · x = x}|. This can also be written as |X g |, by
the following definition.

Definition 3.1.16. For g : X → X, the fixed point set X g is defined as X g := {x ∈ X |


g(x) = x}. If G × X → X is an action then we use the same notation for g ∈ G, via
g(x) := g · x.
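As a small illustration (not part of the notes) of Example 3.1.15, here is a Python sketch computing the character of the permutation representation of S3 on X = {0, 1, 2} by counting fixed points:

```python
from itertools import permutations

# chi(sigma) = |X^sigma|: the number of fixed points of sigma on {0, 1, 2}.
for sigma in permutations(range(3)):
    chi = sum(1 for x in range(3) if sigma[x] == x)
    print(sigma, "chi =", chi)
```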

Remark 3.1.17 (Non-examinable, philosophical). The theorem shows that passing from
representations to characters is essentially the same as identifying isomorphic representa-
tions, at least in the case of finite groups. Of course, as I have said several times, identifying
isomorphic objects is “wrong”: isomorphic objects are still different and “can have differ-
ent personalities.” However, as we will see, character theory is very powerful and produces
numbers with a great deal of structure. It is very useful not merely for classifying representa-
tions, but also for applications to combinatorics and number theory. So while we shouldn’t
just throw away the representations and replace them by characters, it is very useful to
understand what information is left, and how it is organized, after passing to characters.

Remark 3.1.18 (Non-examinable). The trace is not the only number you can get from a
matrix invariant under conjugation: one can also take the determinant. More generally, the
trace and the determinant appear as coefficients of the characteristic polynomials of A and
B, which are equal when A and B are conjugate. Alternatively, the eigenvalues of A and B
are equal (with multiplicity); this is the same information though, since the eigenvalues are
the roots of the characteristic polynomial.
Surprisingly, in the case of groups, these other numbers don’t provide any additional
information. That is, for a group G, if we know the traces χV (g) of all elements g ∈ G, we
actually can recover the characteristic polynomials of every element as well (this is obviously
false if we only know the trace of a single element). In the case of finite groups it is a
consequence of Theorem 3.1.10, but here is a direct proof for arbitrary G. Suppose that
(V, ρV ) is m-dimensional, and that the eigenvalues of ρV (g) are λ1 , . . . , λm with multiplicity.
Then the eigenvalues of ρV (g^k ) are λ1^k , . . . , λm^k with multiplicity. So χV (g^k ) = λ1^k + · · · + λm^k for all k. But knowing these sums for all k ≥ 0 actually determines the coefficients of the characteristic polynomial (x − λ1 ) · · · (x − λm ), which are known to be polynomials in these power sums λ1^k + · · · + λm^k (we only need 1 ≤ k ≤ m). So the character of ρV determines the
characteristic polynomial of ρV (g).
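As a non-examinable numerical sketch of this recovery, the snippet below uses Newton’s identities, k·e_k = ∑_{i=1}^{k} (−1)^{i−1} e_{k−i} p_i, to rebuild the characteristic polynomial from the power sums p_k; the eigenvalues are arbitrary illustrative roots of unity, and NumPy is assumed:

```python
import numpy as np

lams = np.exp(2j * np.pi * np.array([0, 1, 3]) / 5)    # illustrative eigenvalues
m = len(lams)
p = [np.sum(lams ** k) for k in range(m + 1)]          # p[k] = tr(rho(g^k))

e = [1.0 + 0j]                                         # elementary symmetric polys
for k in range(1, m + 1):
    e.append(sum((-1) ** (i - 1) * e[k - i] * p[i] for i in range(1, k + 1)) / k)

coeffs = [(-1) ** k * e[k] for k in range(m + 1)]      # x^m - e_1 x^{m-1} + ...
assert np.allclose(coeffs, np.poly(lams))              # numpy's poly from roots
print("characteristic polynomial recovered from the power sums")
```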

3.2 Characters are class functions and other basic properties


Here we observe that characters are not arbitrary functions, but have a highly constrained
structure:

Definition 3.2.1. A function f : G → C is a class function if f (g) = f (h^{−1}gh) for all g, h ∈ G.

In other words, class functions are functions that are constant on conjugacy classes.

Definition 3.2.2. Let Fun(G, C) denote the vector space of all complex-valued functions
on G, and let Funcl (G, C) ⊆ Fun(G, C) denote the vector space of all class functions.

Remark 3.2.3. Note that these spaces are also denoted C^G and C^G_{cl}, but I think that these notations could be confusing, since the superscript clashes with the superscript G used to denote G-invariants.

Exercise 3.2.4. Show that the class functions f : G → C form a vector space of
dimension equal to the number of conjugacy classes of G. In particular, every function is
a class function if and only if G is abelian (this is also clear from the definition). (Hint: a
basis is given in (3.5.7) below.)

Exercise 3.2.5. (i) Show that Fun(G, C) is a representation of G under the adjoint action
(g · f )(h) := f (g −1 hg). (ii) Show that using this action, Funcl (G, C) = Fun(G, C)G . (iii)
Show that there also two other actions on Fun(G, C), analogous to the right and left regular
representations, for which this isomorphism does not hold in general. (iv) In fact, show that
these are all related: these three representations on Fun(G, C) are the duals of the adjoint,
left, and right regular representations.

Proposition 3.2.6. Every character χV is a class function.

Proof. χV (h−1 gh) = tr(ρV (h−1 gh)) = tr(ρV (h)−1 ρV (g)ρV (h)) = tr(ρV (g)) = χV (g), using
(3.1.6).
To proceed, we need the following lemma:

Lemma 3.2.7. Suppose that T ∈ GL(V ) has finite order: T^m = I for some m ≥ 1. Then T is diagonalisable with eigenvalues which are m-th roots of unity, i.e., in some basis B of V , [T ]B is diagonal with entries in {ζ ∈ C | ζ^m = 1}.

Proof. This is linear algebra. Namely, T satisfies the polynomial x^m − 1 = 0, which has distinct roots (the m-th roots of unity). The minimal polynomial p(x) of T is a factor of x^m − 1, so also has roots of multiplicity one, which are m-th roots of unity. Therefore T is
diagonalisable to a diagonal matrix whose entries are m-th roots of unity. (In other words,
the Jordan normal form of T has diagonal entries which are m-th roots of unity, and nothing
above the diagonal since p(T ) = 0 and p has distinct roots.)

Remark 3.2.8 (Sketch in lecture). The lemma can also be proved using representation
theory. As in Exercise 2.4.22, we can define a representation ρ : Cm = {1, g, . . . , g m−1 } →
GL(V ) given by ρ(g k ) = T k . Then Maschke’s Theorem together with Corollary 2.12.6 show
that V = V1 ⊕ · · · ⊕ Vn for some one-dimensional representations V1 , . . . , Vn . Picking a basis
B := (v1 , . . . , vn ) such that vi ∈ Vi for all i, we get that [ρ(g)]B is diagonal and the entries
are obviously m-th roots of unity.

Proposition 3.2.9. Let (V, ρV ) be a finite-dimensional representation of G and g ∈ G an


element of finite order.

(i) χV (g^{−1}) = \overline{χV (g)}.

(ii) |χV (g)| ≤ dim V , with equality holding if and only if ρV (g) is a scalar matrix (= a scalar multiple of the identity).

(iii) χV (1) = dim V .

Proof. (i) Let the eigenvalues of ρV (g) be, with multiplicity, ζ1 , . . . , ζn . Then, by (3.1.3), χV (g) = ζ1 + · · · + ζn . Also, the eigenvalues of ρV (g^{−1}) = ρV (g)^{−1} are ζ1^{−1} , . . . , ζn^{−1}. Since each ζj is a root of unity, it follows that ζj^{−1} = \overline{ζj}: after all, ζj \overline{ζj} = |ζj |^2 = 1. Thus

χV (g^{−1}) = ζ1^{−1} + · · · + ζn^{−1} = \overline{ζ1} + · · · + \overline{ζn} = \overline{χV (g)}.    (3.2.10)

(ii) Applying the triangle inequality,

|χV (g)| = |ζ1 + · · · + ζn | ≤ |ζ1 | + · · · + |ζn | = n = dim V,    (3.2.11)

with equality holding if and only if all of the ζj are positive multiples of each other. But to have this, since |ζj | = 1, all the ζj have to be equal; since ρV (g) is diagonalisable (Lemma 3.2.7), it is then a scalar. (iii) This was already observed in Example 3.1.13.
Exercise 3.2.12. (i) Find the functions f : C2 = {1, g} → C that satisfy the conditions
of the above lemmas. (ii) Give an example to show that they cannot all be characters.
(iii) Compute all functions which actually are characters. (Hint: T = ρ(g) will have to be
diagonalisable with entries ±1, since T 2 = I hence (T − I)(T + I) = 0.)

3.3 Characters of direct sums and tensor products


Next we explain how characters turn direct sums and tensor products into ordinary sums
and products, and we will also discuss duals and hom spaces.
As an introduction, recall the formulas

dim(V ⊕ W ) = dim V + dim W, dim(V ⊗ W ) = dim V dim W. (3.3.1)

Since χV (1) = dim V , we obtain that, for g = 1, with V and W finite-dimensional,

χV ⊕W (g) = χV (g) + χW (g), χV ⊗W (g) = χV (g)χW (g). (3.3.2)

Below we will explain that in fact this holds for all g!


Remark 3.3.3 (Non-examinable). This feature says that direct sums and tensor products of
representations “categorify” ordinary sums and products of characters, in that they replace
numbers by vector spaces in such a way that they reverse the operation of taking character.
This was already true for vector spaces under ⊕ and ⊗ by (3.3.1).
As a matter of notation, if we have two functions f1 , f2 , then f1 f2 is the product function, f1 f2 (g) := f1 (g)f2 (g). Similarly \overline{f} is the function \overline{f}(g) = \overline{f (g)}.
Proposition 3.3.4. Let (V, ρV ) and (W, ρW ) be finite-dimensional representations of a group
G. Then:
(i) χV ⊕W = χV + χW ;

(ii) χV ⊗W = χV χW .

Next suppose that G is finite. Then also:

(iii) χV ∗ = \overline{χV };

(iv) χHom(V,W ) = \overline{χV } χW .

Here \overline{f}(g) := \overline{f (g)} denotes complex conjugation.


Proof. Throughout let B = {v1 , . . . , vm } and C = {w1 , . . . , wn } be bases of V and W ,
respectively. Fix g ∈ G and let M := [ρV (g)]B and N := [ρW (g)]C . Thus, χV (g) = tr M and
χW (g) = tr N . Then the computations reduce to finding the matrices for the representations
on the LHS and computing their traces.
(i) Let D := B ⊔ C be a basis of V ⊕ W (to be precise, for v ∈ B, the corresponding basis element in D is (v, 0) ∈ V ⊕ W ). Then we get the block-diagonal matrix for ρV ⊕W (g):

[ρV ⊕W (g)]D = ( M 0 ; 0 N ),    (3.3.5)

since ρV ⊕W (g) sends elements of B to linear combinations of B according to ρV (g), and


similarly for C and W . Then

χV ⊕W (g) = tr[ρV ⊕W (g)]D = tr M + tr N = χV (g) + χW (g). (3.3.6)

(ii) We take the basis D = {vi ⊗ wj } of V ⊗ W . By (2.19.33) we have [ρV ⊗W (g)]D = M ⊗ N , where (M ⊗ N )_{ij,kℓ} = Mik Njℓ for i, k ∈ {1, . . . , m} and j, ℓ ∈ {1, . . . , n}. Thus

χV ⊗W (g) = tr(M ⊗ N ) = ∑_{i,j} (M ⊗ N )_{ij,ij} = ∑_{i,j} Mii Njj = (∑_i Mii )(∑_j Njj ) = tr(M ) tr(N ) = χV (g)χW (g).    (3.3.7)

(iii) Here we take the dual basis D = {f1 , . . . , fm } of V ∗ , given by fi (vj ) = δij . By
Remark 2.19.3, we get

χV ∗ (g) = tr[ρV ∗ (g)]D = tr((M^{−1})^t ) = tr(M^{−1}) = tr[ρV (g^{−1})]B = χV (g^{−1}),    (3.3.8)

since M^{−1} = [ρV (g^{−1})]B . The result now follows from Proposition 3.2.9.(i).


(iv) By Proposition 2.19.11.(ii), Hom(V, W ) ∼= V ∗ ⊗W , so by Proposition 3.1.9, the result
follows from (ii) and (iii).

Example 3.3.9. Here’s an example to show why (iii) and (iv) require G to be finite. Let G = Z and a ∈ C×. Then we can take the one-dimensional representation ρ : G → C×, ρ(m) = a^m. Then as in Example 3.1.12, χρ = ρ, but χρ (−1) = a^{−1} ≠ \overline{a} = \overline{χρ (1)} in general. [Note that (iii) and (iv) hold in this case if and only if |a| = 1, which is weaker than the condition that a is a root of unity.]

Example 3.3.10. Let’s take G = Sn and let (C^n , ρperm ) be the permutation representation and (V, ρV ) be the reflection representation (V ⊂ C^n ). Let (C, ρtriv ) denote the trivial representation. Then C^n = V ⊕ C · (1, 1, . . . , 1), with C · (1, 1, . . . , 1) = {(a, a, . . . , a) | a ∈ C} isomorphic to the trivial representation. Hence χ_{C^n} = χV + χC . By Examples 3.1.11 and 3.1.15, we get

χV (σ) = |{1, . . . , n}^σ | − 1.    (3.3.11)

Example 3.3.12. Now let (V, ρV ) be an arbitrary representation of a group, and (Cθ , θ)
a one-dimensional representation, for θ : G → C× a homomorphism. Then taking the
tensor product, we get χCθ ⊗V = θ · χV . This equals χV if and only if χV (g) = 0 whenever
θ(g) ≠ 1, i.e., whenever g ∉ ker θ. In the case that G is finite, by Theorem 3.1.10 (still to be
proved), this gives an explicit criterion when the construction of Exercise 2.17.7.(i) produces
a nonisomorphic representation to the first one.

Example 3.3.13. As a special case of the preceding, by Theorem 3.1.10, for (C− , ρsign ) the
sign representation of Sn and (V, ρV ) the reflection representation, we see that C− ⊗ V ≅ V if and only if |{1, . . . , n}^σ | = 1 whenever σ is an odd permutation. We see that this is
true if and only if n = 3. (Even for n = 2 it is not true, and indeed in this case the
reflection and sign representations are isomorphic, and their tensor product is the trivial
representation which is not isomorphic to the reflection representation!) Thus for n ≥ 4,
the reflection representation and the reflection tensor sign are two nonisomorphic irreducible
representations of dimension n − 1.

Example 3.3.14. Let (C[G], ρC[G] ) be the left regular representation of a finite group G, ρC[G] (g)(∑_{h∈G} ah h) = ∑_{h∈G} ah gh. This representation is induced by the group action G × G → G, so χC[G] (g) = |G^g | where G^g := {h ∈ G | gh = h}. But applying cancellation, gh = h if and only if g = 1. Therefore G^g = ∅ for g ≠ 1 and G^1 = G. We get

χC[G] (g) = |G| if g = 1, and χC[G] (g) = 0 if g ≠ 1.    (3.3.15)
Example 3.3.16. The same applies to the right regular representation, g · (∑_{h∈G} ah h) = ∑_{h∈G} ah hg^{−1}. So these two have the same character; so by Theorem 3.1.10 we see that the left and right regular representations are isomorphic (we can also write an explicit isomorphism).

Example 3.3.17. On the other hand, the adjoint representation ρad (g)(h) = ghg −1 does
not have the same character: χρad (g) = |ZG (g)| where ZG (g) := {h ∈ G | gh = hg}, which
is a subgroup of G called the centraliser of g. Note that, since it is a subgroup, |ZG (g)| ≥ 1
(i.e., 1 ∈ ZG (g) for all g), so χρad ≠ χC[G] unless G = {1}. So the adjoint representation is
not isomorphic to the left and right regular representations.

3.4 Inner product of functions and characters
Suppose f1 , f2 : G → C are functions and G is a finite group. Then we consider the (complex) inner product

⟨f1 , f2 ⟩ := |G|^{−1} ∑_{g∈G} f1 (g) \overline{f2 (g)}.    (3.4.1)
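As a minimal illustrative sketch (not part of the notes), (3.4.1) is straightforward to implement when the group is given as a Python list of its elements and functions as callables:

```python
# The inner product (3.4.1) on functions G -> C.
def inner_product(f1, f2, G):
    return sum(f1(g) * complex(f2(g)).conjugate() for g in G) / len(G)

# Example: the trivial and sign characters of C2 = {0, 1} are orthonormal.
chi_triv = lambda k: 1
chi_sign = lambda k: (-1) ** k
G = [0, 1]
assert inner_product(chi_triv, chi_triv, G) == 1
assert inner_product(chi_sign, chi_sign, G) == 1
assert inner_product(chi_triv, chi_sign, G) == 0
```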

Remark 3.4.2 (Non-examinable). This makes sense if G is replaced by any finite set. There is also an analogue for functions on more general spaces, where we replace the finite sum by an infinite sum or even by an integral. For example, there is the complex vector space L^2(R) of square-integrable functions on R (functions f : R → C such that ∫_R |f (x)|^2 dx < ∞), with the inner product ⟨f, g⟩ = ∫_R f (x)\overline{g(x)} dx.
Now it is easy to verify that h−, −i forms an inner product:
Definition 3.4.3. Let V be a complex vector space. A (Hermitian) inner product is a pairing ⟨−, −⟩ : V × V → C satisfying, for all λ ∈ C and vectors u, v, w ∈ V :

(i) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ and similarly ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩;

(ii) ⟨λu, v⟩ = λ⟨u, v⟩ and ⟨u, λv⟩ = \overline{λ}⟨u, v⟩;

(iii) ⟨u, v⟩ = \overline{⟨v, u⟩};

(iv) ⟨v, v⟩ ≥ 0, with equality if and only if v = 0.

Properties (i) and (ii) are called sesquilinearity. Property (iii) is called conjugate-symmetry. And property (iv) is called positive-definiteness.
Lemma 3.4.4. The pairing h−, −i on Fun(G, C) is an inner product.
Proof. Note that (i), (ii), and (iii) are immediate from the definition (3.4.1). For (iv), observe that ⟨f, f ⟩ = |G|^{−1} ∑_{g∈G} |f (g)|^2, which implies the statement.

This is essentially just the standard inner product ⟨v, w⟩ = v · \overline{w} on the vector space C^G, with components indexed by G instead of by {1, . . . , n}.
Recall from linear algebra the following:
Definition 3.4.5. Let V be a vector space with an inner product h−, −i. Then v, w ∈ V
are called orthogonal, written v ⊥ w, if hv, wi = 0. A set v1 , . . . , vn is orthonormal if
⟨vi , vj ⟩ = δij , i.e., vi ⊥ vj for i ≠ j and ⟨vi , vi ⟩ = 1 for all i.
We have the basic fact:
Lemma 3.4.6. If (v1 , . . . , vn ) is orthonormal, then it is linearly independent. Moreover, if
v = a1 v1 + · · · + an vn , then ai = hv, vi i.
Proof. The second statement is immediate from the linearity of h−, −i in the first component.
Then v = 0 implies ai = 0 for all i, hence the first statement also follows.

3.5 Dimension of homomorphisms via characters
We are now ready to prove a fundamental result:
Theorem 3.5.1. Let (V, ρV ) and (W, ρW ) be finite-dimensional representations of a finite
group. Then:
⟨χV , χW ⟩ = dim HomG (V, W ).    (3.5.2)

3.5.1 Applications of Theorem 3.5.1


Before we prove the theorem, let’s look at some motivating consequences:
Corollary 3.5.3. Let G be a finite group and V1 , . . . , Vm a full set of irreducible represen-
tations.
(i) Let V and W be irreducible representations. Then
⟨χV , χW ⟩ = 1 if V ≅ W , and 0 otherwise.    (3.5.4)

(ii) The characters χVi form an orthonormal basis of Funcl (G, C).
(iii) Let V be a finite-dimensional representation. Then V ≅ ⊕_{i=1}^m Vi^{⟨χVi , χV ⟩}. Moreover, χV = ∑_{i=1}^m ⟨χVi , χV ⟩ χVi .

(iv) Under the same assumptions as in (iii),


⟨χV , χV ⟩ = ∑_i ri^2 = ∑_i ⟨χVi , χV ⟩^2 .    (3.5.5)

(v) A finite-dimensional representation V is irreducible if and only if hχV , χV i = 1.


Proof. (i) This is (2.14.2) together with Theorem 3.5.1.
(ii) The fact that the χVi form an orthonormal set is an immediate consequence of (i).
By Lemma 3.4.6, the characters are linearly independent. By Theorem 2.18.2, m equals the
number of conjugacy classes of G. By Exercise 3.2.4, this is the dimension of the vector
space of class functions (since there is another basis δC indexed by the conjugacy classes: see
also (3.5.7)). Thus this must form a basis.
(iii) The first statement is an immediate consequence of Proposition 2.15.2 together with
Theorem 3.5.1. The second statement then follows from Proposition 3.3.4.(i).
(iv) This is an immediate consequence of (ii) and (iii). Alternatively this is Proposition
2.15.12 together with Theorem 3.5.1.
(v) This is an immediate consequence of (iv).
Note that part (iii) immediately implies Theorem 3.1.10.
This motivates the following shorthand:

Definition 3.5.6. An irreducible character is a character of an irreducible representation.
Note that, in Exercise 3.2.4, we used an obvious basis of the vector space of class functions:
for each conjugacy class C ⊆ G, we can consider the function δC given by
δC (g) = 1 if g ∈ C, and 0 otherwise.    (3.5.7)

So the content of character theory is that there are two natural bases of the vector space of class functions: the Kronecker delta functions of conjugacy classes (3.5.7), and the irreducible characters (Corollary 3.5.3.(ii)).
Remark 3.5.8. The basis of Kronecker delta functions is not orthonormal: it is still true that ⟨δC , δC′ ⟩ = 0 for C ≠ C′ (i.e., δC ⊥ δC′ ), but ⟨δC , δC ⟩ = |G|^{−1} ∑_{g∈C} 1 = |C|/|G|. So an orthonormal basis is given by the functions √(|G|/|C|) δC .

Remark 3.5.9. There is an interpretation of the normalisation coefficients in terms of the orbit-stabiliser theorem: note that C is an orbit of G under the conjugation action G × G → G, (g, h) ↦ ghg^{−1}. Let h ∈ C; then C = G · h under this action. Hence |G|/|C| is the size of the stabiliser of h. But the stabiliser is just the centraliser, ZG (h) = {g ∈ G | gh = hg}. Thus the orthonormal basis consists of the functions √|ZG (h)| δC (for h ∈ C).
Aside from the normalisation issue, the two bases are still quite different:
Example 3.5.10. Let G = Cn = {1, g, . . . , g n−1 } be a cyclic group. Then Funcl (G, C) =
Fun(G, C), with dimension equal to |G|. The basis δg , g ∈ G are just the functions which
are one on a single element g ∈ G and zero elsewhere (the “Kronecker delta functions”). On
the other hand, the irreducible representations are of the form (Cζ , ρζ ) where Cζ := C and
ρζ (g k ) = ζ k ∈ C× , for ζ n = 1. This is also the same as the basis χCζ = ρζ of characters since
taking the character of a one-dimensional matrix representation doesn’t do anything.
If we rather use the isomorphism G ≅ Z/nZ = {0, . . . , n − 1}, then for ζ = e^{iθ} (for θ = 2πℓ/n for some ℓ ∈ Z), we get χCζ (k) = e^{iθk}, which is a basis of exponential functions. The change of basis matrix between the Kronecker delta functions and these exponential functions is the discrete Fourier transform, computed efficiently by the famous “fast Fourier transform”.

The formula for this transform, f = (f (k))_{0≤k≤n−1} ↦ (rζ )_{ζ^n =1} with f = ∑_ζ rζ χCζ , is given by Corollary 3.5.3.(iii):

rζ = ⟨f, χCζ ⟩ = (1/n) ∑_{k=0}^{n−1} f (k)ζ^{−k} .    (3.5.11)

Example 3.5.12. If G is a finite abelian group, almost the same thing as in the preceding
example holds, except now we have to take a product of cyclic groups and exponential
functions on a product of groups Z/ni Z. The change of basis is a product of discrete Fourier
transforms.

With the preceding example(s) in mind, the general change-of-basis matrix between Kro-
necker delta functions and irreducible characters can be thought of as a nonabelian Fourier
transform.
Example 3.5.13. Let’s consider the reflection representation (V, ρV ) of Sn . Since this is
irreducible, the formula (3.3.11) yields the identity
n! = n! ⟨χV , χV ⟩ = ∑_{σ∈Sn} (|{1, . . . , n}^σ | − 1)^2 .    (3.5.14)

This is a purely combinatorial identity but does not seem very obvious from combinatorics
alone! For example, when n = 3 we get 6 = (3 − 1)^2 + 3 · (1 − 1)^2 + 2 · (0 − 1)^2 , considering first the identity, then the two-cycles, then the three-cycles. When n = 4 we get

24 = (4 − 1)^2 + 6 · (2 − 1)^2 + 8 · (1 − 1)^2 + 3 · (0 − 1)^2 + 6 · (0 − 1)^2 ,

considering the identity, the two-cycles, the three-cycles, the products of two disjoint two-
cycles, and finally the four-cycles.
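The identity (3.5.14) is also easy to confirm by brute force for small n; here is a short sketch (not part of the notes), iterating over all permutations:

```python
import math
from itertools import permutations

# Check (3.5.14): sum over S_n of (#fixed points - 1)^2 equals n!.
for n in (3, 4, 5):
    total = sum((sum(1 for i in range(n) if s[i] == i) - 1) ** 2
                for s in permutations(range(n)))
    assert total == math.factorial(n)
print("identity (3.5.14) holds for n = 3, 4, 5")
```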
Example 3.5.15. Let’s return to G = Cn = {1, g, . . . , g n−1 } in the notation of Example
3.5.10. Corollary 3.5.3.(i) says that hχCζ , χCξ i = δζ,ξ . We can verify this explicitly:
⟨χCζ , χCξ ⟩ = (1/n) ∑_{ℓ=0}^{n−1} ζ^ℓ ξ^{−ℓ} = (1/n) ∑_{ℓ=0}^{n−1} (ζ/ξ)^ℓ ,    (3.5.16)

which is indeed δζ,ξ , since ζ/ξ is an n-th root of unity (and the sum of the powers of a nontrivial root of unity is zero).


Example 3.5.17. Let us keep the notation from the previous example. By Exercise 2.4.22,
an m-dimensional representation (C^m , ρ) is the same as a matrix T ∈ GLm (C) with T^n = I, by the correspondence ρ(g^k ) = T^k . Let us decompose this as

(C^m , ρ) ≅ ⊕_{ζ^n =1} (C, ρζ )^{rζ} .    (3.5.18)

By Corollary 3.5.3.(iii) (cf. (3.5.11)),


rζ = \overline{rζ} = ⟨χρ , χCζ ⟩ = n^{−1} ∑_{ℓ=0}^{n−1} tr(T^ℓ )ζ^{−ℓ} ,    (3.5.19)

which is applying the discrete Fourier transform to χρ .


Of course, we can also compute the rζ with linear algebra: it is the multiplicity of ζ
as an eigenvalue of T , i.e., the multiplicity of ζ as a root of the characteristic polynomial
det(xI − T ) of T . This is equivalent to the above answer though: since T is diagonalisable
with rξ entries of ξ on the diagonal for all ξ with ξ^n = 1, we have

tr(T^ℓ ) = ∑_{ξ^n =1} rξ ξ^ℓ .    (3.5.20)

Therefore (3.5.19) becomes
rζ = n^{−1} ∑_{ℓ=0}^{n−1} ζ^{−ℓ} ∑_{ξ^n =1} rξ ξ^ℓ = n^{−1} ∑_{ξ^n =1} rξ ∑_{ℓ=0}^{n−1} (ξ/ζ)^ℓ ,    (3.5.21)

which is again true since ξ/ζ is an n-th root of unity. (This is just reproving that χCζ and χCξ are orthogonal for ζ ≠ ξ, as in Example 3.5.15.)
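Here is a non-examinable numerical sketch of (3.5.19) in action: starting from a diagonalisable T with T^n = I and chosen multiplicities, the discrete Fourier transform of the traces recovers those multiplicities. The values of n and the multiplicities are illustrative choices, and NumPy is assumed:

```python
import numpy as np

n = 6
mult = {0: 2, 1: 0, 2: 1, 3: 0, 4: 0, 5: 1}    # r_{xi^a} for xi = e^{2 pi i/n}
eigs = np.concatenate(
    [[np.exp(2j * np.pi * a / n)] * m for a, m in mult.items() if m])
T = np.diag(eigs)                               # a matrix with T^n = I

for a in range(n):
    zeta = np.exp(2j * np.pi * a / n)
    r = sum(np.trace(np.linalg.matrix_power(T, l)) * zeta ** (-l)
            for l in range(n)) / n              # formula (3.5.19)
    assert abs(r - mult[a]) < 1e-9
print("multiplicities recovered via (3.5.19)")
```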
Remark 3.5.22 (Non-examinable). Actually, (3.5.19) and (3.5.20) are almost the same
transformations: this is saying (which we just proved) that the discrete Fourier transform
is almost inverse to itself, up to negating the coefficient in the sum, as well as the extra
normalisation of n^{−1}. To fix the second issue we can consider the normalised discrete Fourier transform, call it F , obtained by replacing the factor n^{−1} in (3.5.11) by n^{−1/2}; then F^2(f )(x) = f (n − x), i.e., F^2 reverses the order of a sequence. Thus F^4 is the identity. (This is actually the same
as the normalisation of Remark 3.5.8: it is what is needed to make F unitary.)

3.5.2 Proof of Theorem 3.5.1


The main idea of the proof is to count the dimension of HomG (V, W ) = Hom(V, W )G via
the projection Hom(V, W ) → Hom(V, W )G , onto G-invariants. The count is done using the
following linear algebra result:
Lemma 3.5.23. Let S : V → V be a linear projection onto W := im(S). Then dim W =
tr S.
Proof. By Lemma 2.8.16 we can write V = W ⊕ U for U := ker(S). Pick bases w1 , . . . , wm
and u1 , . . . , un of W and U , so that B := (w1 , . . . , wm , u1 , . . . , un ) is a basis of V . Then in
this basis we have the block matrix

[S]B = ( Im 0 ; 0 0 ),    (3.5.24)
with tr[S]B = m. Hence tr S = m = dim W .
In order to make the proof of the theorem more transparent, it is worth proving first the
special case where V = Ctriv is the trivial representation (this is not needed: see Remark
3.5.33 for a direct and simple proof). In this case HomG (Ctriv , W ) becomes just W G . Let us
record this statement, which is an analogue of (2.16.1), replacing the regular by the trivial
representation:
Lemma 3.5.25. For every representation (V, ρV ) of a group G, we have an isomorphism of
vector spaces
HomG (Ctriv , V ) ≅ V^G ,    ϕ ↦ ϕ(1).    (3.5.26)
Proof. First, Hom(Ctriv , V ) ∼= V , under the linear map ϕ 7→ ϕ(1) (since ϕ(z) = zϕ(1)
for all z ∈ C). This map is compatible with the G action since G acts trivially on Ctriv .
Applying Exercise 2.14.9, we get HomG (Ctriv , V ) = Hom(Ctriv , V )G = V G . (Alternatively,
ϕ : Ctriv → V is G-linear if and only if ϕ(1) is G-invariant, since 1 ∈ Ctriv is G-invariant.)

Proof of Theorem 3.5.1. The main idea of the proof is contained in the case V = Ctriv ; let
us assume this. By Lemma 3.5.25, we need to show
⟨χCtriv , χW ⟩ = dim W^G .    (3.5.27)
Recall the G-linear projection S : W ↠ W^G from (2.14.11), given by S = |G|^{−1} ∑_{g∈G} ρW (g). By Lemma 3.5.23 we have tr S = dim W^G. We can now complete the proof with a quick computation:

⟨χCtriv , χW ⟩ = |G|^{−1} ∑_{g∈G} \overline{χW (g)} = |G|^{−1} ∑_{g∈G} \overline{tr(ρW (g))} = \overline{tr(|G|^{−1} ∑_{g∈G} ρW (g))} = \overline{tr S} = \overline{dim W^G} = dim W^G .    (3.5.28)

Let us deduce the general case from this (see Remark 3.5.33 for a direct proof). First note
that
⟨f1 , f2 ⟩ = ⟨1, \overline{f1} f2 ⟩,    (3.5.29)
where 1 is the constant function, 1(g) = 1 for all g ∈ G. Now by Example 3.1.11 and
Proposition 3.3.4.(iv), setting f1 = χV and f2 = χW , this becomes
hχV , χW i = hχCtriv , χHom(V,W ) i. (3.5.30)
On the other hand, by Lemma 3.5.25, we have
HomG (Ctriv , Hom(V, W )) = Hom(V, W )G = HomG (V, W ), (3.5.31)
using Exercise 2.14.9 for the last equality.
Therefore hχV , χW i = dim HomG (V, W ) follows from
⟨χCtriv , χHom(V,W ) ⟩ = dim HomG (Ctriv , Hom(V, W )),    (3.5.32)
which is the case of the theorem with V = Ctriv and W set to Hom(V, W ).
Remark 3.5.33. The reduction to V = Ctriv was not required; we did this to explain what
is going on (in the spirit of the explanation of part of the proof of Maschke’s theorem given
by Proposition 2.14.10 and the surrounding text). Without making this reduction the proof
is actually shorter: the point is that hχV , χW i = tr S where now
S(ϕ) = |G|^{−1} ∑_{g∈G} ρW (g) ◦ ϕ ◦ ρV (g)^{−1}    (3.5.34)

is the projection operator Hom(V, W ) → HomG (V, W ) = Hom(V, W )G used in the proof
of Maschke’s theorem (see also the discussion after Proposition 2.14.10). Then on the
one hand, Lemma 3.5.23 implies that tr S = dim HomG (V, W ). On the other hand, S = |G|^{−1} ∑_{g∈G} ρHom(V,W ) (g), which implies that

tr S = |G|^{−1} ∑_{g∈G} χHom(V,W ) (g) = ⟨χV , χW ⟩,    (3.5.35)

applying Proposition 3.3.4.(iv) for the second equality.

3.6 Character tables
3.6.1 Definition and examples
The idea of a character table is very simple: list the irreducible characters. For finite groups,
this determines all of the characters by Maschke’s theorem.
Let G be a finite group and let V1 , . . . , Vm be a full set of irreducible representations
(Definition 2.18.1). Recall that the number m also equals the number of conjugacy classes
(Theorem 2.18.2): label these C1 , . . . , Cm ⊆ G, and let gi ∈ Ci be representatives.

Definition 3.6.1. The character table of G is the table whose i, j-th entry is χVi (gj ).

Note that the definition does not depend on the choice of Vi up to isomorphism (by
Proposition 3.1.9) nor on the choice of representatives gj (by Proposition 3.2.6). However, it
does depend on the choice of ordering of the Vi and of the Cj . So really the table itself is only
well-defined up to reordering the rows and the columns, although in practice we will always
indicate with the table the representations corresponding to each row and the conjugacy
classes corresponding to each column.
Another way to interpret the character table is that it gives the (transpose of the) change-
of-basis matrix between the basis of Kronecker delta functions δCi (see (3.5.7)) and the basis
of irreducible characters χVi (which up to reordering does not depend on the choice of the
irreducible representations Vi ).

Example 3.6.2. The character table for Cn is given as follows. Let ξ := e^{2πi/n} (one could equally use any primitive n-th root of unity). For each row we list ζ = 1, ξ, ξ^2, . . ., then χCζ (h) for the group element h = 1, g, g^2, . . . , g^{n−1} listed above each column.

                h = 1    h = g       h = g^2        ...    h = g^{n−1}
χC1 (h)           1        1            1           ...         1
χCξ (h)           1        ξ           ξ^2          ...      ξ^{n−1}
χCξ^2 (h)         1       ξ^2          ξ^4          ...     ξ^{2(n−1)}
  ...            ...      ...          ...          ...        ...
χCξ^{n−1} (h)     1     ξ^{n−1}     ξ^{2(n−1)}      ...    ξ^{(n−1)^2}

The n×n matrix here is also the Vandermonde matrix for the n-th roots of unity (1, ξ, ξ 2 , . . . , ξ n−1 ):
see, e.g., https://en.wikipedia.org/wiki/Vandermonde_matrix. It represents the dis-
crete Fourier transform, and as you will see in Exercise 3.6.3, it is essentially unitary up to
normalisation.

Exercise 3.6.3. Let A be the matrix appearing in Example 3.6.2, i.e., Akl = ξ^{kl} for ξ a primitive n-th root of unity. Verify directly that A \overline{A}^t = nI, i.e., \overline{A}^t = nA^{−1}. Deduce that the matrix U := n^{−1/2} A is unitary: \overline{U}^t = U^{−1}. (Note that this is a consequence of Corollary 3.5.3.(i), but we will use this argument to generalise to all finite groups soon.)
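The computation in this exercise is also easy to confirm numerically; here is a sketch (not part of the notes) for the illustrative choice n = 5, with NumPy assumed:

```python
import numpy as np

n = 5
xi = np.exp(2j * np.pi / n)
A = np.array([[xi ** (k * l) for l in range(n)] for k in range(n)])
assert np.allclose(A @ A.conj().T, n * np.eye(n))   # A conj(A)^t = nI
U = A / np.sqrt(n)
assert np.allclose(U @ U.conj().T, np.eye(n))       # U is unitary
print("the rescaled character table of C5 is unitary")
```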

Remark 3.6.4. Note that the matrix above is symmetric, but this is an artifact of the
ordering we chose of the rows and the columns. If we had picked a different ordering of the
rows (irreducible characters) without correspondingly changing the ordering of the columns,
then the matrix would not have been symmetric. For a non-abelian group, the character
table cannot be symmetric: see Exercise 3.6.10.

Example 3.6.5. It is worth giving an example of a non-cyclic abelian group. Let G =


C2 ×C2 = {(1, 1), (a, 1), (1, b), (a, b)}. Then by Exercise 2.12.7 the irreducible representations
are of the form ρζ,ξ for ζ, ξ ∈ {±1}, defined by ρζ,ξ (a^p , b^q ) = ζ^p ξ^q for p, q ∈ {0, 1}. (In other
words, ρζ,ξ = ρζ ⊗ ρξ .) The character table is then:

h = (1, 1) h = (a, 1) h = (1, b) h = (a, b)


χ1,1 (h) 1 1 1 1
χ−1,1 (h) 1 −1 1 −1
χ1,−1 (h) 1 1 −1 −1
χ−1,−1 (h) 1 −1 −1 1

In fact, this 4 × 4 matrix is just the Kronecker product of the two 2 × 2 matrices for the
factors C2 . See Exercise 3.6.9.

Example 3.6.6. We compute the character table for S3 . The irreducible representations,
by Example 2.17.5, are the trivial, sign, and reflection representations. The character of the
one-dimensional representations (the trivial and sign representations) are just the values of
the representation (as one-by-one matrices). For the reflection representation the character
is χV (σ) = |{1, 2, 3}σ | − 1 by Example 3.3.10. Let C = Ctriv be the trivial representation
and C− be the sign representation. Let V denote the reflection representation. We get:

h = 1 h = (12) h = (123)
χC (h) 1 1 1
χC− (h) 1 −1 1
χV (h) 2 0 −1

Note that, as required by Corollary 3.5.3.(i), the rows are orthogonal by the inner product
⟨(a, b, c), (a′, b′, c′)⟩ = (1/6)(a\overline{a′} + 3b\overline{b′} + 2c\overline{c′}), where 1/6 = |G|^{−1} and 1, 3, 2 are the sizes of the
conjugacy classes of 1, (12), and (123), respectively. (In fact, this relation allows you to
deduce any one of the rows from the other two.)
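As a small illustrative check (not part of the notes; NumPy assumed), this weighted orthogonality of the S3 rows is immediate to verify from the table above:

```python
import numpy as np

# Row orthogonality (Corollary 3.5.3.(i)) for the S3 table. Rows: trivial,
# sign, reflection; columns: the classes of 1, (12), (123), of sizes 1, 3, 2.
table = np.array([[1, 1, 1], [1, -1, 1], [2, 0, -1]], dtype=complex)
sizes = np.array([1, 3, 2])
gram = (table * sizes) @ table.conj().T / 6   # <chi_i, chi_j> for all i, j
assert np.allclose(gram, np.eye(3))
print("rows of the S3 character table are orthonormal")
```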

Exercise 3.6.7. Let G and H be finite groups and V and W representations of G and H.
Show, by the proof of Proposition 3.3.4.(i), that

χV ⊠W (g, h) = χV (g)χW (h).    (3.6.8)

This specialises to Proposition 3.3.4.(ii) when we recall that ρV ⊗W (g) = ρV ⊠W (g, g) (2.20.4).

Exercise 3.6.9. Maintain the notation of Exercise 3.6.7. Show that, choosing the representations and conjugacy classes appropriately, the character table of G × H is the Kronecker product (2.19.34) of the character tables for G and H. (Hints: if (Vi ), (Wj ) are full sets of irreducible representations for G and H, use Proposition 2.20.5 to see that (Vi ⊠ Wj ) is a full set of irreducible representations for G × H. Then apply Exercise 3.6.7. Finally, for (Ci ) and (Dj ) the conjugacy classes of G and H, the sets (Ci × Dj ) are the conjugacy classes of G × H.)

Exercise 3.6.10. In this exercise we will show that, if G is non-abelian, then there is no
ordering of the rows and columns to get a symmetric matrix.

(i) If G is nonabelian, prove that there is an irreducible representation of some dimension


n > 1. (Hint: the number of one-dimensional representations is |Gab | < |G|).

(ii) Show that the only irreducible character with all positive values is the trivial character.
(Hint: use the orthogonality, Corollary 3.5.3.(i), with V = Ctriv ).

(iii) Now prove that there is no ordering of the rows and columns to give a symmetric
character table. (Hint: without loss of generality, put the trivial representation in the
first row; show using (ii) that {1} is the first column but by (i) that the result is still
not symmetric.)

3.6.2 Row and column orthogonality; unitarity


Let’s maintain the notation of the previous subsection. Observe that the orthogonality
relation ⟨χVi , χVj ⟩ = δij implies that the rows of the character table must be orthogonal, in the following sense:

δij = ⟨χVi , χVj ⟩ = |G|^{−1} ∑_{g∈G} χVi (g)\overline{χVj (g)} = |G|^{−1} ∑_{k=1}^m |Ck | χVi (gk )\overline{χVj (gk )},    (3.6.11)

which just means we need to take into account the sizes of the conjugacy classes. When G is
abelian, then |Ck | = 1 for all k, and we get that hχVi , χVj i is just |G|−1 times the dot product
of the i-th row with the complex conjugate of the j-th row, which explains Exercise 3.6.3.
Let A be the matrix given by the character table: Aij = χVi (gj ). Rewriting (3.6.11), we
have

|G|^{−1} ∑_{k=1}^m |Ck | Aik \overline{Ajk} = |G|^{−1} ∑_{k=1}^m |Ck | χVi (gk )\overline{χVj (gk )} = δij .    (3.6.12)

To turn this into a genuine dot product with complex conjugation, define the renormalised
matrix U by:

Uij := √(|Cj |/|G|) Aij .    (3.6.13)

Then we obtain

∑_{k=1}^m Uik \overline{Ujk} = δij .    (3.6.14)

In other words, (3.6.14) states that U \overline{U}^t = I, i.e., U is invertible with U^{−1} = \overline{U}^t . So U is a


unitary matrix :
Definition 3.6.15. A unitary matrix is U ∈ GLn (C) such that U^{−1} = \overline{U}^t .
Lemma 3.6.16. The following are equivalent: (a) U is unitary; (b) the rows of U are orthonormal with respect to ⟨v, w⟩ = v · \overline{w} (3.6.14); (c) the columns are orthonormal with respect to this inner product, i.e.,

∑_{k=1}^m \overline{Uki} Ukj = δij .    (3.6.17)

Proof. Part (b) says that U \overline{U}^t = I and part (c) says that \overline{U}^t U = I. So the equivalence is
the result from linear algebra that AB = I if and only if BA = I, for square matrices A and
B of the same size (which is implicit in the definition of the inverse matrix).
From (3.6.17) we deduce the column orthogonality relations:
δij = ∑_{k=1}^m \overline{Uki} Ukj = (√(|Ci ||Cj |)/|G|) ∑_{k=1}^m \overline{Aki} Akj = (√(|Ci ||Cj |)/|G|) ∑_{k=1}^m \overline{χVk (gi )} χVk (gj ).    (3.6.18)

Example 3.6.19. As a special case, let gi = gj = 1. Then we obtain


1 = |G|^{−1} ∑_{k=1}^m \overline{χVk (1)} χVk (1) = |G|^{−1} ∑_{k=1}^m (dim Vk )^2 ,    (3.6.20)

i.e., the sum of squares formula (Corollary 2.16.4), proved in yet another way!
Example 3.6.21. More generally, letting gi = gj , we obtain
|G|/|Ci | = ∑_{k=1}^m |χVk (gi )|^2 .    (3.6.22)

This allows one to obtain |Ci |, the size of the conjugacy class, directly from the character
table. Note that the LHS can also be interpreted as the size of the centraliser, |ZG (gi )|; see also Remark 3.5.9. In particular, gi is central if and only if |Ci | = 1, i.e., ∑_{k=1}^m |χVk (gi )|^2 = |G|.
By Proposition 3.2.9.(ii), this is equivalent to the statement that |χVk (gi )| = dim Vk for all k,
or that ρVk (gi ) is a scalar matrix for all k. This can also be proved without using character
theory: see Remark 2.12.4.
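As an illustrative sketch (not part of the notes; NumPy assumed), (3.6.22) lets one read off the centraliser and class sizes directly from a character table; here the S3 table from Example 3.6.6 is used:

```python
import numpy as np

table = np.array([[1, 1, 1], [1, -1, 1], [2, 0, -1]], dtype=complex)
centralisers = np.sum(np.abs(table) ** 2, axis=0).real  # sum_k |chi_k(g_i)|^2
print("|Z_G(g_i)| per class:", centralisers)            # [6, 2, 3]
class_sizes = 6 / centralisers                          # |C_i| = |G|/|Z_G(g_i)|
assert list(class_sizes) == [1.0, 3.0, 2.0]
```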

Remark 3.6.23. Using the row and column orthogonality, we can fill in a single missing row
or column. Indeed, in a unitary matrix, the condition that the last column vn is orthogonal
to the span of the preceding columns v1 , . . . , vn−1 shows that vn ∈ Span(v1 , . . . , vn−1 )⊥ which
is one-dimensional; this fixes vn up to scaling (by a number of absolute value one). In the
case of the character table we know the first entry of the column is one (if we put the
trivial representation first) and this fixes the scaling. Similarly for a missing row, the same
argument applies except now the first entry is equal to the dimension, which we can compute
from the sum of squares formula.
Concretely, the way to compute v_n up to scaling, for a unitary matrix, is to take any
vector v not in the span of v_1, . . . , v_{n−1}; then by Lemma 3.4.6, v = ∑_{i=1}^{n} a_i v_i with a_i = ⟨v, v_i⟩.
Then v_n is a scalar multiple of v − ∑_{i=1}^{n−1} a_i v_i. The same technique works for a missing row.
Applying this to the unitary matrix U obtained from the character table A by (3.6.13), we
can get a missing row or column up to scaling, and then we simply rescale as indicated in
the preceding paragraph.
In the case of a missing row, we can also use an easier technique: we know that

χ_{C[G]} = |G| δ_1 = ∑_{i=1}^{m} dim V_i · χ_{V_i},    (3.6.24)

so if we know all of the χ_{V_i} except one, we can get the remaining one (first computing the
remaining dimension by the sum of squares formula). Concretely, this yields, for g_k ≠ 1:

χ_{V_i}(g_k) = −(dim V_i)^{-1} ∑_{j≠i} dim V_j · χ_{V_j}(g_k).    (3.6.25)
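As a quick illustration (a sketch in Python; the inputs are the two known one-dimensional rows of the S3 character table, from which we recover the two-dimensional row):

    import numpy as np

    # Known rows of the S3 table (trivial and sign) on classes [1], [(12)], [(123)]
    known = np.array([[1, 1, 1],
                      [1, -1, 1]], dtype=float)
    dims = known[:, 0]
    order = 6

    # The sum of squares formula gives the missing dimension: 6 - 1 - 1 = 4, so dim = 2
    d = int(round((order - (dims ** 2).sum()) ** 0.5))

    # (3.6.25): chi(g_k) = -(dim)^{-1} sum_j dim V_j chi_{V_j}(g_k) for g_k != 1
    row = -(dims @ known) / d
    row[0] = d                           # the entry at g = 1 is the dimension
    print(row)                           # -> [ 2.  0. -1.]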

3.6.3 More examples


You do not need to memorise any of this information for the exam.
Example 3.6.26. Let us compute the character table of D8. Recall the description D8 =
{x^a y^b | 0 ≤ a ≤ 3, 0 ≤ b ≤ 1} with x the counterclockwise 90° rotation and y the
reflection about the x-axis. The conjugacy classes of rotations are {1}, {x, x^3}, and {x^2}
(since yxy^{-1} = x^{-1} = x^3 but x^2 is central), and the conjugacy classes of reflections are
{y, x^2 y} and {xy, x^3 y} (since xyx^{-1} = x^2 y shows that, in general, x^a y and x^{a'} y are conjugate
if and only if 2 | (a − a')). This makes five conjugacy classes. As explained in Example
2.17.10, there are four one-dimensional representations (by Example 2.13.2) and one two-
dimensional irreducible representation up to isomorphism (the one of Example 2.2.6). Call
the one-dimensional representations (C_{±,±}, ρ_{±,±}), where ρ_{+,+} is the trivial representation,
ρ_{−,+} is the representation sending reflections to −1 and rotations to 1, ρ_{+,−} sends the
generating rotation x to −1 and the generating reflection y to 1, and ρ_{−,−} sends both x and y
to −1. Call the two-dimensional representation C^2. The trace of a rotation matrix by angle
θ is 2 cos θ, so χ_{C^2}(x) = 0 whereas χ_{C^2}(x^2) = −2. Also the trace of a reflection matrix is
always zero, since it has one and minus one as eigenvalues (or from the explicit description).
Putting this together, we get:

                 {1}   {x, x^3}   {x^2}   {y, x^2 y}   {xy, x^3 y}
Size of class:    1        2         1         2             2
ρ_{+,+}           1        1         1         1             1
ρ_{−,+}           1        1         1        −1            −1
ρ_{+,−}           1       −1         1         1            −1
ρ_{−,−}           1       −1         1        −1             1
χ_{C^2}           2        0        −2         0             0
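The last row can be double-checked by writing the two-dimensional representation down explicitly (a sketch: x acts by rotation by 90° and y by the reflection diag(1, −1)):

    import numpy as np

    x = np.array([[0, -1], [1, 0]])      # counterclockwise rotation by 90 degrees
    y = np.array([[1, 0], [0, -1]])      # reflection about the x-axis

    # Traces on the class representatives 1, x, x^2, y, xy
    reps = [np.eye(2, dtype=int), x, x @ x, y, x @ y]
    print([int(np.trace(g)) for g in reps])          # -> [2, 0, -2, 0, 0]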

Example 3.6.27. We can compute the character table also of Q8 = {±1, ±i, ±j, ±k} (with
multiplication given by i^2 = j^2 = k^2 = −1, ij = k, jk = i, ki = j, and (−1)^2 = 1). The
conjugacy classes are {1}, {−1}, {±i}, {±j}, and {±k}. By coursework, the one-dimensional
representations are ρ_{±,±}, determined by the values on i and j, which are ±1 (so the two
subscripts of ρ give the signs of the images of i and j, respectively). There is one two-
dimensional representation:

ρ(±1) = ±I,   ρ(±i) = ±[ i  0 ; 0  −i ],   ρ(±j) = ±[ 0  −1 ; 1  0 ],   ρ(±k) = ±[ 0  −i ; −i  0 ]

(matrices written row by row, rows separated by semicolons). We get:

                 {1}   {−1}   {±i}   {±j}   {±k}
Size of class:    1      1      2      2      2
ρ_{+,+}           1      1      1      1      1
ρ_{−,+}           1      1     −1      1     −1
ρ_{+,−}           1      1      1     −1     −1
ρ_{−,−}           1      1     −1     −1      1
χ_{C^2}           2     −2      0      0      0
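These matrices can be verified directly (a sketch checking the quaternion relations and the last row of the table):

    import numpy as np

    I2 = np.eye(2, dtype=complex)
    qi = np.array([[1j, 0], [0, -1j]])
    qj = np.array([[0, -1], [1, 0]], dtype=complex)
    qk = np.array([[0, -1j], [-1j, 0]])

    # i^2 = j^2 = k^2 = -1 and ij = k, jk = i, ki = j
    for q in (qi, qj, qk):
        assert np.allclose(q @ q, -I2)
    assert np.allclose(qi @ qj, qk)
    assert np.allclose(qj @ qk, qi)
    assert np.allclose(qk @ qi, qj)

    # Traces on the class representatives 1, -1, i, j, k
    print([np.trace(g).real for g in (I2, -I2, qi, qj, qk)])   # -> [2, -2, 0, 0, 0]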
Remark 3.6.28. Notice this is the same table as for D8 if we swap the columns accordingly:
in the D8 case we can instead order the columns as {1}, {x^2}, {y, x^2 y}, {x, x^3}, {xy, x^3 y}.
This is remarkable since Q8 ≇ D8: for instance, in Q8 there are six elements of order four,
whereas in D8 there are only two. It shows that the character table does not determine the
group up to isomorphism.
Actually in this case, the two character tables have to coincide, since 8 is a small number
for a nonabelian group (e.g., the only possible sum of squares of dimensions is 1^2 + 1^2 + 1^2 +
1^2 + 2^2 = 8, since 1 has to occur for the trivial representation but we can't have only 1's since
the group is nonabelian). See Exercise 3.6.35.
Example 3.6.29. Let us compute the character table for A4. As explained in Exercise
2.17.14, there are three irreducible one-dimensional representations, (C_ζ, ρ_ζ ◦ q|_{A4}), for ζ^3 = 1,
defined as follows. Let C_ζ = C. There is a surjection q : S4 → S3, q(12) = (12), q(23) =
(23), q(34) = (12), which restricts to a surjection q|_{A4} : A4 → A3. Then A3 ≅ C3, so its
irreducible representations are the ρ_ζ for ζ^3 = 1 (Corollary 2.12.6). Then, as outlined in
Exercise 2.17.14, it follows that A3 ≅ (A4)^{ab} and there is one three-dimensional irreducible
representation. The three-dimensional irreducible representation can be given by restricting
the reflection representation (V, ρ_V) of S4 to the group A4, i.e., (V, ρ_V ◦ i) for i : A4 → S4
the inclusion. Now there are four conjugacy classes in A4, given by the classes [1] = {1},
[(12)(34)] = {(12)(34), (13)(24), (14)(23)}, and the three-cycles, which must split into two
conjugacy classes: [(123)] = {(123), (134), (142), (243)} and [(132)] = {(132), (124), (143), (234)}.
For another way to compute the latter two conjugacy classes, see the following remark.
Taking traces we get, for ω := e^{2πi/3} = −1/2 + i√3/2 (using (3.3.11)):

                 [1]   [(12)(34)]   [(123)]   [(132)]
Size of class:    1        3           4         4
C_1               1        1           1         1
C_ω               1        1           ω        ω^2
C_{ω^2}           1        1          ω^2        ω
V                 3       −1           0         0
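Since this table has non-real entries, it is a good test of where the complex conjugate goes in the column orthogonality (3.6.18) (a numerical sketch):

    import numpy as np

    w = np.exp(2j * np.pi / 3)           # primitive cube root of unity
    A = np.array([[1, 1, 1, 1],
                  [1, 1, w, w ** 2],
                  [1, 1, w ** 2, w],
                  [3, -1, 0, 0]])
    sizes = np.array([1, 3, 4, 4])
    order = sizes.sum()                  # |A4| = 12

    # (3.6.18): conj(column i) . column j = 0 for i != j, and |G|/|C_i| for i = j
    for i in range(4):
        for j in range(4):
            val = (A[:, i].conj() * A[:, j]).sum()
            assert np.isclose(val, order / sizes[i] if i == j else 0)
    print("column orthogonality holds")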

Remark 3.6.30 (Non-examinable). Here is another way to compute the conjugacy classes of
A4 . In general, every conjugacy class of Sn can split into at most two conjugacy classes in An ,
since a conjugacy class is the orbit of the group under the conjugation (adjoint) action, and
[Sn : An ] = 2. In the case the conjugacy class splits into two conjugacy classes, the two classes
must have the same size (since the two are conjugate in Sn ). Now, since characters have the
same value on every element of a conjugacy class, the image of each conjugacy class under q in
A3 must be the same element (every element of A3 takes a different value under a nontrivial
one-dimensional representation). Since q((123)) = (123) and q((132)) = (132), this implies
[(123)] ⊆ q −1 ((123)) and [(132)] ⊆ q −1 ((132)). Since there can be only two conjugacy classes
of three-cycles, there are exactly two. Thus [(123)] consists of the three-cycles in q −1 ((123))
and similarly for [(132)]. In fact all elements of q −1 ((123)) and q −1 ((132)) are three-cycles
(which is also implied by the fact that the sizes of these fibers are [A4 : A3 ] = 4 each).

Example 3.6.31. The character table of S4 : let C, C− be the trivial and sign representa-
tions, V the reflection representation, V− = V ⊗ C− the other three-dimensional irreducible
representation, and U the two-dimensional irreducible representation (the reflection repre-
sentation of S3 composed with the surjection S4 ↠ S3), see Example 2.17.6.

                 [1]   [(12)]   [(123)]   [(12)(34)]   [(1234)]
Size of class     1       6        8           3           6
χ_C               1       1        1           1           1
χ_{C_−}           1      −1        1           1          −1
χ_U               2       0       −1           2           0
χ_V               3       1        0          −1          −1
χ_{V_−}           3      −1        0          −1           1

Remark 3.6.32. In order to compute the character table for the groups An, we need to
analyse the conjugacy classes. If σ ∈ An, then [σ]_{An} is either all of [σ]_{Sn} or half of it,
since [σ]_{Sn} = [σ]_{An} ∪ [(12)σ(12)]_{An} (as Sn = An ∪ (12)An). The only question is whether σ is
conjugate in An to (12)σ(12) or not. Now, [σ]_{An} is an orbit in An under the action An × An → An
by conjugation. The stabiliser of σ is the centraliser Z_{An}(σ) := {τ ∈ An | τσ = στ}. By
the orbit-stabiliser theorem, |[σ]_{An}| = |An|/|Z_{An}(σ)|. Similarly |[σ]_{Sn}| = |Sn|/|Z_{Sn}(σ)|. Therefore we
conclude that [σ]_{An} = [σ]_{Sn} if and only if [Z_{Sn}(σ) : Z_{An}(σ)] = 2; otherwise [σ]_{An} is half of
[σ]_{Sn} and Z_{Sn}(σ) = Z_{An}(σ). In other words, [σ]_{Sn} splits into two conjugacy classes if and
only if σ does not commute with any odd permutation.
Example 3.6.33. Consider G = A5. The even conjugacy classes in S5 are [1], [(123)], [(12)(34)],
and [(12345)]. The first conjugacy class cannot split into two for A5. The second and third do
not split either, since (123) commutes with the odd permutation (45) and (12)(34) commutes
with the odd permutation (12). But Z_{S5}((12345)) = ⟨(12345)⟩, which does not contain any odd permutations.
Therefore [(12345)] does split into two conjugacy classes. We get a total of five conjugacy
classes, of sizes 1, 20, 15, 12, and 12. Next, as shown in the problems class, the dimensions of the
irreducible representations are 1, 3, 3, 4, 5. Let us recall why. Since A5 is simple, it has only
one one-dimensional representation. For the other dimensions, we seek 2 ≤ a ≤ b ≤ c ≤ d
such that a^2 + b^2 + c^2 + d^2 = 59 (the dimensions are at least two since there is only one
one-dimensional representation). Taking this modulo 4, three of these must be odd (since squares
are 0 or 1 modulo 4), and the remaining one even. We have d ≤ 7. In fact d ≠ 7: then
a^2 + b^2 + c^2 = 10, but a, b, c ≥ 2 forces a^2 + b^2 + c^2 ≥ 12. If d = 6 we have a^2 + b^2 + c^2 = 23,
which is impossible: a, b, c would then all be odd and at least 3, giving a sum of at least 27. So
d ≤ 5. We cannot have d = 3 since 4 + 9 + 9 + 9 < 59. We also cannot have d = 4 since
9 + 9 + 9 + 16 < 59. So d = 5. Then a^2 + b^2 + c^2 = 34. Again c ≠ 5 since a^2 + b^2 = 9 is
impossible with a, b ≥ 2. Also c > 3 since 4 + 9 + 9 < 34. So c = 4. Then we get a = b = 3.
So the dimensions of the nontrivial representations are:
3, 3, 4, 5.
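The arithmetic can also be brute-forced (a quick sketch confirming that (3, 3, 4, 5) is the unique solution):

    # Nontrivial irreducibles of the simple nonabelian group A5 have dimension >= 2,
    # and the sum of the squares of their dimensions is 60 - 1 = 59.
    sols = [(a, b, c, d)
            for a in range(2, 8) for b in range(a, 8)
            for c in range(b, 8) for d in range(c, 8)
            if a * a + b * b + c * c + d * d == 59]
    print(sols)                          # -> [(3, 3, 4, 5)]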
Now we can construct some of these irreducible representations. One is the restriction
of the reflection representation of S5 (either check it using the character, or better you can
prove that the reflection representation of Sn restricts to an irreducible representation of
An for all n ≥ 4, by a similar proof to how we showed the reflection representation was
irreducible in the first place.) This gives the four-dimensional irreducible representation,
and the character is given by the formula (3.3.11).
We can use geometry to construct a three-dimensional irreducible representation since we
know that A5 acts as the group of rotations of the icosahedron or dodecahedron (see the
third problems class sheet and https://en.wikipedia.org/wiki/Icosahedral_symmetry#
Group_structure). We actually don’t need to know much about this representation, only
that an element of order n must act as a rotation about some axis by some angle 2πm/n with
gcd(m, n) = 1. The trace of such a rotation is 2 cos(2πm/n) + 1 = ζ + ζ −1 + 1 for ζ = e2πim/n
(the 1 is for the axial direction). In the case n = 2, 3, we get −1 and 0, respectively. In the
case of 5 we get two possibilities: for ξ = e2πi/5 , we get either 1 + 2 cos(2π/5) = 1 + ξ + ξ −1 or
1 + 2 cos(4π/5) = 1 + ξ 2 + ξ −2 . These values are well-known but let us give a quick derivation
(it is the only way I can remember the values). Let z := ξ + ξ^{-1}. Then 1 + ξ + ξ^2 + ξ^3 + ξ^4 = 0
implies ξ^2 + ξ^{-2} = −1 − z. But then z^2 = 2 + ξ^2 + ξ^{-2} = 1 − z implies z^2 + z − 1 = 0, hence
z = (−1 ± √5)/2. In fact z = (−1 + √5)/2, since it is easy to see from geometry that z is positive.
Then ξ^2 + ξ^{-2} = (−1 − √5)/2. Thus the possible trace values for the order five elements are (1 ± √5)/2.

One of these is assigned to each of the conjugacy classes of elements of order five. In
fact, since these conjugacy classes are obtained from each other by conjugating by an odd

permutation (e.g., (12)), we see that the only difference between the two is relabeling the
five compounds being permuted by an odd permutation: thus both possibilities occur and
there are two nonisomorphic rotation representations, depending on how we label the five
compounds. Another way to say this is that the two resulting representations are (C^3, ρ)
and (C^3, ρ ◦ Ad_{(12)}), for (C^3, ρ) one rotation representation. For lack of a better notation,
call these two representations C^3_1 and C^3_2.
This gives all of the irreducible representations except for the dimension five one. We get
the partial table (we choose the second five-cycle to be (12345)^2 = (13524)):

                 [1]   [(12)(34)]   [(123)]   [(12345)]    [(13524)]
Size of class     1        15          20         12           12
χ_C               1         1           1          1            1
χ_{C^3_1}         3        −1           0      (1−√5)/2     (1+√5)/2
χ_{C^3_2}         3        −1           0      (1+√5)/2     (1−√5)/2
χ_{C^4}           4         0           1         −1           −1
χ_{C^5}           5         ?           ?          ?            ?
To fill in the last row, the easiest technique is to use (3.6.25): for g_k ≠ 1,

χ_{C^5}(g_k) = −(1/5)(1 + 3χ_{C^3_1}(g_k) + 3χ_{C^3_2}(g_k) + 4χ_{C^4}(g_k)).    (3.6.34)
This yields:

                 [1]   [(12)(34)]   [(123)]   [(12345)]    [(13524)]
Size of class     1        15          20         12           12
χ_C               1         1           1          1            1
χ_{C^3_1}         3        −1           0      (1−√5)/2     (1+√5)/2
χ_{C^3_2}         3        −1           0      (1+√5)/2     (1−√5)/2
χ_{C^4}           4         0           1         −1           −1
χ_{C^5}           5         1          −1          0            0
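Numerically (a sketch applying (3.6.34), with the golden-ratio entries entered as floats):

    import numpy as np

    phi = (1 + 5 ** 0.5) / 2             # the order-five traces are 1 - phi and phi
    rows = np.array([[1, 1, 1, 1, 1],
                     [3, -1, 0, 1 - phi, phi],
                     [3, -1, 0, phi, 1 - phi],
                     [4, 0, 1, -1, -1]])
    dims = rows[:, 0]

    # (3.6.34): chi_{C^5}(g_k) = -(1/5) sum_i dim V_i chi_{V_i}(g_k), for g_k != 1
    last = -(dims @ rows) / 5
    last[0] = 5                          # the dimension, from the sum of squares formula
    print(np.round(last, 10))            # -> [ 5.  1. -1.  0.  0.]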
The remaining row has all integer entries, which makes it seem likely it is related to the
action of A5 on a set: the character of such a representation C[X] has all nonnegative integer
values by Example 3.1.15, and it always has the trivial representation as the subrepresenta-
tion where all coefficients are equal (just like for the reflection representation), so χC[X] − χC
is also a character. In order to have χC5 = χC[X] − χC , X should have size six, and A5 acts
with (12)(34) fixing no elements, (123) fixing two, and five-cycles fixing one element. Such
a set X exists! It is given by the set of pairs of opposite vertices of the icosahedron (there
are twelve vertices and hence six pairs of opposite vertices). This is acted on by A5 as the
group of rotational symmetries. Thus, χC5 is the character of the subrepresentation of C[X]
with coefficients summing to zero, analogously to the reflection representation.
Exercise 3.6.35. Show that, up to reordering rows and columns, the character table of Q8
is the only one that can occur for a nonabelian group of size eight. Here is an outline of one
way to do this:

Step 1: Show that the dimensions of the irreducible representations are 1, 1, 1, 1, 2.
Step 2: Conclude that |G^{ab}| = 4.
Step 3: Prove the lemma: if N / G is a normal subgroup of size two, then it is central
(i.e., for all n ∈ N and g ∈ G, gn = ng; that is, N is a subgroup of the center of G). Using
the lemma, prove that [G, G] is central.
Step 4: Prove that G^{ab} ≅ C2 × C2. For this, observe first that either this is true or
G^{ab} ≅ C4, and to rule out the latter possibility, prove the lemma: if Z < G is a central
subgroup with G/Z cyclic, then G is itself abelian.
Step 5: Using the lemma in Step 4, show that [G, G] is the entire center of G. Conclude
that there are two conjugacy classes of size one in G. Now prove that the other conjugacy classes
all have size 2 (show the sizes are factors of |G| and use Theorem 2.18.2).
Step 6: Now using Steps 4 and 5, show that the rows for the four one-dimensional
representations are uniquely determined up to ordering the columns as well as these rows.
Step 7: Finally, use orthogonality to show that this determines the whole table.

3.7 Kernels of representations and normal subgroups


The next proposition shows that the kernel of a representation can be read off from the
character.
Proposition 3.7.1. Let (V, ρV ) be a representation of a finite group. Then ker ρV = {g ∈
G | χV (g) = dim V }.
Proof. It is clear that g ∈ ker ρ_V implies χ_V(g) = dim V. We only need to show the converse.
Suppose χ_V(g) = dim V. By Proposition 3.2.9.(ii), |χ_V(g)| = dim V if and only if ρ_V(g) is
a scalar matrix; so ρ_V(g) = λI for some λ ∈ C. But then χ_V(g) = λ dim V, so that λ = 1.
Therefore g ∈ ker ρ_V.
Remark 3.7.2. This is one of the few places we needed Proposition 3.2.9.(ii)! So maybe
you should review it: the result followed by applying the triangle inequality (viewing C as
R2 ) to χV (g) = ζ1 + · · · + ζn where ζi are the eigenvalues of ρV (g) with multiplicity (which
are roots of unity, hence of absolute value one).
Now we show that every normal subgroup is an intersection of kernels of irreducible
representations. Together with Proposition 3.7.1, this shows that we can read off all normal
subgroups (in terms of unions of conjugacy classes) directly from the character table. Let
G be a finite group and V1 , . . . , Vm be a full set of irreducible representations, and let Ki :=
ker ρVi .

Proposition 3.7.3. Let N / G be any normal subgroup. Let J := {j ∈ {1, . . . , m} | N ≤ K_j}. Then N = ∩_{j∈J} K_j.
To prove the proposition we need two lemmas, interesting in themselves.
Lemma 3.7.4. Suppose V ≅ ⊕_{i=1}^{m} V_i^{r_i}. Let J := {j ∈ {1, . . . , m} | r_j ≥ 1}. Then
ker ρ_V = ∩_{j∈J} K_j.

Proof. This follows because ρV (g) = I if and only if ρVj (g) = I for every j ∈ J.
The next lemma is a generalisation of part of the argument of Theorem 2.18.6 (there we
used the statement below for the case H = {1}). For H ≤ G a subgroup, recall that the left
translation action G × G/H → G/H is given by g · g'H := gg'H.

Lemma 3.7.5. Let N /G be a normal subgroup. Let (C[G/N ], ρC[G/N ] ) be the representation
associated to the left translation action. Then ker ρC[G/N ] = N .

Caution that the hypothesis that N is normal is necessary (obviously, kernels are always
normal).

Proof of Lemma 3.7.5. Given n ∈ N and g ∈ G, normality implies ngN = gN. Hence
n ∈ ker ρ_{C[G/N]}. On the other hand, if g ∉ N, then gN ≠ N, so ρ_{C[G/N]}(g) ≠ I.
Proof of Proposition 3.7.3. By Lemma 3.7.5, ker ρ_{C[G/N]} = N. By Lemma 3.7.4, it follows
that N = ∩_{j∈J'} K_j, where J' is as defined there. Note that N ≤ K_j for j ∈ J', hence J' ⊆ J.
Therefore N = ∩_{j∈J'} K_j ⊇ ∩_{j∈J} K_j. Since N ≤ K_j for all j ∈ J, the reverse inclusion is
clear.
As a result of all of the above, we can give an explicit way to read off the normal subgroups
from the character table:

Corollary 3.7.6. The normal subgroups of G are precisely the subgroups NJ of the form

NJ := {n ∈ G | χVj (n) = χVj (1), ∀j ∈ J}, J ⊆ {1, . . . , m}. (3.7.7)


Proof. By Proposition 3.7.1, since dim V_j = χ_{V_j}(1), we have N_J = ∩_{j∈J} K_j. Since each K_j is
normal, so is N_J. Conversely, by Proposition 3.7.3, every normal subgroup is obtained in
this way.

Example 3.7.8. Let's take the example of D8 (note that Q8 has the same character table!).
By Example 3.6.26, the proper nontrivial normal subgroups of the form K_i are {1} ∪ {x^2} ∪ C
where C ∈ {{x, x^3}, {y, x^2 y}, {xy, x^3 y}}. There are three of these, all of size four. The
intersection of any two of them is {1, x^2} = Z(D8).
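This recipe is easy to mechanise (a sketch on the D8 table: for each set J of rows, collect the classes on which every χ in J takes the value χ(1), as in Corollary 3.7.6):

    from itertools import combinations

    classes = ["{1}", "{x,x^3}", "{x^2}", "{y,x^2y}", "{xy,x^3y}"]
    table = [[1, 1, 1, 1, 1],
             [1, 1, 1, -1, -1],
             [1, -1, 1, 1, -1],
             [1, -1, 1, -1, 1],
             [2, 0, -2, 0, 0]]

    # N_J = union of the classes where every row j in J has chi_j(g) = chi_j(1)
    normals = set()
    for r in range(1, 6):
        for J in combinations(range(5), r):
            N = frozenset(c for c in range(5)
                          if all(table[j][c] == table[j][0] for j in J))
            normals.add(N)
    for N in sorted(normals, key=len):
        print([classes[c] for c in sorted(N)])

This prints the six normal subgroups of D8: {1}, the centre {1, x^2}, the three subgroups of size four listed above, and D8 itself.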

Example 3.7.9. Next consider Q8 . Of course, the character table is the same as that
of D8 so the answer has to be the same, but the elements are written by different labels.
By Example 3.6.27, the proper nontrivial normal subgroups Ki are {1} ∪ {−1} ∪ C for
C ∈ {{±i}, {±j}, {±k}}. There are again three of these, of size four. The intersection of
any two of them is {±1} = Z(Q8 ).

Example 3.7.10. Consider now A4 . Looking at Example 3.6.29, the only proper nontrivial
Ki are both equal to {1, (12)(34), (13)(24), (14)(23)}, which is the commutator subgroup
[A4, A4] (i.e., the kernel of q|_{A4} : A4 → A3 ≅ (A4)^{ab}).

Example 3.7.11. Let’s take the example of S4 . By Example 3.6.31, we can get two
nontrivial proper subgroups Nj : the subgroup K2 = A4 for V2 = C− and the subgroup
K3 = {1, (12)(34), (13)(24), (14)(23)} for V3 = U . Their intersection is again K3 , so these
are all of the proper normal subgroups.
Example 3.7.12. Finally, let’s look at A5 . Here we already know this group is simple so
has no proper nontrivial normal subgroups, and indeed from Example 3.6.33 we see that all
the Kj are trivial (except K1 = A5 of course)!

3.8 Automorphisms [mostly non-examinable]


The material of this subsection will only be sketched in lecture and is non-examinable, in the
sense that you do not need to know any of it to solve the exam questions. However, exam
problems could still be related to automorphisms and their action on the character table.
Indeed, we even discussed similar results already (particularly in Exercise 2.17.7.(ii) and on
CW 1).
The character table also includes partial information about automorphisms of a group
G. Indeed, it is obvious that, if we have an automorphism ϕ : G → G, then it will induce
a symmetry of the character table, by relabeling the irreducible representations and the
conjugacy classes. Let us make this explicit.
Recall that, if ϕ : G → G is an automorphism and (V, ρV ) an irreducible representation,
then (V, ρV ◦ ϕ) is also an irreducible representation (by Exercise 2.17.7.(ii)). Therefore, we
get a permutation of the rows of the character table:

σ_ϕ ∈ S_m,   (V_i, ρ_{V_i} ◦ ϕ^{-1}) ≅ (V_{σ_ϕ(i)}, ρ_{V_{σ_ϕ(i)}}).    (3.8.1)

On characters, this says that


χ_{V_i}(ϕ^{-1}(g)) = χ_{V_{σ_ϕ(i)}}(g).    (3.8.2)
Of course, ϕ also induces a permutation of the columns:

τ_ϕ ∈ S_m,   ϕ(C_i) = C_{τ_ϕ(i)}.    (3.8.3)

Putting the previous equations together and replacing g by ϕ(gj ), we get

χ_{V_i}(g_j) = χ_{V_{σ_ϕ(i)}}(g_{τ_ϕ(j)}).    (3.8.4)

When are these permutations trivial? Recall the following from group theory (also mentioned
in Remark 2.17.9):
Definition 3.8.5. An inner automorphism is one of the form Adg : G → G, Adg (h) = ghg −1 .
Lemma 3.8.6. The permutations σϕ and τϕ are trivial if ϕ : G → G is an inner automor-
phism, i.e., ϕ = Adg , Adg (h) = ghg −1 for some g ∈ G.
Proof. It is clear that ϕ = Adg preserves all of the conjugacy classes, so τϕ = 1. But then
by (3.8.4), χ_{V_i} = χ_{V_{σ_ϕ(i)}} for all i, which implies σ_ϕ is also trivial.

Remark 3.8.7. The proof actually shows that σϕ and τϕ are trivial if and only if ϕ(Ci ) = Ci
for all i. Such an automorphism is called a class-preserving automorphism. It is a nontriv-
ial fact that there exist finite groups with class-preserving automorphisms that are not inner:
see, e.g., https://groupprops.subwiki.org/wiki/Class-preserving_not_implies_inner#
A_finite_group_example.
Definition 3.8.8. The group Aut(G) is the group of all automorphisms of G. The subgroup
Inn(G) of inner automorphisms is the group {Adg | g ∈ G}.
Note that Inn(G) ⊴ Aut(G). Moreover, there is a surjective homomorphism G →
Inn(G), g ↦ Ad_g, with kernel Z(G). Thus G/Z(G) ≅ Inn(G) (by the first isomorphism
theorem).
theorem).
Definition 3.8.9. The group of outer automorphisms Out(G) is defined as Aut(G)/ Inn(G).
Definition 3.8.10. Let the character table symmetry group be the subgroup CTS(G) ≤
Sm × Sm of all pairs (σ, τ ) satisfying (3.8.4).
Remark 3.8.11. We could have used Sm−1 × Sm−1 , since the trivial representation and
trivial conjugacy class can’t change by applying an automorphism.
Proposition 3.8.12. There is a homomorphism Ψ : Out(G) → CTS(G) ≤ Sm × Sm , given
by [ϕ] 7→ (σϕ , τϕ ).
Proof. It remains only to show that this map is multiplicative, but that follows from the
definitions.
In general, computing all automorphisms of a group is an important and difficult problem.
Since the inner ones are easy to describe, the problem reduces to computing the outer
automorphisms. The proposition can be a useful tool, since the codomain CTS(G) is easy to
compute from the table. However, Ψ need not be injective or surjective, although it is often
injective and its kernel can be described (Remark 3.8.7). It is easy to produce an example
where Ψ is not surjective (see Example 3.8.14), but difficult to produce one where it is not
injective (see Remark 3.8.7).
Example 3.8.13. Consider G = Q8 following Example 3.6.27. The symmetry group of this
character table is S3 : we can’t move the row corresponding to the trivial representation nor
the one corresponding to the two-dimensional representation, but the three rows correspond-
ing to nontrivial one-dimensional representations can all be permuted with a corresponding
permutation of the columns, without changing the table.
Here the automorphism group is easy to compute: if ϕ is an automorphism, ϕ(i) ∈
{±i, ±j, ±k}, and ϕ(j) can also be any of these other than ±ϕ(i). That makes a total of 6·4 =
24 automorphisms. The inner automorphism group is isomorphic to G/Z(G) = G/{±1} ≅
C2 × C2, so Out(G) has size six. There is a surjective homomorphism Out(G) → S3 given by
permutations of the collection of conjugacy classes {C3 , C4 , C5 } = {{±i}, {±j}, {±k}}, which
is therefore an isomorphism.
So the homomorphism Ψ : Out(G) → CTS(G) is an isomorphism.

Example 3.8.14. Now consider G = D8 following Example 3.6.26. The character table
is the same as for Q8, so CTS(G) ≅ S3 again. But now the outer automorphism group is
smaller: first note that since x and x^3 = x^{-1} are the only elements of order 4, any automorphism
ϕ satisfies ϕ(x) = x^{±1}. Therefore ϕ(y) = x^a y for some a. Conversely any such assignment
extends to an automorphism. So | Aut(D8 )| = 2 · 4 = 8. The inner automorphism group is
again of size four, so | Out(D8 )| = 2. There is only one nontrivial outer automorphism, and
one representative (modulo inner automorphisms) is the automorphism ϕ(x) = x, ϕ(y) = xy.
This gives a nontrivial symmetry of the character table: the one swapping the last two
columns and the third and fourth rows. So Ψ : C2 ≅ Out(D8) → CTS(D8) ≅ S3 is injective
but not surjective.

Example 3.8.15. Let’s consider A4 , following Example 3.6.29. The second and third row
can be permuted under an automorphism, provided we also permute the third and fourth
columns. So CTS(A4) ≅ C2. Indeed there is an automorphism realising this symmetry: the
map ϕ(σ) = (12)σ(12), a non-inner automorphism which comes from conjugation inside
the larger group S4. So Ψ : Out(A4) → CTS(A4) ≅ C2 is surjective. We claim that it is also
injective, so that Out(A4) ≅ C2 as well, generated by conjugation by an odd element of S4.
= C2 as well, generated by conjugation by an odd element of S4 .
To prove the claim, by Remark 3.8.7, it is equivalent to show that every class-preserving
automorphism is inner. Let ϕ be class-preserving. By composing with an inner automor-
phism we can assume that ϕ((123)) = (123). Now Ad(123) induces a cyclic permutation
of the elements of [(123)] other than (123) itself, so composing with this we can also as-
sume ϕ((134)) = (134). But (123) and (134) generate A4 (taking conjugation they get the
whole class [(123)], and taking inverses gives [(132)]; the three-cycles generate A4 by Remark
2.13.20).

Example 3.8.16. Let’s consider S4 , following Example 3.6.31. There are no rows which
can be permuted. So CTS(S4 ) = {1} is trivial. We claim that Ψ is injective, and hence
Out(S4 ) = {1}.
To prove this, we need to show again that any class-preserving automorphism is inner. By
composing such an automorphism ϕ with an inner one, we can assume that ϕ((12)) = (12).
Then (23) maps to a transposition (ab) which does not commute with (12), so of the form
(1m) or (2m) for 3 ≤ m ≤ 4. Composing with Ad(12) if necessary we can assume it is (1m),
and composing with a permutation of {3, 4} we can assume m = 3, so ϕ((23)) = (23) as
well. Then ϕ((34)) has to be a transposition commuting with (12), which is not (12), so
ϕ((34)) = (34) as well. Therefore ϕ fixes (12), (23), and (34), but these elements generate
S4 , so we see actually that ϕ is the identity.

Example 3.8.17. Now consider G = A5. Looking at Example 3.6.33, we see again that
CTS(A5) ≅ C2, generated by the outer automorphism of conjugation by an odd permutation
such as (12). So Ψ : Out(A5) → CTS(A5) ≅ C2 is surjective. Similarly to Example 3.8.15,
we can prove also that Ψ is injective, so that Out(A5) ≅ C2, generated by the adjoint action
of an odd permutation in S5.

Example 3.8.18. For general G = An, Sn, with n ≠ 2, 6, it turns out that Out(An) ≅ C2,

generated again by the conjugation by an odd element of Sn , and Out(Sn ) = {1}. The map
Ψ is always injective.
The interesting thing happens when n = 6. Let’s look at the character table of S6 . This
was generated by Magma. The conjugacy classes here are, in order,

([1], [(12)], [(12)(34)(56)], [(12)(34)], [(123)], [(123)(456)],


[(1234)(56)], [(1234)], [(12345)], [(123)(45)], [(123456)]). (3.8.19)

Note below that the reflection representation is V6 .


C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11
|Ci | 1 15 15 45 40 40 90 90 144 120 120
V1 1 1 1 1 1 1 1 1 1 1 1
V2 1 −1 −1 1 1 1 1 −1 1 −1 −1
V3 5 −1 3 1 −1 2 −1 1 0 −1 0
V4 5 −3 1 1 2 −1 −1 −1 0 0 1
V5 5 1 −3 1 −1 2 −1 −1 0 1 0
V6 5 3 −1 1 2 −1 −1 1 0 0 −1
V7 9 3 3 1 0 0 1 −1 −1 0 0
V8 9 −3 −3 1 0 0 1 1 −1 0 0
V9 10 2 −2 −2 1 1 0 0 0 −1 1
V10 10 −2 2 −2 1 1 0 0 0 1 −1
V11 16 0 0 0 −2 −2 0 0 1 0 0

It is easy to see that the symmetry group of this table is C2 : first, the symmetry group
is determined by which column the second column (the class [(12)]) maps to, since the
only permutation of rows fixing the second column is the swap of V6 and V7 , which can’t
happen since the dimensions aren’t equal. Then, visibly, the only column that we can
swap this one with is the third column. If we do that we have to use the row permutation
σ = (36)(45)(9, 10) ∈ S11 . Doing this row permutation we indeed get a symmetry with
column permutation τ = (23)(56)(10, 11). [Observe that the even conjugacy classes are
preserved, which one can see already by using row V2 which cannot be swapped with any
other row.]
Thus this leads us to suspect there is an automorphism of S6 which swaps conjugacy
classes [(12)] with [(12)(34)(56)], [(123)] with [(123)(456)], and [(123)(45)] with [(123456)].
Indeed this is the case: it is called the exotic automorphism of S6 (defined up to composing
with an inner automorphism), and there are many beautiful constructions of it. This is the
only symmetric group with Out(Sn) ≠ {1}!
Next let’s look at the character table of A6 , which is more manageable. Again I generated
this with Magma. The conjugacy classes here are, in the order appearing below,

[1], [(12)(34)], [(123)], [(123)(456)], [(1234)(56)], [(12345)], [(13245)]. (3.8.20)

       C1   C2   C3   C4   C5      C6         C7
|Ci|   1    45   40   40   90      72         72
V1     1    1    1    1    1        1          1
V2     5    1    2   −1   −1        0          0
V3     5    1   −1    2   −1        0          0
V4     8    0   −1   −1    0    (1+√5)/2   (1−√5)/2
V5     8    0   −1   −1    0    (1−√5)/2   (1+√5)/2
V6     9    1    0    0    1       −1         −1
V7     10  −2    1    1    0        0          0
Here it is easy to check that

CTS(A6) = {[1, 1], [((23), (34))], [((45), (67))], [((23)(45), (34)(67))]} ≅ C2 × C2.    (3.8.21)
The element ((45), (67)) is Ψ(Ad(12) ) (coming from the adjoint action of S6 ) whereas the
other two nontrivial elements are Ψ applied to restrictions of exotic automorphisms of S6 .
Note that the latter yield two distinct elements of Out(A6 ), since we can compose any one
exotic automorphism with Ad(12) to get another exotic automorphism (and restricted to A6
they are only the same modulo Inn(S6 ), not modulo Inn(A6 )).

4 Algebras and modules


4.1 Definitions and examples
Definition 4.1.1. A C-algebra is a tuple (A, m_A, 1_A) of a vector space A, a multiplication
m_A : A × A → A, and an element 1_A ∈ A, satisfying the following for all a, b, c ∈ A and
λ ∈ C:
• Bilinearity: λ m_A(a, b) = m_A(λa, b) = m_A(a, λb), together with:
  – Distributivity: m_A(a, b + c) = m_A(a, b) + m_A(a, c) and m_A(a + b, c) = m_A(a, c) + m_A(b, c);
• Associativity: m_A(a, m_A(b, c)) = m_A(m_A(a, b), c);
• Identity: m_A(1_A, a) = a = m_A(a, 1_A).
We will usually write mA (a, b) as a · b or simply ab. We omit the subscript A of 1A and just
write 1 whenever there is no confusion. The algebra itself is denoted by A rather than by
(A, mA , 1A ) with the additional structure understood.
Observe that there are three operations on an algebra: multiplication, scalar multiplica-
tion, and addition (since the latter two come from being a vector space).
Remark 4.1.2. The multiplication and scalar multiplication are distinct operations: the
first is a map A × A → A and the second C × A → A. Nonetheless there is a relation: for
λ ∈ C, we have the element λ1A ∈ A, and for all a ∈ A, λa = (λ1A )a. We will state this
more formally in Example 4.1.10.

Example 4.1.3. C itself is an algebra with its usual multiplication.
Example 4.1.4. Given two algebras A and B, the direct sum A ⊕ B := A × B is an algebra,
with componentwise multiplication:

(a, b) · (a0 , b0 ) = (aa0 , bb0 ) (4.1.5)

In particular, C ⊕ C is an algebra.
Example 4.1.6. The vector space C[x] := {∑_{i=0}^{m} a_i x^i | a_i ∈ C, a_m ≠ 0} ∪ {0} of polynomials
with complex coefficients is an algebra with the usual polynomial multiplication, scalar
multiplication, and addition.
Example 4.1.7. The vector space A = C[ε]/(ε^2) := {a + bε | a, b ∈ C}, with multiplication
(a + bε)(c + dε) = ac + (ad + bc)ε, is an algebra. The multiplication is determined by the
identities 1 + 0ε = 1_A and ε^2 = 0. This is just like the preceding example but cutting off
(setting to zero) the terms of degree two and higher. This algebra is important in differential calculus, since
it expresses the idea of working only to first order (ε is so small that actually ε^2 = 0).
Definition 4.1.8. An algebra homomorphism ϕ : A → B is a linear map which is multiplica-
tive and sends 1A to 1B . It is an isomorphism if there is an inverse algebra homomorphism
ϕ−1 : B → A.
Exercise 4.1.9. As for groups and vector spaces (and group representations), show that
ϕ : A → B is an isomorphism if and only if it is bijective. The point is that, if ϕ is bijective,
the inverse map ϕ−1 : B → A must be an algebra homomorphism.
Example 4.1.10. If A is any algebra, then there is always an algebra homomorphism C → A
given by λ 7→ λ1A . This explains the observation in Remark 4.1.2.
All of the preceding examples have been commutative. Now let’s consider some noncom-
mutative examples.
Example 4.1.11. The vector space of n × n matrices, Matn (C), is an algebra under matrix
multiplication.
Example 4.1.12. Let V be a vector space. Then End(V ) is an algebra, with multiplication
given by composition.
Example 4.1.13. Given a basis B of V with dim V = n finite, the map writing endomor-

phisms in bases produces an algebra isomorphism End(V ) → Matn (C), S 7→ [S]B . You
already know this is a vector space isomorphism, and it is multiplicative since [S]B [T ]B =
[S ◦ T ]B (2.4.10). So Examples 4.1.11 and 4.1.12 are isomorphic (for dim V = n).
Example 4.1.14. Let G be a finite group. The vector space C[G] is an algebra with the
multiplication linearly extended from the group multiplication. Explicitly:

(∑_{g∈G} a_g g)(∑_{g∈G} b_g g) = ∑_{g∈G} (∑_{h∈G} a_h b_{h^{-1}g}) g.    (4.1.15)
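Formula (4.1.15) is just a convolution of coefficient functions. Here is a minimal sketch, with group elements encoded as permutation tuples and elements of C[G] as dictionaries mapping group elements to coefficients; it also checks that the averaging element e = |G|^{-1} ∑_g g (which reappears in Remark 4.1.23 below) satisfies e^2 = e.

    from collections import defaultdict
    from itertools import permutations

    def gmul(p, q):
        # compose permutations of {0, 1, 2}: (p q)(i) = p(q(i))
        return tuple(p[q[i]] for i in range(len(p)))

    def amul(a, b):
        # product in C[G] by (4.1.15): convolve the coefficient functions
        c = defaultdict(complex)
        for g, ag in a.items():
            for h, bh in b.items():
                c[gmul(g, h)] += ag * bh
        return dict(c)

    G = list(permutations(range(3)))             # the group S3
    e = {g: 1 / len(G) for g in G}               # e = |G|^{-1} sum_g g
    ee = amul(e, e)
    assert all(abs(ee[g] - e[g]) < 1e-12 for g in G)   # e^2 = e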

Remark 4.1.16 (Non-examinable). As in Remark 2.6.6, the above definition also extends to
the case where G is infinite, provided we require that all but finitely many of the coefficients
a_g and b_g are zero (C[G] = {∑_{g∈G} a_g g | a_g ∈ C, all but finitely many a_g = 0}).

Finally, let us give some geometric examples, of commutative algebras of functions.

Example 4.1.17. (Gelfand-Naimark duality) For X ⊆ Rn , say for example the sphere
S 2 ⊆ R3 , we can consider the algebra A of continuous (or differentiable) complex-valued
functions on X. It turns out that, in suitable cases (X is compact Hausdorff), the study
of X is equivalent to that of A (Gelfand-Naimark duality)! Unlike the group and matrix
algebras, A is commutative.

Example 4.1.18. (Algebraic Geometry) Similarly, given a collection of polynomial equa-


tions f1 = · · · = fm = 0 in n variables x1 , . . . , xn , we can consider the zero locus X ⊆
Cn . Again, it turns out that the geometry of X is completely captured by the algebra
A = C[x1 , . . . , xn ]/(f1 , . . . , fm ), defined as the set of polynomials in n variables x1 , . . . , xn
modulo the relation that all polynomials fi (and their multiples) are zero. The addition,
multiplication, and scalar multiplication are all as for polynomials without relations. This
is a central idea of algebraic geometry. Note as in the preceding example that A is always
commutative.

Definition 4.1.19. Given an algebra A, we define the opposite algebra by reversing the
order of multiplication: Aop = A, 1Aop = 1A , but

mAop (a, b) = mA (b, a). (4.1.20)

Remark 4.1.21. In the case of a group, we can also define the “opposite group” Gop by
reversing the multiplication, but it isn’t really interesting since inversion gives an isomor-
phism G → Gop , g 7→ g −1 . For algebras we can’t invert in general (e.g., not all matrices are
invertible) so this is not an option.

Example 4.1.22. In the case of the group algebra, there is an isomorphism ι : C[G] → C[G]^op
given by ι(∑_{g∈G} a_g g) = ∑_{g∈G} a_g g^{-1}, i.e., inverting the elements g ∈ G of the group.

Remark 4.1.23. Caution: the map ι is not inverting all elements of C[G]! As a trivial
example, 0 has no inverse: 0 · v = 0 ≠ 1 for all v ∈ C[G] (and indeed for any algebra). For
a less trivial example, e := |G|^{-1} ∑_{g∈G} g has the property eι(e) = e^2 = e ≠ 1 if G ≠ {1}.

Example 4.1.24. We can also produce an isomorphism Mat_n(C)^op → Mat_n(C) by applying
the matrix transpose: M ↦ M^t.

Example 4.1.25 (Non-examinable). By the preceding example and Example 4.1.13, if V


is any vector space, we can produce an isomorphism End(V )op ∼ = End(V ) from a choice
of basis. This isomorphism depends on the choice of basis. However, one can produce a
basis-independent isomorphism End(V)^op ≅ End(V^∗) by dualising endomorphisms, which
as we have seen becomes the transpose operation on matrices after choosing a basis.

4.2 Modules
Modules are essentially the same thing as representations, but working more generally over
algebras rather than groups (in the case of the group algebra, it will just reduce to the case
of group representations).

Definition 4.2.1. A left module over an algebra A is a pair (V, ρV ) where ρV : A →


End(V ) is an algebra homomorphism. A right module over A is a left module over Aop .
A homomorphism of modules T : V → W is a linear map compatible with the actions:
T ◦ρV (a) = ρW (a)◦T . Such a homomorphism is also called an A-linear map. Let HomA (V, W )
denote the vector space of homomorphisms and EndA (V ) the vector space of endomorphisms.

This notation is close to that of group representations. Just as we can also think of a
representation V of G as a map G × V → V , one traditionally defines left and right modules
instead as bilinear, associative maps A × V → V and W × A → W , respectively. The corre-
spondence is again (a, v) 7→ ρV (a)(v) and (w, a) 7→ ρW (a)(w), for algebra homomorphisms
ρV : A → End(V ) and ρW : Aop → End(W ).

Exercise 4.2.2. Show that the traditional definition of left and right modules is equivalent
to Definition 4.2.1.

We won’t really need to use right modules or opposite algebras much, but it would be
a mistake not to see the definition from the start, due to their general importance and the
symmetry of the definition (at least in the traditional sense).
Just as for representations, we have:

Definition 4.2.3. A submodule W of a module V is a linear subspace such that ρV (a)(W ) ⊆


W for all a ∈ A. A module is called simple, or irreducible, if it does not contain a proper
nonzero submodule. A quotient module of V is a quotient V /W for W ⊆ V a submodule.

It is clear that, if W ⊆ V is a submodule, then ρV : A → End(V ) determines a ho-


momorphism ρW : A → End(W ), by restricting each ρV (a) to W . Similarly, we obtain
ρV /W : A → End(V /W ) by ρV /W (a)(v + W ) := ρV (a)(v) + W .
As before we have:

Proposition 4.2.4. If T : (V, ρ) → (V 0 , ρ0 ) is a homomorphism of left modules, then ker(T )


and im(T ) are submodules of V and V 0 , respectively. The first isomorphism theorem holds:
im(T ) ∼
= V / ker(T ).
The proof is the same as for groups, but we provide it for illustration:
Proof. If T (v) = 0 then T (ρV (a)v) = ρV 0 (a)T (v) = 0 for all a ∈ A, thus ker(T ) is a
submodule of V . Also, for all v ∈ V , ρV 0 (a)T (v) = T (ρV (a)v) ∈ im(T ), so im(T ) is a
submodule of V 0 .
Then, Schur’s Lemma 2.11.1 still holds, with the same proof:

Lemma 4.2.5 (Schur’s Lemma). Let (V, ρV ) and (W, ρW ) be simple left modules for an
algebra A.
(i) If T : V → W is an A-linear map, then T is either an isomorphism or the zero map.
(ii) Suppose V is finite-dimensional. If T : V → V is A-linear then T = λI for some
λ ∈ C.
We omit the proof this time.
Definition 4.2.6. A direct sum of modules ⊕_{i=1}^{n} V_i is the direct sum of vector spaces,
equipped with the product action, a(v_1, . . . , v_n) = (av_1, . . . , av_n). A module isomorphic
to a direct sum of two nonzero modules is called decomposable, and otherwise it is called
indecomposable. A module isomorphic to a direct sum of simple modules is called semisimple.
Example 4.2.7. Let A = Matn (C). Then V = Cn is a left module, with ρCn : Matn (C) →
End(Cn ) the map we have seen before (in Section 2.5): ρCn (M )(v) = M v. In other words
A × V → V is the left multiplication map. This module is simple, since given any v, v 0 ∈
V = Cn , there exists M ∈ A = Matn (C) such that M v = v 0 .
Example 4.2.8. Similarly, let A = End(V ). Then V is itself a module with ρV : A →
End(V ) the identity map. Let dim V = n be finite and take a basis B of V . Then the
isomorphism End(V ) ∼ = Matn (C) of Example 4.1.13 produces the previous example: we

have [S]B [v]B = [S(v)]B . So the algebra isomorphism and the linear isomorphism V → Cn
are compatible with the module structures on either side. In particular, V is also a simple
left module over End(V ).
Example 4.2.9. Let A = C[G] be a group algebra and V a representation. Then we obtain
a left module (V, ρ̃_V) by extending ρ_V : G → GL(V) linearly to ρ̃_V : C[G] → End(V). (A
linear combination of elements of GL(V) is always still in the vector space End(V) ⊃ GL(V),
although it need not be invertible.)
Conversely, if (V, ρ̃_V) is a left module over A = C[G], then (V, ρ̃_V|_G) is a representation.
This is because ρ̃_V(g)ρ̃_V(g^{-1}) = I = ρ̃_V(g^{-1})ρ̃_V(g), which implies that ρ̃_V(g) ∈ GL(V).
These operations are inverse to each other. Informally speaking, representations of groups
are equivalent to left modules over their group algebras (this can be formalised and is true, but
we won't explain it).
Example 4.2.10. Consider again the algebra A = C[ε]/(ε2 ) of Example 4.1.7. Then A itself,
as a left A-module, is indecomposable but not simple. Indeed, the only proper nonzero left
submodule of A is {bε | b ∈ C}. This can be seen because every element of A is a multiple
of any polynomial of the form a + bε with a 6= 0.
Exercise 4.2.11. Prove the claims of the previous example.
Example 4.2.12. Given left modules (V, ρV ), (W, ρW ) over algebras A, B, we obtain a left
module (V ⊕ W, ρV ⊕W ) over A ⊕ B with ρV ⊕W (a, b) = ρV (a) + ρW (b). (This actually includes
V and W themselves if we set the other one to be the zero module, the approach taken in
CW2, #5).

Exercise 4.2.13. Expanding on CW2, #5.(a), show that every left module U over A ⊕ B
is of the form Example 4.2.12, for unique submodules V, W ⊆ U . Namely, V = (1, 0)U and
W = (0, 1)U .

4.3 Group algebras as direct sums of matrix algebras


Let G be a finite group and V_1, . . . , V_m a full set of irreducible representations. Thanks to
Theorem 2.18.6, we have a linear isomorphism

Φ : C[G] ≅ ⊕_{i=1}^{m} End(V_i),   Φ(g) = (ρ_{V_1}(g), . . . , ρ_{V_m}(g)) for all g ∈ G,    (4.3.1)

extended linearly to all of C[G].


Theorem 4.3.2. This isomorphism Φ is an algebra isomorphism.
Proof. This follows from the fact that each ρVi : C[G] → End(Vi ) is an algebra homomor-
phism (Example 4.2.9), by the definition of the direct sum (Example 4.1.4).
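For G = S3 this can be checked by hand (a numerical sketch: we flatten Φ(g) into a vector of length 1 + 1 + 4 = 6, with components the trivial value, the sign, and a reflection-representation matrix computed here by restricting permutation matrices to the sum-zero plane, which is one standard realisation; multiplicativity plus linear independence of the six values then give the algebra isomorphism):

    import numpy as np
    from itertools import permutations, product

    def pmat(p):                 # 3x3 permutation matrix: e_i -> e_{p(i)}
        M = np.zeros((3, 3))
        for i, pi in enumerate(p):
            M[pi, i] = 1
        return M

    def sgn(p):                  # sign via inversion count
        inv = sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))
        return 1 - 2 * (inv % 2)

    # Reflection representation: restrict pmat(p) to the sum-zero plane,
    # written in the basis b1 = e0 - e1, b2 = e1 - e2
    B = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])

    def refl(p):
        return np.linalg.pinv(B) @ pmat(p) @ B

    def gmul(p, q):
        return tuple(p[q[i]] for i in range(3))

    G = list(permutations(range(3)))
    Phi = {p: np.concatenate(([1.0, sgn(p)], refl(p).ravel())) for p in G}

    # The six vectors Phi(g) are linearly independent, so the linear extension
    # C[S3] -> C + C + Mat_2(C) is a linear isomorphism (dimension 6 on both sides)
    assert np.linalg.matrix_rank(np.stack([Phi[p] for p in G])) == 6

    # Each component is multiplicative, so Phi is an algebra homomorphism
    for p, q in product(G, G):
        assert sgn(gmul(p, q)) == sgn(p) * sgn(q)
        assert np.allclose(refl(gmul(p, q)), refl(p) @ refl(q))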

4.4 Representations of matrix algebras


By Maschke’s theorem, finite-dimensional representations of finite groups are direct sums
of copies of the irreducible representations. On the other hand representations of (finite)
groups are the same as of their group algebras. By Theorem 4.3.2, the group algebra of a
finite group is a direct sum of matrix algebras, one for each irreducible representation. This
leads us to suspect (by Exercise 4.2.13) the following theorem:
Theorem 4.4.1. Let n ≥ 1. Then every finite-dimensional left module over Matn (C) is
isomorphic to (Cn )r for some r ≥ 0.
By Example 4.2.8, this immediately implies the following:
Corollary 4.4.2. If V is any finite-dimensional vector space, then every left module over
End(V ) is isomorphic to V r for some r ≥ 0.
In terms of dimension, we can restate the result as follows:
Corollary 4.4.3. (a) If U is a finite-dimensional left module over Mat_n(C) then U ≅
(C^n)^{dim U/n}. (b) If W is a finite-dimensional left module over End(V), with V finite-
dimensional, then W ≅ V^{dim W/dim V}.
We are going to give more general results that imply Theorem 4.4.1 later. One can also
give a proof directly using Maschke’s theorem together with Theorem 4.3.2: see CW2, #5.
Therefore we will omit a direct proof in lecture.
Nonetheless, in these notes, we will provide a nice direct proof which has the advantage
of being generalisable to other situations (replacing the element e11 ∈ Matn (C) in the proof
by elements e ∈ A satisfying e^2 = e and AeA = A).

Proof of Theorem 4.4.1. (Non-examinable) Let (V, ρ_V) be a representation of A = Mat_n(C).
For ease of notation we write av := ρ_V(a)v. Consider the vector subspace U := Span{e_{11}v | v ∈ V} ⊆
V. Let (u_1, . . . , u_m) be a basis of U. We claim that (a) Au_i ≅ C^n for every i, and (b)
V = ⊕_{i=1}^{m} Au_i. This clearly implies the theorem.
For part (a), fix i. Let u_i' be such that u_i = e_{11}u_i'. First of all, multiplying by e_{11}, we see
that

e_{11}u_i = e_{11}^2 u_i' = e_{11}u_i' = u_i.    (4.4.4)

So we can actually assume u_i' = u_i. Next,

e_{ij}u_i = e_{ij}e_{11}u_i = δ_{j1} e_{i1}u_i.    (4.4.5)

So Au_i is spanned by v_{ji} := e_{j1}u_i, for 1 ≤ j ≤ n. Now,

e_{kℓ}v_{ji} = e_{kℓ}e_{j1}u_i = δ_{ℓj} e_{k1}u_i = δ_{ℓj} v_{ki}.    (4.4.6)

This is the same formula as the action of A = Mat_n(C) on C^n, i.e., we have a surjective
module homomorphism

C^n → Au_i,   e_j ↦ v_{ji}.    (4.4.7)

Since C^n is simple (Example 4.2.7), Schur's Lemma (Lemma 4.2.5) implies that this map is
either zero or an isomorphism. But it cannot be zero since u_i ≠ 0. So it is an isomorphism.
This proves (a).
For (b), since a basis of each Au_i is given by the v_{ji}, it suffices to show that the v_{ji} together
form a basis of V. Let us first show the spanning property. For general v ∈ V, we have

v = Iv = ∑_{j=1}^{n} e_{jj}v = ∑_{j=1}^{n} e_{j1}e_{11}e_{1j}v.    (4.4.8)

For each j, we have e_{11}e_{1j}v ∈ U, so we can write e_{11}e_{1j}v = ∑_{i=1}^{m} λ_{ij}u_i for some λ_{ij} ∈ C. Then
e_{j1}e_{11}e_{1j}v = ∑_{i=1}^{m} λ_{ij}v_{ji}. By (4.4.8) we obtain v ∈ Span(v_{ji}).
Finally we show the linear independence property. Suppose that

∑_{i,j} λ_{ji}v_{ji} = 0,   λ_{ji} ∈ C.    (4.4.9)

Fix j ∈ {1, . . . , n}, and multiply on the left by e_{1j}. We obtain (by (4.4.6)):

∑_{i=1}^{m} λ_{ji}u_i = 0.    (4.4.10)

Since the u_i are linearly independent, we get that λ_{ji} = 0 for all i. Since j was arbitrary, all
the λ_{ji} = 0, as desired.
The theorem can be interpreted as saying that all of the Matn (C) have the same module
theory: all left modules are direct sums of copies of a single simple module. This notion can
be formalised and we say that the Matn (C) are all Morita equivalent for all n.

Remark 4.4.11 (Non-examinable). Two algebras A, B are called Morita equivalent if there
is an equivalence of categories between their categories of left modules. This means, essen-
tially, that there is a way to associate a left B-module (W, ρW ) to every left A-module (V, ρV )
and vice-versa, as well as a compatible way to associate homomorphisms of left B-modules
to homomorphisms of left A-modules, such that if we apply this association twice we get
something isomorphic to what we started with.
Remark 4.4.12. By Theorems 4.3.2, 4.4.1, and Exercise 4.2.13, two group algebras of
finite groups are Morita equivalent if and only if they have the same number of irreducible
representations (i.e., the same number of conjugacy classes).

4.5 Semisimple algebras


By Maschke’s theorem, group algebras C[G] of finite groups G have the property that all
finite-dimensional left modules are direct sums of simple modules. Theorem 4.4.1 states
that the same is true for matrix algebras. By Example 4.2.13, we can take direct sums (or
summands) of algebras with this property to get new ones. It is interesting to characterise
such algebras. The next theorem shows how to do this and also that they all have the same
behaviour as C[G] itself:
Theorem 4.5.1. The following are equivalent for a finite-dimensional algebra A:
(i) A is semisimple as a left module over itself;
(ii) Every finite-dimensional left A-module is semisimple.
In this case, every simple left module is isomorphic to a submodule of A.
Definition 4.5.2. A finite-dimensional algebra A is semisimple if it satisfies the equivalent
conditions of the theorem.
Remark 4.5.3 (Non-examinable, omit from lectures). Another equivalent condition is:
(iii) A is a direct sum of simple algebras, i.e., algebras that have no two-sided ideals other
than 0 or A.
Here, a two-sided ideal is a vector subspace I ⊆ A such that I · A ⊆ I and A · I ⊆ I.
Definition (iii) gives another explanation for the term “semisimple” that explains what it
would mean to be actually “simple”. The fact that (iii) is equivalent to (i) and (ii) can be
deduced from the results of this subsection.
In the case that A is infinite-dimensional, definitions (iii) and (ii) are no longer equivalent.
It is still true that (iii) implies (ii), but the reverse implication is false. For example, take the
algebra A = C⟨x, D⟩/(Dx − xD − 1), thought of as differential operators with polynomial
coefficients in one variable (where D = d/dx). Then A has no proper nonzero two-sided ideals,
so it satisfies (iii). It has no finite-dimensional representations at all, so also satisfies (ii).
Now take any algebra B. Then A ⊗ B is also an algebra with no finite-dimensional modules.
But it need not be a direct sum of algebras admitting no proper nonzero two-sided ideals.
For instance, B = C[ε]/(ε^2) does the trick. So A ⊗ B satisfies (ii) but not (iii).

This cannot be fixed by removing the word “finite-dimensional” from (ii): in this case
(iii) no longer implies it. Take the algebra A = C⟨x, D⟩/(Dx − xD − 1) already given.
There are many infinite-dimensional, non-semisimple left A-modules, such as V = C[x, x^{-1}]
with action x · f = xf and D · f = df/dx. The latter example has a unique simple submodule,
C[x] ⊆ C[x, x^{-1}] = V, so it is indecomposable but not simple.

To prove Theorem 4.5.1, we begin with some general results, interesting in themselves.

Definition 4.5.4. If U ⊆ V is a submodule, then a complementary submodule is a submod-


ule W ⊆ V such that V = U ⊕ W .

Proposition 4.5.5. Let A be an algebra and V = V_1 ⊕ · · · ⊕ V_m for simple left modules V_i.
Then every submodule U ⊆ V has a complementary submodule of the form V_I = ⊕_{i∈I} V_i,
I ⊆ {1, . . . , m}.

Proof. Let I ⊆ {1, . . . , m} be maximal such that U ∩ V_I = {0}. We claim that V = U ⊕ V_I.
If not, for some i, we have V_i ⊄ U + V_I. Since V_i is simple, V_i ∩ (U + V_I) = {0} and hence
U ∩ (V_i + V_I) = {0}, contradicting maximality.

Corollary 4.5.6. Every submodule and quotient module of a finite-dimensional semisimple


left module is also semisimple. In the notation of the proposition, they are all isomorphic to
VI for some I.

Proof. Use the notation of the proposition. We need to show that U and V /U are semisimple.
The composition VI → V → V /U is an isomorphism, since V = VI ⊕U . But VI is semisimple.
Hence V /U is semisimple. By the same reasoning, U → V → V /VI is an isomorphism. By
the first statement, V/V_I is semisimple, and isomorphic to V_J for some J (we can take J = I^c,
the complement of I).

Lemma 4.5.7. Let V be an n-dimensional left module over an algebra A (with n finite).
Then V is a quotient module of A^n.

Proof. Let v_1, . . . , v_n be a basis. Then we obtain a surjective module homomorphism A^n → V
by (a_1, . . . , a_n) ↦ ∑_{i=1}^{n} ρ_V(a_i)v_i.
Proof of Theorem 4.5.1. The implication (ii) ⇒ (i) is obvious, so we only have to show (i)
⇒ (ii). Assume (i). Let V be a finite-dimensional module. It is a quotient of An for some
n ≥ 1 by Lemma 4.5.7. Since An is semisimple, the result follows from Corollary 4.5.6.
We can apply this to give another proof of Theorem 4.4.1:

Lemma 4.5.8. The matrix algebra Mat_n(C), as a module over itself, is isomorphic to (C^n)^n.

Proof. Let 1 ≤ i ≤ n and let V_i ⊆ Mat_n(C) be the subspace of matrices which are zero
outside the i-th column. Then V_i is a submodule isomorphic to C^n, since left multiplication
on V_i is identical to left multiplication on a single column. And Mat_n(C) = ⊕_{i=1}^{n} V_i.

Proof of Theorem 4.4.1. Let V be a finite-dimensional left module over A = Mat_n(C). By
Lemma 4.5.8, A^m ≅ (C^n)^{mn} for all m ≥ 1. The result follows from Theorem 4.5.1.
Next we show that all finite-dimensional semisimple algebras are sums of matrix algebras:
Theorem 4.5.9 (Artin-Wedderburn). The following properties are equivalent for a finite-
dimensional algebra A:
(i) A is semisimple;
(ii) A is isomorphic to a direct sum of matrix algebras, A ≅ ⊕_{i=1}^{m} Mat_{n_i}(C);
(iii) There is a full set V_1, . . . , V_m of simple left A-modules and the map Φ : A → ⊕_{i=1}^{m} End(V_i)
is an isomorphism.
Corollary 4.5.10. If A is semisimple, and V1 , . . . , Vm a full set of simple left A-modules,
then dim A = (dim V1 )2 + · · · + (dim Vm )2 .
To prove the theorem we need the following lemmas:
Lemma 4.5.11. There is an algebra isomorphism ι : End_A(A) ≅ A^op, ϕ ↦ ϕ(1).
Proof. Every ϕ ∈ EndA (A) satisfies ϕ(a) = aϕ(1) for all a ∈ A, and hence ϕ is uniquely
determined by ϕ(1), which can be arbitrary. This proves that ι is a linear isomorphism.
Then
ι(ϕ ◦ ψ) = ϕ ◦ ψ(1) = ϕ(ψ(1) · 1) = ψ(1)ϕ(1) = mAop (ι(ϕ), ι(ψ)). (4.5.12)

Lemma 4.5.13. For arbitrary left modules (V1 , ρ1 ), (V2 , ρ2 ), and (W, ρW ), we have linear
isomorphisms
HomA (V1 ⊕ V2 , W ) ∼
= HomA (V1 , W ) ⊕ HomA (V2 , W ), (4.5.14)
HomA (W, V1 ⊕ V2 ) ∼= HomA (W, V1 ) ⊕ HomA (W, V2 ). (4.5.15)
We omit the proof, as it is the same as for Lemma 2.15.3.
Proof of Theorem 4.5.9. First, (iii) implies (ii). Next, (ii) implies (i) by Lemma 4.5.8, by
taking direct sums. Assume (i). Write A ≅ ⊕_{i=1}^{m} V_i^{r_i}, a direct sum of simple submodules
with r_i ≥ 1 and the V_i pairwise nonisomorphic. Now let us apply End_A(−). By the lemmas
and Schur's Lemma, we obtain

A^op ≅ ⊕_{i=1}^{m} End_A(V_i^{r_i}) ≅ ⊕_{i=1}^{m} Mat_{r_i}(C).    (4.5.16)

Now we obtain (ii) by taking the opposite algebras of both sides and applying Example
4.1.24.
Finally we show (ii) implies (iii). Theorem 4.5.1 and Lemma 4.5.8 (or simply Theorem
4.4.1 along with Exercise 4.2.13) show that the simple left modules over ⊕_i Mat_{n_i}(C) are
indeed the C^{n_i}, which have dimension n_i. Thus V_i := C^{n_i} form a full set of simple left
modules, and End(V_i) = Mat_{n_i}(C).
modules, and End(Vi ) = Matni (C).

4.6 Character theory
Definition 4.6.1. Let (V, ρV ) be a finite-dimensional representation of an algebra A. The
character χV : A → C is defined by χV (a) = tr ρV (a).

Remark 4.6.2. As in the case of groups, characters are not arbitrary functions. Indeed,
χV (ab) = χV (ba), which shows that χV lives in the space of Hochschild traces on A,

A∗tr := {f : A → C | f is linear and f (ab − ba) = 0, ∀a, b ∈ A}. (4.6.3)

We will see in Exercise 4.6.5 below that, for a semisimple algebra, the characters of simple
left modules form a basis for this space. In the case that A = C[G], the space A∗tr identifies
with the space of class functions (Example 4.6.4 below), so this reduces to Corollary 3.5.3.(ii)
(except, there is no orthonormality statement anymore, as we have no inner product in the
setting of algebras).

Example 4.6.4. If A = C[G] is a group algebra, then first of all A^∗ ≅ Fun(G, C), via the
map f ↦ f|_G. Next, A^∗_tr ⊆ A^∗ identifies under this isomorphism with the subspace of class
functions: indeed, f(ab) = f(ba) for all a, b ∈ C[G] if and only if f(gh) = f(hg) for all
g, h ∈ G, which by substituting gh^{-1} for g is true if and only if f(g) = f(hgh^{-1}) for all
g, h ∈ G.

Exercise 4.6.5. Show that, for A = Matn (C), then A∗tr = C · Tr, constant multiples of
the usual trace map. Hint: it is equivalent to show that, for all f ∈ A∗tr and every trace-
zero matrix a, then f (a) = 0. To see this, note that, for i 6= j, eij = eii eij − eij eii and
eii − ejj = eij eji − eji eij . Thus, if f ∈ A∗tr , then f (eij ) = 0 = f (eii − ejj ). Since every
trace-zero matrix a is a linear combination of the e_{ij} (i ≠ j) and the e_{ii} − e_{jj}, this implies that f(a) = 0.
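A numerical version of this computation for n = 2 (a sketch: the commutators span the trace-zero matrices, a space of dimension n^2 − 1, leaving a one-dimensional space of Hochschild traces):

    import numpy as np
    from itertools import product

    n = 2
    E = [np.eye(n)[:, [i]] @ np.eye(n)[[j], :]       # matrix units e_ij
         for i, j in product(range(n), repeat=2)]
    comms = [a @ b - b @ a for a, b in product(E, repeat=2)]

    # f in A*_tr is exactly a linear functional vanishing on all commutators,
    # so dim A*_tr = n^2 - rank(span of commutators) = n^2 - (n^2 - 1) = 1
    rank = np.linalg.matrix_rank(np.stack([c.ravel() for c in comms]))
    print(n * n - rank)                              # -> 1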

As before, we have:

Proposition 4.6.6. If V ∼
= W , then χV = χW .
Proof. It is a consequence of the fact that trace is invariant under conjugation (by an in-
vertible transformation). More precisely, if ρW (a) ◦ T = T ◦ ρV (a) for T invertible, then

tr ρW (a) = tr(T −1 ◦ ρV (a) ◦ T ) = tr(T ◦ T −1 ◦ ρV (a)) = tr ρV (a). (4.6.7)

Remark 4.6.8 (Non-examinable). The following results (as well as Exercise 4.6.5) could
also be deduced from similar results for group algebras of finite groups, since every matrix
algebra is a direct summand of a group algebra of a finite group (as in CW2, #5). But
we will give direct proofs. One reason to do this is that, in practice, finite-dimensional
semisimple algebras often arise where there is no finite group present, so it isn’t natural to
make use of one (even though it is true that one could always realise the semisimple algebra
as a summand of some group algebra of a finite group, for a “large enough” group).

Theorem 4.6.9. If a finite-dimensional algebra A is semisimple, then χV = χW implies
V ∼
= W.
The theorem follows from the following result of independent interest:

Theorem 4.6.10. Let A be a finite-dimensional semisimple algebra and (V1 , ρV1 ), . . . , (Vm , ρVm )
a full set of simple left modules. Then the characters χVi are linearly independent.
Lm ni
Proof. By Theorem 4.5.9, we can assume A = ⊕_{i=1}^m Mat_{n_i}(C) and V_i = C^{n_i}. Computing the trace, we get χ_{C^{n_i}}(a_1, ..., a_m) = tr a_i. Now if Σ_{j=1}^m λ_j χ_{C^{n_j}} = 0, then plugging in a_i = I and a_j = 0 for j ≠ i, we get n_i λ_i = 0. Thus λ_i = 0.
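
Concretely (a small numerical illustration under the assumption A = Mat_1(C) ⊕ Mat_2(C) ⊕ Mat_3(C), not from the notes), evaluating the characters at the test elements used in the proof gives a diagonal, hence invertible, matrix:

    import numpy as np

    dims = [1, 2, 3]   # A = Mat_1(C) + Mat_2(C) + Mat_3(C)
    m = len(dims)
    # Entry (i, j): chi_{C^{n_i}} evaluated at (0, ..., I_{n_j}, ..., 0),
    # which is n_i if i == j and 0 otherwise.
    M = np.array([[dims[i] if i == j else 0 for j in range(m)]
                  for i in range(m)])
    print(np.linalg.matrix_rank(M))   # 3: no nontrivial relation among the chi's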
Proof of Theorem 4.6.9. Let V_1, ..., V_m be a full set of simple left A-modules. Then V ≅ ⊕_{i=1}^m V_i^{r_i} and W ≅ ⊕_{i=1}^m V_i^{s_i} for some r_i, s_i ≥ 0. Taking characters, we get χ_V = Σ_{i=1}^m r_i χ_{V_i} and χ_W = Σ_{i=1}^m s_i χ_{V_i}. Now χ_V = χ_W implies, by linear independence of the χ_{V_i} (Theorem 4.6.10), that r_i = s_i for all i, and hence V ≅ W.
Theorem 4.6.11. For A a finite-dimensional semisimple algebra, with full set of simple left
A-modules V1 , . . . , Vm , the characters χVi form a basis for A∗tr .

Again, by Example 4.6.4, this theorem specialises to the statement of Corollary 3.5.3.(ii) without orthonormality, in the case A = C[G].
Proof of Theorem 4.6.11. By Theorem 4.5.9, A ≅ ⊕_{i=1}^m Mat_{n_i}(C). By Exercise 4.6.5, A∗_tr ≅ ⊕_{i=1}^m Mat_{n_i}(C)∗_tr ≅ C^m. Explicitly, every f ∈ A∗_tr has the form f(a_1, ..., a_m) = Σ_{i=1}^m λ_i Tr(a_i) for some λ_i ∈ C. In particular, dim A∗_tr = m. But m also equals the number of simple left modules, by Theorem 4.5.9. By Theorem 4.6.10, the characters χ_{C^{n_i}} are linearly independent, and since there are m of them, they form a basis of A∗_tr. The result then follows from Theorem 4.5.9.

Remark 4.6.12. For a non-semisimple algebra, it is no longer true that the (still necessarily linearly independent) characters of simple left A-modules form a basis for A∗_tr. Indeed, suppose that A is commutative: ab = ba for all a, b ∈ A. Then A∗_tr = A∗. But in general there are fewer than dim A∗ simple left A-modules. For example, for A = C[ε]/(ε²) (Example 4.1.7), there is only one simple left module up to isomorphism, C · ε, but dim A = 2 > 1.

Remark 4.6.13. As noted before, we cannot form a character table for a general algebra (even if it is finite-dimensional and semisimple) since we don't have a distinguished collection of elements to take traces of. Indeed, for many finite groups G, H, the character tables are different (so G ≇ H), but nonetheless there exists an isomorphism ϕ : C[G] ≅ C[H]; necessarily, ϕ does not send G to H. Via ϕ, the characters C[G] → C, C[H] → C of irreducible representations of G and H agree, but since ϕ does not send G to H, this does not contradict the character tables being different. For example, C[Z/4] ≅ C⁴ ≅ C[Z/2 × Z/2], although Z/4 and Z/2 × Z/2 have different character tables.

Remark 4.6.14. As in Proposition 3.3.4, with the same proof, we have, for V, W finite-
dimensional left modules over a general algebra A,

χV ⊕W = χV + χW . (4.6.15)

We can’t literally do the same thing for V ∗ and V ⊗ W , however, since these are not in
general left modules over A when V and W are.
Rather, V∗ is a right module over A, given by dualising endomorphisms: the map ρ_{V∗} : A^op → End(V∗) is defined by ρ_{V∗}(a) = ρ_V(a)∗. This is an algebra homomorphism since, as in Example 4.1.25, the dualisation map End(V) → End(V∗) is anti-multiplicative, i.e., (S ∘ T)∗ = T∗ ∘ S∗. In other words, we have an algebra isomorphism End(V)^op ≅ End(V∗), and therefore we get an algebra homomorphism A^op → End(V)^op → End(V∗). In any case, tr(S∗) = tr(S), so if we think of V∗ as a right module over A, then χ_{V∗} = χ_V, rather than getting complex conjugates as we did in the group case. The reason we had complex conjugates there is that we also used the inversion map G^op ≅ G, inducing C[G]^op ≅ C[G], to turn V∗ back into a left module over C[G].
As for V ⊗ W, there is no way to make this into any sort of module over A in general. However, if (V, ρ_V) is a module over A and (W, ρ_W) a module over B, then V ⊠ W is a module over A ⊗ B with ρ_{V⊠W}(a ⊗ b) = ρ_V(a) ⊗ ρ_W(b). Then as before, χ_{V⊠W}(a ⊗ b) = χ_V(a)χ_W(b).

Remark 4.6.16. If V is a finite-dimensional left module and W a submodule, then we can take a basis C of W and extend it to a basis B of V. In this basis, ρ_V^B(a) is block upper-triangular of the form

    ( ρ_W^C(a)    ∗                 )
    ( 0           ρ_{V/W}^{B\C}(a)  ),

where we view B \ C as a basis for V/W via the images b + W for b ∈ B \ C. As a consequence we deduce the more general identity:

χ_V = χ_W + χ_{V/W}.    (4.6.17)
The same identity is valid for group representations (by the same argument, or simply by
letting the algebra be the group algebra C[G]).
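
To see (4.6.17) in the smallest non-semisimple example, here is a brief Python sketch (illustrative only): the algebra A = C[ε]/(ε²) of Remark 4.6.12 acting on itself by left multiplication, with submodule W = C · ε.

    import numpy as np

    # Left multiplication by a + b*eps on A = C[eps]/(eps^2), in the ordered
    # basis (eps, 1), so that the submodule W = C.eps comes first:
    # (a + b*eps).eps = a*eps,  (a + b*eps).1 = b*eps + a*1.
    def rho(a, b):
        return np.array([[a, b],
                         [0, a]], dtype=complex)

    a, b = 2.0, 3.0
    M = rho(a, b)
    chi_V = np.trace(M)     # character of V = A itself
    chi_W = M[0, 0]         # top-left block: the action on W = C.eps
    chi_Q = M[1, 1]         # bottom-right block: the action on V/W
    assert np.isclose(chi_V, chi_W + chi_Q)    # (4.6.17)

In particular, χ_A(a + bε) = 2a vanishes on ε, so the single simple character cannot span A∗_tr = A∗, matching Remark 4.6.12.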

4.7 The center


Related to the characters of an algebra is its center:

Definition 4.7.1. The center Z(A) of an algebra A is the collection of elements z ∈ A such
that za = az for all a ∈ A.

It is immediate that this is preserved by isomorphisms:

Proposition 4.7.2. If ϕ : A → B is an isomorphism of algebras, then it restricts to an isomorphism ϕ|_{Z(A)} : Z(A) → Z(B) of the centers.

Now we turn to our main examples: the group algebra and the matrix algebra.
Proposition 4.7.3. Let G be a finite group. The center of the group algebra, Z(C[G]), consists of the elements Σ_{g∈G} a_g g such that the function g ↦ a_g is a class function (a_{hgh⁻¹} = a_g for all g, h ∈ G). Its dimension equals the number of conjugacy classes of G. A basis is given by (2.18.18).

Remark 4.7.4. Under the isomorphism C[G] → Fun(G, C) assigning to v = Σ_{g∈G} a_g g the element ϕ_v, ϕ_v(g) = a_g, the center maps to the class functions. So we can identify the center with the class functions in this way.
Proof of Proposition 4.7.3. Note that z ∈ Z(C[G]) if and only if z = gzg⁻¹ for all g ∈ G. Thus Z(C[G]) = (C[G], ρ_ad)^G, the invariants under the adjoint action. The result follows from Lemma 2.18.17.
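
As a concrete sanity check (an illustrative sketch, not part of the notes), one can verify Proposition 4.7.3 for G = S_3 by brute force: the three conjugacy-class sums commute with every group element, hence are central.

    from itertools import permutations
    from collections import defaultdict

    # Elements of C[S_3] as dicts {permutation: coefficient};
    # a permutation s is the tuple (s(0), s(1), s(2)).
    G = list(permutations(range(3)))

    def compose(s, t):                  # (s o t)(i) = s(t(i))
        return tuple(s[t[i]] for i in range(3))

    def inverse(s):
        inv = [0, 0, 0]
        for i in range(3):
            inv[s[i]] = i
        return tuple(inv)

    def multiply(x, y):                 # product in C[S_3]
        out = defaultdict(float)
        for g, cg in x.items():
            for h, ch in y.items():
                out[compose(g, h)] += cg * ch
        return dict(out)

    # Conjugacy classes and their sums, as in Proposition 4.7.3.
    classes = {frozenset(compose(compose(h, g), inverse(h)) for h in G)
               for g in G}
    class_sums = [{g: 1.0 for g in c} for c in classes]
    assert len(class_sums) == 3         # three conjugacy classes in S_3

    for z in class_sums:
        for g in G:
            assert multiply(z, {g: 1.0}) == multiply({g: 1.0}, z)  # z central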
Proposition 4.7.5. Let V be a finite-dimensional vector space. Then Z(End(V )) = C · I,
the scalar multiples of the identity endomorphism.
Proof. Since End(V ) ∼ = Matdim V (C), this can be done explicitly with matrices (left as an
exercise). We give a proof with representation theory. By Example 4.2.8, V is a simple left
module over End(V ). Let A := End(V ). If z ∈ Z(A), then ρV (z) : V → V is A-linear:
ρV (z) ◦ ρV (a) = ρV (a) ◦ ρV (z). By Schur’s Lemma (Lemma 4.2.5), ρV (z) is a multiple of the
identity. But the map ρV : A = End(V ) → End(V ) is the identity. So z is also a multiple of
the identity.
Since End(V) ≅ Mat_{dim V}(C), we immediately conclude:
Corollary 4.7.6. The center of the n × n matrix algebra, Z(Matn (C)), consists of all scalar
matrices, C · I.
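
For the explicit matrix computation left as an exercise above, here is a minimal numerical sketch (illustrative only, with n = 3 assumed): it computes the dimension of the solution space of ZE = EZ over all matrix units E, using the row-major vectorisation identity vec(EZ − ZE) = (E ⊗ I − I ⊗ Eᵀ) vec(Z).

    import numpy as np

    n = 3
    blocks = []
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n)); E[i, j] = 1.0
            # commutator with E as a linear map on Z, vectorised (row-major)
            blocks.append(np.kron(E, np.eye(n)) - np.kron(np.eye(n), E.T))
    A = np.vstack(blocks)
    svals = np.linalg.svd(A, compute_uv=False)
    print((svals < 1e-10).sum())   # 1: only scalar matrices commute with all E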
Putting these results together, we obtain:
Corollary 4.7.7. The isomorphism Φ : C[G] → ⊕_{i=1}^m End(V_i) restricts to an isomorphism Φ|_{Z(C[G])} : Z(C[G]) ≅ ⊕_{i=1}^m C · I_{V_i}.

Proof. This is a direct consequence of Theorem 4.3.2 and Propositions 4.7.2 and 4.7.5.
Applying Proposition 4.7.3, we obtain another (although similar) proof of Theorem 2.18.2: the dimension of the source, Z(C[G]), equals the number of conjugacy classes, whereas the dimension of the target is the number of irreducible representations.
Corollary 4.7.8. The following are equivalent:
(i) a ∈ C[G] is central;
(ii) a = Σ_{g∈G} λ_g g where λ_g = λ_{hgh⁻¹} for all g, h ∈ G;

(iii) For every irreducible representation (V, ρV ), the transformation ρV (a) is a scalar mul-
tiple of the identity.

Proof. The equivalence of (i) and (ii) is Proposition 4.7.3. The equivalence between (i) and
(iii) is a consequence of Corollary 4.7.7.

Remark 4.7.9. Just like our proof of Theorem 2.18.2 before, the equivalence between parts (ii) and (iii) of Corollary 4.7.8 can be seen without using algebras at all. Indeed, instead of taking the center, as in the proof of Theorem 2.18.2, we can take G-invariants. We get that an element of C[G] is G-invariant if and only if its image under Φ : C[G] → ⊕_{i=1}^m End(V_i) is G-invariant, i.e., lies in (⊕_{i=1}^m End(V_i))^G = ⊕_{i=1}^m End_G(V_i) = ⊕_{i=1}^m C · I_{V_i}.

Remark 4.7.10. Notice that, for a semisimple finite-dimensional algebra A, there is a close
resemblance between Z(A) and A∗tr . Indeed, they are both m-dimensional vector spaces,
where m is the number of nonisomorphic simple left A-modules. In fact we can write an
isomorphism:
A∗tr → Z(A)∗ , f 7→ f |Z(A) . (4.7.11)
This property of the center being dual to the Hochschild traces is a general property one
sees for what are called Frobenius or Calabi–Yau algebras, which are active subjects of
mathematical research.

4.8 Projection operators


Given a representation ρ_W : G → GL(W) of a group G, we continue to let ρ̃_W : C[G] → End(W) denote the extension to C[G].

Theorem 4.8.1. Let (V, ρ_V) be an irreducible representation of a finite group G. Consider the element

P_V := (dim V / |G|) Σ_{g∈G} χ_V(g⁻¹) g ∈ C[G],    (4.8.2)

where χ_V(g⁻¹) is the complex conjugate of χ_V(g). Then for every finite-dimensional representation (W, ρ_W) of G, the transformation ρ̃_W(P_V) : W → W is a G-linear projection with image the sum of all subrepresentations of W isomorphic to V.

Example 4.8.3. If (V, ρ_V) = (C, 1) is the trivial representation, we obtain the element P_C = |G|⁻¹ Σ_{g∈G} g, which is often called the symmetriser element. Note that ρ̃_W(P_C) = |G|⁻¹ Σ_{g∈G} ρ_W(g) is the projection we found long ago in (2.14.11).

Example 4.8.4. For a nontrivial example, let (V, ρ_V) = (C−, ρ−) be the sign representation of S_n (so C− = C and ρ−(σ) = sign(σ)). Then P_{C−} = (1/n!) Σ_{σ∈S_n} sign(σ) σ. This element is also called the antisymmetriser element.

Exercise 4.8.5. Verify explicitly that the operator ρ̃_W(P_{C−}) = (1/n!) Σ_{σ∈S_n} sign(σ) ρ_W(σ) is a G-linear projection W → W whose image is the collection of all vectors w ∈ W such that ρ_W(σ)(w) = sign(σ)w for all σ.
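
To make Theorem 4.8.1 tangible before the proof, here is a minimal numpy sketch (an illustration under assumed conventions, not part of the notes) for the permutation representation of S_3 on C³, which decomposes as trivial ⊕ standard. Since all characters of S_3 are real, χ_V(g⁻¹) = χ_V(g) here.

    import numpy as np
    from itertools import permutations

    # Permutation representation of S_3 on C^3: rho(s) sends e_j to e_{s(j)}.
    G = list(permutations(range(3)))

    def rho(s):
        M = np.zeros((3, 3))
        for j in range(3):
            M[s[j], j] = 1.0
        return M

    def sgn(s):                     # sign(s) = det of the permutation matrix
        return round(np.linalg.det(rho(s)))

    # (dimension, character) of the three irreducibles of S_3 (all real):
    chars = {
        "trivial":  (1, lambda s: 1.0),
        "sign":     (1, lambda s: float(sgn(s))),
        "standard": (2, lambda s: sum(s[j] == j for j in range(3)) - 1.0),
    }

    for name, (dim, chi) in chars.items():
        # P_V = (dim V / |G|) * sum_g chi_V(g^{-1}) rho(g);  chi(g^{-1}) = chi(g)
        P = (dim / len(G)) * sum(chi(s) * rho(s) for s in G)
        assert np.allclose(P @ P, P)                                # projection
        assert all(np.allclose(P @ rho(s), rho(s) @ P) for s in G)  # G-linear
        print(name, round(np.trace(P)))   # ranks 1, 0, 2

In particular, the sign projection is zero: C³ contains no copy of the sign representation, consistent with C³ ≅ trivial ⊕ standard.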

Proof of Theorem 4.8.1. The element P_V is invariant under the adjoint action, since χ_V is a class function: χ_V(g) = χ_V(hgh⁻¹) for all g, h ∈ G. Therefore P_V ∈ Z(C[G]) by Proposition 4.7.3.
First assume that W is irreducible. By Corollary 4.7.7, ρ̃_W(P_V) = λ_{V,W} I for some λ_{V,W} ∈ C. Let us compute it. Taking traces,

λ_{V,W} dim W = (dim V / |G|) Σ_{g∈G} χ_V(g⁻¹) χ_W(g) = dim V ⟨χ_W, χ_V⟩.    (4.8.6)

By orthonormality of irreducible characters,

λ_{V,W} = 1 if V ≅ W, and λ_{V,W} = 0 otherwise.    (4.8.7)

Now let W be a general finite-dimensional representation. By Maschke's theorem, W = ⊕_{i=1}^m W_i for some irreducible subrepresentations W_i. For each W_i, we get that ρ̃_{W_i}(P_V) is the identity or zero, depending on whether W_i ≅ V or not. Hence ρ̃_W(P_V) is the projection onto the sum of those summands isomorphic to V. This image contains all subrepresentations isomorphic to V, since ρ̃_W(P_V)(w) = w whenever w is contained in a subrepresentation isomorphic to V (apply the irreducible case to that subrepresentation). The opposite containment is clear. Therefore ρ̃_W(P_V) is the projection onto the sum of all subrepresentations isomorphic to V.

Remark 4.8.8. By Remark 4.7.9, we don't need to use algebras to prove Theorem 4.8.1: simply observe that P_V is G-invariant for the adjoint action and apply Remark 4.7.9. We could therefore have included this result in Section 3.

4.9 General algebras: linear independence of characters


Finally, we demonstrate how the results on finite-dimensional semisimple algebras actually
imply results for general algebras, not even finite-dimensional. Actually it is surprising how
much one can deduce in the general case from the semisimple case.

Theorem 4.9.1. Let A be an algebra and (V_1, ρ_1), ..., (V_m, ρ_m) pairwise nonisomorphic simple finite-dimensional left A-modules. Then the map Φ : A → ⊕_{i=1}^m End(V_i) given by Φ(a) = (ρ_1(a), ..., ρ_m(a)) is surjective, and the characters χ_{V_1}, ..., χ_{V_m} : A → C are linearly independent.

Proof. Let B := im Φ. Then V_1, ..., V_m are all left modules over B, via ρ'_{V_i} : B → End(V_i), given by the composition of the inclusion B ↪ ⊕_{j=1}^m End(V_j) with the projection to the factor End(V_i). By definition, for all a ∈ A, ρ'_{V_i}(Φ(a)) = ρ_{V_i}(a). Hence, the condition for U ⊆ V_i to be an A-submodule, i.e., preserved by ρ_{V_i}(a) for all a ∈ A, is equivalent to being a B-submodule, i.e., preserved by ρ'_{V_i}(Φ(a)) for all a ∈ A. Similarly, the condition for a linear map V_i → V_j to be A-linear is equivalent to the condition that it is B-linear. Thus, since the V_i are simple and mutually nonisomorphic as A-modules, the same holds as B-modules. As a left B-module, End(V_i) ≅ V_i^{dim V_i}, which is semisimple. Therefore ⊕_{i=1}^m End(V_i) is itself a semisimple B-module. By Corollary 4.5.6, this implies that B ⊆ ⊕_{i=1}^m End(V_i) is semisimple as a left B-module. By Theorem 4.5.9, this implies that B is semisimple. As the V_1, ..., V_m are mutually nonisomorphic simple left B-modules, they can be extended to a full set of simple left B-modules (by Theorem 4.5.9). But dim B ≤ (dim V_1)² + · · · + (dim V_m)², which by Corollary 4.5.10 implies that we have equality and the V_1, ..., V_m are already a full set of simple left B-modules. Thus B = ⊕_{i=1}^m End(V_i) and Φ is surjective, as desired.
For the statement on linear independence of characters, note that if Σ_{i=1}^m λ_i χ_{V_i} = 0 for χ_{V_i} = tr ∘ ρ_{V_i} : A → C, then the same relation holds for the characters χ'_{V_i} = tr ∘ ρ'_{V_i} : B → C, as ρ_{V_i}(a) = ρ'_{V_i}(Φ(a)) for all a ∈ A. But the characters χ'_{V_i} are linearly independent by Theorem 4.6.10. So λ_i = 0 for all i.
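
As a closing illustration (an assumed example, not from the notes), take A = C[x], which is infinite-dimensional and not semisimple: its finite-dimensional simple modules are the one-dimensional C_λ, on which x acts by the scalar λ ∈ C. For distinct λ_1, ..., λ_m, the map Φ(p) = (p(λ_1), ..., p(λ_m)) is surjective by Lagrange interpolation, and the characters χ_{C_{λ_i}}(p) = p(λ_i) are linearly independent, exactly as Theorem 4.9.1 predicts. A quick numerical check:

    import numpy as np

    lams = np.array([0.0, 1.0, 2.0])        # three distinct scalars lambda_i
    V = np.vander(lams, increasing=True)    # rows (1, l, l^2): Phi on 1, x, x^2
    print(np.linalg.matrix_rank(V))         # 3: Phi (restricted to deg < 3) is
                                            # onto C^3, so the evaluation
                                            # characters are independent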
