Year 1 MT Vacation Work, Vectors and Matrices
Your college tutors will normally set vacation work. However, below are a few suggestions on what you might
do to revise and consolidate the material as well as deepen your understanding.
• basic vector manipulations: addition of vectors, scalar multiplication, linear combinations, checking linear
independence, checking if vectors form a basis, finding coordinates of a vector relative to a basis
• scalar, vector and triple product: being able to manipulate expressions involving these products, definition
of length of vectors and angles between vectors, applications to geometry such as finding perpendicular
vectors
• lines and planes: describing lines and planes with vectors, Cartesian and vector form, how to set them up
and convert between them, calculating intersections of lines and planes, finding minimal distances from
lines/planes to points or between two lines in 3d
• basic matrix manipulations: addition and scalar multiplication of matrices, multiplication of matrices,
transpose and Hermitian conjugate
• more advanced matrix manipulations: row reductions, computing the rank of a matrix using row reduction,
computing the inverse of a matrix using row reduction or the co-factor method
• systems of linear equations: understanding the structure of solutions, being able to find solutions using
the various methods available: “explicit calculation”, row reduction on the augmented matrix, using the
inverse matrix, Cramer’s rule (be sure you can apply all these methods, but also identify the method which
works best and fastest for you on relatively small systems of linear equations), solving systems of linear
equations with parameters
• determinants: knowing basic properties (multi-linearity, anti-symmetry), the product law for determinants,
the determinant of the transpose matrix, working out determinants of small matrices and expansion along a row
or column, the determinant as a criterion for invertibility of a matrix, application to the inversion of matrices
(co-factor method) and to solving systems of linear equations (Cramer’s rule).
• scalar product: idea of an orthonormal basis, carrying out the Gram-Schmidt procedure, definition and
interpretation of orthogonal and unitary matrices, working with 2d rotations
• eigenvectors and eigenvalues: computing eigenvalues and eigenvectors for specific matrices and using these
to diagonalise the matrices, application to computing functions of matrices and quadratic forms (a small
worked example follows below)
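As a quick reminder of how the last point fits together, here is a small worked example (the matrix is chosen
purely for illustration):

A = [ 2  1
      1  2 ] ,   det(A − λ I) = (2 − λ)^2 − 1 = 0   ⇒   λ1 = 3 , λ2 = 1 ,

with normalised eigenvectors v1 = (1, 1)/√2 and v2 = (1, −1)/√2. Collecting these into the orthogonal matrix
P = (v1 v2) gives A = P diag(3, 1) P^T, so that, for example, A^n = P diag(3^n, 1) P^T and the quadratic form
x^T A x becomes 3 u1^2 + u2^2 in the rotated coordinates u = P^T x.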
2 A Mini-Project
This project explores some of the algorithmic and computational aspects of the subject. As a physicist,
being computer-literate is part of the standard repertoire, so if you have had little experience so far this is an
opportunity to make a start.
The basic task is: Write a code in an advanced programming language – preferably C or C++ – which
calculates the rank of a matrix of arbitrary size (and not necessarily square) using Gaussian elimination. To
avoid having to deal with tricky numerical issues, work over the field F_p = {0, 1, . . . , p − 1}, where p is a prime
number. In this field, addition and multiplication of two integers k, l are defined by

k ⊕ l = (k + l) mod p ,    k ⊗ l = (k · l) mod p .    (1)

Here, k mod p denotes the remainder of a division of k by p. So, in practice, your matrices will contain integers
of limited size (from 0 to p − 1) and whenever an addition or a multiplication leads to a result ≥ p or < 0 it is
brought back into the range 0 to p − 1 by the mod operation in Eq. (1). Keep the prime p as an arbitrary but
fixed constant in your program.
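For orientation, here is a minimal sketch of how the arithmetic of Eq. (1) could be implemented in C; the
function names and the choice p = 7 are illustrative only, with the prime kept as a fixed constant as suggested
above.

#include <stdio.h>

/* The prime p, kept as a fixed constant; p = 7 is only an example. */
#define P 7

/* Reduce an integer into the range 0..P-1; also handles negative intermediate results. */
static int mod_p(long x)
{
    long r = x % P;
    return (int)(r < 0 ? r + P : r);
}

/* Addition and multiplication in F_p, as defined in Eq. (1). */
static int add_p(int k, int l) { return mod_p((long)k + l); }
static int mul_p(int k, int l) { return mod_p((long)k * l); }

int main(void)
{
    /* Spot checks in F_7: 5 + 4 = 2 and 5 * 4 = 6. */
    printf("%d %d\n", add_p(5, 4), mul_p(5, 4));
    return 0;
}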
Some steps along the way are:
• If you don’t have a C compiler installed on your computer yet, this is a good time to do so. Free, open-
source compilers, such as the GNU C compiler (gcc), are available. If you are unfamiliar with the language,
a quick, no-nonsense introduction is, for example, B. W. Kernighan and D. M. Ritchie, “The C Programming
Language”.
• Understand the arithmetic of the finite field F_p, with addition and multiplication as in Eq. (1), and
implement this computationally (along the lines of the sketch above).
• Go over Gaussian elimination and translate it into a detailed algorithm which can be programmed; one
possible sketch is given after this list. Your code should work for matrices of arbitrary size. Devise some
checks to make sure your code works correctly.
• Computing time should increase as n^α with the typical size n of the matrix and some characteristic power
α. Find the power α for your code, check that it conforms with general expectations and think about any
optimisations of your code which might decrease α.
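To illustrate what such an algorithm might look like, here is one possible sketch in C. It is self-contained
(it repeats the mod-p helpers from the sketch above), and all names, the choice p = 7 and the test matrix are
purely illustrative; your own design may well differ.

#include <stdio.h>

#define P 7                      /* the fixed prime; an illustrative choice only */

static int mod_p(long x) { long r = x % P; return (int)(r < 0 ? r + P : r); }
static int mul_p(int k, int l) { return mod_p((long)k * l); }

/* Rank of a rows x cols matrix over F_P, stored row-major in a[], by Gaussian elimination.
   The matrix is modified in place. */
static int rank_mod_p(int *a, int rows, int cols)
{
    int rank = 0;
    for (int col = 0; col < cols && rank < rows; col++) {
        /* find a row at or below position 'rank' with a non-zero entry in this column */
        int piv = -1;
        for (int r = rank; r < rows; r++)
            if (a[r * cols + col] != 0) { piv = r; break; }
        if (piv < 0)
            continue;                        /* no pivot in this column: move on */

        /* swap the pivot row into position 'rank' */
        for (int c = 0; c < cols; c++) {
            int t = a[rank * cols + c];
            a[rank * cols + c] = a[piv * cols + c];
            a[piv * cols + c] = t;
        }

        /* eliminate below the pivot; the update row_r := pv*row_r - e*row_rank avoids
           division, so only the operations of Eq. (1) are needed */
        int pv = a[rank * cols + col];
        for (int r = rank + 1; r < rows; r++) {
            int e = a[r * cols + col];
            if (e == 0) continue;
            for (int c = 0; c < cols; c++)
                a[r * cols + c] = mod_p((long)mul_p(pv, a[r * cols + c])
                                        - mul_p(e, a[rank * cols + c]));
        }
        rank++;
    }
    return rank;
}

int main(void)
{
    /* 3 x 4 test matrix over F_7; its third row is the sum of the first two, so the rank is 2 */
    int a[3 * 4] = { 1, 2, 3, 4,
                     2, 0, 1, 5,
                     3, 2, 4, 2 };
    printf("rank = %d\n", rank_mod_p(a, 3, 4));
    return 0;
}

To estimate the power α one could time rank_mod_p on random n × n matrices for a range of n (for instance
with clock() from <time.h>) and fit log(time) against log(n); for plain Gaussian elimination one expects α close
to 3.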
3 A Challenging Mini-Project
This project provides an opportunity to practise, in a more advanced setting, many of the techniques of linear
algebra which we have introduced. It is, of course, entirely voluntary and for your amusement only.
The underlying problem has arisen in the context of a “real-life” research project but it can be formulated in
elementary terms. The problem is non-trivial and currently unsolved, so don’t despair if you don’t get anywhere.
Consider m vectors x_r = (x_{r,0}, . . . , x_{r,n_r}) of variables x_{r,i}, each with dimension n_r + 1, where r = 1, . . . , m,
and the (ring of) polynomials R = C[x_1, . . . , x_m] in all so-defined variables. Then focus on the two vector
spaces V = R_k and W = R_l of polynomials with multi-degrees k and l in those variables. A polynomial has
multi-degree k = (k_1, . . . , k_m) if its total degree in the variables x_{r,0}, . . . , x_{r,n_r} is k_r. For example, let m = 2,
n_1 = n_2 = 1 and denote the two vectors by x_1 = (x0, x1) and x_2 = (y0, y1), so that R = C[x0, x1, y0, y1]. Then,
the multi-degree (2, 1)-polynomials are given by

V = C[x0, x1, y0, y1]_(2,1) = Span{x0^2 y0, x0 x1 y0, x1^2 y0, x0^2 y1, x0 x1 y1, x1^2 y1} .   (2)

Now consider polynomials of the form

f(c) = Σ_i c_i m_i ,   (3)

where the m_i are fixed polynomials (for example monomials) of multi-degree l − k and c_i ∈ C. There is a subtlety
if a component, l_r − k_r, of the multi-degree l − k happens to be negative. In this case, the coordinates x_{r,i} in f
should be replaced by the partial derivatives ∂_{x_{r,i}} = ∂/∂x_{r,i}. With this refinement the polynomials f(c) define
linear maps

f(c) : V → W   (4)
between our two polynomial vector spaces V and W, simply by virtue of mapping v ∈ V into f(c)v ∈ W. For
example, if V is the space in Eq. (2) and

W = C[x0, x1, y0, y1]_(1,2) = Span{x0 y0^2, x0 y0 y1, x0 y1^2, x1 y0^2, x1 y0 y1, x1 y1^2} ,   (5)

then the map f should have degree (−1, 1) and the most general such map can be written as

f(c) = c1 y0 ∂_{x0} + c2 y0 ∂_{x1} + c3 y1 ∂_{x0} + c4 y1 ∂_{x1} .   (6)
Pick a basis (for example of monomials) for each of V and W. Relative to these bases the maps f(c) can
be represented by matrices M(c) : C^dim(V) → C^dim(W) whose entries are linear in the coefficients c_i. As an
illustration, for the example defined by Eqs. (2), (5), (6), and with the monomial bases given there, the matrices
M(c) take the form
M(c) = [ 2c1   c2    0     0     0     0
         2c3   c4    0    2c1    c2    0
          0     0    0    2c3    c4    0
          0    c1   2c2    0     0     0
          0    c3   2c4    0    c1    2c2
          0     0    0     0    c3    2c4 ] .   (7)
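To see how these entries arise, apply f(c) from Eq. (6) to the first basis element x0^2 y0 of V:

f(c)(x0^2 y0) = c1 y0 ∂_{x0}(x0^2 y0) + c3 y1 ∂_{x0}(x0^2 y0) = 2c1 x0 y0^2 + 2c3 x0 y0 y1 ,

since the ∂_{x1} terms annihilate x0^2 y0; the coefficients with respect to the basis of W in Eq. (5) are precisely
the first column of M(c).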
This is of course just a small example. For higher polynomial degrees and more variables the matrices become
very large (think hundreds by hundreds). Also note that, although the above example has led to a square
matrix, in general M(c) will not be square.
The problem that needs to be solved concerns the rank of the matrix M(c). Of course this rank depends on
the values of the coefficients c_i. The main question is:
What are the special loci in c space where rk(M(c)) ≤ p for any given integer p = 0, 1, 2, . . . ?
Let me illustrate this question for the example (7). The “generic” rank of M(c) in Eq. (7) is six. Its rank is
less than or equal to 5 on the locus in c space defined by

det(M(c)) = −16 c2^3 c3^3 + 48 c1 c2^2 c3^2 c4 − 48 c1^2 c2 c3 c4^2 + 16 c1^3 c4^3 = 0 .   (8)
This locus is three-dimensional (in four-dimensional c space) and, it turns out, it can equivalently and more
simply be defined as the locus where c2 c3 − c1 c4 = 0. In fact, on this locus the rank equals 4. Further, a rank
lower than 4 can only be achieved in a trivial way by setting all c_i = 0, in which case the matrix M(c) is entirely
zero and has vanishing rank.
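Indeed, the determinant in Eq. (8) factorises as

det(M(c)) = −16 (c2 c3 − c1 c4)^3 ,

which makes the equivalence of the two descriptions of this locus explicit.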
Everything is reasonably straightforward for the small example above, but what if the size of the matrix
M(c) is 20 × 20 or even larger, depending on O(10) coefficients c_i? In this case the determinant (or anything
similar) is completely useless, as it would lead to 20! or more terms. Can anything be said in general about the
special loci in c space for such large cases? Is there an effective algorithm which allows their computation?