
Fundamentals of MIMO Wireless Communication

Prof. Suvra Sekhar Das


Department of Electronics and Communication Engineering
Indian Institute of Technology, Kharagpur

Lecture - 22
Important Results from Linear Algebra

In this particular lecture we will take a look at some of the important results from linear algebra, because in the lecture on the statistical properties of H we used some important results from linear algebra, and as we go further we will again be using linear algebra a lot. So, essentially one can consider linear algebra one of the prerequisites for MIMO communication; that is natural for any subject taken on this particular topic.

What we do is summarize some of the important results of linear algebra so that they are useful for you. Using these, you can always refer back to some important text. I can mention one important author who has contributed a lot and whose books are quite easy to read. This does not mean it is the only book available; there are many others in the market, up to your own choice, but the book by Gilbert Strang is quite useful and quite easy to follow.

(Refer Slide Time: 01:18)

So, we get started with linear algebra, and let us consider a vector. We will be dealing with vectors and will represent a vector with a single underline; it may appear as multiple lines on the slide, but it is a single underline that denotes a vector. Let us consider that it has two elements, v1 and v2, so v = (v1, v2). This is the basic representation of a vector. And you can have a scalar multiplication c times v: if c is a scalar, it results in (c v1, c v2), that is, each component is scaled. In two-dimensional space, as you are well aware, there are two axes, and the point has one component along each axis, so I would write (v1, v2) as this point, or you could denote it by the vector v = (v1, v2). That means there are two components along the two directions, and a scalar multiplication simply scales the vector to a certain size along the same direction.

And of course, we have to remember that −v + v is not equal to the number 0; it is equal to the zero vector, which means it is (0, 0). That is very important: we are talking about the point (0, 0), not simply the number 0; the component along each of the directions is 0. If we take the dot product of v with w, where v contains the two elements v1, v2 and w contains the two elements w1, w2, we get v · w = v1 w1 + v2 w2. And if v and w are orthogonal, you are going to get the answer 0. Orthogonal means the vectors are at 90 degrees, and the dot product is 0 because the dot product is basically taking the projection of one vector onto the other.

So, the dot product is basically the projection of one vector on another, and once you take the projection, if the vectors are at 90 degrees, then the projection is zero; you do not get any component along the projection, so the result is 0. If you take v dot v, that is basically the length squared, indicated as the norm squared, ||v||^2. You could also write the length of the vector v as the square root of its dot product with itself, ||v|| = sqrt(v · v), which could also be written as sqrt(sum over vi^2).

Basically, the square root of v1^2 + v2^2 is the length of the vector. You also have another thing called the unit vector. A unit vector is a vector of length 1, and the unit vector along v can be defined as v divided by ||v||, the length of the vector; note that this is the length, not the squared length, and length means the square root of v · v. And if you have two unit vectors, let us say u and v, and take their dot product, what you are going to get is cos θ. Since their lengths are 1, this directly gives the angle between them. So, this value lies between +1 and −1, and if they are at 90 degrees, then cos θ is cos 90°, which is 0. That is also shown by this particular result: |v · u| is less than or equal to 1, or v · u lies in the range −1 to +1.
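
As a quick numerical check of these definitions, here is a minimal sketch in Python with NumPy; the vectors v and w are made-up values, chosen so that they happen to be orthogonal:

```python
import numpy as np

# Two vectors in the plane, v = (v1, v2) and w = (w1, w2).
# The values are made up, chosen so that v and w are orthogonal.
v = np.array([3.0, 4.0])
w = np.array([-4.0, 3.0])

print(2 * v)             # scalar multiplication: (2*v1, 2*v2) -> [6. 8.]
print(v + (-v))          # the zero VECTOR (0, 0), not the number 0
print(np.dot(v, w))      # v1*w1 + v2*w2 = -12 + 12 = 0: orthogonal

length = np.sqrt(np.dot(v, v))    # ||v|| = sqrt(v . v) = 5
print(length, np.linalg.norm(v))  # same value via the built-in norm

u_v = v / np.linalg.norm(v)       # unit vectors along v and w
u_w = w / np.linalg.norm(w)
print(np.dot(u_v, u_w))           # cos(theta); 0 here, since theta = 90 degrees
```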

(Refer Slide Time: 05:45)

With this we proceed to write the Cauchy–Schwarz inequality, which will be used elsewhere as well: |v · w| ≤ ||v|| ||w||, that is, the dot product is bounded by the length of v times the length of w. This is used in results on the matched filter and in other places. Moving forward, suppose we have linear equations; say x + 2y + 3z = 6 is one linear equation, we have 2x + 5y + 2z = 4, and let us say 6x − 3y + z = 2. These linear equations can be written in matrix form: the first row of the coefficient matrix is (1, 2, 3), which comes from the first equation, then (2, 5, 2) from the second, and (6, −3, 1) from the third, and this matrix multiplies the column vector (x, y, z).

So, what do you have: if you carry out this product, the first row gives 1·x + 2·y + 3·z = 6, the second row reads 2x + 5y + 2z = 4, and finally the third row gives 6x − 3y + z = 2. So, writing this as a matrix equation, we could write A times some vector x equal to b, where b = (6, 4, 2) is the right-hand-side vector, x = (x, y, z) is the unknown vector, and A is the coefficient matrix. If you solve this, then you can find the values of x, y, and z. This could also be viewed column-wise: the vector (1, 2, 6) times x, plus the vector (2, 5, −3) times y, plus the vector (3, 2, 1) times z, equals the vector (6, 4, 2); we could also do it in this way.

That means we are talking about a combination of vectors which produces this particular vector: three column vectors, each multiplied by some component or weight, so that the weighted sum produces the right-hand-side vector. So, you could also view it in this form. In this context, the identity matrix I is 1 0 0, 0 1 0, 0 0 1; this is the 3 × 3 identity matrix. In general, I_n is the n × n matrix with ones on the diagonal and zeros everywhere else. And we definitely have I x = x; this is a very important result. Basically, you can clearly see that (1 0 0; 0 1 0; 0 0 1) times (x1, x2, x3) gives x1 from the first row; from the second row, 0·x1 + 1·x2 + 0·x3 = x2; and from the third row, 0·x1 + 0·x2 + 1·x3 = x3. So, I x = x; this is again a standard result.
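
As an illustration, here is a small sketch that solves the lecture's example system with NumPy; np.linalg.solve is used in place of elimination by hand:

```python
import numpy as np

# Coefficient matrix and right-hand side of the system above:
# x + 2y + 3z = 6, 2x + 5y + 2z = 4, 6x - 3y + z = 2.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 5.0, 2.0],
              [6.0, -3.0, 1.0]])
b = np.array([6.0, 4.0, 2.0])

x = np.linalg.solve(A, b)    # solves A x = b
print(x)
print(A @ x)                 # reproduces b, up to rounding

# Column view: b is a weighted combination of the columns of A.
print(A[:, 0] * x[0] + A[:, 1] * x[1] + A[:, 2] * x[2])

# Identity: I x = x.
print(np.eye(3) @ x)
```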

(Refer Slide Time: 09:10)

We move further to look at symmetric matrices. Symmetric matrices are those matrices which, if you take their transpose, give back the matrix itself. Basically, if I write the first row as a11, a12, a13, then for symmetry the second row must be a12, a22, a23 and the third row a13, a23, a33: a21 is the same as a12, a31 is the same as a13, and a32 is the same as a23. There is a diagonal, and the components on either side of it are the same, so when I transpose, each row becomes the corresponding column and the matrix is unchanged.

So, if you transpose it, it is the same as itself; that is a symmetric matrix. That is very useful, and we often come across such matrices, especially when we are taking H times H Hermitian; of course, that is Hermitian symmetric, which is one level beyond this. If A is a symmetric matrix, it can be decomposed into LDU, where L is a lower triangular matrix, D is a diagonal matrix (a diagonal matrix has only the elements d1, d2, … on the diagonal; the rest are all 0; diagonal matrices are very useful), and U is an upper triangular matrix. Diagonal entries are very useful in the sense that you have only very simple multiplications to perform. Lower and upper triangular matrices are also very useful, because you can solve equations with them in iterative substitution steps. If the matrix is diagonal, you get direct solutions.
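
As a sketch of this decomposition, the following uses SciPy's ldl routine on a made-up symmetric matrix; for a symmetric matrix the LDU factorization takes the form L D L^T, so only one triangular factor and the diagonal need to be computed:

```python
import numpy as np
from scipy.linalg import ldl

# A made-up symmetric matrix: A equals its own transpose.
A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])
print(np.allclose(A, A.T))          # True: symmetric

# For symmetric A the factorization takes the form L D L^T.
L, D, perm = ldl(A)
print(np.allclose(L @ D @ L.T, A))  # True: the factors reproduce A
print(np.diag(D))                   # the diagonal entries d1, d2, d3
```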

Next we move to the inverse of a matrix, represented by A^-1. If an inverse exists, then we have A^-1 A = I; that means if I multiply the inverse with the matrix, I get the identity matrix. That is the fundamental definition of what A^-1 should be, if it exists at all. And you could also say that if I have A^-1 times A x, basically if I am looking at the solution of an equation, I am going to get A^-1 times b; what I am writing over here is what I wrote there.

(Refer Slide Time: 12:01)

If I look at that particular expression A x = b and multiply A^-1 on both sides, I get A^-1 A x = A^-1 b. So, what do I get: A^-1 A is I, so I x = A^-1 b. And we have also said that I x = x, so basically x = A^-1 b. So, if the inverse exists, we have the solution to this set of equations as x = A^-1 b. And if the inverse exists, this is the unique solution to this particular problem as described by the set of equations. So, the inverse is very useful: if it exists, we can use it to solve a particular set of linear equations.
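
A minimal sketch of this derivation, reusing the same example system; the explicit inverse is formed only for illustration, since solve() is the usual route:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 5.0, 2.0],
              [6.0, -3.0, 1.0]])
b = np.array([6.0, 4.0, 2.0])

A_inv = np.linalg.inv(A)
print(np.allclose(A_inv @ A, np.eye(3)))  # A^-1 A = I

x = A_inv @ b                  # x = A^-1 b, as derived above
print(x)
print(np.linalg.solve(A, b))   # same answer; solve() avoids forming A^-1
```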

(Refer Slide Time: 12:55)

If a matrix is invertible, of course, we have said that A^-1 A = I, and A A^-1 is also equal to the identity matrix. And the truth is that there is only one inverse; it is unique, you do not have two different inverses for the same matrix. Therefore, when I compute x = A^-1 b, I get a single unique solution, so that is very true. Now suppose I take the equation A x = 0 instead of A x = b; this is a very particular case. Try to understand this: I have A as a matrix, all the entries on the right-hand side are 0, and the unknown is x = (x1, x2, x3).

In this case, one obvious way of getting A x = 0 is to put x = 0. Because A is a11 a12 a13, a21 a22 a23, a31 a32 a33, look at the first row of this equation: it says a11 x1 + a12 x2 + a13 x3 = 0. Now, this will be 0 if x1, x2, x3 are 0; if they are 0, all the rows will be 0. But now suppose x1, x2, x3 are non-zero, and yet the product goes to 0; that means there is a certain combination of x1, x2, x3 such that this whole product goes to 0, and the same with the other rows.

Then this vector x is said to lie in the null space of A. And all such solutions, one vector x, another vector x, that is, other combinations of x which lead to 0, all of these form the null space of A; when multiplied with this matrix, they turn out to be 0. And why is it called the null space? What is important to remember is that I cannot get back x by taking an inverse, because just imagine that I multiply by A^-1: what do I have, x equals something multiplied by 0. So, I cannot really get back x even if an inverse exists. No matrix can bring back x from 0; this is very important to understand. It is called the null space because, whatever the matrix does, x goes to 0, and we cannot recover x once we have got to the zero state. So, this is one of the important concepts that will be used.
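
Here is a hedged sketch of the null space idea using SciPy's null_space helper; the matrix is made up with a dependent row so that non-zero solutions of A x = 0 exist:

```python
import numpy as np
from scipy.linalg import null_space

# Made-up singular matrix: the third row is the sum of the first two,
# so the rows are dependent and A x = 0 has non-zero solutions.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

N = null_space(A)   # orthonormal basis for the null space of A
print(N)            # one basis vector here, proportional to (1, -2, 1)
print(A @ N)        # essentially zero: A sends these x to the 0 vector
```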

(Refer Slide Time: 15:57)

If we have a diagonal matrix, with diagonal entries d1, d2, …, dn and zeros elsewhere, the inverse is easily found: it is the diagonal matrix with entries 1/d1, 1/d2, …, 1/dn, and the rest are all zeros. This is very useful: if I have a diagonal matrix, finding the inverse is very easy, and that is why we would like to have diagonal matrices. If I have A^-1 and take the transpose of it, I could write it as (A^T)^-1. Now, if A is symmetric, that is, A = A^T and A^T = A, I could write this as A^-1. That means the inverse of a symmetric matrix is also symmetric; that is what we can see: (A^-1)^T = A^-1. Say A^-1 = B; so what we have is B^T = B.

So, I have taken A^-1 as B, and B^T = B; that means if a matrix is symmetric, its inverse is also symmetric, which is also very useful. Whenever we have a symmetric matrix, it is very useful. For a rectangular matrix R, R^T times R is a square matrix, and it is a symmetric matrix. It is symmetric because, clearly, if I take the transpose of R^T R, what do I get: R^T times (R^T)^T, and (R^T)^T is basically R. So, (R^T R)^T = R^T R; that is a symmetric matrix.
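
A short sketch of these two facts, with made-up values:

```python
import numpy as np

# Inverse of a diagonal matrix: invert each diagonal entry.
D = np.diag([2.0, 4.0, 5.0])
print(np.linalg.inv(D))        # diag(1/2, 1/4, 1/5)

# Made-up rectangular matrix R: R^T R is square and symmetric.
R = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])     # 3 x 2
G = R.T @ R                    # 2 x 2
print(np.allclose(G, G.T))     # True: (R^T R)^T = R^T R
```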

(Refer Slide Time: 18:04)

Next we have the subspace concept. A subspace of a vector space is a set of vectors, including the 0 vector, satisfying two conditions: the first is that the sum of any two vectors in the set is in the subspace, and the second is that any scalar times a vector in the set is in the subspace. Essentially, we can say that any linear combination c v + d w of two vectors in the subspace is in the subspace. So, combinations remain in the subspace; that is a vector subspace. And if c is 0 and d is 0, then this turns out to be 0, so the 0 vector is included in every subspace. If 0 is not contained in it, it is not a subspace.

Then there are many other spaces, the column space, the row space, and so on and so forth; I will not look into those, but will go into the determinant, which we use. The determinant is one of the very important things. For a square matrix, the determinant is a single number; this is very important, the determinant is not a matrix, it is a single number. It tells us whether the matrix is invertible or not: if the determinant is equal to 0, it implies the matrix is non-invertible, and if it is not equal to 0, it means it is invertible. So, if you calculate the determinant and find that it is 0, you will never be able to invert the matrix, because the inverse is the adjugate of the matrix divided by the determinant. So, this is very important.

If we have pivots, then the product of the pivots gives the determinant. This is also important. Another easy way to remember it: if the edges of a box are the rows of a matrix, then the volume of the box is given by the determinant of A. The determinant of A can be written as two straight lines around A, |A|, or as det A; these are the two ways of writing the determinant.
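
A quick numerical illustration of the determinant as an invertibility test, again with made-up matrices:

```python
import numpy as np

# The earlier example matrix: non-zero determinant, hence invertible.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 5.0, 2.0],
              [6.0, -3.0, 1.0]])
print(np.linalg.det(A))        # -77, non-zero

# A singular example: dependent rows give determinant 0.
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])     # second row = 2 * first row
print(np.linalg.det(B))        # 0, up to floating-point rounding
```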

(Refer Slide Time: 21:03)

Once we have studied determinants, we can go on, and what we see is that there is something called eigenvalues; these are very important. An eigenvalue is a very special value, and determinants will come back here again: if I have a matrix A and I multiply it with the vector which is known as the eigenvector, I get the vector back itself, scaled by the eigenvalue, that is, A x = λ x. This eigenvalue tells us whether, under the operation of this matrix, the vector has become something which is similar to the original vector.

Similar in the sense of whether it remains the same: if λ is equal to 1, there is no change; if it is positive and greater than 1, the vector has grown; if it is between 0 and 1, it has shrunk; if it is negative, it has changed direction. So, what it means basically is that this vector x is known as an eigenvector. For a given matrix, if we multiply with this certain vector, the vector remains unchanged; that means under this matrix operation the vector remains unchanged except for a special value along with it, so these are eigenvectors, and along with them we get eigenvalues. These are very special values such that when the matrix operates on the eigenvector, the vector remains as it is, with a certain scalar coefficient; the scalar coefficient tells us what has happened to that vector. If it is 1, nothing has happened to the vector; if it is negative, the direction has changed; if it is between 0 and 1, the vector becomes smaller than the original vector; and so on and so forth.

These eigenvalues can be found by solving det(A − λI) = 0. If you solve det(A − λI) = 0, you are going to get the eigenvalues, and once you get the eigenvalues, you can solve for the eigenvectors. These eigenvalues are very important because, if you multiply the eigenvalues together, the product of the eigenvalues gives you the determinant of the matrix A. So, that is why the product of eigenvalues is very important.

The other important fact about the determinant: if we have t·a, t·b in one row and c, d in the other, that means I am multiplying one row by a certain t, then the determinant equals t times the determinant of (a, b; c, d). And if I multiply the whole matrix by t, all the elements get multiplied, every row gets multiplied, and the determinant becomes the determinant of the original matrix multiplied by t to the power n. So, let us say det(2I) for the 2 × 2 identity: it is the determinant of (2, 0; 0, 2), because 2I is 2 times (1, 0; 0, 1). This is equal to 2^2 times the determinant of (1, 0; 0, 1), which is equal to 4, because the determinant of a 2 × 2 matrix is the product of the diagonal entries minus the product of the off-diagonal entries.

So, the determinant is also very important; we should try to understand determinants. The last points about determinants I would mention: the determinant of A transpose is equal to the determinant of A; and the determinant of A is 0 if there are dependent rows or dependent columns in the matrix, and if there is a zero row, then also the determinant is 0. So, these are some of the important cases that we have.
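
A small sketch checking these determinant rules numerically, with a made-up 2 × 2 matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
t, n = 2.0, A.shape[0]

print(np.linalg.det(t * A))            # det(tA) = t^n det(A) = 4 * (-2) = -8
print(t**n * np.linalg.det(A))         # same value

print(np.linalg.det(2 * np.eye(2)))    # det(2I) = 2^2 * det(I) = 4

print(np.linalg.det(A.T))              # det(A^T) = det(A) = -2
```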

(Refer Slide Time: 25:09)

Moving forward, let us look at the eigenvalues as we were saying; we have said that eigenvalues are very important, and we have seen them. We solve det(A − λI) = 0; that means A is the matrix which is given, the I matrix is known, and λ has to be found. So, you can solve this equation and find the lambdas. If you take A as (a11, a12; a21, a22), subtract λ times (1, 0; 0, 1), and take the determinant of that set equal to 0, you are going to get the solution for λ.

So, basically what you have here is the matrix (a11 − λ, a12; a21, a22 − λ). You take the determinant of this and set it equal to 0. You get a quadratic equation, so there are two lambdas: λ1 and λ2. Once I have got these, I can put A x = λ x; I know λ, so I put the two different lambda values one at a time, and I am going to get two different vectors, x1 and x2. In this way I get the eigenvectors as well as the eigenvalues related to A, which will be very useful in our future description.

Eigenvectors are again useful for factorizing A in terms of diagonal matrices, and when you are able to factorize it in terms of diagonals, you can really do a lot of things: you can calculate the determinant easily, you can do the inversion, and so on and so forth. So, these are some of the important things.
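
As a sketch, NumPy's eig solves det(A − λI) = 0 and returns the eigenvectors as well; the 2 × 2 matrix below is a made-up example:

```python
import numpy as np

# Made-up 2 x 2 matrix; eig solves det(A - lambda*I) = 0 numerically.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lams, X = np.linalg.eig(A)   # eigenvalues, and eigenvectors as columns of X
print(lams)                  # 5 and 2 for this matrix

for lam, x in zip(lams, X.T):
    print(np.allclose(A @ x, lam * x))   # A x = lambda x for each pair

print(np.prod(lams), np.linalg.det(A))   # product of eigenvalues = det(A) = 10
```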

(Refer Slide Time: 26:49)

So, now suppose we have this matrix A and we can diagonalize it in the form S^-1 A S = Λ, the eigenvalue matrix, where S contains the eigenvectors of A; in that case we could write A = S Λ S^-1. If I have it in this form, then it can be really very useful. And the important thing we should remember at this point concerns the λi's all being different.

So, if the λi's are different, that means when we solve this particular equation and get λ1 and λ2 to be different, then x1 and x2 will also be independent. So, if the λi's are different, then x1, x2, and so on up to all the eigenvectors are independent. And if they are independent, then you again have a lot of useful properties. If all of the λi's are not equal to 0, the matrix A is invertible, in the sense that the determinant is equal to the product of all the λi's. So, if they are all non-zero, the matrix is invertible; if any one of them is 0, the determinant is 0, which means it is non-invertible. And diagonalizability means that if all the λi's are different, you can diagonalize it like this. This is an important result.
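
Continuing the same made-up example, a sketch of the diagonalization A = S Λ S^-1:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lams, S = np.linalg.eig(A)    # distinct eigenvalues -> independent eigenvectors
Lam = np.diag(lams)           # the eigenvalue matrix

# A = S Lam S^-1, equivalently S^-1 A S = Lam.
print(np.allclose(S @ Lam @ np.linalg.inv(S), A))   # True
print(np.linalg.inv(S) @ A @ S)                     # diagonal, up to rounding
```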

(Refer Slide Time: 28:55)

So, if we have to diagonalize like this, then I need all the λi's to be different. If you have a symmetric matrix, that means A^T = A as we have seen, then you can get S such that S^-1 = S^T. And definitely, if S^-1 = S^T, then S^T S = I, because S^T S is basically S^-1 S, which is equal to I; this is the well-known result. This holds true.

And for a symmetric matrix, the eigenvalues λi are real; this is again another useful outcome. For such cases you could also choose the eigenvectors to be orthonormal, and if the eigenvectors are orthonormal, then S Λ S^-1 can be written as Q Λ Q^T. For a symmetric matrix, eigenvectors belonging to distinct eigenvalues are orthogonal, and in this case you can make them orthonormal by scaling them to unit length; that is the only difference that you are going to get.

And the next important result we are going to have: if all the λi's are greater than 0, then what you have is a positive definite matrix. If they are all greater than or equal to 0, it is a positive semi-definite matrix. These are two important terms which we will be using; with symmetric matrices we will be getting them quite often. The last thing that I need to mention before we close this particular session is the singular value decomposition.
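
A sketch of these results for a made-up symmetric matrix, using NumPy's eigh, which is the solver intended for symmetric or Hermitian input:

```python
import numpy as np

# Made-up symmetric matrix; eigh is the solver for symmetric/Hermitian input.
A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])

lams, Q = np.linalg.eigh(A)
print(lams)                                     # real eigenvalues
print(np.allclose(Q.T @ Q, np.eye(3)))          # orthonormal eigenvectors: Q^T Q = I
print(np.allclose(Q @ np.diag(lams) @ Q.T, A))  # A = Q Lam Q^T

print(np.all(lams > 0))   # all positive here, so A is positive definite
```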

(Refer Slide Time: 30:29)

Singular value decomposition: as we have said, you could write A x = λ x, where x is an eigenvector and λ is an eigenvalue of the matrix. Now consider σi as the singular values of A. If these are the singular values of A, you could write the matrix A times v1 equal to σ1 times the vector u1, and so on. Here the v's are the input (right) singular vectors, the u's are the output (left) singular vectors of the matrix, and the σ's are the singular values. Like this you are going to have r singular values, where r is the rank of the matrix; the rank of a matrix is the number of independent rows or columns, which is at most the minimum of the number of rows and columns. So, this is almost similar to the eigenvalue decomposition.

In this case you can factorize A like in the earlier case, where S was used to diagonalize: we had A = S Λ S^-1 as the diagonalization that we used; basically we said A S = S Λ, and if you multiply by S^-1 on the right, you get A = S Λ S^-1, which is this particular expression. So, here, in the same way, you could write A V = U Σ, where these are all matrices.

In other words, we could write A = U Σ V^H, because if I multiply A V = U Σ by V^H on the right, I get an identity matrix from V V^H, since V^H V = I and U^H U = I; these are true. I am writing Hermitian because the elements are complex. So, we can again diagonalize A with orthogonal (unitary) matrices U and V, with Σ containing the singular values of A, just as we were able to diagonalize using the eigenvalues of A. So, these are some of the results which we will be using in the study of MIMO communication systems. For example, when we studied the statistical properties of H, we used most of these results.
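
Finally, a sketch of the SVD on a made-up rectangular matrix with NumPy:

```python
import numpy as np

# Made-up rectangular matrix; rank r = 2 = min(rows, cols) here.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [1.0, 1.0]])

U, s, Vh = np.linalg.svd(A, full_matrices=False)
print(s)                      # singular values, largest first

V = Vh.conj().T
for i in range(len(s)):       # A v_i = sigma_i u_i for each singular pair
    print(np.allclose(A @ V[:, i], s[i] * U[:, i]))

# A = U Sigma V^H, with U^H U = I and V^H V = I.
print(np.allclose(U @ np.diag(s) @ Vh, A))
```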

Again, I would like to say that this is a very brief introduction, or summary, of results from linear algebra. I would strongly recommend keeping a reference book on linear algebra at hand whenever you are doing such a course.

Thank you.
