Rough Notes From Cafe

The document provides an overview of various concepts in linear algebra and MATLAB, including matrix operations, degree of freedom, LU factorization, determinants, and subspaces. It explains how to perform matrix manipulations in MATLAB, the conditions for invertibility, and the significance of linear independence and dependence. Additionally, it discusses the relationships between vectors, spans, and the properties of subspaces in relation to linear systems.


In MATLAB there is no bar for an augmented matrix; row operations don't need the bar.

A system of one equation in n variables is represented by an (n-1)-dimensional object.

Each time the solution set meets another equation of that dimension, the dimension decreases by 1.

This dimension can be called the degree of freedom.

The degree of freedom is represented by the parameters; more parameters mean more combinations.

In Ri + cRj, i must not equal j.

In MATLAB

Separate each element with a space (or comma) and separate each row with a semicolon:

R=[1,2,3;4,5,6;7,8,9]

and it shows the matrix.

To not show the matrix, end with a semicolon:

R=[1,2,3;4,5,6;7,8,9];

To clear all matrices:

>>clear

To combine two matrices with the same number of rows:

[a b]
R(3,:) = R(3,:) - R(1,:)

R(3,:) = R(3,:)/3

If the output is in decimals, you can change to rational format (fractions):

>> format rat   (if in format rat, make sure to change the format back before using irrational numbers)

>>R

If you want many decimal places:

>> format long

>>R

In MATLAB, use

>>syms a b

It ensures MATLAB does not try to find the values of a and b.

To substitute later, use

>> a=1, eval(A)

>>rref(ans)   to reduce the matrix to rref form

If there are unknowns in the matrix, do not use the rref command.

AB = BA when A and B are both diagonal matrices.

A^x of a diagonal matrix raises each diagonal entry to the power x.
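The two facts above can be checked numerically; a minimal NumPy sketch (the notes use MATLAB, so this Python version and its variable names are illustrative only):

```python
import numpy as np

# Sketch: diagonal matrices commute, and powers act entrywise on the diagonal.
A = np.diag([2.0, 3.0, 5.0])
B = np.diag([7.0, 1.0, 4.0])

print(np.allclose(A @ B, B @ A))   # diagonal matrices commute
print(np.allclose(np.linalg.matrix_power(A, 3),
                  np.diag([2.0**3, 3.0**3, 5.0**3])))   # each entry to the power 3
```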

The trivial solution is always a solution of a homogeneous system.

If a homogeneous system has a unique solution, that solution must be the trivial one.

Any solution of a homogeneous system can be multiplied by any scalar and remain a solution.

A matrix is invertible if and only if its rref is the identity matrix.

Non-square matrices are not invertible.

If A is a non-square matrix and AB = I,

then BA cannot equal I.

If you rref [A | I] and get the identity on the left side, then the right side B is the unique inverse.
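The rref-of-[A | I] method can be sketched in code; this is a NumPy illustration (the helper name `inverse_via_rref` is made up here, not from the notes):

```python
import numpy as np

# Sketch: Gauss-Jordan elimination on the augmented block [A | I];
# when the left half reaches the identity, the right half is the inverse.
def inverse_via_rref(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))   # partial pivoting
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                           # scale pivot row to 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]              # clear the rest of the column
    return M[:, n:]                                     # right half is the inverse

A = np.array([[2.0, 1.0], [5.0, 3.0]])
print(np.allclose(inverse_via_rref(A), np.linalg.inv(A)))   # True
```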

To find the inverse of a matrix in MATLAB, do

>>inv(A)

For the inverse of a matrix not to exist, its rref must have a zero row.

If you do row operations on an identity matrix, the result can never have a zero row.

In a non-square matrix:

If A has more columns than rows and a right inverse exists, the equation AB = I has infinitely many solutions for B.

If A has more rows than columns, there is no right inverse, since the rref of A has a zero row and is therefore not an identity matrix.

For LU FACTORIZATION

To find L, take all the elementary row operations performed on A and perform them on an identity matrix of the same size.

Then invert the result to find L.

Not every matrix is LU factorizable, but in this syllabus every matrix is LU decomposable.

>>[L U]=lu(sym(A))

to do LU decomposition.

Note U is in ref, not rref.

Ly = b: as L is a product of elementary matrices, it is invertible and the system is consistent.
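The elimination-based construction of L described above can be sketched in NumPy; this assumes no zero pivots arise (which the notes say holds for every matrix in this syllabus), and the function name is illustrative:

```python
import numpy as np

# Sketch of LU without pivoting: U records the elimination result,
# L records the multipliers used for each row operation.
def lu_no_pivot(A):
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for col in range(n):
        for row in range(col + 1, n):
            m = U[row, col] / U[col, col]   # multiplier for R_row - m * R_col
            L[row, col] = m                 # the operation, recorded in L
            U[row] -= m * U[col]            # eliminate below the pivot
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))          # True: A = LU
print(np.allclose(np.diag(L), 1.0))   # L is unit lower triangular, so det(L) = 1
```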
Adjoint entries are the cofactor of matrix minro but does not matter
which matrix minoe

Linear combination where you can reach when you are given 2
directions

If they are in the same line, the result is the multiple of v

The determinant is the cofactor expansion along any column or row.

If an entry of the matrix is 0, that term can be ignored.

The cofactor sign alternates: an entry one step off the diagonal gets a negative sign.

The product of the diagonal entries is the determinant of a triangular matrix.

Determinant by reduction: when there are a lot of non-zeros, use ref and then find the determinant.

When there is a zero row, use determinant by cofactor expansion along it; since all the entries are 0, the determinant is zero.
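Cofactor expansion as described above, skipping zero entries, can be written as a short recursion; a NumPy sketch (expansion along the first row only, for simplicity):

```python
import numpy as np

# Sketch: determinant by cofactor expansion along the first row,
# ignoring zero entries as the notes suggest.
def det_cofactor(A):
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        if A[0, j] == 0:
            continue                          # a zero entry contributes nothing
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)   # alternating signs
    return total

A = np.array([[1.0, 0.0, 2.0],
              [3.0, 4.0, 0.0],
              [5.0, 6.0, 7.0]])
print(np.isclose(det_cofactor(A), np.linalg.det(A)))   # True
```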

Large matrices generally involve some trick observations.

Det(A) = Det(A^T): the determinant is invariant under transpose.

So if two columns are the same, you can transpose and then two rows become the same.

Additionally, there exist elementary column operations:

Adding a multiple of one column onto another,

Multiplying a column by a non-zero constant,

Exchanging 2 columns.

Premultiplying a matrix with E performs a row operation.

Postmultiplying a matrix with E performs a column operation.

The matrix E is the same kind, but for adding one column to another, the elementary matrix acts based on its columns instead of its rows.

Column operations are only for determinants.

Never use column operations on linear systems.
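The premultiply/postmultiply distinction above can be seen with one elementary matrix; a NumPy sketch (the particular E chosen here is just an example):

```python
import numpy as np

# Sketch: the same elementary matrix E acts on rows when premultiplied
# and on columns when postmultiplied.
E = np.eye(3)
E[2, 0] = -1.0                      # a single "add a multiple" elementary matrix

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

row_op = A.copy(); row_op[2] -= row_op[0]        # R3 -> R3 - R1, done by hand
print(np.allclose(E @ A, row_op))                # premultiply = row operation
col_op = A.copy(); col_op[:, 0] -= col_op[:, 2]  # C1 -> C1 - C3, done by hand
print(np.allclose(A @ E, col_op))                # postmultiply = column operation
```

Note the effect matches the caveat in the notes: the same E subtracts row 1 from row 3 on the left, but subtracts column 3 from column 1 on the right.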

If there are unknowns, the reduction method requires considering cases and the determinant becomes difficult to calculate; therefore use cofactor expansion to calculate the determinant.

The determinant of a product is the product of the determinants.

Det(cA) = c^n Det(A), where n is the order of A.

If A = L1 U and B = L2 U:

Det(A) = Det(L1)Det(U)

Det(B) = Det(L2)Det(U)

Since L is lower triangular with diagonal entries 1, the determinant of L is 1; and since U is the same for both, the determinants are the same.

Det(AB) = Det(BA),

since Det(AB) = Det(A)Det(B) and multiplication of scalars is commutative.
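Both determinant identities above can be spot-checked on random matrices; a NumPy sketch (the seed and size are arbitrary):

```python
import numpy as np

# Sketch: numeric check of det(cA) = c^n det(A) and det(AB) = det(BA).
rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
c = 2.5

print(np.isclose(np.linalg.det(c * A), c**n * np.linalg.det(A)))  # det(cA) = c^n det(A)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(B @ A)))     # det(AB) = det(BA)
```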

Transpose can be brought in and out of the adjoint: adj(A^T) = (adj A)^T.

A is invertible if and only if its adjoint is invertible.

Whatever cannot be done in MATLAB, write down by hand in the answers.

The 1st criterion of a subspace is that it contains the zero vector.

R^2 is not a subspace of R^3:

a vector with 2 coordinates is not in R^3,

as vectors are matrices,

and matrices are only equal if they have the same size and equal entries.

A subspace is closed under linear combinations.

The zero space {0} is a subspace by definition: it contains the origin and is closed under linear combinations.

For a set to be a subspace, it needs to be the solution set of a homogeneous system.

The degree of freedom depends on the number of equations and variables, so it must be based on the parameters; the solution set is then, for example, a plane.

When an equation does not involve a variable, that variable is a parameter, so its value in the solution does not matter.

To be linearly dependent, it is not necessary for the 3rd vector to be a linear combination of the others; the redundant vector can be the 1st or 2nd one.

For 3 vectors which are linearly independent, every column is a pivot column when they are put together, so the rref is essentially the identity and the determinant is always non-zero.
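The pivot-column and nonzero-determinant test above can be checked numerically; a NumPy sketch (the three example vectors are made up here):

```python
import numpy as np

# Sketch: three linearly independent vectors stacked as columns give
# a matrix of full rank, hence a nonzero determinant.
u1 = np.array([1.0, 0.0, 2.0])
u2 = np.array([0.0, 1.0, 1.0])
u3 = np.array([1.0, 1.0, 0.0])
A = np.column_stack([u1, u2, u3])

print(np.linalg.matrix_rank(A) == 3)           # every column is a pivot column
print(not np.isclose(np.linalg.det(A), 0.0))   # so the determinant is nonzero
```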

R^n is the collection of all n x 1 matrices.

Linear combination:

- Go in the direction of u1 with c1 steps, then turn to the u2 direction, ...

- It is where you end up.

SPAN: the collection of all linear combinations.

- Where can you go given these vectors?

- You can never go outside the span.

Let u1, u2, u3 be in R^n.

Finding a point you can reach with given constants and direction vectors means the final vector is the solution of the corresponding linear system.

v is in the span of u1, u2, u3 if the system is consistent; if you find a combination of the unknowns which needs to be = 0, then that equation is the one which defines the subspace.

If rref(A)in matlab, shows a numbers only, then the system is


inconsistent, i.e it does not describe the entrie span of R4,

However when the rightmost coloumn is an equation it is always


consistent(so u can rref with unknowns when it is consistent)

3 vectors cannot span entrie R4, so whenever only 3 vectors are given
then the it is always inconsistent
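The span-membership test above can be phrased as a rank comparison on the augmented matrix; a NumPy sketch (the helper `in_span` and the example vectors are made up here):

```python
import numpy as np

# Sketch: v is in span{u1, u2, u3} exactly when appending v as an extra
# column does not raise the rank (the augmented system stays consistent).
def in_span(vectors, v):
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(np.column_stack([A, v])) == np.linalg.matrix_rank(A)

u1 = np.array([1.0, 0.0, 0.0, 1.0])
u2 = np.array([0.0, 1.0, 0.0, 1.0])
u3 = np.array([0.0, 0.0, 1.0, 1.0])

print(in_span([u1, u2, u3], u1 + 2 * u2))                      # True: a combination
print(in_span([u1, u2, u3], np.array([0.0, 0.0, 0.0, 1.0])))   # False: outside the span
# 3 vectors span at most a 3-dimensional subspace, never all of R^4:
print(np.linalg.matrix_rank(np.column_stack([u1, u2, u3])) < 4)   # True
```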

Subspace can never leave subspace with the directions in the subspace

If the span t(v) is a subspace of S(u)

If every V is in U

(Put T|S)

A spanning set can be a subspace,

And a subspace can also be a spanning set

The only way to show a set is not a subspace is to negate one point of the definition.

An affine space is a subspace which has been shifted away from the origin.

It is a more general version of a subspace: it no longer contains the origin but is parallel to a subspace.

Independence means no other vector can use the other vectors job,
there is no redundancy

Asking for non-trival coloumn is non pivot coloumn in rref

To check for independent just check for pivot coloumns

Use bar, so see if anything is inconsistent in the system.

Set a linear combination of the vectors equal to 0;

to prove that the vectors are linearly independent, show that every solution is trivial.

If the dot product of two unit vectors is -1, they are 180 degrees away from each other,

and 3 vectors can't all be 180 degrees from each other.


Row operations do not preserve the columns themselves,

but linear relations among the columns are preserved.

Rank+nullity= number of coloumns
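The rank-nullity fact above can be checked numerically; a NumPy sketch (reading the nullity off the singular values is this sketch's own choice, not from the notes):

```python
import numpy as np

# Sketch: rank + nullity = number of columns.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])       # second row is a multiple of the first

rank = np.linalg.matrix_rank(A)
_, s, _ = np.linalg.svd(A)
nullity = A.shape[1] - np.sum(s > 1e-10)   # columns minus nonzero singular values

print(rank == 1)                      # True: only one independent row
print(rank + nullity == A.shape[1])   # True: rank + nullity = number of columns
```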

The column space is the subspace spanned by the columns, where each column is a vector.

It does not matter whether a vector is written as a row or a column; it only matters that it is a column when multiplying.

The row space is the subspace spanned by the rows.

In the rref, the leading entries testify that none of the nonzero rows are linearly dependent.

So we use columns to show relations among rows (by transposing), as row operations preserve the column relations.

Null space is preserved by row operations.

Dimension is only defined via a basis: it is the number of vectors in a basis. The dimension of a matrix is not defined.

If a matrix is not full rank, the equivalent statements of invertibility do not apply.

Rank <= min{m, n}.

When

Rank = min{m, n},

then it is full rank.
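The rank bound above is easy to verify on a small example; a NumPy sketch (the example matrix is made up here):

```python
import numpy as np

# Sketch: rank is at most min{m, n}; the matrix is full rank when equality holds.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])      # 2 x 3 matrix with independent rows

m, n = A.shape
r = np.linalg.matrix_rank(A)
print(r <= min(m, n))   # always holds
print(r == min(m, n))   # True here, so A is full rank
```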
