S1, S2 Linear
Uploaded by Gimal George

CUSAT - Linear Algebra

Vector Space
Vector space: Let V be a non-empty set of vectors and F be a scalar field. V is said to be a vector space over
the field F under the operations of vector addition (+ : V × V → V ) and scalar multiplication (· : F × V → V )
if the following axioms are satisfied.
1. Addition Axioms: For v1 , v2 , v3 ∈ V .
(a) v1 + v2 ∈ V ( closure property ),
(b) v1 + (v2 + v3 ) = (v1 + v2 ) + v3 ,
(c) v1 + 0 = v1 = 0 + v1 ,
(d) v1 + (−v1 ) = 0 = (−v1 ) + v1 ,
(e) v1 + v2 = v2 + v1 ,
2. Multiplicative Axioms: For v1 , v2 ∈ V and c1 , c2 ∈ F,
(a) c1 v1 ∈ V ,
(b) (c1 + c2 )v1 = c1 v1 + c2 v1 ,
(c) c1 (v1 + v2 ) = c1 v1 + c1 v2 ,
(d) 1v1 = v1 , where 1 is the multiplicative identity of F,
(e) c1 (c2 v1 ) = (c1 c2 )v1 = c2 (c1 v1 ).
Vector space - Examples:
1. R over R, C over C,
2. C over R,
3. Rn over R, Cn over C,
4. Mn (R) over R,
5. Pn (x) over R,
6. P (x) over R,
7. F(I) over R, where F(I) = the set of all real-valued functions on an interval I.
Subspace of a vector space: Let V be a vector space over F, and W ⊆ V , then W is a subspace of V if
W itself is a vector space over F, w.r.t the same operations associated with V over F.
Theorem: Let V be a vector space over a field F, W ⊆ V , W is a subspace of V iff
(i) 0 ∈ W ,
(ii) cα + β ∈ W , for all c ∈ F and α, β ∈ W .
• Let V be a vector space over a field F, and let V1 and V2 be two subspaces of V ; then V1 ∩ V2 and V1 + V2
are subspaces of V , where V1 + V2 = {v1 + v2 : v1 ∈ V1 , v2 ∈ V2 }.
• V1 ∪ V2 need not be a subspace of V . For instance, let V = R^2 (F = R),
V1 = {(x, 0) : x ∈ R} and V2 = {(0, y) : y ∈ R}; then (1, 0), (0, 1) ∈ V1 ∪ V2 but
(1, 0) + (0, 1) = (1, 1) ∉ V1 ∪ V2 , so V1 ∪ V2 is not closed under addition.
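The failure of closure above can be checked numerically; a minimal sketch (the helper name in_union is ours, not from the notes), with V1 the x-axis and V2 the y-axis:

```python
# V1 ∪ V2 in R^2: a vector is in the union iff it lies on one of the two axes.
def in_union(v):
    x, y = v
    return y == 0 or x == 0

u, w = (1, 0), (0, 1)
s = (u[0] + w[0], u[1] + w[1])               # u + w = (1, 1)
print(in_union(u), in_union(w), in_union(s))  # True True False
```

Both summands lie in the union, but their sum does not, so the union is not a subspace.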
Linear Independence and Dependence, Basis, Dimension
Linear combination: A vector v ∈ V is said to be a linear combination of the vectors v1 , v2 . . . , vn , if there
exist scalars c1 , c2 , . . . cn ∈ F such that c1 v1 + c2 v2 + · · · + cn vn = v
Linear dependence: Let S ⊆ V . S is said to be linearly dependent if there exist distinct vectors
v1 , . . . , vk ∈ S and scalars c1 , . . . , ck ∈ F, not all zero, such that c1 v1 + · · · + ck vk = 0.
Linear independence: A subset S of V is linearly independent if it is not linearly dependent.
Spanning set: Let S = {v1 , v2 , . . . , vn } be a subset of V . S is said to be a spanning set of V if every vector
v ∈ V can be expressed as a linear combination of elements of S: v = c1 v1 + c2 v2 + · · · + cn vn for some c1 , c2 , . . . , cn ∈ F.
Linear span of a set: Let S ⊆ V , then the Linear Span L(S) of S is the smallest subspace of V containing
S, i.e, the intersection of all subspaces containing S
• If S = {v1 , . . . , vn } ⊆ V , then L(S) = {c1 v1 + c2 v2 + · · · + cn vn : ci ∈ F, i = 1, 2, . . . , n}, the
set of all linear combinations of the vectors in S.
• For any non-empty subset S of V , L(S) is a subspace of V .
• L(S) = S if S is itself a subspace of V .
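A quick NumPy check, not from the notes: a finite set of vectors is linearly independent iff the matrix having them as rows has rank equal to the number of vectors.

```python
import numpy as np

S = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 2]])            # third row = first row + second row
print(np.linalg.matrix_rank(S))      # 2 < 3, so the set is linearly dependent
```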
Basis of a vector space: Let V be a vector space over a field F, and let S ⊆ V . S is a basis of V if
(i) S is linearly independent,
(ii) S spans V , that is, L(S) = V .
• Every non-trivial vector space has a basis.
Dimension: The cardinality of a basis of a vector space is called the dimension of the vector space. If it is
finite, the vector space is said to be finite dimensional; otherwise the vector space is infinite dimensional.
• Let dim V = n, then
(i) Any subset of V having more than n elements is linearly dependent
(ii) Let S be a subset of V having n elements; then S is linearly independent iff L(S) = V .
• Let S be a subset of V with L(S) = V , where V is a vector space over F. Then any maximal
linearly independent subset of S forms a basis for V . Equivalently, if we delete every vector that is a
linear combination of the other vectors in S, the remaining vectors form a basis for V . That is, any
spanning set contains a basis.
• Let V be a vector space over the field F with dim V = n, and let S = {v1 , v2 , . . . , vm }, m ≤ n, be a
linearly independent set of vectors. Then S is part of some basis for V ; that is, S can be extended
to a basis for V .
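The "delete every redundant vector" procedure above can be sketched with NumPy: keep each vector only if it raises the rank, i.e. is not a combination of those already kept. The function name extract_basis is ours, not from the notes.

```python
import numpy as np

def extract_basis(vectors):
    """Greedily keep vectors that are independent of the ones kept so far."""
    basis = []
    for v in vectors:
        if np.linalg.matrix_rank(np.array(basis + [v])) > len(basis):
            basis.append(v)
    return basis

S = [[1, 0], [2, 0], [0, 1], [1, 1]]   # spans R^2 with redundancy
print(extract_basis(S))                # [[1, 0], [0, 1]]
```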
Dimension of a subspace: Let V be a vector space over the field F with dim V = n, and let W be a subspace of
V . Then W is also finite dimensional and dim W ≤ dim V , with dim W = dim V iff V = W .
Hyperspace: If dim W = n − 1, where W is a subspace of V with dim V = n, then W is called a hyperspace
of V .

Linear Transformation
Linear transformations or linear maps:
Let V and W be two vector spaces over the same field F. A map T : V → W is said to be a linear map if
T (v1 + v2 ) = T (v1 ) + T (v2 ), ∀ v1 , v2 ∈ V and T (cv) = cT (v), ∀ c ∈ F, v ∈ V ; equivalently, T (cv1 + v2 ) = cT (v1 ) + T (v2 ).
• If T : V → W is a linear transformation then T (0) = 0
Kernel and image of a linear transformation:
1. Let V and W be two vector spaces over the same field F and let T : V → W be a linear transformation.
The kernel of T , or the null space of T , is ker(T ) = {v ∈ V : T v = 0}. ker(T ) is a subspace of
V , and dim(ker(T )) is called the nullity of T .
2. The image space of T , or the range space of T , is {T v : v ∈ V }, that is,
{w ∈ W : w = T (v) for some v ∈ V }.
The range space is a subspace of W , and dim(range(T )) is called rank(T ).
Rank-Nullity Theorem: Let V and W be finite dimensional vector spaces over the same field F, and let
T : V → W be a linear transformation. Then rank(T ) + nullity(T ) = dim(V ).
• Let V be a vector space over a field F and let T : V → V be a linear transformation. Then
(i) nullity(T ) ≤ nullity(T^2) ≤ nullity(T^3) ≤ . . .
(ii) rank(T ) ≥ rank(T^2) ≥ rank(T^3) ≥ . . .
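The rank-nullity theorem can be illustrated numerically with the matrix of a map T : R^3 → R^2 (the example matrix is our own choosing):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6]])            # second row is twice the first, so rank 1
rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank          # rank + nullity = dim V = 3
print(rank, nullity)                 # 1 2
```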
Singular and non-singular transformations: Let V and W be two vector spaces over the same field F,
and let T : V → W be a linear transformation. T is called singular if T (v) = 0 for some non-zero v ∈ V ,
and non-singular otherwise. Further,
(i) T is one-one or injective if v1 ≠ v2 ⇒ T (v1 ) ≠ T (v2 );
(ii) T is onto or surjective if T (V ) = W , that is, img(T ) = W , that is, rank(T ) = dim(W ).
Result:
• T is 1 − 1 iff ker(T ) = {0}.
• T is non-singular iff ker(T ) = {0}.
• If dim V = dim W < ∞, then T is 1 − 1 iff T is onto.
• If dim V > dim W and dim V < ∞, then T is singular.
Definition: Let V and W be two vector spaces over the same field F. V and W are said to be isomorphic if
there exists an invertible linear transformation T : V → W .
• If V and W are finite dimensional vector spaces over the same field F, then V and W are isomorphic iff
dim V = dim W .
• Let V be an n-dimensional vector space over a field F; then V ≅ F^n .
• Let V be an infinite dimensional vector space over a field F; then V ≅ V × V . In general, V ≅ V^n .
• R over Q is isomorphic to C over Q.
Definition: Let V be a vector space over a field F. Any linear transformation T : V → V is called a
linear operator on V .
Definition: Let V be a vector space over a field F. Any linear transformation T : V → F is called a
linear functional on V .
Matrix representation of a linear transformation: Let V and W be two vector spaces over the same
field F, with dim(V ) = n, dim(W ) = m, and let B1 = {v1 , v2 , . . . , vn }, B2 = {w1 , w2 , . . . , wm } be ordered
bases for V and W respectively. Let T : V → W be a linear transformation.
For each vi , T (vi ) can be uniquely expressed as a linear combination of elements of B2 , that is,

T (v1 ) = a11 w1 + a21 w2 + · · · + am1 wm
T (v2 ) = a12 w1 + a22 w2 + · · · + am2 wm
...
T (vn ) = a1n w1 + a2n w2 + · · · + amn wm

The matrix of T with respect to the bases B1 and B2 is the m × n matrix whose columns are the coordinate
vectors of the T (vi ) with respect to the basis B2 , that is,

M = [ [T (v1 )]B2 , [T (v2 )]B2 , . . . , [T (vn )]B2 ]

  = [ a11 a12 · · · a1n
      a21 a22 · · · a2n
      ...
      am1 am2 · · · amn ]

• For any v ∈ V , we have M [v]B1 = [T (v)]B2 .
If [T (v)]B2 = [c1 c2 . . . cm ]^T , then T (v) = c1 w1 + c2 w2 + · · · + cm wm .
• rank(T ) = rank(M ) and nullity(T ) = nullity(M ).
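The construction above can be sketched for the standard bases of R^2: the columns of M are T applied to the basis vectors. The map T(x, y) = (x + y, x − y) is an assumed example, not from the notes.

```python
import numpy as np

def T(v):                                  # example map T(x, y) = (x + y, x - y)
    x, y = v
    return np.array([x + y, x - y])

# Columns of M are the images of the standard basis vectors.
M = np.column_stack([T(e) for e in np.eye(2)])
v = np.array([3.0, 5.0])
print(M @ v)    # equals T(v)
print(T(v))
```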

Special Linear Transformations

Some standard linear transformations


Transformations on R^2 :
1. Projection maps (idempotent maps)
(i) Projection onto the x-axis:
T (x, y) = (x, 0). Then T^2 (x, y) = T (x, 0) = (x, 0) = T (x, y), therefore T^2 = T .
• Matrix of T with respect to the standard basis: M = [1 0; 0 0].
(ii) Projection onto the y-axis:
T (x, y) = (0, y). All the results are analogous to the previous case.
2. Reflection maps (involutory maps)
(i) Reflection about the x-axis:
T (x, y) = (x, −y). Then T^2 (x, y) = T (x, −y) = (x, y), therefore T^2 = I.
• Matrix of T with respect to the standard basis: M = [1 0; 0 −1].
(ii) Reflection about the y-axis:
T (x, y) = (−x, y). All the results are analogous to the previous case.
3. Rotation maps (orthogonal maps)
(i) Rotation through an angle θ anticlockwise: T (x, y) = (x cos θ − y sin θ, x sin θ + y cos θ).
• Matrix of T with respect to the standard basis: M = [cos θ −sin θ; sin θ cos θ].
(ii) Rotation through an angle θ clockwise: T (x, y) = (x cos θ + y sin θ, −x sin θ + y cos θ).
• Matrix of T with respect to the standard basis: M = [cos θ sin θ; −sin θ cos θ].
4. Dilation maps (scalar maps)
T (x, y) = (αx, αy), where α is a fixed real number; T (x, y) = α(x, y), i.e. T = αI.
• Matrix of T with respect to the standard basis: M = [α 0; 0 α].
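Two properties of the rotation matrices above are easy to confirm with NumPy (an illustrative check, angles chosen arbitrarily): rotations compose by adding angles, and each rotation matrix is orthogonal.

```python
import numpy as np

def R(theta):                       # anticlockwise rotation matrix
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

a, b = 0.3, 1.1
print(np.allclose(R(a) @ R(b), R(a + b)))     # True: angles add
print(np.allclose(R(a) @ R(a).T, np.eye(2)))  # True: R(a)^T is the inverse
```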
Derivative map: Let V be a real vector space of differentiable functions over R. The map D : V → V defined
by D(f ) = df /dx is a linear operator on V , called the derivative map.
Particular case: If V = Pn (x), then D : V → V is defined by D(P (x)) = P ′(x). Here D is a nilpotent
operator, D^(n+1) = 0, and 0 is the only eigen value of D.
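The nilpotency of D is easy to see in matrix form. On P_2(x) with basis {1, x, x^2}, the matrix below is written by us from D(1) = 0, D(x) = 1, D(x^2) = 2x, and satisfies D^3 = 0:

```python
import numpy as np

D = np.array([[0, 1, 0],
              [0, 0, 2],
              [0, 0, 0]])
print(np.linalg.matrix_power(D, 3))   # the 3x3 zero matrix
```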
Integral map: Let V be the vector space of all Riemann integrable functions on [a, b] over the field R. The
map J : V → V defined by J(f )(x) = ∫_a^x f (t) dt is a linear map, called the integral map.

Matrices and Determinants

Matrices: Matrices are rectangular arrays of ‘elements’ (numbers, symbols or expressions) arranged in rows
and columns.
• The set of all n × n matrices with real entries is usually denoted as Mn (R).
• (Mn (R), +) is an Abelian group.
• (Mn (R), +, ·) is a non commutative ring with unity, which is not an integral domain.
Determinant of a square matrix: The determinant is a map det : Mn (R) → R satisfying the following
properties:
(i) det(A^T ) = det(A)
(ii) det(A′ ) = − det(A), where A′ is the matrix obtained from A by interchanging any two rows (Ri ) or
columns (Cj )
(iii) det(A) = 0 if any two rows or columns of A are proportional
(iv) det(kA) = k^n det(A), k ∈ R
(v) If every element of a row or column of A can be expressed as the sum of two or more terms, then the
determinant of A can be expressed as the sum of two or more determinants
(vi) det(A′′ ) = det(A), where A′′ is the matrix obtained from A by replacing Ri with Ri ± kRj , i ≠ j (or Ci
with Ci ± kCj , i ≠ j), where k ∈ R
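Properties (i), (ii) and (iv) can be spot-checked with NumPy on an arbitrary example matrix of our own choosing:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 5.0]])
k, n = 3.0, 2
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))           # (i)
print(np.isclose(np.linalg.det(A[[1, 0]]), -np.linalg.det(A)))    # (ii) row swap
print(np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A)))  # (iv)
```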

Some Special Matrices

Symmetric matrices: Let A ∈ Mn (R); A is said to be symmetric if A^T = A.


• Let A, B ∈ Mn (R) be symmetric; then the matrices A + B, A − B, AB + BA, A^n are symmetric.
The conclusions are immediate from the fact (AB)^T = B^T A^T . Note that AB is not always symmetric;
for instance A = [1 0; 0 0], B = [1 1; 1 0] ⇒ AB = [1 1; 0 0].
• For any square matrix A, the matrices A + A^T , AA^T , A^T A are symmetric.
Skew-symmetric matrices: Let A ∈ Mn (R); A is said to be skew-symmetric if A^T = −A.
• Let A, B ∈ Mn (R) be skew-symmetric; then A + B, A − B, kA, AB − BA are skew-symmetric.
AB need not be skew-symmetric. E.g. consider A = [0 1; −1 0], B = [0 1; −1 0]; then
AB = [−1 0; 0 −1], which is not skew-symmetric.
• For any square matrix A, the matrix A − A^T is always skew-symmetric.
• If A is skew-symmetric, then I − A is invertible.
Hermitian matrices (self-adjoint matrices):
Let A ∈ Mn (C); A is said to be hermitian if A∗ = A, where A∗ = (Ā)^T is the conjugate transpose of A.
E.g. [1 1+2i 5i; 1−2i 2 6; −5i 6 3]
Skew-hermitian matrices:
Let A ∈ Mn (C); A is said to be skew-hermitian if A∗ = −A.
E.g. [0 1+2i; −1+2i 5i]
Cartesian decomposition of a square matrix: Let A ∈ Mn (C); then A can be expressed as A = B + iC,
where B and C are hermitian matrices given by B = (A + A∗)/2 and C = (A − A∗)/(2i).
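The decomposition can be verified with NumPy on an assumed example matrix: both parts come out hermitian, and they reassemble to A.

```python
import numpy as np

A = np.array([[1 + 2j, 3 + 0j],
              [4j, 5 - 1j]])
B = (A + A.conj().T) / 2           # hermitian part
C = (A - A.conj().T) / 2j          # also hermitian
print(np.allclose(B, B.conj().T))  # True
print(np.allclose(C, C.conj().T))  # True
print(np.allclose(A, B + 1j * C))  # True: A = B + iC
```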
Orthogonal matrices: Let A ∈ Mn (R); A is said to be orthogonal if AA^T = A^T A = I.
• If A is orthogonal, then A⁻¹ = A^T .
• If A is orthogonal, then the rows (as well as the columns) of A are orthonormal.
Unitary matrices: Let A ∈ Mn (C); A is said to be unitary if AA∗ = A∗ A = I.
E.g. (1/2)[1−i 1+i; 1+i 1−i].
Normal matrices: Let A ∈ Mn (C); A is said to be normal if AA∗ = A∗ A. E.g. [i i; i −i].
• Real symmetric, real skew-symmetric and real orthogonal matrices are normal
Idempotent matrices (projections): Let A ∈ Mn (C); A is said to be an idempotent matrix if A^2 = A.
Periodic matrices:
Let A ∈ Mn (R); A is said to be periodic if there exists k ∈ N, k > 1, such that A^k = A. The least such
k is called the period of A.
E.g. A = [2 5 14; 1 3 8; −1 −2 −6]; here A^4 = A.
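The periodicity claim for the example above is quick to verify with NumPy:

```python
import numpy as np

A = np.array([[2, 5, 14],
              [1, 3, 8],
              [-1, -2, -6]])
print((np.linalg.matrix_power(A, 4) == A).all())   # True: A^4 = A
```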
Nilpotent matrices:
Let A ∈ Mn (R); A is said to be nilpotent if A^k = 0 for some k ∈ N (one may always take k ≤ n). The
least such k is called the index of A.
E.g. for A = [0 1 0; 0 0 1; 0 0 0], A^3 = 0, ∴ index of A = 3.

Rank of a Matrix

Rank of a Matrix: Let A be an m × n real matrix; rank(A) can be defined as
(i) The order of the largest non-singular square submatrix of A.
(ii) The number of linearly independent rows or columns of A.
(iii) The dimension of the row space or column space of A.
(iv) The number of non-zero rows in the row reduced echelon form of A.
(v) The order of the identity submatrix in the normal form of A.
(vi) The rank of the linear transformation from R^n → R^m corresponding to A.
E.g. Let A = [1 2 3; 2 3 4; 0 2 2] (3×3), B = [2 3 4; 0 0 1; 0 0 0] (3×3), C = [1 0 0 0; 0 0 1 0] (2×4);
then rank(A) = 3, rank(B) = 2, rank(C) = 2.
• If A is an m × n matrix and B is an n × p matrix, then rank(AB) ≤ min{rank(A), rank(B)}.
• Sylvester's inequality: If A is an m × n matrix and B is an n × p matrix, then
rank(A) + rank(B) − n ≤ rank(AB).
• If A is a non-zero skew-symmetric matrix, then rank(A) ≥ 2.
• Let A ∈ Mm×n (R); then rank(A) = rank(A^T A) = rank(AA^T ).
• Let A ∈ Mn (R); then rank(A) = n ⇔ det(A) ≠ 0.
• If det(A) = 0, then rank(A) < n.
• Let A ∈ Mn (R); then rank(A) ≥ rank(A^2) ≥ rank(A^3) ≥ . . .
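The identity rank(A) = rank(A^T A) = rank(AA^T ) can be spot-checked with NumPy on an example matrix of our own choosing:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
r = np.linalg.matrix_rank
print(r(A), r(A.T @ A), r(A @ A.T))   # 2 2 2
```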
Nullity of a matrix: Let A be an m × n matrix. The null space of A is N (A) = {x ∈ R^n : Ax = 0};
the nullity of A is dim(N (A)), also denoted nullity(A).
Range space of A:
Let A ∈ Mm×n (R); then Range(A), the range space of A, is defined as Range(A) = {Ax ∈ R^m : x ∈ R^n }.
• Range(A) is a subspace of R^m .
• rank(A) = dim(Range(A)).

Solution of System of Linear Equations

System of linear equations:


Let A be an m × n real matrix, X be an n × 1 matrix and B be an m × 1 matrix, then the matrix equation
AX = B represents a system of m linear equations in n unknowns, where A is called the coefficient matrix,
X is the variable or unknown matrix and B is the constant matrix.
Case (i): Consider the system of linear equations AX = B, B ≠ 0 . . . (1)
Then (1) is said to be a non-homogeneous system of linear equations.
• The system (1) has a unique solution ⇔ rank(A) = rank(A : B) = n, where n is the number of
unknowns (the number of columns).
• The system (1) has an infinite number of solutions ⇔ rank(A) = rank(A : B) = r < n. In this case, we
have to take (n − r) of the n variables as free.
• The system (1) has no solution ⇔ rank(A) ≠ rank(A : B).
Case (ii): Consider the system of linear equations AX = B, B = 0 . . . (2)
Then (2) is said to be a homogeneous system of linear equations.
• In this case rank(A) = rank(A : B) always, so the system is consistent and has at least one solution,
namely the trivial solution X = 0.
• X = 0 is always a solution of (2); this solution is called the trivial solution.
• System (2) has a unique solution (X = 0) ⇔ rank(A) = n, the number of columns (number of unknowns).
• System (2) has an infinite number of solutions iff rank(A) = r < n. Here we have (n − r) free variables,
and the solution set of (2) is the null space of the matrix A, given by N (A) = {x ∈ R^n : Ax = 0}, with
dim(N (A)) = n − r.
• If m < n, then system (2) has infinitely many solutions.
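The rank test for case (i) can be sketched with NumPy on an example system of our own choosing; when both ranks equal the number of unknowns, the solution is unique:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([3.0, 5.0])                 # x + y = 3, x + 2y = 5
rA = np.linalg.matrix_rank(A)
rAb = np.linalg.matrix_rank(np.column_stack([A, b]))  # augmented matrix (A : B)
assert rA == rAb == A.shape[1]           # unique-solution case
x = np.linalg.solve(A, b)
print(x)                                 # [1. 2.]
```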

Eigen Values/Vectors

Characteristic polynomial of a square matrix:


Let A ∈ Mn (F), where F is an arbitrary field. The characteristic polynomial of A, usually denoted ∆A (x)
or χA (x), is given by χA (x) = det(xI − A) = x^n − S1 x^(n−1) + S2 x^(n−2) − · · · + (−1)^n Sn , where Sk is the
sum of the principal minors of A of order k and x is a parameter.
Principal minor of a square matrix: Let A ∈ Mn (F); a principal minor of order k is the minor obtained
by removing some (n − k) rows and the corresponding (n − k) columns.
• The characteristic polynomial of A ∈ Mn (F) is an nth degree monic polynomial in x.
Characteristic equation: Let A ∈ Mn (F); the characteristic equation of A is χA (x) = 0, that is,
det(xI − A) = x^n − S1 x^(n−1) + S2 x^(n−2) − · · · + (−1)^n Sn = 0.
Cayley-Hamilton theorem:
Every square matrix satisfies its characteristic equation. Let A ∈ Mn (F) and let χA (x) be the characteristic
polynomial of A. Then χA (A) = 0; in fact, A^n − S1 A^(n−1) + S2 A^(n−2) − · · · + (−1)^n Sn I = 0.
• Let A ∈ M2 (F); then χA (x) = x^2 − tr(A)x + det(A).
• Let A ∈ M3 (F); then χA (x) = x^3 − tr(A)x^2 + (M1,1 + M2,2 + M3,3 )x − det(A), where Mi,i is the minor
of ai,i .
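The 2 × 2 case of the Cayley-Hamilton theorem can be verified directly with NumPy (example matrix assumed): χA(A) = A^2 − tr(A)A + det(A)I should be the zero matrix.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
chi_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
print(np.allclose(chi_A, 0))   # True
```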
Minimal polynomial of a square matrix: Let A ∈ Mn (F) and let JA (x) be the collection of all polynomials
in x for which A is a root: JA (x) = {P (x) ∈ F[x] : P (A) = 0}. The minimal polynomial of A is the monic
polynomial of least degree in JA (x).
Algebraic multiplicity and geometric multiplicity:
Let A ∈ Mn (F) and let χA (x) be the characteristic polynomial of A. Let λ ∈ F be an eigen value of A. The
multiplicity of λ as a root of the characteristic polynomial is called the algebraic multiplicity of λ, denoted
AM . The dimension of the eigen space Eλ is called the geometric multiplicity of λ, denoted GM . GM is
the number of linearly independent eigen vectors corresponding to λ.
• Let A ∈ Mn (F) and let λ1 , λ2 , . . . , λn be the n solutions of the characteristic equation χA (x) = 0 (need
not be eigen values); then
S1 = λ1 + λ2 + · · · + λn = trace(A) = sum of the eigen values,
Sn = λ1 λ2 · · · λn = det(A) = product of the eigen values.
WORKSHEET QUESTIONS
1. Which one of the following is a subspace of Rn ?
(A) {(x1 , x2 , . . . , xn ) | either x1 = 0 or x2 = 0}
(B) {(x1 , x2 , . . . , xn ) | x1 = x2 = 0}
(C) {(x1 , x2 , . . . , xn ) | x1 ≠ 0}
(D) {(x1 , x2 , . . . , xn ) | 5x1 − 9x2 = 6}
2. Which one of the following subsets of R2 is not a basis of R2 over R?
(A) {(1, 1), (−1, −1)}
(B) {(1, −1), (−1, 0)}
(C) {(0, 1), (−1, 0)}
(D) {(1, 1), (1, −1)}
3. If S and T are linear transformations on R2 defined by S(x, y) = (y, x) and T (x, y) = (0, x), then
(A) S^2 = S, T^2 = I
(B) S^2 = S, T^2 = 0
(C) S^2 = I, T^2 = 0
(D) S^2 = I, T^2 = I
4. If S and T are finite dimensional subspaces of a vector space V over a field F , then
(A) d[S] + d[T ] = d[S ∩ T ] + d[S + T ]
(B) d[S] + d[T ] = d[S ∩ T ] + d[S ∪ T ]
(C) d[S ∩ T ] + d[S ∪ T ] = d[S] + d[S + T ]
(D) d[S ∪ T ] + d[S ∩ T ] = d[S ∪ T ] + d[T ]
5. The matrix [1 0 1; 2 1 0; 3 1 1] is
(A) skew-symmetric
(B) symmetric
(C) non-singular
(D) singular
6. Let the characteristic equation of the matrix M be λ^2 + λ − 1 = 0. Then
(A) M −1 does not exist
(B) M −1 = M − I
(C) M −1 = M + I
(D) M −1 exists but cannot determine from the data
7. If A = [3 −4; 1 −1], then the value of A^n is
(A) [1+2n −4n; n 1−2n]
(B) [1−2n 4n; n 1+2n]
(C) [n 2n; −n 1]
(D) [n 1−2n; 1+2n 1−n]
8. For vector spaces M and N such that M ⊆ N , consider the following statements.
I. dim M ≤ dim N ;
II. If dim M = dim N , then M = N .
Then
(A) I and II are true
(B) only I is true
(C) only II is true
(D) I and II are false
9. Consider the following statements
I. If X and Y are subspaces of a vector space V , then dim(X + Y ) = dim X + dim Y − dim(X ∩ Y );
II. rank (A + B) ≤ rank (A)+ rank (B).
Then
(A) I and II are true
(B) only I is true
(C) only II is true
(D) I and II are false
10. If A is m × n and B is n × p, consider the statements
I. rank (AB) ≤ min{rank(A), rank(B)}
II. rank (A)+ rank (B) + n ≤ rank (AB).
Then
(A) I and II are true
(B) only I is true
(C) only II is true
(D) I and II are false
11. The vectors (2, 3, k), (1, 2, 0) and (8, 13, k) are linearly dependent. Then the value of k is
(A) 3
(B) 2/3
(C) 1
(D) 0
12. The vectors x1 = (2, −1, 0), x2 = (4, 1, 1), x3 = (8, −1, 1) are linearly dependent. Then the relationship
between these vectors is
(A) 2x1 + x2 + x3 = 0
(B) 2x1 − x2 − x3 = 0
(C) 2x1 + x2 − x3 = 0
(D) 2x1 − x2 + x3 = 0
13. The system of equations 3x1 +x2 −λx3 = 0, 4x1 −2x2 −3x3 = 0, 2x1 +x2 −x3 = 0 and 2λx1 +4x2 +λx3 = 0
possesses non-trivial solutions. Then the values of λ are
(A) −9, 1
(B) −9, −1
(C) 9, −1
(D) 9, 1
14. The system of equations x1 − x2 + x3 = 0, x1 + 2x2 − x3 = 0 and 2x1 + x2 + 3x3 = 0 has
(A) a unique non-trivial solution
(B) infinite numbers of solutions
(C) no non-trivial solutions
(D) None of the above
15. If A is a self adjoint matrix, then its diagonal entries are
(A) all complex numbers
(B) all real numbers
(C) 0
(D) −1
16. The system of equations x + y + z = a, x + 2y + 3z = b and 3x + 5y + 7z = c has a one-parameter
family of solutions. Then
(A) c − a − 2b = 0
(B) c + a − 2b = 0
(C) c − a + 2b = 0
(D) c + a + 2b = 0
17. Two eigen values of the matrix [3 −4 4; 1 −2 4; 1 −1 3] are 2 and 3. Then the cube of the third eigen value is
(A) 1
(B) 0
(C) −1
(D) −8
18. If the characteristic equation of a matrix is λ3 − 6λ2 + 5λ + 12 = 0, then the sum and product of its
eigen values are
(A) 6, 12
(B) 6, −12
(C) −6, 12
(D) −6, −12
19. If A + B + C = π, then the determinant |sin^2 A cot A 1; sin^2 B cot B 1; sin^2 C cot C 1| =
(A) 0
(B) 2
(C) −1
(D) None of the above
20. The number of linearly independent eigen vectors of the matrix [2 2 0 0; 2 1 0 0; 0 0 3 0; 0 0 1 4] is
(A) 1
(B) 2
(C) 3
(D) 4
21. For the matrix A = [2 1 5; 0 0 7; 0 0 −2], A^3 − αA = 0. Then the value of α is
(A) 4
(B) 3
(C) 2
(D) 1
22. Let A ∈ R^(m×n) . The system of equations Ax = b has a solution if
(A) rank A= rank [A : b]
(B) rank A= rank [A : b] = n
(C) rank A= rank [A : b] = m
(D) rank A ≠ rank [A : b]
23. The eigen values of a real symmetric matrix are
(A) real
(B) imaginary
(C) purely imaginary
(D) can be both real and imaginary
24. If A = [α 2; 2 α] and |A^3 | = 125, then the value of α is
(A) ±5
(B) ±3
(C) ±2
(D) ±1
25. The matrix [cos α sin α; i sin α i cos α] is unitary when α is
(A) (2x + 1)π/2
(B) (3x + 1)π/2
(C) (4x + 1)π/2
(D) (5x + 1)π/2
