Sample Copy Maths Class12 Rajeev KR Giri
1. RELATION
Let 𝐴 and 𝐵 be two sets. Then a relation 𝑅 from 𝐴 to 𝐵 is a subset of 𝐴 × 𝐵. Thus, 𝑅 is a relation from 𝐴 to 𝐵 ⇔ 𝑅 ⊆ 𝐴 × 𝐵.
2. TYPES OF RELATIONS
Void relation: Let 𝐴 be a set. Then 𝜙 ⊆ 𝐴 × 𝐴 and so it is a relation on 𝐴. This relation is called the void or empty
relation on 𝐴. It is the smallest relation on set 𝐴.
Universal relation: Let 𝐴 be a set. Then 𝐴 × 𝐴 ⊆ 𝐴 × 𝐴 and so it is a relation on 𝐴. This relation is called the universal
relation on 𝐴. It is the largest relation on set 𝐴.
Identity relation: Let 𝐴 be a set. Then the relation 𝐼𝐴 = {(𝑎, 𝑎): 𝑎 ∈ 𝐴} on 𝐴 is called the identity relation on 𝐴.
Reflexive Relation: A relation 𝑅 on a set 𝐴 is said to be reflexive if every element of 𝐴 is related to itself. Thus, 𝑅 is reflexive ⇔ (𝑎, 𝑎) ∈ 𝑅 for all 𝑎 ∈ 𝐴.
A relation 𝑅 on a set 𝐴 is not reflexive if there exists an element 𝑎 ∈ 𝐴 such that (𝑎, 𝑎) ∉ 𝑅.
Symmetric relation: A relation 𝑅 on a set 𝐴 is said to be a symmetric relation iff (𝑎, 𝑏) ∈ 𝑅 ⇒ (𝑏, 𝑎) ∈ 𝑅 for all 𝑎, 𝑏 ∈
𝐴. i.e. 𝑎𝑅𝑏 ⇒ 𝑏𝑅𝑎 for all 𝑎, 𝑏 ∈ 𝐴.
A relation 𝑅 on a set 𝐴 is not a symmetric relation if there are at least two elements 𝑎, 𝑏 ∈ 𝐴 such that (𝑎, 𝑏) ∈ 𝑅 but (𝑏, 𝑎) ∉ 𝑅.
Transitive relation: A relation 𝑅 on 𝐴 is said to be a transitive relation iff (𝑎, 𝑏) ∈ 𝑅 and (𝑏, 𝑐) ∈ 𝑅 ⇒ (𝑎, 𝑐) ∈ 𝑅 for
all 𝑎, 𝑏, 𝑐 ∈ 𝐴. i.e. 𝑎𝑅𝑏 and 𝑏𝑅𝑐 ⇒ 𝑎𝑅𝑐 for all 𝑎, 𝑏, 𝑐 ∈ 𝐴.
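All three properties can be checked mechanically on a finite set. Below is a minimal Python sketch (the set 𝐴 and the relation 𝑅 are assumed examples, not taken from the text):

```python
# Check reflexivity, symmetry and transitivity of a relation R on a set A.
A = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}  # an assumed relation on A

is_reflexive = all((a, a) in R for a in A)
is_symmetric = all((b, a) in R for (a, b) in R)
is_transitive = all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

print(is_reflexive, is_symmetric, is_transitive)  # True True True
```

Since all three checks pass, this particular 𝑅 is an equivalence relation on 𝐴.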
3. TYPES OF FUNCTIONS
ONE- ONE FUNCTION (INJECTION)
A function 𝑓: 𝐴 → 𝐵 is said to be a one-one function or an injection if different elements of 𝐴 have different images in 𝐵. Thus, 𝑓: 𝐴 → 𝐵 is one-one ⇔ 𝑎 ≠ 𝑏 ⇒ 𝑓(𝑎) ≠ 𝑓(𝑏) for all 𝑎, 𝑏 ∈ 𝐴 ⇔ 𝑓(𝑎) = 𝑓(𝑏) ⇒ 𝑎 = 𝑏 for all 𝑎, 𝑏 ∈ 𝐴.
MANY-ONE FUNCTION
A function 𝑓: 𝐴 → 𝐵 is said to be a many-one function if two or more elements of set 𝐴 have the same image in 𝐵.
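On a finite domain, one-one versus many-one can be tested by comparing the number of distinct images with the number of elements of 𝐴. A small Python sketch (the domain and the function 𝑓(𝑥) = 𝑥² are assumed examples):

```python
# f is one-one on A exactly when no two elements of A share an image.
A = [-2, -1, 0, 1, 2]
f = lambda x: x * x          # assumed example function

images = [f(a) for a in A]
is_one_one = len(set(images)) == len(A)
print(is_one_one)            # False: f(-1) == f(1), so f is many-one on A
```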
NCERT: EXE – 1.1 CW: Q. NO. 1, 2, 3, 4, 5, 10 HW: Q. NO. 6, 7, 8, 9, 11, 14, 15, 16
EXE – 1.2 CW: Q. NO. 1, 2, 3, 4, 5, 9 HW: Q. NO. 6, 7, 8, 10, 11, 12
EXE – MISCELLANEOUS CW: Q. NO. 1, 4, 6, 7 HW: Q. NO. 2, 3, 5
EXEMPLAR: CW – 1, 16, 18, 19 HW: 20, 21, 22, 23, 24
INVERSE TRIGONOMETRIC FUNCTIONS
sin⁻¹ 𝑥 + cos⁻¹ 𝑥 = 𝜋/2, where −1 ≤ 𝑥 ≤ 1
tan⁻¹ 𝑥 + cot⁻¹ 𝑥 = 𝜋/2, where −∞ < 𝑥 < ∞
sec⁻¹ 𝑥 + cosec⁻¹ 𝑥 = 𝜋/2, where 𝑥 ≤ −1 or 𝑥 ≥ 1
tan⁻¹ 𝑥 + tan⁻¹ 𝑦 = tan⁻¹ ((𝑥 + 𝑦)/(1 − 𝑥𝑦)), if 𝑥𝑦 < 1
tan⁻¹ 𝑥 + tan⁻¹ 𝑦 = 𝜋 + tan⁻¹ ((𝑥 + 𝑦)/(1 − 𝑥𝑦)), if 𝑥 > 0, 𝑦 > 0 and 𝑥𝑦 > 1
tan⁻¹ 𝑥 − tan⁻¹ 𝑦 = tan⁻¹ ((𝑥 − 𝑦)/(1 + 𝑥𝑦)), if 𝑥𝑦 > −1
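These identities are easy to spot-check numerically. A short Python sketch (the values of 𝑥 and 𝑦 are arbitrary illustrations):

```python
import math

# Case xy < 1: tan^-1 x + tan^-1 y = tan^-1((x + y)/(1 - xy))
x, y = 0.5, 0.3                                   # xy = 0.15 < 1
lhs = math.atan(x) + math.atan(y)
rhs = math.atan((x + y) / (1 - x * y))
print(abs(lhs - rhs) < 1e-12)                     # True

# Case x, y > 0 and xy > 1: the correction term pi is needed
x, y = 2.0, 3.0                                   # xy = 6 > 1
lhs = math.atan(x) + math.atan(y)
rhs = math.pi + math.atan((x + y) / (1 - x * y))
print(abs(lhs - rhs) < 1e-12)                     # True
```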
NCERT: EXE – 2.1 CW: Q. NO. 1, 2, 3, 4, 5, 6, 11, 12 HW: Q. NO. 7, 8, 9, 10, 13, 14
EXE – 2.2 CW: Q. NO. 1, 2, 3, 4, 5, 7 HW: Q. NO. 6, 8, 9, 11, 12, 13
EXE – MISCELLANEOUS CW: Q. NO. 8, 9, 10, 13, 14 HW: Q. NO. 1, 2, 3, 4, 5, 6
EXEMPLAR: EX – 2.3 – 4, 7, 12, 19 HW: 14, 15, 20, 21, 22, 24, 25
MATRIX
A matrix is an ordered rectangular array of numbers or functions. The numbers or functions are called the elements or
the entries of the matrix. We denote matrices by capital letters.
ORDER OF A MATRIX
A matrix having 𝑚 rows and 𝑛 columns is called a matrix of order 𝑚 × 𝑛 or simply 𝑚 × 𝑛 matrix (read as an 𝑚 by 𝑛
matrix).
In general, an 𝑚 × 𝑛 matrix is written as the rectangular array A = [𝑎𝑖𝑗]𝑚×𝑛, 1 ≤ 𝑖 ≤ 𝑚, 1 ≤ 𝑗 ≤ 𝑛; 𝑖, 𝑗 ∈ N.
TYPES OF MATRICES
COLUMN MATRIX
A matrix is said to be a column matrix if it has only one column. In general, A = [𝑎𝑖𝑗]𝑚×1 is a column matrix of order 𝑚 × 1.
ROW MATRIX
A matrix is said to be a row matrix if it has only one row. In general, 𝐁 = [𝑏𝑖𝑗 ]1×𝑛 is a row matrix of order 1 × 𝑛.
SQUARE MATRIX
A matrix in which the number of rows equals the number of columns is called a square matrix. Thus, an 𝑚 × 𝑛 matrix is a square matrix if 𝑚 = 𝑛, and is known as a square matrix of order 𝑛.
DIAGONAL MATRIX
A square matrix B = [𝑏𝑖𝑗]𝑚×𝑚 is said to be a diagonal matrix if all its non-diagonal elements are zero, that is, a matrix B = [𝑏𝑖𝑗]𝑚×𝑚 is said to be a diagonal matrix if 𝑏𝑖𝑗 = 0 when 𝑖 ≠ 𝑗.
SCALAR MATRIX
A diagonal matrix is said to be a scalar matrix if its diagonal elements are equal, that is, a square matrix B = [𝑏𝑖𝑗]𝑛×𝑛 is said to be a scalar matrix if 𝑏𝑖𝑗 = 0 when 𝑖 ≠ 𝑗, and 𝑏𝑖𝑗 = 𝑘 when 𝑖 = 𝑗, for some constant 𝑘.
IDENTITY MATRIX
A square matrix in which all the diagonal elements are 1 and the rest are all zero is called an identity matrix. In other words, the square matrix A = [𝑎𝑖𝑗]𝑛×𝑛 is an identity matrix if 𝑎𝑖𝑗 = 1 when 𝑖 = 𝑗, and 𝑎𝑖𝑗 = 0 when 𝑖 ≠ 𝑗.
We denote the identity matrix of order 𝑛 by 𝐼𝑛 . When order is clear from the context, we simply write it as I.
ZERO MATRIX
A matrix is said to be zero matrix or null matrix if all its elements are zero. We denote zero matrix by O.
EQUALITY OF MATRICES
Two matrices A = [𝑎𝑖𝑗 ] and B = [𝑏𝑖𝑗 ] are said to be equal if
(i) they are of the same order
(ii) each element of A is equal to the corresponding element of B, that is 𝑎𝑖𝑗 = 𝑏𝑖𝑗 for all 𝑖 and 𝑗.
OPERATIONS ON MATRICES
ADDITION OF MATRICES
Thus, if A = [𝑎11 𝑎12 𝑎13; 𝑎21 𝑎22 𝑎23] is a 2 × 3 matrix and B = [𝑏11 𝑏12 𝑏13; 𝑏21 𝑏22 𝑏23] is another 2 × 3 matrix, then we define
A + B = [𝑎11 + 𝑏11 𝑎12 + 𝑏12 𝑎13 + 𝑏13; 𝑎21 + 𝑏21 𝑎22 + 𝑏22 𝑎23 + 𝑏23].
MULTIPLICATION OF MATRICES
The product of two matrices A and B is defined if the number of columns of A is equal to the number of rows of B.
Let A = [𝑎𝑖𝑗] be an 𝑚 × 𝑛 matrix and B = [𝑏𝑗𝑘] be an 𝑛 × 𝑝 matrix. Then the product of the matrices A and B is the matrix C = [𝑐𝑖𝑘] of order 𝑚 × 𝑝, where 𝑐𝑖𝑘 = 𝑎𝑖1𝑏1𝑘 + 𝑎𝑖2𝑏2𝑘 + ⋯ + 𝑎𝑖𝑛𝑏𝑛𝑘.
NON-COMMUTATIVITY OF MULTIPLICATION OF MATRICES:
Even if 𝐴𝐵 and 𝐵𝐴 are both defined, it is not necessary that 𝐴𝐵 = 𝐵𝐴, as the sketch below illustrates.
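A minimal sketch of such an example (the matrices below are assumed, not the text's own):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]

print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]]  -> AB != BA
```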
TRANSPOSE OF A MATRIX:
If A = [𝑎𝑖𝑗] is an 𝑚 × 𝑛 matrix, then the matrix obtained by interchanging the rows and columns of A is called the transpose of A, denoted by A′.
A square matrix A = [𝑎𝑖𝑗 ] is said to be symmetric if A′ = A, that is, [𝑎𝑖𝑗 ] = [𝑎𝑗𝑖 ] for all possible values of 𝑖 and 𝑗.
A square matrix A = [𝑎𝑖𝑗] is said to be a skew symmetric matrix if A′ = −A, that is, 𝑎𝑗𝑖 = −𝑎𝑖𝑗 for all possible values of 𝑖 and 𝑗. Now, if we put 𝑖 = 𝑗, we have 𝑎𝑖𝑖 = −𝑎𝑖𝑖, so 2𝑎𝑖𝑖 = 0, i.e., 𝑎𝑖𝑖 = 0 for all 𝑖.
This means that all the diagonal elements of a skew symmetric matrix are zero.
THEOREM 1: For any square matrix A with real number entries, A + A′ is a symmetric matrix and A − A′ is a skew
symmetric matrix.
THEOREM 2: Any square matrix can be expressed as the sum of a symmetric and a skew symmetric matrix: A = ½(A + A′) + ½(A − A′).
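A short Python sketch of this decomposition (the matrix A is an assumed example): P = (A + A′)/2 is symmetric, Q = (A − A′)/2 is skew symmetric, and A = P + Q.

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]     # assumed example matrix
At = transpose(A)
n = len(A)

P = [[(A[i][j] + At[i][j]) / 2 for j in range(n)] for i in range(n)]  # symmetric part
Q = [[(A[i][j] - At[i][j]) / 2 for j in range(n)] for i in range(n)]  # skew part

assert P == transpose(P)                                  # P' = P
assert Q == [[-x for x in row] for row in transpose(Q)]   # Q' = -Q
assert all(P[i][j] + Q[i][j] == A[i][j] for i in range(n) for j in range(n))
```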
INVERTIBLE MATRICES:
If A is a square matrix of order 𝑚, and if there exists another square matrix B of the same order 𝑚, such that AB =
BA = I, then B is called the inverse matrix of A and it is denoted by A−1 . In that case A is said to be invertible.
DETERMINANT
If A = [𝑎 𝑏; 𝑐 𝑑], then the determinant of A is written as |A| = |𝑎 𝑏; 𝑐 𝑑| = det(A) or Δ, and its value is 𝑎𝑑 − 𝑏𝑐.
(i) For matrix A, |A| is read as determinant of A and not modulus of A.
(ii) Only square matrices have determinants.
There are six ways of expanding the determinant of a square matrix A = [𝑎𝑖𝑗]3×3, corresponding to each of the three rows (𝑅1, 𝑅2 and 𝑅3) and the three columns (C1, C2 and C3), all giving the same value.
If A = 𝑘B, where A and B are square matrices of order 𝑛, then |A| = 𝑘ⁿ|B|, where 𝑛 = 1, 2, 3.
AREA OF TRIANGLE:
Area of a triangle whose vertices are (𝑥1 , 𝑦1 ), (𝑥2 , 𝑦2 ) and (𝑥3 , 𝑦3 ), is given by the expression
Δ = (1/2) |𝑥1 𝑦1 1; 𝑥2 𝑦2 1; 𝑥3 𝑦3 1|
Minor of an element 𝒂𝒊𝒋 of a determinant is the determinant obtained by deleting the 𝑖th row and 𝑗th column in which the element 𝑎𝑖𝑗 lies. The minor of an element 𝑎𝑖𝑗 is denoted by M𝑖𝑗.
Cofactor of an element 𝒂𝒊𝒋 , denoted by A𝑖𝑗 is defined by A𝑖𝑗 = (−1)𝑖+𝑗 M𝑖𝑗 , where M𝑖𝑗 is minor of 𝑎𝑖𝑗 .
If elements of a row (or column) are multiplied with cofactors of any other row (or column), then their sum is zero.
The adjoint of a square matrix A = [𝑎𝑖𝑗 ]𝑛×𝑛 is defined as the transpose of the matrix [A𝑖𝑗 ]𝑛×𝑛 , where A𝑖𝑗 is the
cofactor of the element 𝑎𝑖𝑗 . Adjoint of the matrix A is denoted by adj A.
THEOREM 1: If A is any given square matrix of order 𝑛, then A(adj A) = (adj A)A = |A|I, where I is the identity matrix of order 𝑛.
THEOREM 3: The determinant of the product of matrices is equal to product of their respective determinants, that is,
|AB| = |A||B|, where A and B are square matrices of the same order
𝑎1 𝑥 + 𝑏1 𝑦 + 𝑐1 𝑧 = 𝑑1
𝑎2 𝑥 + 𝑏2 𝑦 + 𝑐2 𝑧 = 𝑑2
𝑎3 𝑥 + 𝑏3 𝑦 + 𝑐3 𝑧 = 𝑑3
𝑎1 𝑏1 𝑐1 𝑥 𝑑1
Let 𝐴 = [𝑎2 𝑏2 𝑐2 ] , 𝑋 = [𝑦] and 𝐵 = [𝑑2 ]
𝑎3 𝑏3 𝑐3 𝑧 𝑑3
Then, the system of equations can be written as, AX = B, i.e.,
𝑎1 𝑏1 𝑐1 𝑥 𝑑1
[𝑎2 𝑏2 𝑐2 ] [𝑦] = [𝑑2 ]
𝑎3 𝑏3 𝑐3 𝑧 𝑑3
CASE I:
If A is a non-singular matrix, then its inverse exists. Now AX = B ⇒ A⁻¹(AX) = A⁻¹B ⇒ X = A⁻¹B.
This matrix equation provides a unique solution for the given system of equations, as the inverse of a matrix is unique. This method of solving a system of equations is known as the Matrix Method; a sketch follows below.
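A minimal NumPy sketch of the Matrix Method (the system below is an assumed example, not from the text):

```python
import numpy as np

# Solve: x + y + z = 6,  2x + y + 3z = 13,  x + 2y + z = 8
A = np.array([[1, 1, 1],
              [2, 1, 3],
              [1, 2, 1]], dtype=float)
B = np.array([6, 13, 8], dtype=float)

assert np.linalg.det(A) != 0     # A is non-singular, so X = A^(-1) B exists
X = np.linalg.inv(A) @ B
print(X)                         # [1. 2. 3.]  ->  x = 1, y = 2, z = 3
```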
CASE II:
If A is a singular matrix, then |A| = 0. In this case, we calculate (adjA)B.
If (adj A)B ≠ O, (O being zero matrix), then solution does not exist and the system of equations is called inconsistent.
If (adj A)B = O, then the system may be either consistent or inconsistent, according as the system has either infinitely many solutions or no solution.
CONTINUITY AND DIFFERENTIABILITY
Continuity on an open interval: A function 𝑓(𝑥) is said to be continuous on an open interval (𝑎, 𝑏) if and only if it is continuous at every point of the interval (𝑎, 𝑏).
Continuity on a closed interval: A function 𝑓(𝑥) is said to be continuous on a closed interval [𝑎, 𝑏] if and only if
(i) 𝑓 is continuous on the open interval (𝑎, 𝑏),
(ii) lim𝑥→𝑎⁺ 𝑓(𝑥) = 𝑓(𝑎), and
(iii) lim𝑥→𝑏⁻ 𝑓(𝑥) = 𝑓(𝑏).
Continuous Function: A function 𝑓(𝑥) is said to be continuous, if it is continuous at each point of its domain.
Everywhere Continuous Function: A function 𝑓(𝑥) is said to be everywhere continuous if it is continuous on the
entire real line (−∞, ∞).
Theorem: Let 𝑓 and 𝑔 be two real functions continuous at a real number 𝑐. Then
(1) 𝑓 + 𝑔 is continuous at 𝑥 = 𝑐.
(2) 𝑓 − 𝑔 is continuous at 𝑥 = 𝑐.
(3) 𝑓. 𝑔 is continuous at 𝑥 = 𝑐.
(4) 𝑓/𝑔 is continuous at 𝑥 = 𝑐 (provided 𝑔(𝑐) ≠ 0).
DISCONTINUOUS FUNCTIONS:
A function 𝑓 is said to be discontinuous at a point 𝑎 of its domain 𝐷 if it is not continuous at 𝑎. The point 𝑎 is then
called a point of discontinuity of the function.
The discontinuity may arise due to any of the following situations:
(i) lim𝑥→𝑎⁺ 𝑓(𝑥) or lim𝑥→𝑎⁻ 𝑓(𝑥) (or both) may not exist.
(ii) lim𝑥→𝑎⁺ 𝑓(𝑥) and lim𝑥→𝑎⁻ 𝑓(𝑥) may both exist, but be unequal.
(iii) lim𝑥→𝑎⁺ 𝑓(𝑥) and lim𝑥→𝑎⁻ 𝑓(𝑥) may both exist, but either of the two (or both) may not be equal to 𝑓(𝑎).
DIFFERENTIABILITY AT A POINT
Let 𝑓(𝑥) be a real-valued function defined on an open interval (𝑎, 𝑏) and let 𝑐 ∈ (𝑎, 𝑏). Then 𝑓(𝑥) is said to be differentiable or derivable at 𝑥 = 𝑐 if and only if lim𝑥→𝑐 (𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐) exists finitely.
𝑓(𝑥) is differentiable at 𝑥 = 𝑐 ⇔ 𝐿𝑓′(𝑐) = 𝑅𝑓′(𝑐). If 𝐿𝑓′(𝑐) ≠ 𝑅𝑓′(𝑐), we say that 𝑓(𝑥) is not differentiable at 𝑥 = 𝑐.
𝑓(𝑥) is differentiable at point 𝑃, if and only if there exists a unique tangent at point 𝑃. In other words, 𝑓(𝑥) is
differentiable at a point 𝑃 if and only if the curve does not have 𝑃 as a corner point.
𝑓(𝑥) is differentiable at 𝑥 = 𝑐 ⇒ 𝑓(𝑥) is continuous at 𝑥 = 𝑐.
A function 𝑓(𝑥) defined on an open interval (𝑎, 𝑏) is said to be differentiable or derivable in open interval (𝑎, 𝑏) if it
is differentiable at each point of (𝑎, 𝑏).
DERIVATIVE: The rate of change of a function with respect to the independent variable. For the function 𝑦 = 𝑓(𝑥), it is denoted by 𝑑𝑦/𝑑𝑥.
DIFFERENTIATION: The process of obtaining the derivative of a function by considering small changes in the function
and independent variable, and finding the limiting value of the ratio of such changes.
Slope of tangent at P = lim𝑥→𝑐 (𝑓(𝑥) − 𝑓(𝑐))/(𝑥 − 𝑐) = (𝑑𝑓(𝑥)/𝑑𝑥)𝑥=𝑐
The derivative of a constant function is zero, i.e., 𝑑/𝑑𝑥 (𝑐) = 0.
Let 𝑓(𝑥) be a differentiable function and let 𝑐 be a constant. Then 𝑐⋅𝑓(𝑥) is also differentiable, such that 𝑑/𝑑𝑥 {𝑐⋅𝑓(𝑥)} = 𝑐⋅𝑑/𝑑𝑥 𝑓(𝑥).
If 𝑓(𝑥) and 𝑔(𝑥) are differentiable functions, then 𝑓(𝑥) ± 𝑔(𝑥) are also differentiable, such that 𝑑/𝑑𝑥 [𝑓(𝑥) ± 𝑔(𝑥)] = 𝑑/𝑑𝑥 𝑓(𝑥) ± 𝑑/𝑑𝑥 𝑔(𝑥).
PRODUCT RULE: If 𝑓(𝑥) and 𝑔(𝑥) are two differentiable functions, then 𝑓(𝑥)⋅𝑔(𝑥) is also differentiable, such that
𝑑/𝑑𝑥 [𝑓(𝑥) ⋅ 𝑔(𝑥)] = 𝑓(𝑥) 𝑑/𝑑𝑥 𝑔(𝑥) + 𝑔(𝑥) 𝑑/𝑑𝑥 𝑓(𝑥)
That is, derivative of the product of two functions = [( First function ) × ( derivative of second function ) + (second
function) × (derivative of first function)].
QUOTIENT RULE: If 𝑓(𝑥) and 𝑔(𝑥) are two differentiable functions and 𝑔(𝑥) ≠ 0, then 𝑓(𝑥)/𝑔(𝑥) is also differentiable, such that
𝑑/𝑑𝑥 [𝑓(𝑥)/𝑔(𝑥)] = [𝑔(𝑥) 𝑑/𝑑𝑥 𝑓(𝑥) − 𝑓(𝑥) 𝑑/𝑑𝑥 𝑔(𝑥)] / [𝑔(𝑥)]²
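Both rules can be verified symbolically. A SymPy sketch (the functions 𝑓 and 𝑔 are chosen arbitrarily):

```python
import sympy as sp

x = sp.symbols('x')
f, g = x**2, sp.sin(x)            # assumed example functions

product_rule = f * sp.diff(g, x) + g * sp.diff(f, x)
quotient_rule = (g * sp.diff(f, x) - f * sp.diff(g, x)) / g**2

print(sp.simplify(sp.diff(f * g, x) - product_rule))    # 0
print(sp.simplify(sp.diff(f / g, x) - quotient_rule))   # 0
```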
When it is not possible to express 𝑦 as a function of 𝑥 in the form 𝑦 = 𝑓(𝑥), then 𝑦 is said to be an implicit function of 𝑥. To find the derivative in such a case, we differentiate both sides of the given relation with respect to 𝑥.
LOGARITHMIC DIFFERENTIATION: Writing 𝑦 = 𝑓(𝑥)^𝑔(𝑥) = 𝑒^(𝑔(𝑥)⋅log 𝑓(𝑥)) and then differentiating with respect to 𝑥, we get
𝑑𝑦/𝑑𝑥 = 𝑒^(𝑔(𝑥) log 𝑓(𝑥)) [𝑔(𝑥) ⋅ (1/𝑓(𝑥)) ⋅ 𝑑/𝑑𝑥 𝑓(𝑥) + log 𝑓(𝑥) ⋅ 𝑑/𝑑𝑥 𝑔(𝑥)]
= [𝑓(𝑥)]^𝑔(𝑥) [(𝑔(𝑥)/𝑓(𝑥)) 𝑑/𝑑𝑥 𝑓(𝑥) + log 𝑓(𝑥) ⋅ 𝑑/𝑑𝑥 𝑔(𝑥)]
When 𝑥 and 𝑦 are given as functions of a single variable, i.e., 𝑥 = 𝑓(𝑡) and 𝑦 = 𝑔(𝑡), where 𝑡 is the parameter, then 𝑑𝑦/𝑑𝑥 = (𝑑𝑦/𝑑𝑡)/(𝑑𝑥/𝑑𝑡).
APPLICATION OF DERIVATIVES
If a quantity 𝑦 varies with another quantity 𝑥, satisfying some rule 𝑦 = 𝑓(𝑥), then 𝑓′(𝑥) represents the rate of change of 𝑦 with respect to 𝑥, and 𝑓′(ℎ) represents the rate of change of 𝑦 with respect to 𝑥 at 𝑥 = ℎ.
𝑑𝑦/𝑑𝑥 is positive if 𝑦 increases as 𝑥 increases, and is negative if 𝑦 decreases as 𝑥 increases.
Strictly Increasing Function : A function 𝑓(𝑥) is said to be a strictly increasing function on (𝑎, 𝑏) if 𝑥1 < 𝑥2 ⇒
𝑓(𝑥1 ) < 𝑓(𝑥2 ) for all 𝑥1 , 𝑥2 ∈ (𝑎, 𝑏).
Strictly Decreasing Function: A function 𝑓(𝑥) is said to be a strictly decreasing function on (𝑎, 𝑏) if 𝑥1 < 𝑥2 ⇒
𝑓(𝑥1 ) > 𝑓(𝑥2 ) for all 𝑥1 , 𝑥2 ∈ (𝑎, 𝑏).
Monotonic Function: A function 𝑓(𝑥) is said to be monotonic on an interval (𝑎, 𝑏) if it is either increasing or
decreasing on (𝑎, 𝑏).
A function 𝑓(𝑥) is said to be increasing (decreasing) on [𝑎, 𝑏] if it is increasing (decreasing) on (𝑎, 𝑏) and it is also increasing (decreasing) at 𝑥 = 𝑎 and 𝑥 = 𝑏.
If 𝑓(𝑥) is an increasing function on (𝑎, 𝑏), then the tangent at every point on the curve 𝑦 = 𝑓(𝑥) makes an acute angle 𝜃 with the positive direction of the 𝑥-axis.
SLOPE OF TANGENT
If a tangent line to the curve 𝑦 = 𝑓(𝑥) makes an angle 𝜃 with the 𝑥-axis in the positive direction, then 𝑑𝑦/𝑑𝑥 = slope of the tangent = tan 𝜃.
MAXIMUM
Let 𝑓(𝑥) be a function with domain 𝐷 ⊂ 𝑅. Then 𝑓(𝑥) is said to attain the maximum value at a point 𝑎 ∈ 𝐷, if
𝑓(𝑥) ≤ 𝑓(𝑎) for all 𝑥 ∈ 𝐷.
In such a case, 𝑎 is called point of maxima and 𝑓(𝑎) is known as the maximum value or the greatest value or the
absolute maximum value of 𝑓(𝑥).
MINIMUM
Let 𝑓(𝑥) be a function with domain 𝐷 ⊂ 𝑅. Then 𝑓(𝑥) is said to attain the minimum value at a point 𝑎 ∈ 𝐷, if
𝑓(𝑥) ≥ 𝑓(𝑎) for all 𝑥 ∈ 𝐷
In such a case, 𝑎 is called point of minima and 𝑓(𝑎) is known as the minimum value or the least value or the absolute
minimum value of 𝑓(𝑥).
LOCAL MAXIMUM: A function 𝑓(𝑥) is said to attain a local maximum at 𝑥 = 𝑎 if there exists a neighbourhood
( 𝑎 − 𝛿, 𝑎 + 𝛿 ) of 𝑎 such that, 𝑓(𝑥) < 𝑓(𝑎) for all 𝑥 ∈ (𝑎 − 𝛿, 𝑎 + 𝛿), 𝑥 ≠ 𝑎 or, 𝑓(𝑥) − 𝑓(𝑎) < 0 for all 𝑥 ∈ (𝑎 −
𝛿, 𝑎 + 𝛿), 𝑥 ≠ 𝑎.
In such a case, 𝑓(𝑎) is called the local maximum value of 𝑓(𝑥) at 𝑥 = 𝑎.
LOCAL MINIMUM: A function 𝑓(𝑥) is said to attain a local minimum at 𝑥 = 𝑎 if there exists a neighbourhood (𝑎 − 𝛿, 𝑎 + 𝛿) of 𝑎 such that 𝑓(𝑥) > 𝑓(𝑎) for all 𝑥 ∈ (𝑎 − 𝛿, 𝑎 + 𝛿), 𝑥 ≠ 𝑎, or 𝑓(𝑥) − 𝑓(𝑎) > 0 for all 𝑥 ∈ (𝑎 − 𝛿, 𝑎 + 𝛿), 𝑥 ≠ 𝑎. In such a case, 𝑓(𝑎) is called the local minimum value of 𝑓(𝑥) at 𝑥 = 𝑎.
If 𝑐 is a point of local maxima of 𝑓, then 𝑓(𝑐) is a local maximum value of 𝑓. Similarly, if 𝑐 is a point of local minima of
𝑓, then 𝑓(𝑐) is a local minimum value of 𝑓.
A point c in the domain of a function 𝑓 at which either 𝑓 ′ (𝑐) = 0 or 𝑓 is not differentiable is called a critical point of
𝑓. Note that if 𝑓 is continuous at 𝑐 and 𝑓 ′ (𝑐) = 0, then there exists an ℎ > 0 such that 𝑓 is differentiable in the
interval (𝑐 − ℎ, 𝑐 + ℎ).
FIRST DERIVATIVE TEST FOR LOCAL MAXIMA AND MINIMA-
Let 𝑓 be a function defined on an open interval I. Let 𝑓 be continuous at a critical point 𝑐 in I. Then
(i) If 𝑓′(𝑥) changes sign from positive to negative as 𝑥 increases through 𝑐, then 𝑐 is a point of local maxima.
(ii) If 𝑓′(𝑥) changes sign from negative to positive as 𝑥 increases through 𝑐, then 𝑐 is a point of local minima.
(iii) If 𝑓′(𝑥) does not change sign as 𝑥 increases through 𝑐, then 𝑐 is neither a point of local maxima nor a point of local minima. In fact, such a point is called a point of inflexion.
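A SymPy sketch of the first derivative test for the assumed example 𝑓(𝑥) = 𝑥³ − 3𝑥:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 - 3*x
fp = sp.diff(f, x)                         # f'(x) = 3x**2 - 3

for c in sorted(sp.solve(fp, x)):          # critical points: -1 and 1
    left = fp.subs(x, c - sp.Rational(1, 2))
    right = fp.subs(x, c + sp.Rational(1, 2))
    if left > 0 and right < 0:
        print(c, 'local maximum')          # f' changes + to - at x = -1
    elif left < 0 and right > 0:
        print(c, 'local minimum')          # f' changes - to + at x = 1
```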
CHAPTER - 7: INTEGRALS
Integration is the inverse process of differentiation. Let 𝑑/𝑑𝑥 F(𝑥) = 𝑓(𝑥). Then we write ∫ 𝑓(𝑥)𝑑𝑥 = F(𝑥) + C.
(v) 𝑑/𝑑𝑥 (−cos 𝑥) = sin 𝑥 ⇒ ∫ sin 𝑥 𝑑𝑥 = −cos 𝑥 + 𝐶
(vi) 𝑑/𝑑𝑥 (sin 𝑥) = cos 𝑥 ⇒ ∫ cos 𝑥 𝑑𝑥 = sin 𝑥 + 𝐶
(vii) 𝑑/𝑑𝑥 (tan 𝑥) = sec² 𝑥 ⇒ ∫ sec² 𝑥 𝑑𝑥 = tan 𝑥 + 𝐶
(viii) 𝑑/𝑑𝑥 (−cot 𝑥) = cosec² 𝑥 ⇒ ∫ cosec² 𝑥 𝑑𝑥 = −cot 𝑥 + 𝐶
(ix) 𝑑/𝑑𝑥 (sec 𝑥) = sec 𝑥 tan 𝑥 ⇒ ∫ sec 𝑥 tan 𝑥 𝑑𝑥 = sec 𝑥 + 𝐶
(x) 𝑑/𝑑𝑥 (−cosec 𝑥) = cosec 𝑥 cot 𝑥 ⇒ ∫ cosec 𝑥 cot 𝑥 𝑑𝑥 = −cosec 𝑥 + 𝐶
(xi) 𝑑/𝑑𝑥 (log sin 𝑥) = cot 𝑥 ⇒ ∫ cot 𝑥 𝑑𝑥 = log |sin 𝑥| + 𝐶
(xii) 𝑑/𝑑𝑥 (−log cos 𝑥) = tan 𝑥 ⇒ ∫ tan 𝑥 𝑑𝑥 = −log |cos 𝑥| + 𝐶
(xiii) 𝑑/𝑑𝑥 (log(sec 𝑥 + tan 𝑥)) = sec 𝑥 ⇒ ∫ sec 𝑥 𝑑𝑥 = log |sec 𝑥 + tan 𝑥| + 𝐶
(xiv) 𝑑/𝑑𝑥 (log(cosec 𝑥 − cot 𝑥)) = cosec 𝑥 ⇒ ∫ cosec 𝑥 𝑑𝑥 = log |cosec 𝑥 − cot 𝑥| + 𝐶
(xv) 𝑑/𝑑𝑥 (sin⁻¹(𝑥/𝑎)) = 1/√(𝑎² − 𝑥²) ⇒ ∫ 1/√(𝑎² − 𝑥²) 𝑑𝑥 = sin⁻¹(𝑥/𝑎) + 𝐶
(xvi) 𝑑/𝑑𝑥 (cos⁻¹(𝑥/𝑎)) = −1/√(𝑎² − 𝑥²) ⇒ ∫ −1/√(𝑎² − 𝑥²) 𝑑𝑥 = cos⁻¹(𝑥/𝑎) + 𝐶
(xvii) 𝑑/𝑑𝑥 ((1/𝑎) tan⁻¹(𝑥/𝑎)) = 1/(𝑎² + 𝑥²) ⇒ ∫ 1/(𝑎² + 𝑥²) 𝑑𝑥 = (1/𝑎) tan⁻¹(𝑥/𝑎) + 𝐶
(xviii) 𝑑/𝑑𝑥 ((1/𝑎) cot⁻¹(𝑥/𝑎)) = −1/(𝑎² + 𝑥²) ⇒ ∫ −1/(𝑎² + 𝑥²) 𝑑𝑥 = (1/𝑎) cot⁻¹(𝑥/𝑎) + 𝐶
(xix) 𝑑/𝑑𝑥 ((1/𝑎) sec⁻¹(𝑥/𝑎)) = 1/(𝑥√(𝑥² − 𝑎²)) ⇒ ∫ 1/(𝑥√(𝑥² − 𝑎²)) 𝑑𝑥 = (1/𝑎) sec⁻¹(𝑥/𝑎) + 𝐶
(xx) 𝑑/𝑑𝑥 ((1/𝑎) cosec⁻¹(𝑥/𝑎)) = −1/(𝑥√(𝑥² − 𝑎²)) ⇒ ∫ −1/(𝑥√(𝑥² − 𝑎²)) 𝑑𝑥 = (1/𝑎) cosec⁻¹(𝑥/𝑎) + 𝐶
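Each entry can be confirmed by differentiating its right-hand side. A SymPy spot-check of entry (xiii):

```python
import sympy as sp

x = sp.symbols('x')
F = sp.log(sp.sec(x) + sp.tan(x))                # claimed antiderivative of sec x
print(sp.simplify(sp.diff(F, x) - sp.sec(x)))    # 0, so d/dx F = sec x
```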
• If ∫ 𝑓(𝑥)𝑑𝑥 = 𝜙(𝑥), then ∫ 𝑓(𝑎𝑥 + 𝑏)𝑑𝑥 = (1/𝑎) 𝜙(𝑎𝑥 + 𝑏).
• ∫ (𝑎𝑥 + 𝑏)ⁿ 𝑑𝑥 = (𝑎𝑥 + 𝑏)ⁿ⁺¹/(𝑎(𝑛 + 1)) + 𝐶, 𝑛 ≠ −1.
• ∫ 1/(𝑎𝑥 + 𝑏) 𝑑𝑥 = (1/𝑎) log |𝑎𝑥 + 𝑏| + 𝐶.
• ∫ 𝑒^(𝑎𝑥+𝑏) 𝑑𝑥 = (1/𝑎) 𝑒^(𝑎𝑥+𝑏) + 𝐶.
• ∫ 𝑎^(𝑏𝑥+𝑐) 𝑑𝑥 = (1/𝑏) ⋅ 𝑎^(𝑏𝑥+𝑐)/log 𝑎 + 𝐶, 𝑎 > 0 and 𝑎 ≠ 1.
• ∫ sin(𝑎𝑥 + 𝑏)𝑑𝑥 = −(1/𝑎) cos(𝑎𝑥 + 𝑏) + 𝐶.
• ∫ cos(𝑎𝑥 + 𝑏)𝑑𝑥 = (1/𝑎) sin(𝑎𝑥 + 𝑏) + 𝐶.
• ∫ sec²(𝑎𝑥 + 𝑏)𝑑𝑥 = (1/𝑎) tan(𝑎𝑥 + 𝑏) + 𝐶.
• ∫ cosec²(𝑎𝑥 + 𝑏)𝑑𝑥 = −(1/𝑎) cot(𝑎𝑥 + 𝑏) + 𝐶.
• ∫ sec(𝑎𝑥 + 𝑏) tan(𝑎𝑥 + 𝑏)𝑑𝑥 = (1/𝑎) sec(𝑎𝑥 + 𝑏) + 𝐶.
• ∫ cosec(𝑎𝑥 + 𝑏) cot(𝑎𝑥 + 𝑏)𝑑𝑥 = −(1/𝑎) cosec(𝑎𝑥 + 𝑏) + 𝐶.
• ∫ tan(𝑎𝑥 + 𝑏)𝑑𝑥 = −(1/𝑎) log |cos(𝑎𝑥 + 𝑏)| + 𝐶.
• ∫ cot(𝑎𝑥 + 𝑏)𝑑𝑥 = (1/𝑎) log |sin(𝑎𝑥 + 𝑏)| + 𝐶.
• ∫ sec(𝑎𝑥 + 𝑏)𝑑𝑥 = (1/𝑎) log |sec(𝑎𝑥 + 𝑏) + tan(𝑎𝑥 + 𝑏)| + 𝐶.
• ∫ cosec(𝑎𝑥 + 𝑏)𝑑𝑥 = (1/𝑎) log |cosec(𝑎𝑥 + 𝑏) − cot(𝑎𝑥 + 𝑏)| + 𝐶.
• In rational algebraic functions, if the degree of the numerator is greater than or equal to the degree of the denominator, then always divide the numerator by the denominator and use the result:
Numerator/Denominator = Quotient + Remainder/Denominator
• To evaluate integrals of the form ∫ sin 𝑚𝑥cos 𝑛𝑥𝑑𝑥, ∫ sin 𝑚𝑥sin 𝑛𝑥𝑑𝑥, ∫ cos 𝑚𝑥cos 𝑛𝑥𝑑𝑥 and
∫ cos 𝑚𝑥sin 𝑛𝑥𝑑𝑥, we use the following trigonometrical identities:
2sin 𝐴cos 𝐵 = sin(𝐴 + 𝐵) + sin(𝐴 − 𝐵).
2cos 𝐴sin 𝐵 = sin(𝐴 + 𝐵) − sin(𝐴 − 𝐵).
2cos 𝐴cos 𝐵 = cos(𝐴 + 𝐵) + cos(𝐴 − 𝐵).
2sin 𝐴sin 𝐵 = cos(𝐴 − 𝐵) − cos(𝐴 + 𝐵).
• ∫ 𝑓′(𝑥)/𝑓(𝑥) 𝑑𝑥 = log |𝑓(𝑥)| + 𝐶.
INTEGRATION BY PARTS
• If 𝑢 and 𝑣 are two functions of 𝑥, then
∫ 𝑢𝑣 𝑑𝑥 = 𝑢 (∫ 𝑣 𝑑𝑥) − ∫ {(𝑑𝑢/𝑑𝑥) ∫ 𝑣 𝑑𝑥} 𝑑𝑥
Choose the first function as the function which comes first in the word ILATE, where
I stands for inverse trigonometric functions, L for logarithmic functions, A for algebraic functions, T for trigonometric functions and E for exponential functions.
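Following ILATE for the assumed example ∫ 𝑥 𝑒^𝑥 𝑑𝑥, 𝑥 is Algebraic and 𝑒^𝑥 is Exponential, so 𝑢 = 𝑥 is the first function. A SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x')
u, v = x, sp.exp(x)                       # first and second functions by ILATE

# integration by parts: u * (integral of v) - integral of (u' * integral of v)
by_parts = u * sp.integrate(v, x) - sp.integrate(sp.diff(u, x) * sp.integrate(v, x), x)
print(sp.expand(by_parts))                              # x*exp(x) - exp(x)
print(sp.simplify(by_parts - sp.integrate(u * v, x)))   # 0
```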
DEFINITE INTEGRALS
A definite integral is denoted by ∫ₐᵇ 𝑓(𝑥)𝑑𝑥, where 𝑎 is called the lower limit and 𝑏 the upper limit of the integral. The definite integral is introduced either as the limit of a sum or, if 𝑓 has an antiderivative F in the interval [𝑎, 𝑏], its value is the difference between the values of F at the end points, i.e., F(𝑏) − F(𝑎).
P5: ∫₀²ᵃ 𝑓(𝑥)𝑑𝑥 = ∫₀ᵃ 𝑓(𝑥)𝑑𝑥 + ∫₀ᵃ 𝑓(2𝑎 − 𝑥)𝑑𝑥
P6: ∫₀²ᵃ 𝑓(𝑥)𝑑𝑥 = 2∫₀ᵃ 𝑓(𝑥)𝑑𝑥, if 𝑓(2𝑎 − 𝑥) = 𝑓(𝑥), and 0 if 𝑓(2𝑎 − 𝑥) = −𝑓(𝑥)
P7: (i) ∫₋ₐᵃ 𝑓(𝑥)𝑑𝑥 = 2∫₀ᵃ 𝑓(𝑥)𝑑𝑥, if 𝑓 is an even function, i.e., if 𝑓(−𝑥) = 𝑓(𝑥).
(ii) ∫₋ₐᵃ 𝑓(𝑥)𝑑𝑥 = 0, if 𝑓 is an odd function, i.e., if 𝑓(−𝑥) = −𝑓(𝑥).
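A SymPy spot-check of P7 with assumed integrands (𝑥² is even, 𝑥³ is odd):

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)

print(sp.integrate(x**2, (x, -a, a)))       # 2*a**3/3
print(2 * sp.integrate(x**2, (x, 0, a)))    # 2*a**3/3, matching P7(i)
print(sp.integrate(x**3, (x, -a, a)))       # 0, matching P7(ii)
```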
Expression → Substitution
𝑎² + 𝑥² → 𝑥 = 𝑎 tan 𝜃 or 𝑎 cot 𝜃
𝑎² − 𝑥² → 𝑥 = 𝑎 sin 𝜃 or 𝑎 cos 𝜃
𝑥² − 𝑎² → 𝑥 = 𝑎 sec 𝜃 or 𝑎 cosec 𝜃
√((𝑎 − 𝑥)/(𝑎 + 𝑥)) or √((𝑎 + 𝑥)/(𝑎 − 𝑥)) → 𝑥 = 𝑎 cos 2𝜃
√((𝑥 − 𝛼)/(𝛽 − 𝑥)) or √((𝑥 − 𝛼)(𝛽 − 𝑥)) → 𝑥 = 𝛼 cos²𝜃 + 𝛽 sin²𝜃
APPLICATION OF INTEGRALS
• The area bounded by the two curves 𝑦 = 𝑓(𝑥) and 𝑦 = 𝑔(𝑥), such that 0 ≤ 𝑔(𝑥) ≤ 𝑓(𝑥) for all 𝑥 ∈ [𝑎, 𝑏], between the abscissae 𝑥 = 𝑎 and 𝑥 = 𝑏, is given by
Area = ∫ₐᵇ {𝑓(𝑥) − 𝑔(𝑥)}𝑑𝑥
• The area bounded by the two curves 𝑥 = 𝑓(𝑦) and 𝑥 = 𝑔(𝑦), such that 0 ≤ 𝑔(𝑦) ≤ 𝑓(𝑦) for all 𝑦 ∈ [𝑐, 𝑑], between the ordinates 𝑦 = 𝑐 and 𝑦 = 𝑑, is given by
Area = ∫ {𝑓(𝑦) − 𝑔(𝑦)}𝑑𝑦 (taken from 𝑦 = 𝑐 to 𝑦 = 𝑑)
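A SymPy sketch for the assumed curves 𝑓(𝑥) = 𝑥 and 𝑔(𝑥) = 𝑥² on [0, 1], where 0 ≤ 𝑔(𝑥) ≤ 𝑓(𝑥):

```python
import sympy as sp

x = sp.symbols('x')
area = sp.integrate(x - x**2, (x, 0, 1))   # integral of f - g from 0 to 1
print(area)                                # 1/6
```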
DIFFERENTIAL EQUATIONS
HOMOGENEOUS FORM
• A function 𝑓(𝑥, 𝑦) is called a homogeneous function of degree 𝑛 if 𝑓(𝜆𝑥, 𝜆𝑦) = 𝜆𝑛 𝑓(𝑥, 𝑦).
A homogeneous function 𝑓(𝑥, 𝑦) of degree 𝑛 can always be written as 𝑓(𝑥, 𝑦) = 𝑥ⁿ 𝑓(𝑦/𝑥) or 𝑓(𝑥, 𝑦) = 𝑦ⁿ 𝑓(𝑥/𝑦).
9. VECTOR
If we restrict the line 𝑙 to the line segment AB, then a magnitude is prescribed on the line 𝑙 with one of the two directions, so that we obtain a directed line segment. Thus, a directed line segment has magnitude as well as direction.
The position vectors of points A, B, C, etc., with respect to the origin O are denoted by 𝑎, 𝑏⃗ and 𝑐, etc., respectively
The triangle OAP is right-angled, and in it we have cos 𝛼 = 𝑥/𝑟 (𝑟 stands for |𝑟|). Similarly, from the right-angled triangles OBP and OCP, we may write cos 𝛽 = 𝑦/𝑟 and cos 𝛾 = 𝑧/𝑟. Thus, the coordinates of the point P may also be expressed as (𝑙𝑟, 𝑚𝑟, 𝑛𝑟). The numbers 𝑙𝑟, 𝑚𝑟 and 𝑛𝑟, proportional to the direction cosines, are called the direction ratios of vector 𝑟, and are denoted by 𝑎, 𝑏 and 𝑐 respectively.
Note that 𝑙² + 𝑚² + 𝑛² = 1 but 𝑎² + 𝑏² + 𝑐² ≠ 1, in general.
TYPES OF VECTORS
ZERO VECTOR: A vector whose initial and terminal points coincide is called a zero vector (or null vector), and is denoted as 0⃗. A zero vector cannot be assigned a definite direction, as it has zero magnitude; alternatively, it may be regarded as having any direction. The vectors 𝐴𝐴⃗ and 𝐵𝐵⃗ represent the zero vector.
UNIT VECTOR: A vector whose magnitude is unity (i.e., 1 unit) is called a unit vector. The unit vector in the
direction of a given vector 𝑎 is denoted by 𝑎ˆ.
COINITIAL VECTORS: Two or more vectors having the same initial point are called coinitial vectors.
COLLINEAR VECTORS: Two or more vectors are said to be collinear if they are parallel to the same line,
irrespective of their magnitudes and directions.
EQUAL VECTORS: Two vectors 𝑎 and 𝑏⃗ are said to be equal, if they have the same magnitude and direction
regardless of the positions of their initial points, and written as 𝑎 = 𝑏⃗.
NEGATIVE OF A VECTOR: A vector whose magnitude is the same as that of a given vector (say, 𝐴𝐵⃗), but whose direction is opposite to it, is called the negative of the given vector. For example, vector 𝐵𝐴⃗ is the negative of the vector 𝐴𝐵⃗, and we write 𝐵𝐴⃗ = −𝐴𝐵⃗.
The vectors defined above are such that any of them may be subject to its parallel displacement without changing its
magnitude and direction. Such vectors are called free vectors.
ADDITION OF VECTORS
If we have two vectors 𝑎 and 𝑏⃗ represented by the two adjacent sides of a parallelogram in magnitude and direction,
then their sum 𝑎 + 𝑏⃗ is represented in magnitude and direction by the diagonal of the parallelogram through their
common point. This is known as the parallelogram law of vector addition.
PROPERTY -1:
For any two vectors 𝑎 and 𝑏⃗, 𝑎 + 𝑏⃗ = 𝑏⃗ + 𝑎 (Commutative property)
PROPERTY -2:
For any three vectors 𝑎, 𝑏⃗ and 𝑐, 𝑎 + (𝑏⃗ + 𝑐) = (𝑎 + 𝑏⃗) + 𝑐 (Associative property)
PROPERTY -3:
For any vector 𝑎, we have 𝑎 + ⃗0 = ⃗0 + 𝑎 = 𝑎, Here, the zero vector ⃗0 is called the additive identity for the vector
addition.
MULTIPLICATION OF A VECTOR BY A SCALAR:
Let 𝑎 be a given vector and 𝜆 a scalar. Then the product of the vector 𝑎 by the scalar 𝜆, denoted as 𝜆𝑎, is called the multiplication of vector 𝑎 by the scalar 𝜆. Note that 𝜆𝑎 is also a vector, collinear to the vector 𝑎. The vector 𝜆𝑎 has the same direction as (or opposite direction to) that of vector 𝑎 according as the value of 𝜆 is positive (or negative). Also, the magnitude of vector 𝜆𝑎 is |𝜆| times the magnitude of the vector 𝑎, i.e., |𝜆𝑎| = |𝜆||𝑎|.
The unit vector in the direction of vector 𝑎 is given by 𝑎ˆ = 𝑎/|𝑎|.
SECTION FORMULA:
If 𝐶 is the midpoint of 𝐴𝐵⃗, then 𝑂𝐶⃗ divides 𝐴𝐵⃗ in the ratio 1 : 1. Therefore, the position vector of 𝐶 is (1⋅𝑏⃗ + 1⋅𝑎)/(1 + 1) = (𝑎 + 𝑏⃗)/2.
The scalar product of two nonzero vectors 𝑎 and 𝑏⃗, denoted by 𝑎 ⋅ 𝑏⃗, is defined as 𝑎 ⋅ 𝑏⃗ = |𝑎||𝑏⃗| cos 𝜃, where 𝜃 is the angle between 𝑎 and 𝑏⃗, 0 ≤ 𝜃 ≤ 𝜋.
If either 𝑎 = 0⃗ or 𝑏⃗ = 0⃗, then 𝜃 is not defined, and in this case we define 𝑎 ⋅ 𝑏⃗ = 0.
OBSERVATIONS:
𝑎 ⋅ 𝑏⃗ is a real number.
Let 𝑎 and 𝑏⃗ be two nonzero vectors. Then 𝑎 ⋅ 𝑏⃗ = 0 if and only if 𝑎 and 𝑏⃗ are perpendicular to each other, i.e., 𝑎 ⋅ 𝑏⃗ = 0 ⇔ 𝑎 ⊥ 𝑏⃗.
If 𝜃 = 0, then 𝑎 ⋅ 𝑏⃗ = |𝑎||𝑏⃗|. In particular, 𝑎 ⋅ 𝑎 = |𝑎|², as 𝜃 in this case is 0.
If 𝜃 = 𝜋, then 𝑎 ⋅ 𝑏⃗ = −|𝑎||𝑏⃗|. In particular, 𝑎 ⋅ (−𝑎) = −|𝑎|², as 𝜃 in this case is 𝜋.
In view of Observations 2 and 3, for mutually perpendicular unit vectors 𝑖ˆ, 𝑗ˆ and 𝑘ˆ, we have 𝑖ˆ ⋅ 𝑖ˆ = 𝑗ˆ ⋅ 𝑗ˆ = 𝑘ˆ ⋅ 𝑘ˆ = 1 and 𝑖ˆ ⋅ 𝑗ˆ = 𝑗ˆ ⋅ 𝑘ˆ = 𝑘ˆ ⋅ 𝑖ˆ = 0.
cos 𝜃 = (𝑎 ⋅ 𝑏⃗)/(|𝑎||𝑏⃗|), or 𝜃 = cos⁻¹((𝑎 ⋅ 𝑏⃗)/(|𝑎||𝑏⃗|))
If 𝑝ˆ is the unit vector along a line 𝑙, then the projection of a vector 𝑎 on the line 𝑙 is given by 𝑎 ⋅ 𝑝ˆ.
The projection of a vector 𝑎 on another vector 𝑏⃗ is given by 𝑎 ⋅ 𝑏ˆ, or 𝑎 ⋅ (𝑏⃗/|𝑏⃗|), or (1/|𝑏⃗|)(𝑎 ⋅ 𝑏⃗).
If 𝜃 = 0, then the projection vector of 𝐴𝐵⃗ will be 𝐴𝐵⃗ itself, and if 𝜃 = 𝜋, then the projection vector of 𝐴𝐵⃗ will be 𝐵𝐴⃗.
If 𝜃 = 𝜋/2 or 𝜃 = 3𝜋/2, then the projection vector of 𝐴𝐵⃗ will be the zero vector.
If 𝛼, 𝛽 and 𝛾 are the direction angles of vector 𝑎 = 𝑎1𝑖ˆ + 𝑎2𝑗ˆ + 𝑎3𝑘ˆ, then its direction cosines may be given as cos 𝛼 = (𝑎 ⋅ 𝑖ˆ)/(|𝑎||𝑖ˆ|) = 𝑎1/|𝑎|, cos 𝛽 = 𝑎2/|𝑎| and cos 𝛾 = 𝑎3/|𝑎|.
The vector product of two nonzero vectors 𝑎 and 𝑏⃗ is denoted by 𝑎 × 𝑏⃗ and defined as 𝑎 × 𝑏⃗ = |𝑎||𝑏⃗| sin 𝜃 𝑛ˆ, where 𝜃 is the angle between 𝑎 and 𝑏⃗, 0 ≤ 𝜃 ≤ 𝜋, and 𝑛ˆ is a unit vector perpendicular to both 𝑎 and 𝑏⃗, such that 𝑎, 𝑏⃗ and 𝑛ˆ form a right-handed system.
If either 𝑎 = 0⃗ or 𝑏⃗ = 0⃗, then 𝜃 is not defined, and in this case we define 𝑎 × 𝑏⃗ = 0⃗.
OBSERVATIONS
𝑎 × 𝑏⃗ is a vector.
Let 𝑎 and 𝑏⃗ be two nonzero vectors. Then 𝑎 × 𝑏⃗ = 0⃗ if and only if 𝑎 and 𝑏⃗ are parallel (or collinear) to each other, i.e., 𝑎 × 𝑏⃗ = 0⃗ ⇔ 𝑎 ∥ 𝑏⃗.
In particular, 𝑎 × 𝑎 = 0⃗ and 𝑎 × (−𝑎) = 0⃗, since in the first situation 𝜃 = 0 and in the second one 𝜃 = 𝜋, making the value of sin 𝜃 equal to 0.
If 𝜃 = 𝜋/2, then |𝑎 × 𝑏⃗| = |𝑎||𝑏⃗|.
In view of Observations 2 and 3, for mutually perpendicular unit vectors 𝑖ˆ, 𝑗ˆ and 𝑘ˆ, we have 𝑖ˆ × 𝑖ˆ = 𝑗ˆ × 𝑗ˆ = 𝑘ˆ × 𝑘ˆ = 0⃗ and 𝑖ˆ × 𝑗ˆ = 𝑘ˆ, 𝑗ˆ × 𝑘ˆ = 𝑖ˆ, 𝑘ˆ × 𝑖ˆ = 𝑗ˆ.
In terms of the vector product, the angle between two vectors 𝑎 and 𝑏⃗ may be given as sin 𝜃 = |𝑎 × 𝑏⃗|/(|𝑎||𝑏⃗|).
Property 3 (Distributivity of vector product over addition): If 𝑎, 𝑏⃗ and 𝑐 are any three vectors and 𝜆 is a scalar, then 𝑎 × (𝑏⃗ + 𝑐) = 𝑎 × 𝑏⃗ + 𝑎 × 𝑐 and (𝜆𝑎) × 𝑏⃗ = 𝜆(𝑎 × 𝑏⃗).
Let 𝑎 and 𝑏⃗ be two vectors given in component form as 𝑎 = 𝑎1 𝑖ˆ + 𝑎2 𝑗ˆ + 𝑎3 𝑘ˆ and 𝑏⃗ = 𝑏1 𝑖ˆ + 𝑏2 𝑗ˆ + 𝑏3 𝑘ˆ, respectively.
Then their cross product may be given by
𝑖ˆ 𝑗ˆ 𝑘ˆ
𝑎 × 𝑏⃗ = |𝑎1 𝑎2 𝑎3 |
𝑏1 𝑏2 𝑏3
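A small Python sketch of the dot and cross products for the assumed vectors 𝑎 = 𝑖ˆ + 2𝑗ˆ + 3𝑘ˆ and 𝑏⃗ = 4𝑖ˆ + 5𝑗ˆ + 6𝑘ˆ:

```python
a = (1, 2, 3)
b = (4, 5, 6)

dot = sum(ai * bi for ai, bi in zip(a, b))   # scalar product: 32
cross = (a[1] * b[2] - a[2] * b[1],          # i component
         a[2] * b[0] - a[0] * b[2],          # j component
         a[0] * b[1] - a[1] * b[0])          # k component
print(dot, cross)                            # 32 (-3, 6, -3)
```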
THREE DIMENSIONAL GEOMETRY
Direction cosines of a line are the cosines of the angles made by the line with the positive directions of the coordinate axes.
• Direction Ratios: Let 𝑙, 𝑚, 𝑛 be the direction cosines of a vector 𝑟 and let 𝑎, 𝑏, 𝑐 be three numbers such that 𝑙/𝑎 = 𝑚/𝑏 = 𝑛/𝑐. Then 𝑎, 𝑏, 𝑐 are known as direction ratios or direction numbers of vector 𝑟.
• If 𝑎, 𝑏, 𝑐 are direction ratios of a vector, then its direction cosines are given by ±𝑎/√(𝑎² + 𝑏² + 𝑐²), ±𝑏/√(𝑎² + 𝑏² + 𝑐²), ±𝑐/√(𝑎² + 𝑏² + 𝑐²).
• Direction cosines of 𝑟 = 𝑎𝑖ˆ + 𝑏𝑗ˆ + 𝑐𝑘ˆ are 𝑎/|𝑟|, 𝑏/|𝑟|, 𝑐/|𝑟|.
• Direction ratios of 𝑃𝑄⃗ are 𝑥2 − 𝑥1, 𝑦2 − 𝑦1, 𝑧2 − 𝑧1, and its direction cosines are (𝑥2 − 𝑥1)/|𝑃𝑄⃗|, (𝑦2 − 𝑦1)/|𝑃𝑄⃗|, (𝑧2 − 𝑧1)/|𝑃𝑄⃗|.
• The direction ratios of a line are the direction ratios of any vector whose support is the given line.
• If 𝐴(𝑥1 , 𝑦1 , 𝑧1 ) and 𝐵(𝑥2 , 𝑦2 , 𝑧2 ) are two points on a line, then its direction ratios are 𝑥2 − 𝑥1 , 𝑦2 − 𝑦1 , 𝑧2 −
𝑧1 .
• Angle Between Two Vectors in Terms of Their Direction Ratios: Let 𝑎 and 𝑏⃗ be two vectors with direction ratios 𝑎1, 𝑏1, 𝑐1 and 𝑎2, 𝑏2, 𝑐2 respectively, and let 𝜃 be the angle between them. Then cos 𝜃 = (𝑎1𝑎2 + 𝑏1𝑏2 + 𝑐1𝑐2)/(√(𝑎1² + 𝑏1² + 𝑐1²) √(𝑎2² + 𝑏2² + 𝑐2²)).
• Algorithm for Finding Angle Between Two Vectors in Terms of Their Direction Cosines or Directions Ratios
Step I : Obtain direction ratios or direction cosines of two vectors. Let the direction ratios of two vectors be
𝑎1 , 𝑏1 , 𝑐1 and 𝑎2 , 𝑏2 , 𝑐2 respectively.
Step II: Write vectors parallel to the given vectors: let 𝑎 = 𝑎1𝑖ˆ + 𝑏1𝑗ˆ + 𝑐1𝑘ˆ be a vector parallel to the vector having direction ratios 𝑎1, 𝑏1, 𝑐1, and let 𝑏⃗ = 𝑎2𝑖ˆ + 𝑏2𝑗ˆ + 𝑐2𝑘ˆ be a vector parallel to the vector having direction ratios 𝑎2, 𝑏2, 𝑐2.
Step III: Use the formula cos 𝜃 = (𝑎 ⋅ 𝑏⃗)/(|𝑎||𝑏⃗|).
• Vector Equation of a Line Passing Through a given Point and Parallel to a given Vector: The vector equation of a straight line passing through a fixed point with position vector 𝑎 and parallel to a given vector 𝑏⃗ is 𝑟 = 𝑎 + 𝜆𝑏⃗, where 𝜆 is a scalar.
If 𝑟 is the position vector of any point 𝑃(𝑥, 𝑦, 𝑧) on the line 𝑟 = 𝑎 + 𝜆𝑏⃗, then 𝑟 = 𝑥𝑖ˆ + 𝑦𝑗ˆ + 𝑧𝑘ˆ.
• Cartesian Equation of a Line Passing Through a given Point and given Direction Ratios : Cartesian equation of
a straight line passing through a fixed point
(𝑥1, 𝑦1, 𝑧1) and having direction ratios 𝑎, 𝑏, 𝑐 is (𝑥 − 𝑥1)/𝑎 = (𝑦 − 𝑦1)/𝑏 = (𝑧 − 𝑧1)/𝑐.
• The parametric equations of the line (𝑥 − 𝑥1)/𝑎 = (𝑦 − 𝑦1)/𝑏 = (𝑧 − 𝑧1)/𝑐 are 𝑥 = 𝑥1 + 𝑎𝜆, 𝑦 = 𝑦1 + 𝑏𝜆, 𝑧 = 𝑧1 + 𝑐𝜆, where 𝜆 is the parameter.
𝑥−𝑥1 𝑦−𝑦1 𝑧−𝑧1
• The co-ordinates of any point on the line arc = = are (𝑥1 + 𝑎𝜆, 𝑦1 + 𝑏𝜆, 𝑧1 + 𝑐𝜆), and having
𝑎 𝑏 𝑐
𝑥−𝑥1 𝑦−𝑦1 𝑧−𝑧1
direction cosines 𝑙, 𝑚, 𝑛 is 𝑙
= 𝑚
= 𝑛
.
• Since the direction cosines of a line are also direction ratios, the equation of a line passing through (𝑥1, 𝑦1, 𝑧1) and having direction cosines 𝑙, 𝑚, 𝑛 is (𝑥 − 𝑥1)/𝑙 = (𝑦 − 𝑦1)/𝑚 = (𝑧 − 𝑧1)/𝑛.
• The 𝑥, 𝑦 and 𝑧-axes pass through the origin and have direction cosines (1, 0, 0), (0, 1, 0) and (0, 0, 1) respectively. Therefore, the equation of the 𝑥-axis is (𝑥 − 0)/1 = (𝑦 − 0)/0 = (𝑧 − 0)/0, i.e., 𝑦 = 0 and 𝑧 = 0; similarly, the 𝑦-axis is 𝑥 = 0 and 𝑧 = 0, and the 𝑧-axis is 𝑥 = 0 and 𝑦 = 0.
• Vector Equation of a Line Passing Through Two given Points: The vector equation of a line passing through
two points with position vector 𝑎 and 𝑏⃗ is
𝑟 = 𝑎 + 𝜆(𝑏⃗ − 𝑎)
• Cartesian Equation of a Line Passing Through Two given Points: The Cartesian equation of a line passing through two given points (𝑥1, 𝑦1, 𝑧1) and (𝑥2, 𝑦2, 𝑧2) is given by (𝑥 − 𝑥1)/(𝑥2 − 𝑥1) = (𝑦 − 𝑦1)/(𝑦2 − 𝑦1) = (𝑧 − 𝑧1)/(𝑧2 − 𝑧1).
• Cartesian to Vector: If the Cartesian equation of a line is (𝑥 − 𝑥1)/𝑎 = (𝑦 − 𝑦1)/𝑏 = (𝑧 − 𝑧1)/𝑐, then the corresponding vector equation is 𝑟 = (𝑥1𝑖ˆ + 𝑦1𝑗ˆ + 𝑧1𝑘ˆ) + 𝜆(𝑎𝑖ˆ + 𝑏𝑗ˆ + 𝑐𝑘ˆ).
• Vector form: Let the vector equations of the two lines be 𝑟 = 𝑎1 + 𝜆𝑏⃗1 and 𝑟 = 𝑎2 + 𝜇𝑏⃗2. If 𝜃 is the angle between the given lines, then cos 𝜃 = (𝑏⃗1 ⋅ 𝑏⃗2)/(|𝑏⃗1||𝑏⃗2|).
• Image of a Point in a Line: finding the foot of the perpendicular from 𝑃 to the line and extending it an equal distance beyond gives the position vector of 𝑄, which is the image of 𝑃 in the given line.
• Skew Lines: Two straight lines in space which are neither parallel nor intersecting are called skew lines.
• Line of Shortest Distance: If 𝑙1 and 𝑙2 are two skew-lines, then there is one and only one line perpendicular
to each of lines 𝑙1 and 𝑙2 which is known as the line of shortest distance.
• Shortest Distance: The shortest distance between two lines 𝑙1 and 𝑙2 is the distance 𝑃𝑄 between the points
𝑃 and 𝑄 where the lines of shortest distance intersects the two given lines.
• Shortest Distance Between Two Skew Lines (Vector Form): Let 𝑙1 and 𝑙2 be two lines whose equations are
𝑙1 : 𝑟 = 𝑎1 + 𝜆𝑏⃗1 and 𝑙2 : 𝑟 = 𝑎2 + 𝜇𝑏⃗2 respectively. Let ⃗⃗⃗⃗⃗
𝑃𝑄 be the shortest distance vector between 𝑙1 and
𝑙2 .
𝑃𝑄 = |(𝑏⃗1 × 𝑏⃗2) ⋅ (𝑎2 − 𝑎1)| / |𝑏⃗1 × 𝑏⃗2| = |[𝑏⃗1 𝑏⃗2 (𝑎2 − 𝑎1)]| / |𝑏⃗1 × 𝑏⃗2|
• Condition for Two given Lines to Intersect: If the lines 𝑟 = 𝑎1 + 𝜆𝑏⃗1 and 𝑟 = 𝑎2 + 𝜇𝑏⃗2 intersect, then the
shortest distance between them is zero.
|(𝑏⃗1 × 𝑏⃗2) ⋅ (𝑎2 − 𝑎1)| / |𝑏⃗1 × 𝑏⃗2| = 0, i.e., (𝑏⃗1 × 𝑏⃗2) ⋅ (𝑎2 − 𝑎1) = 0
• Shortest Distance Between Two Skew Lines (Cartesian Form) : Let the two skew lines be
𝑥 − 𝑥1 𝑦 − 𝑦1 𝑧 − 𝑧1 𝑥 − 𝑥2 𝑦 − 𝑦2 𝑧 − 𝑧2
= = and = = .
𝑙1 𝑚1 𝑛1 𝑙2 𝑚2 𝑛2
𝑑 = |𝑥2 − 𝑥1 𝑦2 − 𝑦1 𝑧2 − 𝑧1; 𝑙1 𝑚1 𝑛1; 𝑙2 𝑚2 𝑛2| / √((𝑚1𝑛2 − 𝑚2𝑛1)² + (𝑛1𝑙2 − 𝑙1𝑛2)² + (𝑙1𝑚2 − 𝑙2𝑚1)²)
where the numerator is the 3 × 3 determinant of the rows shown.
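A NumPy sketch of the vector-form shortest-distance formula for two assumed skew lines 𝑟 = (𝑖ˆ + 𝑗ˆ) + 𝜆(2𝑖ˆ − 𝑗ˆ + 𝑘ˆ) and 𝑟 = (2𝑖ˆ + 𝑗ˆ − 𝑘ˆ) + 𝜇(3𝑖ˆ − 5𝑗ˆ + 2𝑘ˆ):

```python
import numpy as np

a1, b1 = np.array([1, 1, 0]), np.array([2, -1, 1])
a2, b2 = np.array([2, 1, -1]), np.array([3, -5, 2])

n = np.cross(b1, b2)                              # b1 x b2 = (3, -1, -7)
d = abs(np.dot(n, a2 - a1)) / np.linalg.norm(n)   # |(b1 x b2).(a2 - a1)| / |b1 x b2|
print(d)                                          # 10/sqrt(59) ~ 1.302
```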
• Condition of Parallelism: A line with direction vector 𝑏⃗ = 𝑙𝑖ˆ + 𝑚𝑗ˆ + 𝑛𝑘ˆ is parallel to a plane with normal 𝑛⃗ = 𝑎𝑖ˆ + 𝑏𝑗ˆ + 𝑐𝑘ˆ when 𝑏⃗ and 𝑛⃗ are perpendicular.
So, 𝑏⃗ ⋅ 𝑛⃗ = 0 ⇒ 𝑎𝑙 + 𝑏𝑚 + 𝑐𝑛 = 0.
A half-plane in the 𝑥𝑦-plane is called a closed half-plane if the points on the line separating the half-plane are also included in the half-plane.
The graph of a linear inequation involving the sign ‘≤’ or ‘≥’ is a closed half-plane.
A half-plane in the 𝑥𝑦-plane is called an open half-plane if the points on the line separating the half-plane are not included in the half-plane.
The graph of a linear inequation involving the sign ‘<’ or ‘>’ is an open half-plane.
Two or more linear inequations are said to constitute a system of linear inequations.
The solution set of a system of linear inequations is defined as the intersection of solution sets of linear inequations
in the system.
A linear inequation is also called a linear constraint as it restricts the freedom of choice of the values 𝑥 and 𝑦.
LINEAR PROGRAMMING
In linear programming we deal with the optimization (maximization or minimization) of a linear function of a number of variables, subject to a number of restrictions (or constraints) on the variables, in the form of linear inequations in the variables of the objective function.
A Linear Programming Problem is one that is concerned with finding the optimal value (maximum or minimum value)
of a linear function (called objective function) of several variables (say 𝑥 and 𝑦 ), subject to the conditions that the
variables are non-negative and satisfy a set of linear inequalities (called linear constraints).
OBJECTIVE FUNCTION Linear function Z = 𝑎𝑥 + 𝑏𝑦, where 𝑎, 𝑏 are constants, which has to be maximised or
minimized is called a linear objective function. Variables 𝑥 and 𝑦 are called decision variables.
CONSTRAINTS The linear inequalities or equations or restrictions on the variables of a linear programming
problem are called constraints. The conditions 𝑥 ≥ 0, 𝑦 ≥ 0 are called non-negative restrictions.
FEASIBLE REGION The common region determined by all the constraints including non-negative constraints
𝑥, 𝑦 ≥ 0 of a linear programming problem is called the feasible region (or solution region) for the problem. The
region other than feasible region is called an infeasible region.
FEASIBLE SOLUTIONS Points within and on the boundary of the feasible region represent feasible solutions of the constraints.
Theorem 1 Let R be the feasible region (convex polygon) for a linear programming problem and let Z = 𝑎𝑥 + 𝑏𝑦 be
the objective function. When Z has an optimal value (maximum or minimum), where the variables 𝑥 and 𝑦 are
subject to constraints described by linear inequalities, this optimal value must occur at a corner point (vertex) of the
feasible region.
A corner point of a feasible region is a point in the region which is the intersection of two boundary lines.
Theorem 2 Let R be the feasible region for a linear programming problem, and let Z = 𝑎𝑥 + by be the objective
function. If R is bounded, then the objective function Z has both a maximum and a minimum value on 𝑅 and each of
these occurs at a corner point (vertex) of 𝑅.
A feasible region of a system of linear inequalities is said to be bounded if it can be enclosed within a circle.
Otherwise, it is called unbounded. Unbounded means that the feasible region does extend indefinitely in any
direction.
If R is unbounded, then a maximum or a minimum value of the objective function may not exist. However, if it exists,
it must occur at a corner point of R. (By Theorem 1).
The method of solving linear programming problem is referred as Corner Point Method. The method comprises of
the following steps:
1. Find the feasible region of the linear programming problem and determine its corner points (vertices) either
by inspection or by solving the two equations of the lines intersecting at that point.
2. Evaluate the objective function Z = 𝑎𝑥 + 𝑏𝑦 at each corner point. Let M and 𝑚 respectively denote the largest and smallest of these values.
3. (i) When the feasible region is bounded, M and 𝑚 are the maximum and minimum values of Z.
(ii) In case the feasible region is unbounded:
(a) M is the maximum value of Z if the open half-plane determined by 𝑎𝑥 + 𝑏𝑦 > 𝑀 has no point in common with the feasible region. Otherwise, Z has no maximum value.
(b) Similarly, 𝑚 is the minimum value of Z if the open half-plane determined by 𝑎𝑥 + 𝑏𝑦 < 𝑚 has no point in common with the feasible region. Otherwise, Z has no minimum value.
WORKING RULE
1. When the linear constraints and the objective function are given:
(i) Write the linear equations corresponding to the given linear inequations.
(ii) Draw the graph of each linear equation.
(iii) Check the solution region of each linear inequation by testing points, and then shade the common region of all the linear inequations.
(iv) Determine the corner points of the feasible region.
(v) Find the value of the objective function at each of the corner points obtained in the above step.
(vi) The maximum or minimum value out of all the values obtained in the above step is the maximum or minimum value of the objective function, as the sketch below illustrates.
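A minimal Python sketch of the Corner Point Method for an assumed problem (maximise Z = 3𝑥 + 4𝑦 subject to 𝑥 + 𝑦 ≤ 4, 𝑥 ≥ 0, 𝑦 ≥ 0, whose corner points are (0, 0), (4, 0) and (0, 4)):

```python
corners = [(0, 0), (4, 0), (0, 4)]        # vertices of the feasible region
Z = lambda x, y: 3 * x + 4 * y            # objective function

values = {p: Z(*p) for p in corners}
best = max(values, key=values.get)
print(values)                             # {(0, 0): 0, (4, 0): 12, (0, 4): 16}
print('maximum of Z:', values[best], 'at', best)   # 16 at (0, 4)
```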
2. When the linear constraints and the objective function are not given:
WORKING RULE
(i) Identify the unknown variables in the given linear programming problem. Denote them by 𝑥 and 𝑦.
(ii) Formulate the objective function in terms of 𝑥 and 𝑦. Also observe whether it is to be maximized or minimized.
(iii) Write the linear constraints in the form of linear inequations formed by the given conditions.
(iv) Write the linear equations corresponding to the given linear inequations.
(v) Draw the graph of each linear equation.
(vi) Check the solution region of each linear inequation by testing points, and then shade the common region of all the linear inequations.
(vii) Determine the corner points of the feasible region.
(viii) Evaluate the value of the objective function at each corner point obtained in the above step.
(ix) If the feasible region is unbounded, the value obtained may or may not be the minimum or maximum value of the objective function. In this case, draw the graph of the inequation obtained by equating the objective function to the above value (i.e., < for a minimum or > for a maximum), and check whether the resulting open half-plane has points in common with the feasible region or not.
NCERT: EXE – 12.1 CW: Q. NO. 1, 2, 3, 4, 5, 6, 9 HW: Q. NO. 7, 8, 10
EXE – 6.2 CW: Q. NO. 1, 2, 3, 4, 5, 7 HW: Q. NO. 6, 8, 9, 11, 12, 13
EXEMPLAR: 1, 5, 7, 9 HW: 2, 3, 4, 6
EXHAUSTIVE NUMBER OF CASES: The total number of possible outcomes of a random experiment in a trial is
known as the exhaustive number of cases.
The total number of elementary events of a random experiment is called the exhaustive number of cases.
MUTUALLY EXCLUSIVE EVENTS: Events are said to be mutually exclusive or incompatible if the occurrence of any one of them prevents the occurrence of all the others, i.e., if no two or more of them can occur simultaneously in the same trial.
EQUALLY LIKELY EVENTS: Events are equally likely if there is no reason for an event to occur in preference to
any other events.
The number of cases favourable to an event in a trial is the total number of elementary events such that the occurrence of any one of them ensures the happening of the event.
INDEPENDENT EVENTS: Events are said to be independent if the happening (or non-happening) of one event is
not affected by the happening (or non-happening) of others.
SAMPLE SPACE: The set of all possible outcomes of a random experiment is called the sample space associated
with it and it is generally denoted by 𝑆.
If 𝐸1 , 𝐸2 , … , 𝐸𝑛 are the possible outcomes of a random experiment, then 𝑆 = {𝐸1 , 𝐸2 , … , 𝐸𝑛 }. Each element of 𝑆 is
called a sample point.
EVENT: A subset of the sample space associated with a random experiment is called an event.
Elementary Events: Single element subsets of the sample space associated with a random experiment are known as
the elementary events or indecomposable events.
COMPOUND EVENTS: Those subsets of the sample space 𝑆 associated to an experiment which are disjoint union
of single element subsets of the sample space 𝑆 are known as the compound or decomposable events.
OCCURRENCE OR HAPPENING OF AN EVENT: Let 𝑆 be the sample space associated with a random experiment and let 𝐴 be an event. If 𝑤 is an outcome of a trial such that 𝑤 ∈ 𝐴, then we say that the event 𝐴 has occurred. If 𝑤 ∉ 𝐴, we say that the event 𝐴 has not occurred.
ALGEBRA OF EVENTS
Not 𝐴 : 𝐴‾
𝐴 and 𝐵 : 𝐴 ∩ 𝐵
𝐴 but not 𝐵 : 𝐴 ∩ 𝐵‾
Neither 𝐴 nor 𝐵 : 𝐴‾ ∩ 𝐵‾ = (𝐴 ∪ 𝐵)‾
MUTUALLY EXCLUSIVE EVENTS: Let 𝑆 be the sample space associated with a random experiment and let 𝐴1
and 𝐴2 be two events. Then 𝐴1 and 𝐴2 are mutually exclusive events if 𝐴1 ∩ 𝐴2 = 𝜙.
MUTUALLY EXCLUSIVE AND EXHAUSTIVE SYSTEM OF EVENTS: Let 𝑆 be the sample space associated with a random experiment, and let 𝐴1, 𝐴2, …, 𝐴𝑛 be subsets of 𝑆 such that
(i) 𝐴𝑖 ∩ 𝐴𝑗 = 𝜙 for 𝑖 ≠ 𝑗, and (ii) 𝐴1 ∪ 𝐴2 ∪ … ∪ 𝐴𝑛 = 𝑆.
Then the collection of events 𝐴1, 𝐴2, …, 𝐴𝑛 is said to form a mutually exclusive and exhaustive system of events.
FAVOURABLE EVENTS: Let 𝑆 be the sample space associated with a random experiment and let 𝐴 ⊂ 𝑆. Then the
elementary events belonging to 𝐴 are known as the favourable events to 𝐴.
PROBABILITY OF AN EVENT: Let 𝑆 be the sample space associated with a random experiment, and let 𝐴 be a subset of 𝑆 representing an event. Then the probability of the event 𝐴 is defined as 𝑃(𝐴) = 𝑛(𝐴)/𝑛(𝑆), the ratio of the number of elementary events favourable to 𝐴 to the total number of elementary events.
ADDITION THEOREM: For any two events 𝐴 and 𝐵, 𝑃(𝐴 ∪ 𝐵) = 𝑃(𝐴) + 𝑃(𝐵) − 𝑃(𝐴 ∩ 𝐵). If 𝐴 and 𝐵 are mutually exclusive events, then 𝑃(𝐴 ∩ 𝐵) = 0, and therefore 𝑃(𝐴 ∪ 𝐵) = 𝑃(𝐴) + 𝑃(𝐵). This is the addition theorem for mutually exclusive events.
ADDITION THEOREM FOR THREE EVENTS: If 𝐴, 𝐵, 𝐶 are three events associated with a random experiment then,
𝑃(𝐴 ∪ 𝐵 ∪ 𝐶) = 𝑃(𝐴) + 𝑃(𝐵) + 𝑃(𝐶) − 𝑃(𝐴 ∩ 𝐵) − 𝑃(𝐵 ∩ 𝐶) − 𝑃(𝐴 ∩ 𝐶) + 𝑃(𝐴 ∩ 𝐵 ∩ 𝐶).
If 𝐴, 𝐵, 𝐶 are mutually exclusive events, then 𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐵 ∩ 𝐶) = 𝑃(𝐴 ∩ 𝐶) = 𝑃(𝐴 ∩ 𝐵 ∩ 𝐶) = 0.
∴ 𝑃(𝐴 ∪ 𝐵 ∪ 𝐶) = 𝑃(𝐴) + 𝑃(𝐵) + 𝑃(𝐶)
Let 𝐴 and 𝐵 be two events associated with a random experiment.Then
(i) 𝑃(𝐴‾ ∩ 𝐵) = 𝑃(𝐵) − 𝑃(𝐴 ∩ 𝐵)
(ii) 𝑃(𝐴 ∩ 𝐵‾) = 𝑃(𝐴) − 𝑃(𝐴 ∩ 𝐵)
𝑃(𝐴‾ ∩ B) is known as the probability of occurence of 𝐵 only.
𝑃(𝐴 ∩ 𝐵‾) is known as the probability of occurence of 𝐴 only.
If 𝐵 ⊂ 𝐴, then (i) 𝑃(𝐴 ∩ 𝐵‾) = 𝑃(𝐴) − 𝑃(𝐵) and (ii) 𝑃(𝐵) ≤ 𝑃(𝐴).
CONDITIONAL PROBABILITY: Let 𝐴 and 𝐵 be two events associated with a random experiment. The probability of occurrence of 𝐴 under the condition that 𝐵 has already occurred, where 𝑃(𝐵) ≠ 0, is called the conditional probability of 𝐴 given 𝐵: 𝑃(𝐴/𝐵) = 𝑃(𝐴 ∩ 𝐵)/𝑃(𝐵).
INDEPENDENT EVENTS: Event are said to be independent, if the occurrence or non-occurrence of one does not
affect the probability of the occurrence or non-occurrence of the other.
If 𝐴 and 𝐵 are two independent events associated with a random experiment then,
𝑃(𝐴/𝐵) = 𝑃(𝐴) and 𝑃(𝐵/𝐴) = 𝑃(𝐵), and vice versa.
If 𝐴 and 𝐵 are independent events associated with a random experiment, then 𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐴)𝑃(𝐵) i.e., the
probability of simultaneous occurrence of two independent events is equal to the product of their probabilities.
Events 𝐴1, 𝐴2, …, 𝐴𝑛 are independent or mutually independent if the probability of the simultaneous occurrence of any finite number of them is equal to the product of their separate probabilities, while these events are pairwise independent if 𝑃(𝐴𝑖 ∩ 𝐴𝑗) = 𝑃(𝐴𝑖)𝑃(𝐴𝑗) for all 𝑖 ≠ 𝑗.
THE LAW OF TOTAL PROBABILITY: Let 𝑆 be the sample space and let 𝐸1 , 𝐸2 , … , 𝐸𝑛 be 𝑛 mutually exclusive and
exhaustive events associated with a random experiment.
If 𝐴 is any event which occurs with 𝐸1 or 𝐸2 or … or 𝐸𝑛 , then
𝑃(𝐴) = 𝑃(𝐸1)𝑃(𝐴/𝐸1) + 𝑃(𝐸2)𝑃(𝐴/𝐸2) + ⋯ + 𝑃(𝐸𝑛)𝑃(𝐴/𝐸𝑛)
BAYES' RULE: Let 𝑆 be the sample space and let 𝐸1, 𝐸2, …, 𝐸𝑛 be 𝑛 mutually exclusive and exhaustive events associated with a random experiment. If 𝐴 is any event which occurs with 𝐸1 or 𝐸2 or … or 𝐸𝑛, then
𝑃(𝐸𝑖/𝐴) = 𝑃(𝐸𝑖)𝑃(𝐴/𝐸𝑖) / ∑ᵢ₌₁ⁿ 𝑃(𝐸𝑖)𝑃(𝐴/𝐸𝑖), 𝑖 = 1, 2, …, 𝑛
The events 𝐸1, 𝐸2, …, 𝐸𝑛 are usually referred to as 'hypotheses', and the probabilities 𝑃(𝐸1), 𝑃(𝐸2), …, 𝑃(𝐸𝑛) are known as the 'a priori' probabilities, as they exist before we obtain any information from the experiment.
The probabilities 𝑃(𝐴/𝐸𝑖), 𝑖 = 1, 2, …, 𝑛, are called the likelihood probabilities, as they tell us how likely the event 𝐴 under consideration is to occur, given each hypothesis.
The probabilities 𝑃(𝐸𝑖/𝐴), 𝑖 = 1, 2, …, 𝑛, are called the posterior probabilities, as they are determined after the result of the experiment is known.
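A small Python sketch of Bayes' rule with assumed numbers: E1 = a fair coin is chosen, E2 = a two-headed coin is chosen (each with probability 1/2), and A = the toss shows a head.

```python
priors = {'E1': 0.5, 'E2': 0.5}        # P(Ei), the a priori probabilities
likelihoods = {'E1': 0.5, 'E2': 1.0}   # P(A/Ei), the likelihoods

total = sum(priors[e] * likelihoods[e] for e in priors)    # P(A) = 0.75
posterior = {e: priors[e] * likelihoods[e] / total for e in priors}
print(posterior)   # {'E1': 0.333..., 'E2': 0.666...}  ->  P(E1/A) = 1/3
```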
RANDOM VARIABLE: A random variable is a real valued function having domain as the sample space associated
with a given random experiment.
A random variable associated with a given random experiment associates every event to a unique real number.
NCERT: EXE – 13.1 CW: Q. NO. 1, 2, 3, 4, 5, 6, 11, 12, 15 HW: Q. NO. 7, 8, 9, 10, 13, 14, 16, 17
EXE – 12.2 CW: Q. NO. 1, 2, 3, 4, 5, 6, 11, 15, 16 HW: Q. NO. 7, 8, 9, 10, 11, 12, 13, 17, 18
EXE – 12.3 CW: Q. NO. 1, 2, 3, 4, 7, 12, 13 HW: Q. NO. 5, 6, 8, 9, 10, 11
EXE – MISCELLANEOUS CW: Q. NO. 1, 2, 3, 5, 6, 9 HW: Q. NO. 4, 7, 8, 10, 11, 12, 13
EXEMPLAR: 1, 3, 4, 5, 10 HW: 14, 15, 16, 19, 25