Class XII: Mathematics Chapter: Probability Chapter Notes

This document provides comprehensive notes on probability for Class XII Mathematics, covering key concepts such as conditional probability, independence of events, random variables, and probability distributions. It includes important theorems like Bayes' Theorem and formulas for calculating expected value, variance, and standard deviation for both discrete and binomial distributions. The notes also emphasize the significance of understanding events, their relationships, and the mathematical principles governing probability.



Class XII: Mathematics


Chapter: Probability
Chapter Notes

Key Concepts

1. The probability that event B occurs given the knowledge that event A
has already occurred is called the conditional probability of B given A. It
is denoted as P(B|A).

2. When all outcomes are equally likely, the conditional probability of B
given A, P(B|A), is the ratio of the number of outcomes favourable to both
A and B to the number of outcomes favourable to A.

3. If E and F are events of a sample space S of an experiment, then

(i) P(S|F) = P(F|F) = 1
(ii) For any two events A and B of sample space S, if F is another event
such that P(F) ≠ 0,
P((A ∪ B)|F) = P(A|F) + P(B|F) − P((A ∩ B)|F)
(iii) P(E′|F) = 1 − P(E|F)
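The counting definition in points 1–3 can be checked directly. A minimal Python sketch, using a two-dice example of my own (not from the notes): A = "sum is 8", B = "doubles".

```python
from fractions import Fraction

# Sample space: all 36 ordered outcomes of rolling two dice.
S = [(i, j) for i in range(1, 7) for j in range(1, 7)]

A = [s for s in S if s[0] + s[1] == 8]   # sum is 8: five outcomes
B = [s for s in S if s[0] == s[1]]       # doubles: six outcomes

both = [s for s in S if s in A and s in B]

# P(B|A) = n(A and B) / n(A), since all outcomes are equally likely.
p_B_given_A = Fraction(len(both), len(A))
print(p_B_given_A)  # 1/5: of the five outcomes summing to 8, only (4,4) is a double
```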

4. Two events A and B are independent if and only if the occurrence of one
does not affect the probability of occurrence of the other.

5. If events A and B are independent then P(B|A) = P(B) and P(A|B) = P(A)

6. Three events A, B, C are pairwise independent if
P(A ∩ B) = P(A)·P(B), P(A ∩ C) = P(A)·P(C), P(B ∩ C) = P(B)·P(C)

7. Three events A, B, C are (mutually) independent if they are pairwise
independent and, in addition,
P(A ∩ B ∩ C) = P(A)·P(B)·P(C)
Mutual independence implies pairwise independence, but not conversely.

8. In the case of independent events, event A has no effect on the
probability of event B, so the conditional probability of event B given
that event A has already occurred is simply the probability of event B:
P(B|A) = P(B).

9. If E and F are independent events, then so are the events
(i) E′ and F
(ii) E and F′
(iii) E′ and F′
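Point 9 can be verified numerically. A small Python sketch with a fair-die example of my own choosing (E = "even", F = "at most 4" happen to be independent):

```python
from fractions import Fraction

# Fair die; all six outcomes equally likely.
S = set(range(1, 7))
E = {2, 4, 6}      # even number
F = {1, 2, 3, 4}   # at most 4

def P(A):
    """Probability of an event A by counting equally likely outcomes."""
    return Fraction(len(A), len(S))

assert P(E & F) == P(E) * P(F)                        # E and F independent
assert P((S - E) & F) == P(S - E) * P(F)              # E' and F
assert P(E & (S - F)) == P(E) * P(S - F)              # E and F'
assert P((S - E) & (S - F)) == P(S - E) * P(S - F)    # E' and F'
print("independence of complements verified")
```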

10. If A and B are events such that B ≠ ∅, then B is said to affect A
(i) favourably if P(A|B) > P(A)

Get the Power of Visual Impact on your side


Log on to www.topperlearning.com

(ii) unfavourably if P(A|B) < P(A)
(iii) not at all if P(A|B) = P(A).

11. Two events E and F are said to be dependent if they are not independent,
i.e. if P(E ∩ F) ≠ P(E)·P(F)

12. The events E1, E2, ..., En represent a partition of the sample space S
if

(1) They are pairwise disjoint,


(2) They are exhaustive and
(3) They have nonzero probabilities.

13. The events E1, E2, ..., En are called hypotheses. The probability P(Ei)
is called the a priori probability of the hypothesis Ei. The conditional
probability P(Ei|A) is called the a posteriori probability of the hypothesis Ei.

14. Bayes' Theorem is also known as the formula for the probability of
"causes".

15. When the value of a variable is the outcome of a random


experiment, that variable is a random variable.

16. A random variable is a function that associates a real number with each
element in the sample space of a random experiment.

17. A random variable which can assume only a finite number of values
or countably infinite values is called a discrete random variable.
In the experiment of tossing three coins, a random variable X representing
the number of heads can take the values 0, 1, 2, 3.
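The three-coin example can be enumerated directly. A Python sketch (my own illustration of the distribution described above):

```python
from itertools import product
from fractions import Fraction
from collections import Counter

# All 8 equally likely outcomes of tossing three fair coins.
outcomes = list(product("HT", repeat=3))

# X = number of heads; count how many outcomes give each value of X.
counts = Counter(o.count("H") for o in outcomes)
dist = {x: Fraction(c, len(outcomes)) for x, c in sorted(counts.items())}

# X takes 0, 1, 2, 3 with probabilities 1/8, 3/8, 3/8, 1/8.
print(dist)
```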

18. A random variable which can assume all possible values between
certain limits is called a continuous random variable.
Examples include height, weight etc.


19. The probability distribution of a random variable X is the system of
numbers

X    : x1  x2  ...  xn
P(X) : p1  p2  ...  pn

where pi ≥ 0 for i = 1, 2, ..., n and Σ_{i=1}^{n} pi = 1.

The real numbers x1, x2, ..., xn are the possible values of the random variable
X, and pi (i = 1, 2, ..., n) is the probability of the random variable X taking
the value xi, i.e. P(X = xi) = pi

20. In the probability distribution of X, each probability pi is
non-negative and the sum of all the probabilities is equal to 1.

21. The probability distribution of a random variable X can be represented
in a table or using bar charts.

Tabular representation:
X    : 1    2    3    4
P(X) : 0.1  0.2  0.3  0.4

Graphical representation: the same distribution drawn as a bar chart
(chart omitted).

22. The expected value of a random variable indicates its average or
central value.

23. The expected value is a useful summary value of the variable's
distribution.

24. Let X be a discrete random variable taking the values x1, x2, x3, ..., xn
with probabilities pi = P(X = xi), respectively. The mean of X, denoted by μ,
is Σ_{i=1}^{n} pi xi.
25. Trials of a random experiment are called Bernoulli trials, if they satisfy
the following conditions :
(i) There should be a finite number of trials.
(ii) The trials should be independent.
(iii) Each trial has exactly two outcomes: success or failure.
(iv) The probability of success remains the same in each trial.


26. The binomial distribution is the discrete probability distribution of the
number of successes in a sequence of n independent Bernoulli trials, each of
which yields success with probability p.

27. A Bernoulli experiment is a random experiment whose trials have exactly
two mutually exclusive outcomes, termed success and failure.

28. The binomial distribution with n Bernoulli trials and probability of
success p in each trial is denoted by B(n, p); n and p are called the
parameters of the distribution.

29. If the random variable X follows the binomial distribution with
parameters n and p, we write X ~ B(n, p). The probability of getting exactly
k successes in n trials is given by the probability mass function
P(X = k) = nCk p^k q^(n−k), where q = 1 − p
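The probability mass function above translates directly into code. A minimal Python sketch (the fair-coin numbers are my own example):

```python
from math import comb
from fractions import Fraction

def binom_pmf(n, p, k):
    """P(X = k) for X ~ B(n, p): nCk * p^k * q^(n-k), with q = 1 - p."""
    q = 1 - p
    return comb(n, k) * p**k * q**(n - k)

# Probability of exactly 2 heads in 4 tosses of a fair coin.
p = Fraction(1, 2)
print(binom_pmf(4, p, 2))  # 6/16 = 3/8
```

Using `Fraction` keeps the arithmetic exact, so the probabilities over k = 0, ..., n sum to exactly 1.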

30. Equal means of two probability distributions do not imply that the
distributions are the same.

Key Formulae
1. 0 ≤ P (B|A) ≤ 1

2. If E and F are two events associated with the same sample space of a
random experiment, the conditional probability of the event E given
that F has occurred, P(E|F), is given by

P(E|F) = n(E ∩ F) / n(F), provided P(F) ≠ 0, or
P(F|E) = n(E ∩ F) / n(E), provided P(E) ≠ 0
3. Multiplication Theorem:
(a) For two events
Let E and F be two events associated with a sample space S.
P(E ∩ F) = P(E) P(F|E) = P(F) P(E|F), provided P(E) ≠ 0 and
P(F) ≠ 0.


(b) For three events:
If E, F and G are three events of sample space S,
P(E ∩ F ∩ G) = P(E) P(F|E) P(G|(E ∩ F))

4. Multiplication theorem for independent events
(i) P(E ∩ F) = P(E)P(F)
(ii) P(E ∩ F ∩ G) = P(E)P(F)P(G)

5. Let E and F be two events associated with the same random experiment.
E and F are said to be independent if
(i) P(F|E) = P(F), provided P(E) ≠ 0,
(ii) P(E|F) = P(E), provided P(F) ≠ 0, and
(iii) P(E ∩ F) = P(E) · P(F)

6. Probability of occurrence of at least one of two independent events
A and B:
P(A ∪ B) = 1 − P(A′)P(B′)

7. A set of events E1, E2, ..., En is said to represent a partition of the sample
space S if
(a) Ei ∩ Ej = ∅, i ≠ j, i, j = 1, 2, 3, ..., n
(b) E1 ∪ E2 ∪ ... ∪ En = S
(c) P(Ei) > 0 for all i = 1, 2, ..., n.

8. Theorem of Total Probability


Let {E1, E2, ..., En} be a partition of the sample space S, and suppose that
each of the events E1, E2, ..., En has nonzero probability of occurrence. Let A
be any event associated with S. Then
P(A) = P(E1) P(A|E1) + P(E2) P(A|E2) + ... + P(En) P(A|En)
     = Σ_{j=1}^{n} P(Ej) P(A|Ej)

9. Bayes’ Theorem
If E1, E2, ..., En are n non-empty events which constitute a partition of
sample space S and A is any event of nonzero probability, then
P(Ei | A) = P(Ei) P(A|Ei) / Σ_{j=1}^{n} P(Ej) P(A|Ej), for any i = 1, 2, 3, ..., n
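Bayes' theorem as the "formula for the probability of causes" can be illustrated with a classic two-machine setup. A Python sketch (the machine numbers are my own hypothetical example):

```python
from fractions import Fraction

# Hypothetical setup: machine E1 makes 60% of items with 2% defective;
# machine E2 makes 40% of items with 5% defective. A = "item is defective".
P_E1, P_E2 = Fraction(3, 5), Fraction(2, 5)
P_A_E1, P_A_E2 = Fraction(1, 50), Fraction(1, 20)

P_A = P_E1 * P_A_E1 + P_E2 * P_A_E2   # theorem of total probability
P_E2_given_A = P_E2 * P_A_E2 / P_A    # Bayes' theorem: a posteriori P(E2|A)
print(P_E2_given_A)  # 5/8: a defective item most likely came from E2
```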

10. The mean or expected value of a random variable X, denoted by E(X) or
μ, is defined as
E(X) = μ = Σ_{i=1}^{n} xi pi


11. The variance of the random variable X, denoted by Var(X) or σx², is
defined as
σx² = Var(X) = Σ_{i=1}^{n} (xi − μ)² p(xi) = E(X − μ)²
Equivalently,
Var(X) = E(X²) − [E(X)]²

12. Standard deviation of random variable X:
σx = √Var(X) = √( Σ_{i=1}^{n} (xi − μ)² p(xi) )

13. For the binomial distribution B(n, p),
P(X = x) = nCx p^x q^(n−x), x = 0, 1, ..., n (q = 1 − p)

14. Mean and variance of a variable X following the binomial distribution:
E(X) = μ = np
Var(X) = npq
where n is the number of trials, p = probability of success and
q = probability of failure.
15. Standard deviation of a variable X following the binomial distribution:
σx = √Var(X) = √(npq)

