
HIGHER ENGINEERING MATHEMATICS

ADDITION LAW OF PROBABILITY or THEOREM OF TOTAL PROBABILITY


(1) If the probability of an event A happening as a result of a trial is P(A) and the probability of a mutually
exclusive event B happening is P(B), then the probability of either of the events happening as a result of the trial
is P(A + B) or P(A ∪ B) = P(A) + P(B).
Proof. Let n be the total number of equally likely cases and let m1 be favourable to the event A and m2 be
favourable to the event B. Then the number of cases favourable to A or B is m1 + m2. Hence the probability of A
or B happening as a result of the trial
= (m1 + m2)/n = m1/n + m2/n = P(A) + P(B).
(2) If A, B are any two events (not mutually exclusive), then
P(A + B) = P(A) + P(B) − P(AB)
or P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
If A and B are any two events, then there are some outcomes which favour both A and B. If m3 be their
number, then these are included in both m1 and m2. Hence the total number of outcomes favouring either A or B
or both is m1 + m2 − m3.
Thus the probability of occurrence of A or B or both
= (m1 + m2 − m3)/n = m1/n + m2/n − m3/n
Hence P(A + B) = P(A) + P(B) − P(AB)
or P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
Obs. When A and B are mutually exclusive, P(AB) or P(A ∩ B) = 0 and we get
P(A + B) or P(A ∪ B) = P(A) + P(B).
In general, for a number of mutually exclusive events A1, A2, ..., An, we have
P(A1 + A2 + ... + An) or P(A1 ∪ A2 ∪ ... ∪ An) = P(A1) + P(A2) + ... + P(An).

(3) If A, B, C are any three events, then
P(A + B + C) = P(A) + P(B) + P(C) − P(AB) − P(BC) − P(CA) + P(ABC)
or P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(B ∩ C) − P(C ∩ A) + P(A ∩ B ∩ C)
Proof. Using the above result for any two events, we have
P(A ∪ B ∪ C) = P[(A ∪ B) ∪ C]
= P(A ∪ B) + P(C) − P[(A ∪ B) ∩ C]
= [P(A) + P(B) − P(A ∩ B)] + P(C) − P[(A ∩ C) ∪ (B ∩ C)]     (Distributive Law)
= P(A) + P(B) + P(C) − P(A ∩ B) − {P(A ∩ C) + P(B ∩ C) − P(A ∩ B ∩ C)}     [∵ (A ∩ C) ∩ (B ∩ C) = A ∩ B ∩ C]
= P(A) + P(B) + P(C) − P(A ∩ B) − P(B ∩ C) − P(C ∩ A) + P(A ∩ B ∩ C)     [∵ A ∩ C = C ∩ A]
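As a quick numerical check of the addition law, the following sketch (plain Python; the two dice events are assumed purely for illustration) enumerates the 36 equally likely outcomes of a pair of dice and verifies that P(A ∪ B) = P(A) + P(B) − P(A ∩ B).

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of tossing a pair of dice.
outcomes = list(product(range(1, 7), repeat=2))

# Illustrative (assumed) events: A = "sum is 7", B = "first die shows 3".
A = {o for o in outcomes if sum(o) == 7}
B = {o for o in outcomes if o[0] == 3}

def prob(event):
    return Fraction(len(event), len(outcomes))

# Addition law for events that are not mutually exclusive.
lhs = prob(A | B)
rhs = prob(A) + prob(B) - prob(A & B)
assert lhs == rhs                      # P(A u B) = P(A) + P(B) - P(A n B)
print(lhs)                             # 11/36 for these particular events
```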
(1) INDEPENDENT EVENTS
Two events are said to be independent, if the happening or failure of one does not affect the happening or
failure of the other. Otherwise the events are said to be dependent.
For two dependent events A and B, the symbol P(B | A) denotes the probability of occurrence of B, when A
has already occurred. It is known as the conditional probability and is read as the 'probability of B given A'.
(2) Multiplication law of probability or Theorem of compound probability. If the probability of an
event A happening as a result of a trial is P(A) and after A has happened the probability of an event B happening as
a result of another trial (i.e., conditional probability of B given A) is P(B | A), then the probability of both the
events A and B happening as a result of two trials is P(AB) or P(A ∩ B) = P(A) . P(B | A).
Proof. Let n be the total number of outcomes in the first trial and m be favourable to the event A so that
P(A) = m/n.
Let n1 be the total number of outcomes in the second trial of which m1 are favourable to the event B so that
P(B | A) = m1/n1.
Now each of the n outcomes can be associated with each of the n1 outcomes. So the total number of
outcomes in the combined trial is nn1. Of these mm1 are favourable to both the events A and B. Hence
P(AB) or P(A ∩ B) = mm1/(nn1) = P(A) . P(B | A).
Similarly, the conditional probability of A given B is P(A | B).
P(AB) or P(A ∩ B) = P(B) . P(A | B)
Thus P(A ∩ B) = P(A) . P(B | A) = P(B) . P(A | B).
(3) If the events A and B are independent, i.e., if the happening of B does not depend on whether A has
happened or not, then P(B | A) = P(B) and P(A | B) = P(A).
P(AB) or P(A ∩ B) = P(A) . P(B).
In general, P(A1 A2 ... An) or P(A1 ∩ A2 ∩ ... ∩ An) = P(A1) . P(A2) .... P(An).
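The multiplication law can be verified by the same kind of counting. The sketch below (again with assumed dice events, chosen only for illustration) computes P(B | A) as a ratio of favourable counts and checks that P(A ∩ B) = P(A) . P(B | A).

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))

# Assumed events: A = "first die is even", B = "sum exceeds 8".
A = {o for o in outcomes if o[0] % 2 == 0}
B = {o for o in outcomes if sum(o) > 8}

def prob(event):
    return Fraction(len(event), len(outcomes))

# Conditional probability of B given A, as a ratio of favourable counts.
p_B_given_A = Fraction(len(A & B), len(A))

# Theorem of compound probability: P(A n B) = P(A) * P(B | A).
assert prob(A & B) == prob(A) * p_B_given_A
print(prob(A & B), p_B_given_A)        # 1/6 and 1/3 for these events
```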

Cor. If p1, p2 be the probabilities of happening of two independent events, then
(i) the probability that the first event happens and the second fails is p1(1 − p2).
(ii) the probability that both events fail to happen is (1 − p1)(1 − p2).
(iii) the probability that at least one of the events happens is
1 − (1 − p1)(1 − p2). This is commonly known as their cumulative probability.
In general, if p1, p2, p3, ..., pn be the chances of happening of n independent events, then their cumulative probability
(i.e., the chance that at least one of the events will happen) is
1 − (1 − p1)(1 − p2)(1 − p3) ... (1 − pn).
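A minimal sketch of this cumulative probability, with the values p1, p2, p3 assumed only for illustration:

```python
# Probability that at least one of several independent events happens:
# 1 - (1 - p1)(1 - p2)...(1 - pn).  The probabilities below are assumed.
def at_least_one(ps):
    q = 1.0
    for p in ps:
        q *= (1.0 - p)          # probability that every event fails
    return 1.0 - q

print(at_least_one([0.5, 0.2, 0.1]))   # 1 - 0.5*0.8*0.9 = 0.64
```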
RANDOM VARIABLE
If a real variable X be associated with the outcome of a random experiment, then since the values which X
takes depend on chance, it is called a random variable or a stochastic variable or simply a variate. For instance,
if a random experiment E consists of tossing a pair of dice, the sum X of the two numbers which turn up takes the
values 2, 3, 4, ..., 12 depending on chance. Then X is the random variable. It is a function whose values are real
numbers and depend on chance.
If in a random experiment, the event corresponding to a number a occurs, then the corresponding random
variable X is said to assume the value a and the probability of the event is denoted by P(X = a). Similarly the
probability of the event X assuming any value in the interval a < X < b is denoted by P(a < X < b). The probability
of the event X ≤ c is written as P(X ≤ c).
If a random variable takes a finite set of values, it is called a discrete variate. On the other hand, if it
assumes an infinite number of uncountable values, it is called a continuous variate.

(1) DISCRETE PROBABILITY DISTRIBUTION


Suppose a discrete variate X is the outcome of some experiment. If the probability that X takes the value
xi is pi, then
P(X = xi) = pi or p(xi) for i = 1, 2, ...
where (i) p(xi) ≥ 0 for all values of i, (ii) Σ p(xi) = 1
The set of values xi with their probabilities pi constitutes a discrete probability distribution of the
discrete variate X.

For example, the discrete probability distribution for X, the sum of the numbers which turn up on tossing a
pair of dice, is given by the following table :

X = xi     2     3     4     5     6     7     8     9     10    11    12
p(xi)     1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36

[∵ There are 6 × 6 = 36 equally likely outcomes and therefore each has
the probability 1/36. We have X = 2 for one outcome, i.e. (1, 1); X = 3 for two
outcomes (1, 2) and (2, 1); X = 4 for three outcomes (1, 3), (2, 2) and (3, 1) and so
on.]
(2) Distribution function. The distribution function F(x) of the discrete
variate X is defined by
F(x) = P(X ≤ x) = Σ_{i=1}^{x} p(xi), where x is any integer. The graph of F(x) will be of
stair-step form (Fig. 26.2). The distribution function is also sometimes called the
cumulative distribution function.
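Both the table above and the stair-step distribution function can be reproduced in a few lines; the sketch below builds p(xi) for the dice-sum variate and evaluates F(x) = P(X ≤ x).

```python
from fractions import Fraction
from itertools import product

# Discrete probability distribution of X = sum of two dice.
counts = {}
for a, b in product(range(1, 7), repeat=2):
    counts[a + b] = counts.get(a + b, 0) + 1
p = {x: Fraction(n, 36) for x, n in counts.items()}
assert sum(p.values()) == 1            # total probability is unity

# Distribution function F(x) = P(X <= x): a step ("staircase") function.
def F(x):
    return sum(px for xi, px in p.items() if xi <= x)

print(p[7], F(4))                      # 6/36 = 1/6 and (1+2+3)/36 = 1/6
```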
(1) CONTINUOUS PROBABILITY DISTRIBUTION
When a variate X takes every value in an interval, it gives rise to a continuous distribution of X. The
distributions defined by variates like heights or weights are continuous distributions.
A major conceptual difference, however, exists between discrete and continuous probabilities. When
thinking in discrete terms, the probability associated with an event is meaningful. With continuous events,
however, where the number of events is infinitely large, the probability that a specific event will occur is
practically zero. For this reason, continuous probability statements must be worded somewhat differently from
discrete ones. Instead of finding the probability that x equals some value, we find the probability of x falling in a
small interval.
Thus the probability distribution of a continuous variate x is defined by a function f(x) such that the
probability of the variate x falling in the small interval x − ½dx to x + ½dx is f(x) dx. Symbolically it can be
expressed as P(x − ½dx ≤ x ≤ x + ½dx) = f(x) dx. Then f(x) is called the probability density function and the
continuous curve y = f(x) is called the probability curve.


Obs. The range of the variable may be finite or infinite. But even when the range is finite, it is convenient to
consider it as infinite by supposing the density function to be zero outside the given range. Thus if f(x) = φ(x) be
the density function for the variate x in the interval (a, b), then it can be written as
f(x) = 0,      x < a
     = φ(x),   a ≤ x ≤ b
     = 0,      x > b.
The density function f(x) is always positive and ∫_{−∞}^{∞} f(x) dx = 1 (i.e., the total area under the probability
curve and the x-axis is unity, which corresponds to the requirement that the total probability of happening of an
event is unity).
(2) Distribution function
If F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(x) dx,
then F(x) is defined as the cumulative distribution function or simply the distribution function of the
continuous variate X. It is the probability that the value of the variate X will be ≤ x. The graph of F(x) in this case
is as shown in Fig. 26.3(b).
The distribution function F(x) has the following properties :
(i) F′(x) = f(x) ≥ 0, so that F(x) is a non-decreasing function.
(ii) F(−∞) = 0 ;  (iii) F(∞) = 1
(iv) P(a ≤ x ≤ b) = ∫_a^b f(x) dx = ∫_{−∞}^{b} f(x) dx − ∫_{−∞}^{a} f(x) dx = F(b) − F(a).

Example 26.31. (i) Is the function defined as follows a density function?
f(x) = e^{−x},  x ≥ 0
     = 0,       x < 0.
(ii) If so, determine the probability that the variate having this density will fall in the interval (1, 2).
(iii) Also find the cumulative probability function F(2).

Solution. (i) f(x) is clearly ≥ 0 for every x in (−∞, ∞) and
∫_{−∞}^{∞} f(x) dx = ∫_{−∞}^{0} 0 dx + ∫_0^{∞} e^{−x} dx = 1
Hence the function f(x) satisfies the requirements for a density function.

(ii) Required probability = P(1 ≤ x ≤ 2) = ∫_1^2 e^{−x} dx = e^{−1} − e^{−2} = 0.368 − 0.135 = 0.233.


This probability is equal to the shaded area in Fig. 26.3 (a).
(iii) Cumulative probability function
F(2) = ∫_{−∞}^{2} f(x) dx = ∫_{−∞}^{0} 0 dx + ∫_0^2 e^{−x} dx = 1 − e^{−2} = 1 − 0.135 = 0.865
which is shown in Fig. 26.3 (b).

[Fig. 26.3 : (a) probability curve with the area over (1, 2) shaded ; (b) distribution function F(x).]
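The three numbers in Example 26.31 can be reproduced numerically; a small sketch using only the standard library (the trapezoidal helper is an assumed convenience, not part of the text):

```python
import math

# Density of Example 26.31: f(x) = e^{-x} for x >= 0, and 0 for x < 0.
def f(x):
    return math.exp(-x) if x >= 0 else 0.0

def integrate(a, b, n=100_000):
    """Simple trapezoidal rule, adequate for this smooth integrand."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return s * h

print(round(integrate(0, 50), 4))      # total probability, ~ 1
print(round(integrate(1, 2), 4))       # P(1 <= x <= 2) ~ 0.2325 (i.e. 0.233)
print(round(1 - math.exp(-2), 4))      # F(2) = 1 - e^{-2} ~ 0.8647 (i.e. 0.865)
```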

(1) EXPECTATION

The mean value (µ) of the probability distribution of a variate X is commonly known as its expectation
and is denoted by E(X). If f(x) is the probability density function of the variate X, then
E(X) = Σ_i xi f(xi)                      (discrete distribution)
or E(X) = ∫_{−∞}^{∞} x f(x) dx           (continuous distribution)
In general, the expectation of any function φ(x) is given by
E[φ(x)] = Σ_i φ(xi) f(xi)                (discrete distribution)
or E[φ(x)] = ∫_{−∞}^{∞} φ(x) f(x) dx     (continuous distribution)
(2) Variance of a distribution is given by
σ² = Σ_i (xi − µ)² f(xi)                 (discrete distribution)
or σ² = ∫_{−∞}^{∞} (x − µ)² f(x) dx      (continuous distribution)
where σ is the standard deviation of the distribution.
(3) The rth moment about the mean (denoted by µr) is defined by
µr = Σ_i (xi − µ)^r f(xi)                (discrete distribution)
or µr = ∫_{−∞}^{∞} (x − µ)^r f(x) dx     (continuous distribution)
(4) Mean deviation from the mean is given by
Σ_i |xi − µ| f(xi)                        (discrete distribution)
or by ∫_{−∞}^{∞} |x − µ| f(x) dx          (continuous distribution)
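Applied to the dice-sum distribution tabulated earlier, these definitions give the following sketch, which computes the mean, the variance and the mean deviation from the mean.

```python
from fractions import Fraction
from itertools import product

# Distribution of X = sum of two dice (from the earlier table).
p = {}
for a, b in product(range(1, 7), repeat=2):
    p[a + b] = p.get(a + b, 0) + Fraction(1, 36)

mean     = sum(x * px for x, px in p.items())                 # E(X)
variance = sum((x - mean) ** 2 * px for x, px in p.items())   # sigma^2
mean_dev = sum(abs(x - mean) * px for x, px in p.items())     # mean deviation

print(mean, variance, mean_dev)        # 7, 35/6, 35/18
```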
MOMENT GENERATING FUNCTION
(1) The moment generating function (m.g.f.) of the discrete probability distribution of the variate X about
the value x = a is defined as the expected value of e^{t(x − a)} and is denoted by M_a(t). Thus
M_a(t) = Σ_i p_i e^{t(xi − a)}                                    ...(1)
which is a function of the parameter t only.
Expanding the exponential in (1), we get
M_a(t) = Σ p_i + t Σ p_i (xi − a) + (t²/2!) Σ p_i (xi − a)² + ... + (t^r/r!) Σ p_i (xi − a)^r + ...
       = 1 + t µ1′ + (t²/2!) µ2′ + ... + (t^r/r!) µr′ + ...        ...(2)
where µr′ is the moment of order r about a. Thus M_a(t) generates moments and that is why it is called the
moment generating function. From (2), we find
µr′ = coefficient of t^r/r! in the expansion of M_a(t).
Otherwise, differentiating (2) r times with respect to t and then putting t = 0, we get
µr′ = [d^r M_a(t)/dt^r]_{t = 0}                                    ...(3)
Thus the moment about any point x = a can be found from (2) or more conveniently from the formula (3).
Rewriting (1) as
M_a(t) = e^{−at} Σ p_i e^{t xi}  or  M_a(t) = e^{−at} M_0(t)       ...(4)
Thus the m.g.f. about the point a = e^{−at} (m.g.f. about the origin).
(2) If f(x) is the density function of a continuous variate X, then the moment generating function of this
continuous probability distribution about x = a is given by
M_a(t) = ∫_{−∞}^{∞} e^{t(x − a)} f(x) dx.

Example 26.36. Find the moment generating function of the exponential distribution
f(x) = (1/c) e^{−x/c}, 0 ≤ x < ∞, c > 0. Hence find its mean and S.D. (Kurukshetra, 2009)

Solution. The moment generating function about the origin is
M_0(t) = ∫_0^{∞} e^{tx} . (1/c) e^{−x/c} dx = (1/c) ∫_0^{∞} e^{(t − 1/c)x} dx      [∵ |t| < 1/c]
       = (1/c) [e^{(t − 1/c)x}/(t − 1/c)]_0^{∞} = (1 − ct)^{−1}
       = 1 + ct + c²t² + c³t³ + ...
µ1′ = [d M_0(t)/dt]_{t = 0} = (c + 2c²t + 3c³t² + ...)_{t = 0} = c
µ2′ = [d² M_0(t)/dt²]_{t = 0} = 2c²,  and  µ2 = µ2′ − (µ1′)² = 2c² − c² = c².

Hence the mean is c and S.D. is also c.
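The result of Example 26.36 can also be checked by simulation; the sketch below (c = 2 is an assumed value, chosen only for illustration) estimates the mean and S.D. of the exponential distribution and finds both close to c.

```python
import math
import random

random.seed(0)
c = 2.0                                 # assumed value of the parameter c

# Sample mean and S.D. of the exponential distribution f(x) = (1/c) e^{-x/c}.
xs = [random.expovariate(1.0 / c) for _ in range(200_000)]
mean = sum(xs) / len(xs)
sd   = math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))

print(round(mean, 2), round(sd, 2))     # both close to c = 2
```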

PROBABILITY GENERATING FUNCTION


The probability generating function (p.g.f.) P_x(t) for a random variable X which takes integral values 0, 1,
2, 3, ... only, is defined by
P_x(t) = p_0 + p_1 t + p_2 t² + ... = Σ_{n=0}^{∞} p_n t^n = E(t^X)
The coefficient of t^n in the expansion of P_x(t) in powers of t gives p_n = P(X = n).


∂P/∂t = Σ_{n=0}^{∞} n p_n t^{n−1}  or  (∂P/∂t)_{t=1} = Σ n p_n = µ1′
(∂²P/∂t²)_{t=1} = Σ n(n − 1) p_n = µ2′ − µ1′
              = µ2 + µ1′² − µ1′  and so on.
Also (∂^k P/∂t^k)_{t=0} = k! p_k, k = 1, 2, ..., n.
For integral valued variates, we have
P_x(e^t) = E(e^{tX}) = m.g.f. for X.

Obs. The p.g.f. of the sum of two independent random variables is the product of their p.g.f.'s.
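As an illustration of how the p.g.f. packages the probabilities, the sketch below builds P_x(t) for a fair-die score (an assumed example) and recovers µ1′ = Σ n p_n.

```python
from fractions import Fraction

# p.g.f. of an assumed integer-valued variate: X = score on a fair die,
# so p_1 = ... = p_6 = 1/6 and P_x(t) = (t + t^2 + ... + t^6)/6.
p = {n: Fraction(1, 6) for n in range(1, 7)}

def P(t):                                # P_x(t) = sum of p_n t^n
    return sum(pn * t ** n for n, pn in p.items())

# (dP/dt) at t = 1 gives mu_1' = E(X); here computed term by term.
mu1 = sum(n * pn for n, pn in p.items())
assert P(1) == 1                         # probabilities sum to unity
print(mu1)                               # 7/2
```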

REPEATED TRIALS

We know that the probability of getting a head or a tail on tossing a coin is 1/2. If the coin is tossed thrice,
the probability of getting one head and two tails can be obtained from the cases H-T-T, T-H-T, T-T-H. The probability
of each one of these being (1/2) × (1/2) × (1/2), i.e., (1/2)³, their total probability shall be 3(1/2)³.
Similarly, if a trial is repeated n times and if p is the probability of a success and q that of a failure, then
the probability of r successes and n − r failures is given by p^r q^{n − r}.
But these r successes and n − r failures can occur in any of the nCr ways, in each of which the probability is the
same.
Thus the probability of r successes is nCr p^r q^{n − r}.


Cor. The probability of at least r successes in n trials
= the sum of the probabilities of r, r + 1, ..., n successes
= nCr p^r q^{n − r} + nC_{r+1} p^{r+1} q^{n − r − 1} + ... + nCn p^n.
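This corollary translates directly into a short routine; the values n = 10, p = 0.3, r = 4 below are assumed only for illustration.

```python
from math import comb

def prob_at_least(r, n, p):
    """P(at least r successes in n independent trials), with q = 1 - p."""
    q = 1.0 - p
    return sum(comb(n, k) * p ** k * q ** (n - k) for k in range(r, n + 1))

print(round(prob_at_least(4, 10, 0.3), 4))   # assumed n, p, r for illustration
```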

(1) BINOMIAL DISTRIBUTION*

It is concerned with trials of a repetitive nature in which only the occurrence or non-occurrence, success or
failure, acceptance or rejection, yes or no of a particular event is of interest.
If we perform a series of independent trials such that for each trial p is the probability of a success and q
that of a failure, then the probability of r successes in a series of n trials is given by nCr p^r q^{n − r}, where r takes any
integral value from 0 to n. The probabilities of 0, 1, 2, ..., r, ..., n successes are, therefore, given by
q^n, nC1 p q^{n − 1}, nC2 p² q^{n − 2}, ..., nCr p^r q^{n − r}, ..., p^n.
The probability distribution of the number of successes so obtained is called the binomial distribution for the simple
reason that the probabilities are the successive terms in the expansion of the binomial (q + p)^n.
∴ the sum of the probabilities
= q^n + nC1 p q^{n − 1} + nC2 p² q^{n − 2} + ... + p^n = (q + p)^n = 1.
(2) Constants of the binomial distribution. The moment generating function about the origin is
M_0(t) = E(e^{tX}) = Σ nCx p^x q^{n − x} e^{tx}                    [By (1) § 26.11]
       = Σ nCx (p e^t)^x q^{n − x} = (q + p e^t)^n
Differentiating with respect to t and putting t = 0 and using (3) § 26.11, we get the mean
µ1′ = np.
Since M_a(t) = e^{−at} M_0(t), the m.g.f. of the binomial distribution about its mean (m) = np, is given by
M_m(t) = e^{−npt} (q + p e^t)^n = (q e^{−pt} + p e^{qt})^n
       = [1 + pq t²/2! + pq(q² − p²) t³/3! + pq(q³ + p³) t⁴/4! + ...]^n
or 1 + µ1 t + µ2 t²/2! + µ3 t³/3! + µ4 t⁴/4! + ...
       = 1 + npq t²/2! + npq(q − p) t³/3! + npq[1 + 3(n − 2)pq] t⁴/4! + ...
Equating the coefficients of like powers of t on either side, we have
µ2 = npq,  µ3 = npq(q − p),  µ4 = npq[1 + 3(n − 2)pq].
Also β1 = µ3²/µ2³ = (q − p)²/npq = (1 − 2p)²/npq  and  β2 = µ4/µ2² = 3 + (1 − 6pq)/npq
Thus mean = np, standard deviation = √(npq),
skewness = (1 − 2p)/√(npq), kurtosis = β2.

Obs. The skewness is positive for p < ½ and negative for p > ½. When p = ½, the skewness is zero, i.e., the
probability curve of the binomial distribution will be symmetrical (bell-shaped).
As n, the number of trials, increases indefinitely, β1 → 0 and β2 → 3.
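The constants µ2, µ3, µ4 can be verified numerically from the probabilities nCr p^r q^{n − r}; the sketch below (n = 12, p = 0.3 assumed only for illustration) compares the computed central moments with the closed forms.

```python
from math import comb

n, p = 12, 0.3                          # assumed values for illustration
q = 1 - p
pr = [comb(n, r) * p**r * q**(n - r) for r in range(n + 1)]

mean = sum(r * pr[r] for r in range(n + 1))
mu = [sum((r - mean) ** k * pr[r] for r in range(n + 1)) for k in range(5)]

# Compare computed central moments with the closed forms.
print(round(mean, 6), round(n * p, 6))                     # both np
print(round(mu[2], 6), round(n * p * q, 6))                # both npq
print(round(mu[3], 6), round(n * p * q * (q - p), 6))      # both npq(q - p)
print(round(mu[4], 6), round(n * p * q * (1 + 3 * (n - 2) * p * q), 6))
```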

(3) Binomial frequency distribution. If n independent trials constitute one experiment and this
experiment be repeated N times, then the frequency of r successes is N . nCr p^r q^{n − r}. The possible number of
successes together with these expected frequencies constitute the binomial frequency distribution.
(4) Applications of Binomial distribution. This distribution is applied to problems concerning :
(i) Number of defectives in a sample from a production line,
(ii) Estimation of reliability of systems,
(iii) Number of rounds fired from a gun hitting a target,
(iv) Radar detection.

(1) POISSON DISTRIBUTION*


It is a distribution related to the probabilities of events which are extremely rare, but which have a large
number of independent opportunities for occurrence. The number of persons born blind per year in a large city
and the number of deaths by horse kick in an army corps are some of the phenomena in which this law is
followed.
This distribution can be derived as a limiting case of the binomial distribution by making n very large and
p very small, keeping np fixed (= m, say).
The probability of r successes in a binomial distribution is
P(r) = nCr p^r q^{n − r} = [n(n − 1)(n − 2) ... (n − r + 1)/r!] p^r q^{n − r}
     = [np(np − p)(np − 2p) ... (np − (r − 1)p)/r!] (1 − p)^{n − r}
As n → ∞, p → 0 (np = m), we have
P(r) = (m^r/r!) Lt_{n → ∞} (1 − m/n)^n/(1 − m/n)^r = m^r e^{−m}/r!
so that the probabilities of 0, 1, 2, ..., r, ... successes in a Poisson distribution are given by
e^{−m}, m e^{−m}, m² e^{−m}/2!, ..., m^r e^{−m}/r!, ...
The sum of these probabilities is unity as it should be.
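The limiting process can be seen numerically: holding np = m fixed and letting n grow, the binomial probability of r successes approaches m^r e^{−m}/r!. A sketch with m = 2 and r = 3 assumed for illustration:

```python
from math import comb, exp, factorial

m, r = 2.0, 3                                 # assumed mean np = m and count r
for n in (10, 100, 1000):                     # n large, p = m/n small
    p = m / n
    binom = comb(n, r) * p**r * (1 - p)**(n - r)     # P(r successes)
    poisson = m**r * exp(-m) / factorial(r)
    print(n, round(binom, 4), round(poisson, 4))     # binomial -> Poisson value
```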


(2) Constants of the Poisson distribution. These constants can easily be derived from the
corresponding constants of the binomial distribution simply by making n → ∞, p → 0 (so that q → 1) and noting that
np = m.
Mean = Lt (np) = m
µ2 = Lt (npq) = m Lt (q) = m ;  Standard deviation = √m
Also µ3 = m, µ4 = m + 3m²
Skewness (= √β1) = 1/√m, Kurtosis (= β2) = 3 + 1/m.
Since µ3 is positive, the Poisson distribution is positively skewed and since β2 > 3, it is leptokurtic.
(3) Applications of Poisson distribution. This distribution is applied to problems concerning :
(i) Arrival pattern of 'defective vehicles in a workshop', 'patients in a hospital' or 'telephone calls'.
(ii) Demand pattern for certain spare parts.
(iii) Number of fragments from a shell hitting a target.
(iv) Spatial distribution of bomb hits.
NORMAL APPROXIMATION TO BINOMIAL DISTRIBUTION
If the number of successes in a binomial distribution ranges from x1 to x2, then the probability of getting
these successes is Σ_{x = x1}^{x2} nCx p^x q^{n − x}.
As the number of trials increases, the binomial distribution becomes approximated to the normal
distribution. The mean np and the variance npq of the binomial distribution will be quite close to the mean and
variance of the approximating normal distribution. Thus for n sufficiently large (≥ 30), the binomial
distribution with probability of success p is approximated by the normal distribution with µ = np, σ = √(npq).
We must, however, be careful to get the correct values of z. For any success x, the real class interval is
(x − 1/2, x + 1/2). Hence
z1 = (x1 − 1/2 − µ)/σ,  z2 = (x2 + 1/2 − µ)/σ,  where σ = √(npq)
so that P(x1 < x < x2) = P(z1 < z < z2) = ∫_{z1}^{z2} φ(z) dz, which can be calculated by using Table III, Appendix 2.
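A sketch of this approximation with the continuity correction, using Φ built from the error function (the values of n, p, x1, x2 are assumed only for illustration):

```python
from math import comb, erf, sqrt

def phi(z):                               # standard normal distribution function
    return 0.5 * (1 + erf(z / sqrt(2)))

n, p, x1, x2 = 50, 0.4, 15, 25            # assumed values for illustration
q = 1 - p
mu, sigma = n * p, sqrt(n * p * q)

# Exact binomial probability of x1..x2 successes.
exact = sum(comb(n, x) * p**x * q**(n - x) for x in range(x1, x2 + 1))

# Normal approximation with the real class interval (x - 1/2, x + 1/2).
z1 = (x1 - 0.5 - mu) / sigma
z2 = (x2 + 0.5 - mu) / sigma
approx = phi(z2) - phi(z1)

print(round(exact, 4), round(approx, 4))  # the two values are close
```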
