
CHAPTER - 2

RANDOM VARIABLES &
PROBABILITY DISTRIBUTIONS
2.1 INTRODUCTION
If we roll a pair of dice, then the sum X of the two numbers must be an integer between
2 and 12, but we cannot predict which value of X will occur in the next trial, and we may say
that X depends on "chance". Similarly, if we draw 5 bolts from a lot, then we cannot predict
how many will be defective (i.e. will not meet given requirements). Hence X = number of defectives
is again a function which depends on "chance". We may have several such examples. Thus,
we can say that a random variable X is a function whose values are real numbers and depend
on "chance".
Previously we considered the concepts of random experiment, events, sample space and
sample points. The events described on a random experiment may be numerical or non-numerical
(descriptive). For example, the outcomes that we obtain when we throw a die are numerical
(we get the outcomes 1, 2, 3, 4, 5 or 6). Hence the sample space is numerical. But the
outcomes we obtain when we toss a coin are non-numerical. We get the outcomes as head or
tail. Here the sample space is non-numerical or descriptive.
It is inconvenient to deal with these descriptive outcomes mathematically. Hence, for
ease of manipulation, we may assign a real number to each of the outcomes using a fixed rule
or mapping. For example, when we toss a coin we get two outcomes, namely head or tail; we
can assign numerical values, say 1 to head and 0 to tail. This interpretation is easy and
attractive from the mathematical point of view and is also practically meaningful.
This rule or mapping from the original sample space (numerical or non-numerical) to a
numerical (real) sample space, subject to certain constraints, is called a random variable.
Thus a random variable is a real-valued function which maps the numerical or non-numerical
sample space (domain) of the random experiment to real values (codomain or range).
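As a small illustration of this mapping idea, here is a minimal Python sketch (the names sample_space and coin_rv are only illustrative, not from the text) that assigns 1 to head and 0 to tail, exactly as described above:

```python
# A random variable is a rule that assigns a real number to each outcome
# of the sample space. For a single coin toss we map the descriptive
# (non-numerical) outcomes "head" and "tail" to the numbers 1 and 0.
sample_space = ["head", "tail"]

def coin_rv(outcome):
    """Random variable X: assigns 1 to head and 0 to tail."""
    return 1 if outcome == "head" else 0

print([coin_rv(s) for s in sample_space])   # [1, 0] -- the numerical sample space
```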
2.2 RANDOM VARIABLE

We know that a variable is a quantity which changes or varies; the change may
occur due to time or any other factor. For example, the height of a person varies with age,
and the age of a person changes with time. Every variable has a range in which it can take
any value. Examples are: marks scored by a student in an examination, number of children
in a family, height and weight of a person, etc. The variable could be continuous or
discrete. A continuous variable takes all possible values in its range. For example, the
height of a person can take any value in a certain range, but for the sake of convenience,
it is measured only up to the accuracy of inches or centimetres. Similarly, the weight of a person ...
... calls received by a telephone operator are examples of Discrete Random Variables.
Thus to each outcome 's' of a random experiment there corresponds a real number X(s) which is defined for each point of the sample space S.
A few examples are :
(i) In example (1), X(s) = {s : s = 0, 1, 2} or Range of X = {0, 1, 2}.
The random variable X is a discrete random variable.
(ii) The random variable denoting the number of students in a class is
X(x) = {x : x is a positive integer}.
2. Continuous Random Variable : A random variable X which can take values
continuously, i.e., which takes all possible values in a given interval, is called a continuous
random variable.
For example, the height, age and weight of individuals are examples of continuous
random variables. Also temperature and time are continuous random variables.
2.4 PROBABILITY FUNCTION
Illustration 1 : Distribution of Probability over the Various Numbers on a Die
When a single die is thrown, the number on the top face can take the values 1 to 6,
each with the same probability 1/6. This can be depicted with the help of a diagram as follows:
[Bar diagram : probability 1/6 against each number 1 to 6 on the die]
Illustration 2 : Distribution of Probability over the Number of Heads
When two coins are tossed, the number of heads is a random variable which can
take the values 0, 1 or 2. The associated probabilities are 1/4, 1/2 and 1/4, as shown below:
[Bar diagram : probabilities 1/4, 2/4 and 1/4 against 0, 1 and 2 heads]

It may be noted that the total of the three probabilities 1/4, 1/2 and 1/4 is 1. This is so
because the random variable is bound to take one of the three values 0, 1 or 2 in its range.
The above diagrams show how the total probability (= 1) is distributed over the
range of the variable. So these diagrams are referred to as graphs of probability distributions.
In many situations these graphs are described by a mathematical function, which is referred
to as a Probability Function or Statistical Distribution.
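As a rough Python sketch of the two illustrations (the dictionary names die_dist and heads_dist are only illustrative), each distribution simply spreads the total probability 1 over the range of the variable:

```python
from fractions import Fraction

# Illustration 1: a single fair die -- each of the numbers 1 to 6 has probability 1/6.
die_dist = {face: Fraction(1, 6) for face in range(1, 7)}

# Illustration 2: number of heads when two coins are tossed.
heads_dist = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}

# In both cases the total probability (= 1) is distributed over the range.
print(sum(die_dist.values()))     # 1
print(sum(heads_dist.values()))   # 1
```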
2.4.1 PROBABILITY FUNCTION OF A DISCRETE RANDOM VARIABLE
If for a discrete random variable X the real-valued function p(x) is such that
P(X = x) = p(x), then p(x) is called the probability function or probability density function of the
discrete random variable X. The probability function p(x) gives the measure of probability
for different values of X.
Properties of a Probability Function
If p(x) is a probability function of a random variable X, then it possesses the following
properties:
1. p(x) ≥ 0 for all x.
2. Σ p(x) = 1, the summation being taken over all values of x.
3. p(x) cannot be negative for any value of x.
2.4.2 PROBABILITY DISTRIBUTION FUNCTION [JNTU(H) Dec. 2019]
Let X be a random variable. Then the probability distribution function associated
with X is defined as the probability that the outcome of an experiment will be one of the
outcomes for which X(s) ≤ x, x ∈ R. That is, the function F(x) [i.e., F_X(x)] defined by
F(x) = P(X ≤ x) = P{s : X(s) ≤ x}, −∞ < x < ∞
is called the distribution function of X.
Definition : If X is any random variable, not necessarily discrete, then for any real
number x there exists the probability P(X ≤ x) corresponding to X ≤ x.
Clearly, P(X ≤ x) depends on x. So it is a function of x, which is called the distribution
function of X and is denoted by F(x). Thus F(x) = P(X ≤ x).
Properties of Distribution Function : [JNTU (K) Dec. 2013 (Set No. 3)]
1. If F is the distribution function of a random variable X and if a < b, then
(i) P(a < X ≤ b) = F(b) − F(a)
(ii) P(a ≤ X ≤ b) = P(X = a) + [F(b) − F(a)]
(iii) P(a < X < b) = [F(b) − F(a)] − P(X = b)
(iv) P(a ≤ X < b) = [F(b) − F(a)] − P(X = b) + P(X = a)
Note : If P(X = a) = P(X = b) = 0, then
P(a < X < b) = P(a ≤ X ≤ b) = P(a < X ≤ b) = P(a ≤ X < b) = F(b) − F(a)
This shows that the distribution function determines the distribution of X uniquely and
that it can be used for computing probabilities.
2. All distribution functions are monotonically increasing and lie between 0 and 1.
That is, if F is the distribution function of the random variable X, then
(i) 0 ≤ F(x) ≤ 1
(ii) F(x) ≤ F(y) when x < y.
3. (i) F(−∞) = Lt (x → −∞) F(x) = 0
(ii) F(+∞) = Lt (x → +∞) F(x) = 1
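A minimal sketch of property 1(i) in use, taking the two-coin variable of Illustration 2 as the example (the helper names F and prob_open_closed are illustrative, not from the text):

```python
from fractions import Fraction

# Probability function of X = number of heads in two coin tosses.
pmf = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}

def F(x):
    """Distribution function F(x) = P(X <= x)."""
    return sum(p for value, p in pmf.items() if value <= x)

def prob_open_closed(a, b):
    """P(a < X <= b) = F(b) - F(a), property (i) above."""
    return F(b) - F(a)

print(F(1))                    # 3/4
print(prob_open_closed(0, 2))  # 3/4, i.e. P(0 < X <= 2) = F(2) - F(0)
```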
2.5 DISCRETE PROBABILITY DISTRIBUTION (PROBABILITY MASS FUNCTION)
The probability distribution of a random variable is the set of its possible values together
with their respective probabilities. Suppose X is a discrete random variable with possible
outcomes (values) x_1, x_2, x_3, .... The probability of each possible outcome x_i is
p_i = P(X = x_i) = p(x_i) for i = 1, 2, 3, ...
If the numbers p(x_i), i = 1, 2, 3, ... satisfy the two conditions
(i) p(x_i) ≥ 0 for all values of i; 0 ≤ p_i ≤ 1
(ii) Σ p(x_i) = 1, i = 1, 2, 3, ...
then the function p(x) is called the probability mass function of the random variable X
and the set {p(x_i)}, i = 1, 2, ... is called the discrete probability distribution of the
discrete random variable X.
The probability distribution of the random variable X is given by means of the
following table :

X = x_i     :  x_1    x_2    x_3    ....    x_n
P(X = x_i)  :  p_1    p_2    p_3    ....    p_n

Further, P(X < x_i) = p(x_1) + p(x_2) + ... + p(x_{i-1}),
P(X ≤ x_i) = p(x_1) + p(x_2) + ... + p(x_{i-1}) + p(x_i),
and P(X > x_i) = 1 − P(X ≤ x_i).
Ex. 1. In tossing a coin two times, S = {TT, HT, TH, HH}.
P(X = 0) = Probability of getting two tails (no heads)
= P({T, T}) = (1/2)(1/2) = 1/4
P(X = 1) = Probability of getting one head
= P({H, T}, {T, H}) = 2/4 = 1/2
P(X = 2) = Probability of getting two heads
= P({H, H}) = 1/4
Thus the total probability is distributed into three parts 1/4, 1/2, 1/4 according to
whether X = 0, 1 or 2. This probability distribution is given in the following table :

X = x_i     :   0     1     2
P(X = x_i)  :  1/4   1/2   1/4
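The counting in Ex. 1 can be reproduced by listing the sample space and counting heads; a brief Python sketch (the variable names S and pmf are only illustrative):

```python
from fractions import Fraction
from itertools import product

# Sample space for tossing a coin two times: S = {TT, HT, TH, HH}.
S = list(product("HT", repeat=2))

# X = number of heads; each of the 4 sample points has probability 1/4.
pmf = {}
for outcome in S:
    x = outcome.count("H")
    pmf[x] = pmf.get(x, Fraction(0)) + Fraction(1, len(S))

for x in sorted(pmf):
    print(x, pmf[x])   # 0 1/4,  1 1/2,  2 1/4
```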

Ex. 2. Let X be the random variable which represents the sum of the numbers of points
in throwing two unbiased dice. If the point shown on each die is equal to one, the
minimum sum of numbers is equal to two. If both dice show six, then the sum of numbers
is at the maximum, 12.
Thus if two unbiased dice are thrown, then the sum X of the two numbers which
turn up must be an integer between 2 and 12. For X = 2, there is only one favourable point
(1, 1) and hence P(X = 2) = 1/36, since there are 36 sample points in all. For X = 3, there
are two favourable sample points (1, 2) and (2, 1) and hence P(X = 3) = 2/36. Similarly
for X = 4, there are three favourable sample points (1, 3), (2, 2), (3, 1) and hence
P(X = 4) = 3/36, and so on. Now the probability distribution in this case is given by the
following table :

X = x      :    2      3      4      5      6      7      8      9     10     11     12
P(X = x)   :  1/36   2/36   3/36   4/36   5/36   6/36   5/36   4/36   3/36   2/36   1/36

Clearly Σ p(x_i) = p_1 + p_2 + ... + p_11 = 1.
From the above table, we have
P(X ≥ 10) = 3/36 + 2/36 + 1/36 = 6/36 = 1/6,
P(X ≤ 6) = 1/36 + 2/36 + 3/36 + 4/36 + 5/36 = 15/36 = 5/12,
and P(7 < X < 12) = 5/36 + 4/36 + 3/36 + 2/36 = 14/36 = 7/18.
In the above example, X takes only a finite number of values; as such it is a discrete
random variable.
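The table and the three probabilities in Ex. 2 can be checked by enumerating all 36 sample points; a short Python sketch (the names points, pmf and prob are illustrative):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely sample points when two unbiased dice are thrown.
points = list(product(range(1, 7), repeat=2))

# X = sum of the two numbers; build its probability distribution.
pmf = {}
for d1, d2 in points:
    pmf[d1 + d2] = pmf.get(d1 + d2, Fraction(0)) + Fraction(1, 36)

def prob(cond):
    """Total probability of the values x satisfying the condition cond(x)."""
    return sum(p for x, p in pmf.items() if cond(x))

print(prob(lambda x: x >= 10))      # 1/6
print(prob(lambda x: x <= 6))       # 5/12
print(prob(lambda x: 7 < x < 12))   # 7/18
```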

Note : The concept of a probability distribution is analogous to that of a frequency
distribution. A frequency distribution tells us how the total frequency is distributed among
different values of the variable, whereas a probability distribution tells us how the total probability
1 is distributed among the values which the random variable can take.
Cumulative Distribution Function of a Discrete Random Variable
There are many occasions in which it is of interest to know the probability that the
value of a random variable is less than or equal to some real number x.
Suppose that X is a discrete random variable. Then the discrete distribution function
or cumulative distribution function F(x) is defined by
F(x) = P(X ≤ x) = Σ p(x_i), the summation being taken over all x_i ≤ x,
(OR) F(x) = P(X ≤ x) = Σ_{t ≤ x} f(t), where −∞ < x < ∞ and f(t) is the value of the
probability distribution of X at t.
Aliter : If p(x), i.e., f(x), is the probability function or probability distribution,
then the value of Σ p(x), i.e., Σ f(x), summed from x = 0 up to a given value and denoted by
F(x), is called the Cumulative Distribution Function or simply the Distribution Function. It is
similar to cumulative frequency, i.e., the cumulative frequency up to a given point or value.
It gives the cumulative probability that the random variable takes any value from 0 (the lowest
value that the variable can take) to a given value.
For example,
F(2) = f(0) + f(1) + f(2) = p(0) + p(1) + p(2) = 1/4 + 1/2 + 1/4 = 1 (in Ex. 1 above)
Suppose X takes only a finite number of values x_1, x_2, ..., x_n; then the distribution
function is given by

F(x) = 0,                               −∞ < x < x_1
     = p(x_1),                          x_1 ≤ x < x_2
     = p(x_1) + p(x_2),                 x_2 ≤ x < x_3
     .....
     = p(x_1) + p(x_2) + ... + p(x_n),  x_n ≤ x < ∞
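A sketch of this step form of F(x) in Python, using the distribution of Ex. 1 (the names xs, ps and F are illustrative, not from the text):

```python
from fractions import Fraction

xs = [0, 1, 2]                                          # x1, x2, ..., xn
ps = [Fraction(1, 4), Fraction(1, 2), Fraction(1, 4)]   # p(x1), ..., p(xn)

def F(x):
    """F(x) = p(x1) + ... + p(xi) for all xi <= x (0 below x1, 1 at and above xn)."""
    return sum(p for xi, p in zip(xs, ps) if xi <= x)

print(F(-1), F(0), F(1.5), F(2), F(10))   # 0 1/4 3/4 1 1
```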


Probability Density Function :
The probability density function f_X(x) is defined as the derivative of the probability
distribution function F_X(x) of the random variable X.
Thus f_X(x) = d/dx [F_X(x)].

2.6 CONTINUOUS PROBABILITY DISTRIBUTION


When a random variable X takes every value in an interval, it gives rise to a continuous
distribution of X. The distributions defined by variates like temperature, heights and
weights are continuous distributions.
Probability Density Function [JNTU 2001]
For a continuous variable, the probability distribution is called a Probability Density
Function because it is defined for every point in the range and not only for certain
values.
Consider the small interval (x − dx/2, x + dx/2) of length dx round the point x. Let f(x)
be any continuous function of x so that f(x) dx represents the probability that the variable
X falls in the infinitesimal interval (x − dx/2, x + dx/2). Symbolically it can be expressed as
P(x − dx/2 ≤ X ≤ x + dx/2) = f(x) dx.
Then f(x) is called the probability density function or
simply the density function of the variate X, and the continuous curve y = f(x) is known as the
probability density curve or simply the probability curve.
As the probability for a variate value to lie in the interval dx is f(x) dx, the
probability for a variate value to fall in the finite interval (a, b) is ∫_a^b f(x) dx, which represents
the area between the curve y = f(x), the x-axis and the ordinates x = a and
x = b. Since the total probability is unity, we have ∫_a^b f(x) dx = 1, where [a, b] is the range
of the variate X. The range of the variable may be finite or infinite. But even when the
range is finite, it is convenient to consider it as infinite by supposing the density function to
be zero outside the given interval.
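As a numerical sketch (the density below is an assumed example, not from the text), take f(x) = 2x on [0, 1] and zero elsewhere; the total area under the curve is 1, and the probability of a finite interval is the corresponding area:

```python
# Assumed example density: f(x) = 2x on [0, 1], zero outside.
def f(x):
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

def area(a, b, n=100_000):
    """Approximate the integral of f over [a, b] by the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(area(0.0, 1.0))      # ~1.0    -- total probability
print(area(0.25, 0.5))     # ~0.1875 -- P(0.25 <= X <= 0.5)
```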
Properties of the probability density function f(x) :
(i) f(x) ≥ 0, x ∈ R
(ii) The probability P(E) is given by
P(E) = ∫_E f(x) dx, which is well defined for any event E.
Note : In the case of a continuous random variable, we associate probabilities with intervals.
In this case the probability of the variable at a particular point is always zero.
Thus P(a ≤ X ≤ b) = P(a ≤ X < b) = P(a < X ≤ b) = P(a < X < b) = F(b) − F(a)
[∵ P(X = a) = 0, P(X = b) = 0]
That is, inclusion or non-inclusion of the end points does not change the probability,
which is not the case in discrete distributions.
Cumulative Distribution Function of a Continuous Random Variable :
The cumulative distribution function, or simply the distribution function, of a continuous
random variable X is denoted by F(x) and is defined as
F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt
Thus F(x) gives the probability that the value of the variable X will be ≤ x.
Properties of F(x) :
(i) 0 ≤ F(x) ≤ 1, −∞ < x < ∞.
(ii) F′(x) = f(x) ≥ 0, so that F(x) is a non-decreasing function.
(iii) F(−∞) = 0
(iv) F(∞) = 1
(v) F(x) is a continuous function of x on the right.
(vi) The discontinuities of F(x) are countable.
(vii) P(a ≤ X ≤ b) = ∫_a^b f(x) dx = F(b) − F(a)
(viii) Since F′(x) = f(x), we have dF(x) = f(x) dx.
This is known as the probability differential of X.
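For the same assumed density f(x) = 2x on [0, 1] used above (an illustrative example, not from the text), the distribution function is F(x) = x^2 on [0, 1]; a brief sketch of properties (vii) and (viii):

```python
# Assumed example: f(x) = 2x on [0, 1], with distribution function F(x) = x**2 there.
def f(x):
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

def F(x):
    if x < 0.0:
        return 0.0
    if x > 1.0:
        return 1.0
    return x ** 2

a, b = 0.3, 0.8
print(F(b) - F(a))                       # 0.55 = P(a <= X <= b), property (vii)

# Property (viii): F'(x) = f(x); a central difference quotient recovers f.
h = 1e-6
x = 0.5
print((F(x + h) - F(x - h)) / (2 * h))   # ~1.0 = f(0.5)
```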

REVIEW QUESTIONS
1. Define a Random Variable. [JNTU(H) Dec. 2014]
2. Write the definitions of (i) Random Variable (ii) Discrete Random Variable
(iii) Continuous Random Variable and (iv) Probability Distribution Function.
[JNTU(K) Nov. 2011 (Set No. 2)]
3. Define random variable, discrete probability distribution, continuous probability
distribution and cumulative distribution. Give an example of each.
[JNTU 2007, 2008S (Set No. 4), (K) Nov. 2011 (Set No. 1)]
4. Define distribution function of a random variable. [JNTU(H) Dec. 2019]
5. List the properties of probability distribution function. [JNTU(K) Dec. 2013 (Set No. 3)]
6. (a) Define (i) Probability density function (ii) Probability mass function
(iii) Discrete random variable (iv) Continuous random variable.
[JNTU(K) Dec. 2013 (Set No. 3)]
(b) Explain with suitable examples, discrete and continuous random variables.
[JNTU(H) Nov. 2015]
