
Introduction to Probability Theory and Stochastic Processes

Prof. S. Dharmaraja
Department of Mathematics
Indian Institute of Technology, Delhi

Lecture – 30

Let me move on to one more example of the discrete type.

(Refer Slide Time: 00:03)

This is also going to be a very important result. Let X be Poisson distributed with
parameter lambda, and let Y again be Poisson distributed with parameter mu. I make
the assumption that X and Y are independent random variables.

Suppose I create a random variable Z = X + Y. By a derivation similar to the one we
did for the binomial distribution, you can conclude that the probability that Z takes
the value z is e^-(lambda + mu) (lambda + mu)^z / z!, where z can take the values
0, 1, 2, and so on.

I am not giving the derivation; repeating the derivation of the previous example,
you get that the probability mass function of Z has this form, and for other values
of z it is 0. Now we can check whether this matches any standard or common
distribution.
So, this is the same as the probability mass function of a Poisson distribution with
parameter lambda + mu. Therefore, one can conclude that Z is also Poisson distributed
with parameter lambda + mu. That means, if you have two independent random variables,
both Poisson distributed with some parameters, then their sum is also Poisson
distributed, with parameter equal to the sum of their parameters.
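
As a quick check, here is a minimal NumPy/SciPy simulation sketch (my own illustration, not part of the lecture; the parameter values and sample size are arbitrary) comparing the empirical pmf of Z = X + Y with the Poisson(lambda + mu) pmf.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lam, mu, n = 2.0, 3.0, 200_000

# Independent Poisson samples and their sum.
x = rng.poisson(lam, n)
y = rng.poisson(mu, n)
z = x + y

# The empirical pmf of Z should match Poisson(lam + mu).
for k in range(10):
    print(f"P(Z={k}): empirical {np.mean(z == k):.4f}, "
          f"Poisson({lam + mu:g}) {stats.poisson.pmf(k, lam + mu):.4f}")
```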

(Refer Slide Time: 02:34)

The same concept can be extended to any n random variables; that means, suppose n
random variables are mutually independent: there is one random variable X_1 that is
Poisson distributed with parameter lambda_1, there is another random variable X_2
that is also Poisson distributed with parameter lambda_2, and so on up to the n-th
random variable X_n, which is also Poisson distributed with parameter lambda_n.

If I make a random variable which is the sum of the X_i's, that means I end up with
only one random variable Z by summing all the random variables, and this is going to
be Poisson distributed with parameter equal to the sum of their parameters.
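
In symbols, the result just stated can be written as follows (my LaTeX restatement of the claim):

```latex
X_i \sim \mathrm{Poisson}(\lambda_i),\ i = 1,\dots,n,\ \text{mutually independent}
\;\Longrightarrow\;
Z = \sum_{i=1}^{n} X_i \sim \mathrm{Poisson}\!\Big(\sum_{i=1}^{n} \lambda_i\Big).
```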

As long as all the X_i's are mutually independent random variables, the summation is
again Poisson distributed, with parameter equal to the sum of the lambda_i's. From
this we are going to state one important property, called the reproductive property.
(Refer Slide Time: 04:03)

What the reproductive property says is this: if you have a sequence of random
variables, all having some distribution, and you form the sum of a few of them, and
after summing you get the same type of distribution as the X_i's, the same as the
original sequence of random variables, then we say these particular random variables
have the reproductive property.

That means, for example, each X_i is binomially distributed (with a common success
probability p) and I have many such random variables; I make the assumption that all
the random variables are mutually independent. Then, if I form a random variable as
the sum of a few random variables out of this collection and that sum also follows a
binomial distribution, we can conclude that the binomial distribution has the
reproductive property.

Similarly, one can say the Poisson distribution also has the reproductive property,
whereas the Bernoulli distribution does not. Because if you have Bernoulli distributed
random variables, all mutually independent, and you sum n such random variables, the
sum is binomially distributed, no longer Bernoulli. Therefore, the Bernoulli
distribution does not have the reproductive property.
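
The following small simulation sketch (my own illustration, not from the lecture; n, p and the number of trials are arbitrary) shows the sum of n independent Bernoulli(p) variables matching the Binomial(n, p) pmf rather than a Bernoulli one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p, trials = 10, 0.3, 200_000

# Each row is one experiment: n independent Bernoulli(p) draws, then summed.
s = rng.binomial(1, p, size=(trials, n)).sum(axis=1)

for k in range(n + 1):
    print(f"P(S={k}): empirical {np.mean(s == k):.4f}, "
          f"Binomial({n},{p}) {stats.binom.pmf(k, n, p):.4f}")
```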

Similarly, one can consider some common continuous type random variables. If you have
normally distributed random variables, all mutually independent, and you form their
sum, then the reproductive property is also satisfied; that means the sum is also
normally distributed. So, like that, we can make a list of the standard or common
distributions that satisfy the reproductive property and those that do not.

Now we will move on to distributions of functions of several random variables, where
each random variable is of the continuous type. For that I am going to give one
important result as a theorem. After I introduce the theorem, I will give some
examples; we are not going to prove the theorem. So, I will state the important
result as a theorem, and then I will give some examples.

(Refer Slide Time: 07:36)

So, let me state it as a theorem; we are not going to give the proof of this theorem.
Let me state the theorem for a two dimensional random variable only; the same concept
can be extended to n dimensions. So, let (X, Y) be a two dimensional continuous type
random variable with joint probability density function f_{X,Y}(x, y).

I am going to define a new pair of random variables: the first random variable is
Z = H_1(X, Y), and the other random variable is W = H_2(X, Y). We assume that both
H_1 and H_2 are Borel measurable functions, so that Z and W are indeed random
variables.
I am going to make a few assumptions so that I am able to get the joint probability
density function of (Z, W) directly with the help of the joint probability density
function of (X, Y). If those assumptions are satisfied, then Z is a continuous type
random variable and W is also a continuous type random variable; not only that, I can
find the joint probability density function of (Z, W) with the help of the joint
probability density function of (X, Y). That is what I am going to state as the
theorem.

The first assumption: writing z as a function of x and y, and w as a function of x
and y, these equations can be solved uniquely for x and y in terms of z and w. In the
transformation Z = H_1(X, Y), W = H_2(X, Y) I am replacing the capital letters by
small letters, because I am consistently using capital letters for the random
variables themselves. So, I am solving the equations z = H_1(x, y), w = H_2(x, y)
uniquely for x and y in terms of z and w.

(Refer Slide Time: 11:36)

So, whatever solution I get, I write as follows: x, expressed in terms of z and w, is
some function g_1(z, w). Similarly, I write y as some function g_2(z, w) of z and w.
This is the result after solving those two equations.
The second assumption: with x in terms of z and w, and y in terms of z and w, I can
go for the partial derivatives with respect to z and w. I can find the partial
derivatives of x with respect to z and w, and similarly the partial derivatives of y
with respect to z and w. Here I am making the assumption that these partial
derivatives exist; and not only exist, all these partial derivatives must also be
continuous functions. This is the second assumption.

(Refer Slide Time: 13:35)

With these assumptions, I conclude that the joint probability density function of
(Z, W), as a function of z and w, can be written in terms of the joint probability
density function of (X, Y) by replacing x and y. Recall that after solving, we got x
in terms of z and w as g_1, and y as g_2. Therefore, in the joint probability density
function of (X, Y), I replace x by g_1(z, w) and, similarly, y by g_2(z, w), and
multiply by the absolute value of the determinant called the Jacobian, as a function
of z and w. The Jacobian J(z, w) is the determinant of the partial derivatives we
obtained: the partial derivative of x with respect to z, the partial derivative of x
with respect to w, the partial derivative of y with respect to z, and the partial
derivative of y with respect to w.
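
Written out (my LaTeX transcription of the statement just described):

```latex
f_{Z,W}(z,w) = f_{X,Y}\big(g_1(z,w),\, g_2(z,w)\big)\,\lvert J(z,w)\rvert,
\qquad
J(z,w) =
\begin{vmatrix}
\partial x/\partial z & \partial x/\partial w \\
\partial y/\partial z & \partial y/\partial w
\end{vmatrix}.
```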
This determinant is the Jacobian, and in the probability density function of (Z, W)
you substitute its absolute value. This gives the joint probability density function
of (Z, W); that means, this theorem applies whenever you have continuous type random
variables and you know their joint probability density function.

As long as these two assumptions are satisfied (the word "uniquely" is very
important; if it is not satisfied, we have another remark about that case below),
that is, if assumption 1 as well as assumption 2 holds, then we can directly conclude
that (Z, W) is a continuous type random variable, and one can get the joint
probability density function of (Z, W) by substituting x by g_1 and y by g_2 in the
joint probability density function of (X, Y) and multiplying by the absolute value of
the Jacobian.

The factor given by the absolute value of the Jacobian acts as the normalizing
constant; that means, the integral of the joint probability density function of
(Z, W) from minus infinity to infinity has to be 1, and it is only after multiplying
by the absolute value of the Jacobian that this integral equals 1. In that sense the
absolute value of the Jacobian is nothing but the normalizing constant.

There is another remark: some books, instead of multiplying by the absolute value of
this Jacobian, divide by the absolute value of a Jacobian. In that case they build
the determinant not from the partial derivatives of x and y with respect to z and w,
but from the partial derivatives of z and w with respect to x and y; they find that
determinant, the inverse Jacobian, and substitute it into the formula as a divisor in
the denominator.

Both results are one and the same, because the Jacobian determinant and the
determinant of its inverse multiply to 1. You have an n dimensional random variable
and you are transforming it into another n dimensional random variable while
satisfying the two conditions, namely solving those equations uniquely and having
partial derivatives that exist and are continuous; that makes it immaterial whether
you use the Jacobian or its inverse.
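
In symbols (my own shorthand for the two conventions):

```latex
\frac{\partial(x,y)}{\partial(z,w)} \cdot \frac{\partial(z,w)}{\partial(x,y)} = 1
\quad\Longrightarrow\quad
f_{Z,W}(z,w)
= f_{X,Y}\,\left\lvert \frac{\partial(x,y)}{\partial(z,w)} \right\rvert
= f_{X,Y} \Big/ \left\lvert \frac{\partial(z,w)}{\partial(x,y)} \right\rvert .
```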

Therefore, the formula changes only in whether the factor appears as a multiplication
in the numerator or in the denominator, because the product of the Jacobian
determinant and the inverse Jacobian determinant is always 1. Now, let us go for one
easy example to explain how this theorem works.
(Refer Slide Time: 19:20)

Let (X, Y) be a two dimensional continuous type random variable with joint
probability density function f(x, y) = 1 when x lies between 0 and 1 and y lies
between 0 and 1, and f(x, y) = 0 otherwise.

So, this is the joint probability density function of (X, Y), and you can verify it:
with x on one axis and y on the other, the joint probability density function takes
the value 1 when x is in the interval 0 to 1 and y is also in 0 to 1. So, in the x-y
plane the region is a square with vertices (0, 0), (1, 0), (0, 1) and (1, 1).

And the surface is at height 1 over that square in the x-y plane. If you find the
volume below the surface, the plane at height 1 above the square region, that is the
volume of a unit cube, which is 1; therefore it is easy to verify that this is the
joint probability density function of a two dimensional continuous type random
variable. The question is: we are going to create another two dimensional random
variable and then find the distribution of the new pair of random variables, which is
also two dimensional.

So, I am going to define the new pair of random variables, using the same notation:
Z = X + Y, that is the function H_1(X, Y); the second function, H_2(X, Y), is
W = X - Y. Both X and Y are continuous random variables, and we have defined
Z = X + Y and W = X - Y.
You can immediately say that both are again going to be continuous type random
variables. Therefore, you could find the joint cdf of (Z, W); but if the question is
to find the distribution, and you know that both random variables are of the
continuous type, you can find the joint probability density function instead. So,
here we are going to find the joint probability density function of the two
dimensional continuous type random variable (Z, W).

We first make sure that the assumptions of the previous theorem are satisfied. If
they are satisfied, then you can use the theorem and get the result; if not, you
cannot find the joint probability density function using that theorem. To apply the
theorem you have to make sure the assumptions are satisfied. Now we check whether the
first assumption is satisfied.

(Refer Slide Time: 23:37)

So, we have z = x + y and w = x - y. Solve these two equations for x and y in terms
of z and w. If you add the two equations, y cancels, so 2x = z + w and therefore
x = (z + w)/2. If you subtract the second from the first, x cancels, so 2y = z - w
and y = (z - w)/2. So you are able to solve these equations uniquely, and you get x
and y in terms of z and w. The first condition is satisfied.
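
As a side check (my own sketch, assuming SymPy is available; not part of the lecture), the system can be solved symbolically to confirm the solution is unique:

```python
import sympy as sp

x, y, z, w = sp.symbols("x y z w")

# Solve z = x + y and w = x - y for x and y in terms of z and w.
solutions = sp.solve([sp.Eq(z, x + y), sp.Eq(w, x - y)], [x, y], dict=True)
print(solutions)  # [{x: z/2 + w/2, y: z/2 - w/2}] -- a single, unique solution
```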
We will go for the second condition: find out whether the partial derivatives exist.
The partial derivative of x with respect to z is 1/2, which exists and is continuous;
it is a constant here, so that is fine. Similarly, you find that the partial
derivative of x with respect to w, the partial derivative of y with respect to z, and
the partial derivative of y with respect to w all exist and are continuous functions;
in particular, here they are constants. Therefore, the second condition is also
satisfied.

Now we can go for writing the joint probability density function of (Z, W) with the
help of the joint probability density function of (X, Y). Before that, we find the
determinant of the Jacobian: J(z, w) is the determinant with all the partial
derivatives substituted in the correct order. Whether you write it like this or in
transposed form does not matter, because in the end you are taking a determinant.
That gives -1/4 - 1/4; therefore it is -1/2, and this is the Jacobian.
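
Explicitly (my LaTeX rendering of the computation):

```latex
J(z,w) =
\begin{vmatrix}
\partial x/\partial z & \partial x/\partial w \\
\partial y/\partial z & \partial y/\partial w
\end{vmatrix}
=
\begin{vmatrix}
\tfrac{1}{2} & \tfrac{1}{2} \\
\tfrac{1}{2} & -\tfrac{1}{2}
\end{vmatrix}
= -\tfrac{1}{4} - \tfrac{1}{4} = -\tfrac{1}{2}.
```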

(Refer Slide Time: 26:30)

Now, since the two assumptions are satisfied, we can write the joint probability
density function of (Z, W): first write down f_{X,Y}(g_1(z, w), g_2(z, w)), then
multiply by the absolute value of the Jacobian. The Jacobian has to be nonzero; this
is also a very important condition, because if it were 0, the probability density
function would become 0. So, as long as the Jacobian is a nonzero quantity, we can
proceed.
Now, substitute for this problem: the joint probability density function of (X, Y) is
1 on the given intervals and 0 otherwise. So you replace x by (z + w)/2 and y by
(z - w)/2; within the range where these expressions lie between 0 and 1, the value is
1. Since the density is the constant 1, substituting x by g_1 and y by g_2 does not
change it; it is again 1, and the Jacobian is -1/2, of which we take the absolute
value. Therefore it is a multiplication by 1/2, provided x lies between 0 and 1, that
is, 0 < (z + w)/2 < 1.

Similarly, y lies between 0 and 1, that is, 0 < (z - w)/2 < 1. So, as long as z and w
satisfy these two conditions, the joint probability density function is 1/2;
otherwise it is 0. So the joint probability density function is 1/2 when z and w
satisfy 0 < (z + w)/2 < 1 and 0 < (z - w)/2 < 1. That means you can now think about
how the joint probability density function of (Z, W) looks.
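
Collected in one place (my LaTeX summary of what was just derived):

```latex
f_{Z,W}(z,w) =
\begin{cases}
\tfrac{1}{2}, & 0 < \tfrac{z+w}{2} < 1 \ \text{and}\ 0 < \tfrac{z-w}{2} < 1,\\[2pt]
0, & \text{otherwise.}
\end{cases}
```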

(Refer Slide Time: 29:33)

Before that, we can look at the region of z and w in which the joint probability
density function is greater than 0, that is, where it equals 1/2. What we want is the
joint probability density function of (Z, W); for that we first identify the region
in which its value is 1/2. If you simplify the two inequalities you can identify the
region of z and w: it is the square with vertices (0, 0), (1, 1), (2, 0) and (1, -1).

(Refer Slide Time: 30:18)

So, if you simplify those two inequalities, you can identify the region; I am not
drawing the diagram to correct scale, just for illustration purposes. The shaded
region is the region of z and w; that means, in the z-w plane, this is the region in
which the joint probability density function is 1/2, and outside it the density is 0.

Now we can verify that the integral of the joint probability density function from
minus infinity to infinity with respect to z and w is 1: above the shaded region in
the z-w plane the surface is at the constant height 1/2, and the shaded region has
area 2, so the volume below that surface over the region is 1.
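
A quick numerical confirmation (my own sketch with NumPy, not from the lecture) that this density integrates to 1:

```python
import numpy as np

# Grid over a box containing the support of f(z, w).
z = np.linspace(-0.5, 2.5, 1201)
w = np.linspace(-1.5, 1.5, 1201)
Z, W = np.meshgrid(z, w)

# f(z, w) = 1/2 where 0 < (z+w)/2 < 1 and 0 < (z-w)/2 < 1, else 0.
inside = (0 < (Z + W) / 2) & ((Z + W) / 2 < 1) & (0 < (Z - W) / 2) & ((Z - W) / 2 < 1)
density = np.where(inside, 0.5, 0.0)

dz, dw = z[1] - z[0], w[1] - w[0]
print(density.sum() * dz * dw)  # approximately 1.0
```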

This type of graphical representation is possible only for two dimensional random
variables; for n dimensional random variables with n = 3, 4 and so on, it is very
difficult to visualize, whereas this case is easy to visualize. One more observation
in this problem: for the given X and Y, the joint probability density function is 1,
and if you find the marginal probability density function of X, it is 1 on the
interval 0 to 1.

Similarly, if you find the original distribution of Y, its probability density
function is also 1 on the interval 0 to 1, and 0 otherwise. If you multiply the
probability density functions of X and Y, the product is the same as the joint
density, so we can conclude that X and Y are independent random variables; whereas
the joint probability density function of (Z, W) is 1/2 on the region above.

(Refer Slide Time: 33:24)

If you find the marginal distribution of Z, doing the simple exercise of computing
the probability density function of Z by integrating the joint probability density
function of (Z, W) with respect to w, one can get the answer; I am not doing the
derivation, but substituting the joint probability density function with the correct
limits and integrating, the result is z when z lies between 0 and 1, and 2 - z when z
lies between 1 and 2; otherwise it is 0. We can make the inequalities "less than or
equal to" here. So the probability density function of Z is z on the interval 0 to 1,
2 - z on 1 to 2, and 0 otherwise.
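
This triangular shape is easy to confirm by simulation (my own sketch, not from the lecture; the bin count and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

# Z = X + Y for independent Uniform(0, 1) variables X and Y.
z = rng.uniform(0, 1, n) + rng.uniform(0, 1, n)

hist, edges = np.histogram(z, bins=20, range=(0, 2), density=True)
mid = (edges[:-1] + edges[1:]) / 2
theory = np.where(mid <= 1, mid, 2 - mid)  # f_Z(z) = z on [0,1], 2 - z on [1,2]
for m, h, t in zip(mid, hist, theory):
    print(f"z={m:.2f}: empirical {h:.3f}, triangular {t:.3f}")
```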
(Refer Slide Time: 34:41)

Similarly, one can compute the probability density function of W from the joint
probability density function of (Z, W) by integrating with respect to z. If you do
that, you get the probability density function of W: it is w + 1 when w is in the
range -1 to 0, it is 1 - w when w is between 0 and 1, and otherwise it is 0.
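
As a worked version of that integration (my own derivation, with the z-limits for each fixed w read off from the shaded region):

```latex
f_W(w) = \int_{-\infty}^{\infty} f_{Z,W}(z,w)\,dz =
\begin{cases}
\displaystyle\int_{-w}^{\,w+2} \tfrac{1}{2}\,dz = w + 1, & -1 < w < 0,\\[8pt]
\displaystyle\int_{w}^{\,2-w} \tfrac{1}{2}\,dz = 1 - w, & 0 < w < 1,\\[4pt]
0, & \text{otherwise.}
\end{cases}
```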

The intervals of z and w you can read off from the diagram itself: the range of Z is
0 to 2, whereas the range of W is -1 to 1. So we get the probability density function
of Z and the probability density function of W in the forms above. Given these forms,
if you multiply the two marginal densities you do not get the value 1/2, the joint
probability density function of (Z, W).

Therefore, you can immediately conclude that Z and W are not independent random
variables: X and Y are independent, but with Z = X + Y and W = X - Y defined this
way, Z and W are not independent. With this example we have explained how the theorem
works.
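
One concrete way to see the dependence numerically (my own sketch, not from the lecture): the event {Z < 0.5, W > 0.5} is impossible, because X - Y > 0.5 forces X > 0.5 while X + Y < 0.5 forces X < 0.5, yet both events individually have positive probability.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
x = rng.uniform(0, 1, n)
y = rng.uniform(0, 1, n)
z, w = x + y, x - y

# If Z and W were independent, these two numbers would agree.
print("P(Z<0.5, W>0.5)    :", np.mean((z < 0.5) & (w > 0.5)))       # exactly 0
print("P(Z<0.5)*P(W>0.5)  :", np.mean(z < 0.5) * np.mean(w > 0.5))  # about 0.016
```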

But sometimes the first assumption, solving uniquely, may not be satisfied; that
means you may have more than one solution pair for x and y in terms of z and w. In
that case, for every solution pair you have to identify the density contribution with
the corresponding Jacobian, and you keep adding: however many solution pairs you get,
that many terms you sum to obtain the joint probability density function of (Z, W).
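
In symbols (the standard way to write this case, in my own notation): if the equations have k solution branches x = g_1^(i)(z, w), y = g_2^(i)(z, w), each with Jacobian J_i, then

```latex
f_{Z,W}(z,w) = \sum_{i=1}^{k}
f_{X,Y}\!\big(g_1^{(i)}(z,w),\, g_2^{(i)}(z,w)\big)\,\big\lvert J_i(z,w)\big\rvert .
```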

It is similar to what we did with a function of a single continuous type random
variable: whenever the function is not monotonically increasing or decreasing
throughout, the same technique of summing over the branches has to be applied for the
multidimensional random variable.
