Study of The Mellin Integral Transform With Applications in Statistics and Probability
______________________________________________________________________________
ABSTRACT
The Fourier integral transform is well known for finding the probability densities of sums and differences of
random variables. We use the Mellin integral transform to derive various properties in statistics and the probability
density of a single continuous random variable. We also discuss the relationship between the Laplace and Mellin
integral transforms and the use of these transforms in deriving densities for algebraic combinations of random
variables. The results are illustrated with examples.
Keywords: Laplace, Mellin and Fourier Transforms, Probability densities, Random Variables and Applications in
Statistics.
AMS Mathematical Classification: 44F35, 44A15, 44A35, 44A12, 43A70.
______________________________________________________________________________
INTRODUCTION
In this paper we regard a random variable (RV) as a value in some domain, say ℝ, representing the outcome of a
process governed by a probability law. The probability distribution is described by the probability density function
(pdf); in the Gaussian case the pdf is
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}}, \qquad -\infty < x < \infty,
where µ is the mean and σ² is the variance.
We define the pdf of X + Y and of XY, where X and Y are RVs, using a brief background on probability theory, and
treat the convolutions by means of the Laplace and Mellin integral transforms.
In this paper we define the Mellin integral transform; a continuous random variable X and its pdf; the continuous
distribution function; expectations and moments about the origin and the mean for an independent continuous random
variable (CRV) X; the mode, median, quartiles, deciles and percentiles; skewness (using mean and mode, and also using
quartiles); kurtosis (using moments); the transform of a sum of random variables; the convolution algebra on L¹(ℝ);
the Mellin integral transform and its relation with the Laplace integral transform; the relation between expected
values and moments of a CRV X; a one-dimensional continuous random variable and its pdf; marginal density functions;
theorems on addition and multiplication of CRVs X and Y; and the relations between expected values of CRVs X and Y
and the Mellin integral transform.
3.1.2: Terminology
To avoid confusion, it is necessary to mention a few cases in which the terminology used in probability theory
differs from that of analysis.
* "Distribution" (or "law") in probability theory means a function that assigns a probability 0 ≤ p ≤ 1 to every
Borel subset of ℝ, not a "generalized function" as in the Schwartz theory of distributions.
* For historical reasons going back to Henri Poincaré, the term "characteristic function" in probability theory
refers to an integral transform of a pdf, not to what mathematicians usually call the characteristic function of a
set. For that concept probability theory uses the "indicator function", symbolized I; e.g. I_{(0,1)}(x) is 1 for
x ∈ [0,1] and 0 elsewhere. In this paper we will not use the term "characteristic function" at all.
* We will be talking about pdfs being in L¹(ℝ), and this should be taken in the ordinary mathematical sense of a
function on ℝ which is absolutely integrable. More commonly, probabilists talk about random variables being in
L¹, L², etc., which is quite different: in terms of a pdf f, it means that \int |x| f(x)\,dx, \int x^{2} f(x)\,dx,
etc., exist and are finite. It would require an excursion into measure theory to explain why this makes sense;
suffice it to say that in the latter case we should really write something like "L¹(Ω, F, P)", which is not at all
the same as L¹(ℝ).
A value of the RV is a 'realization' of X, with a probability law or distribution which tells us how much
probability is associated with any interval [a, b] ⊂ ℝ; how much is given by a number 0 ≤ p ≤ 1.
Formally, probabilities are implicitly defined by their role in the axioms of probability theory. A probability law
on ℝ can be represented by its density, or pdf, which is a continuous function f(x) giving the probability of
finding x in [a, b], i.e.
P(x \in [a, b]) = \int_a^b f(x)\,dx;
it gives the probability "mass" per unit length, which is integrated to measure the total mass in an interval,
and
\int_{-\infty}^{\infty} f(x)\,dx = 1.
For any RV X with pdf f, the mean is \bar{X} = \mu = \int_{-\infty}^{\infty} x f(x)\,dx; this is usually designated
E(X), the expectation or expected value of X. The variance of X is
E[(X - \mu)^2] = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\,dx.
A frequently used model for such RVs is the Mellin distribution, with pdf
f(x) = \frac{1}{\Gamma(r)}\, x^{r-1} e^{-x} \ \text{if } x > 0, \text{ and } 0 \text{ otherwise},
where \Gamma(r) = \int_0^{\infty} x^{r-1} e^{-x}\,dx, x^{r-1} is the Mellin kernel, and r > 0 is a parameter. We
examine its expectation, mean, variance, moments, mode, median, skewness, kurtosis, etc.
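As a quick numerical check of the density just defined, the following Python sketch (assuming NumPy and SciPy are available; the shape value r = 2.5 is an arbitrary illustrative choice) verifies that the pdf integrates to one.

```python
import numpy as np
from scipy import integrate, special

r = 2.5  # illustrative shape parameter; any r > 0 works

def f(x):
    # pdf of the text: f(x) = x^(r-1) e^(-x) / Gamma(r) for x > 0, and 0 otherwise
    return x**(r - 1) * np.exp(-x) / special.gamma(r) if x > 0 else 0.0

total, _ = integrate.quad(f, 0, np.inf)
print(total)  # ~1.0: the density integrates to one, as required
```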
Independence is profoundly important in probability theory, and is mainly what saves probability from being
'merely' an application of measure theory. For the purpose of this paper an intuitive definition suffices: two
random variables X, Y are independent if the occurrence or nonoccurrence of an event X ∈ [a, b] does not affect
the probability of an event Y ∈ [a, b], and vice versa. Computationally, the implication is that "independence
means multiply", i.e. if X, Y are independent,
P(X \in [a, b] \ \text{and}\ Y \in [a, b]) = P(X \in [a, b])\, P(Y \in [a, b]).
In this paper we will only consider independent RVs. If X, Y are independent RVs then, substituting [-\infty, \infty]
for either of the intervals of integration, it is seen that
\int_a^b f_X(x)\,dx \int_{-\infty}^{\infty} f_Y(y)\,dy = \int_a^b\!\int_{-\infty}^{\infty} f_X(x)\, f_Y(y)\,dy\,dx = \int_a^b\!\int_{-\infty}^{\infty} f_{XY}(x, y)\,dy\,dx
= \int_a^b \Big[\int_{-\infty}^{\infty} f_{XY}(x, y)\,dy\Big]\,dx = \int_a^b f_X(x)\,dx,
where f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x, y)\,dy is the marginal density of X, and similarly f_Y(y) is the
marginal density of Y.
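The "independence means multiply" rule above can be illustrated with a small Monte Carlo sketch; the two independent standard exponential variables and the interval [0.5, 1.5] below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.exponential(size=n)  # X, independent of Y
y = rng.exponential(size=n)  # Y

a, b = 0.5, 1.5
joint = np.mean((x >= a) & (x <= b) & (y >= a) & (y <= b))             # P(X in [a,b] and Y in [a,b])
product = np.mean((x >= a) & (x <= b)) * np.mean((y >= a) & (y <= b))  # P(X in [a,b]) P(Y in [a,b])
print(joint, product)  # nearly equal, as independence requires
```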
One of the main problems arising in applications is that of inverting the Mellin integral transform, i.e. the
determination of the original function f(x) from its transform. For a pdf f,
\int_{\alpha}^{\beta} f(x)\,dx = P(\alpha \le x \le \beta),
and clearly \int_{\alpha}^{\beta} f(x)\,dx represents the area under the curve f(x), above the x-axis, between
x = α and x = β.
F(x) = P(X \le x) = \int_{-\infty}^{x} f(t)\,dt, \qquad -\infty < x < \infty,
is called the distribution function, or cumulative distribution function, of the continuous random variable X.
If f(x) = \frac{1}{\Gamma(r)}\, e^{-x} is the function of the continuous random variable X, then
M[f(x); s] = \int_0^{\infty} x^{s-1} f(x)\,dx, \quad\text{and at } s = r,
M[f(x); r] = \int_0^{\infty} x^{r-1}\, \frac{1}{\Gamma(r)}\, e^{-x}\,dx = \frac{1}{\Gamma(r)} \int_0^{\infty} x^{r-1} e^{-x}\,dx = \frac{\Gamma(r)}{\Gamma(r)} = 1 = \mu'_{x0}.
For the pdf f(x) = \frac{1}{\Gamma(r)}\, x^{r-1} e^{-x}, x > 0, the moments about the origin are
E[X] = \int_0^{\infty} x f(x)\,dx = \frac{1}{\Gamma(r)} \int_0^{\infty} x^{r+1-1} e^{-x}\,dx = \frac{\Gamma(r+1)}{\Gamma(r)} = r, \quad\text{so } E[X] = \mu'_1 = r;
E[X^2] = \int_0^{\infty} x^2 f(x)\,dx = \frac{1}{\Gamma(r)} \int_0^{\infty} x^{r+2-1} e^{-x}\,dx = \frac{\Gamma(r+2)}{\Gamma(r)} = r(r+1), \quad\text{so } E[X^2] = \mu'_2 = r(r+1);
E[X^3] = \int_0^{\infty} x^3 f(x)\,dx = \frac{1}{\Gamma(r)} \int_0^{\infty} x^{r+3-1} e^{-x}\,dx = \frac{\Gamma(r+3)}{\Gamma(r)} = r(r+1)(r+2), \quad\text{so } E[X^3] = \mu'_3 = r(r+1)(r+2);
E[X^4] = \int_0^{\infty} x^4 f(x)\,dx = \frac{1}{\Gamma(r)} \int_0^{\infty} x^{r+4-1} e^{-x}\,dx = \frac{\Gamma(r+4)}{\Gamma(r)} = r(r+1)(r+2)(r+3), \quad\text{so } E[X^4] = \mu'_4 = r(r+1)(r+2)(r+3).
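These moments can be confirmed by quadrature; the following sketch (illustrative r, NumPy/SciPy assumed) compares E[X^k] computed numerically with the product r(r+1)\cdots(r+k-1) = \Gamma(r+k)/\Gamma(r).

```python
import numpy as np
from scipy import integrate, special

r = 3.2  # illustrative shape parameter

def f(x):
    # pdf f(x) = x^(r-1) e^(-x) / Gamma(r)
    return x**(r - 1) * np.exp(-x) / special.gamma(r)

for k in range(1, 5):
    numeric, _ = integrate.quad(lambda x: x**k * f(x), 0, np.inf)
    closed = np.prod([r + j for j in range(k)])  # r(r+1)...(r+k-1) = Gamma(r+k)/Gamma(r)
    print(k, numeric, closed)
```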
3.1.5.2: The Mellin Integral Transform and Moments (Moments about origin)
With f(x) = \frac{1}{\Gamma(r)}\, e^{-x} as above, the moments about the origin are Mellin transforms of f:
1: E[x^{r-1}] = \int_0^{\infty} x^{r-1} f(x)\,dx = M[f(x); r] = \mu'_{x0} = 1
2: E[x^{r}] = \int_0^{\infty} x^{r+1-1} f(x)\,dx = M[f(x); r+1] = \mu'_{x1} = r
3: E[x^{r+1}] = \int_0^{\infty} x^{r+2-1} f(x)\,dx = M[f(x); r+2] = \mu'_{x2} = r(r+1)
4: E[x^{r+2}] = \int_0^{\infty} x^{r+3-1} f(x)\,dx = M[f(x); r+3] = \mu'_{x3} = r(r+1)(r+2)
5: E[x^{r+3}] = \int_0^{\infty} x^{r+4-1} f(x)\,dx = M[f(x); r+4] = \mu'_{x4} = r(r+1)(r+2)(r+3)
If f(y) = \frac{1}{\Gamma(s)}\, e^{-y} is the function of the continuous random variable Y, then
6: E[y^{s-1}] = \int_0^{\infty} y^{s-1} f(y)\,dy = M[f(y); s] = \mu'_{y0} = 1
7: E[y^{s}] = \int_0^{\infty} y^{s+1-1} f(y)\,dy = M[f(y); s+1] = \mu'_{y1} = s
8: E[y^{s+1}] = \int_0^{\infty} y^{s+2-1} f(y)\,dy = M[f(y); s+2] = \mu'_{y2} = s(s+1)
9: E[y^{s+2}] = \int_0^{\infty} y^{s+3-1} f(y)\,dy = M[f(y); s+3] = \mu'_{y3} = s(s+1)(s+2)
10: E[y^{s+3}] = \int_0^{\infty} y^{s+4-1} f(y)\,dy = M[f(y); s+4] = \mu'_{y4} = s(s+1)(s+2)(s+3)
1:
\mu_{x2} = \mu'_{x2} - (\mu'_{x1})^2 = r(r+1) - r^2 = r
V(X) = \mu_{x2} = r
The other moments about the mean are obtained using the relations between moments about the origin and moments
about the mean.
2:
\mu_{x3} = \mu'_{x3} - 3\mu'_{x2}\mu'_{x1} + 2(\mu'_{x1})^3 = r(r+1)(r+2) - 3r(r+1)\,r + 2r^3
= r\big[(r+1)(r+2) - 3r(r+1) + 2r^2\big] = r\big(r^2 + 3r + 2 - 3r^2 - 3r + 2r^2\big) = 2r
\mu_{x3} = 2r
3:
\mu_{x4} = \mu'_{x4} - 4\mu'_{x3}\mu'_{x1} + 6\mu'_{x2}(\mu'_{x1})^2 - 3(\mu'_{x1})^4
= r(r+1)(r+2)(r+3) - 4r(r+1)(r+2)\,r + 6r(r+1)\,r^2 - 3r^4
= r\big[(r+1)(r+2)(r+3) - 4r(r+1)(r+2) + 6r^2(r+1) - 3r^3\big]
= r\big(r^3 + 6r^2 + 11r + 6 - 4r^3 - 12r^2 - 8r + 6r^3 + 6r^2 - 3r^3\big) = r(3r + 6)
\mu_{x4} = 3r(r+2)
3.1.5.4: Measure of Skewness (β₁, γ₁)
Karl Pearson defined four coefficients based on the moments about the mean; these are used to measure skewness and
kurtosis. Using the moments about the mean, the skewness coefficients are
\beta_1 = \frac{\mu_3^2}{\mu_2^3} = \frac{(2r)^2}{r^3} = \frac{4}{r}, \qquad \gamma_1 = \sqrt{\beta_1} = \frac{2}{\sqrt{r}}.
If γ₁ = 0 the distribution is symmetric, and if γ₁ < 0 the distribution is negatively skewed; here γ₁ > 0, so the
distribution is positively skewed.
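The central moments and the skewness coefficients can be recomputed directly from the raw moments; the short sketch below (illustrative r) applies the formulas used above.

```python
r = 4.0  # illustrative shape parameter

m1, m2, m3, m4 = r, r*(r+1), r*(r+1)*(r+2), r*(r+1)*(r+2)*(r+3)  # moments about the origin

mu2 = m2 - m1**2                            # variance
mu3 = m3 - 3*m2*m1 + 2*m1**3                # third central moment
mu4 = m4 - 4*m3*m1 + 6*m2*m1**2 - 3*m1**4   # fourth central moment
beta1 = mu3**2 / mu2**3
gamma1 = beta1**0.5

print(mu2, mu3, mu4)   # r, 2r, 3r(r+2)
print(beta1, gamma1)   # 4/r, 2/sqrt(r)
```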
3.1.5.6: Mode
If f(x) is the pdf of the continuous random variable X, we solve \frac{dy}{dx} = f'(x) = 0; this gives values of X,
say X_1, X_2, \ldots, X_n, and if \big[\frac{d^2 y}{dx^2}\big]_{X = X_1} < 0, then X = X_1 is the mode.
If f(x) = \frac{1}{\Gamma(r)}\, x^{r-1} e^{-x} is the continuous pdf of the random variable X (with r > 1), then
f'(x) = \frac{1}{\Gamma(r)}\big[x^{r-1}(-e^{-x}) + (r-1)x^{r-2} e^{-x}\big] = \frac{x^{r-2} e^{-x}}{\Gamma(r)}\big[-x + (r-1)\big] = \frac{x^{r-2} e^{-x}}{\Gamma(r)}\,(r - 1 - x).
Setting f'(x) = 0 gives -x + r - 1 = 0, so x = r - 1 is the critical point. Further,
f''(x) = \frac{-e^{-x}}{\Gamma(r)}\big[-x^{r-1} + (r-1)x^{r-2}\big] + \frac{e^{-x}}{\Gamma(r)}\big[-(r-1)x^{r-2} + (r-1)(r-2)x^{r-3}\big]
= \frac{e^{-x}}{\Gamma(r)}\big[x^{r-1} - (r-1)x^{r-2} - (r-1)x^{r-2} + (r-1)(r-2)x^{r-3}\big]
= \frac{e^{-x}}{\Gamma(r)}\, x^{r-3}\big[x^2 - 2(r-1)x + (r-1)(r-2)\big],
so that
[f''(x)]_{x=r-1} = \frac{e^{-(r-1)}}{\Gamma(r)}\,(r-1)^{r-3}\big[(r-1)^2 - 2(r-1)^2 + (r-1)(r-2)\big]
= \frac{e^{-(r-1)}}{\Gamma(r)}\,(r-1)^{r-3}\big[-(r-1)^2 + (r-1)(r-2)\big]
= \frac{e^{-(r-1)}}{\Gamma(r)}\,(r-1)^{r-3}\,(r-1)(r-2-r+1)
= -\frac{e^{-(r-1)}}{\Gamma(r)}\,(r-1)^{r-2} < 0.
Hence f(x) is maximum at x = r - 1, and the value of the mode is
Mo = x = r - 1.
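A numerical maximisation of the density confirms the mode r − 1; the sketch below (illustrative r > 1, SciPy assumed) minimises −f(x) with scipy.optimize.minimize_scalar.

```python
import numpy as np
from scipy import optimize, special

r = 5.5  # illustrative shape parameter (r > 1 so the mode is interior)

def neg_f(x):
    # negative of the pdf x^(r-1) e^(-x) / Gamma(r); minimising it maximises f
    return -(x**(r - 1) * np.exp(-x) / special.gamma(r))

res = optimize.minimize_scalar(neg_f, bounds=(1e-9, 50.0), method="bounded")
print(res.x, r - 1)  # the maximiser agrees with the mode r - 1 = 4.5
```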
If f(x) = \frac{1}{\Gamma(r)}\, x^{r-1} e^{-x} is the pdf of the continuous random variable X and
\int_0^{M} f(x)\,dx = \frac{1}{2},
then M = Md is said to be the median. Writing the condition in terms of the lower incomplete gamma function
\gamma(r, M) = \int_0^{M} x^{r-1} e^{-x}\,dx, the median satisfies
\frac{\gamma(r, Md)}{\Gamma(r)} = \frac{1}{2}.
This equation has no closed-form solution; Md is obtained numerically by inverting the distribution function
F(x) = \gamma(r, x)/\Gamma(r).
In the same way the quartiles Q_k (k = 1, 2, 3), the deciles D_k (k = 1, \ldots, 9) and the percentiles P_k
(k = 1, \ldots, 99) are the solutions of
\frac{\gamma(r, Q_k)}{\Gamma(r)} = \frac{k}{4}, \qquad \frac{\gamma(r, D_k)}{\Gamma(r)} = \frac{k}{10}, \qquad \frac{\gamma(r, P_k)}{\Gamma(r)} = \frac{k}{100},
and are likewise found numerically. From the quartiles,
Quartile Deviation = QD = \frac{Q_3 - Q_1}{2}, \qquad \text{Coefficient of Q.D.} = \frac{Q_3 - Q_1}{Q_3 + Q_1}.
Bowley's Method
Skewness = Q_3 + Q_1 - 2\,Md, \qquad \text{Coefficient of Skewness} = \frac{Q_3 + Q_1 - 2\,Md}{Q_3 - Q_1}.
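Since these location measures have no closed form, they are obtained by inverting the distribution function numerically; the sketch below (illustrative r = 3, SciPy assumed) computes the median, the quartiles, QD and Bowley's coefficient with scipy.stats.gamma.ppf.

```python
from scipy import stats

r = 3.0  # illustrative shape parameter
g = stats.gamma(a=r)  # distribution with density x^(r-1) e^(-x) / Gamma(r)

md = g.ppf(0.5)                    # median: gamma(r, Md)/Gamma(r) = 1/2
q1, q3 = g.ppf(0.25), g.ppf(0.75)  # first and third quartiles

qd = (q3 - q1) / 2                       # quartile deviation
coeff_qd = (q3 - q1) / (q3 + q1)         # coefficient of Q.D.
bowley = (q3 + q1 - 2 * md) / (q3 - q1)  # Bowley's coefficient of skewness

print(md, q1, q3)
print(qd, coeff_qd, bowley)
```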
Karl Pearson's Method
Skewness = Mean − Mode = r − (r − 1) = 1
Coefficient of Skewness = \frac{\text{mean} - \text{mode}}{\text{std}} = \frac{r - (r-1)}{\sqrt{r}} = \frac{1}{\sqrt{r}}, \quad\text{where } r \ge 1.
The mean deviation about the mean is
MD = \int_{-\infty}^{\infty} |x - \bar{x}|\, f(x)\,dx = \int_0^{\infty} |x - \bar{x}|\, \frac{1}{\Gamma(r)}\, x^{r-1} e^{-x}\,dx, \qquad \bar{x} = E[X] = r.
Splitting the integral at x = \bar{x} = r and using E[X - r] = 0,
MD = 2\int_0^{r} (r - x)\, \frac{1}{\Gamma(r)}\, x^{r-1} e^{-x}\,dx = \frac{2}{\Gamma(r)}\Big[r\int_0^{r} x^{r-1} e^{-x}\,dx - \int_0^{r} x^{r} e^{-x}\,dx\Big] = \frac{2\, r^{r} e^{-r}}{\Gamma(r)},
where the last step uses the integration-by-parts identity \int_0^{r} x^{r} e^{-x}\,dx = r\int_0^{r} x^{r-1} e^{-x}\,dx - r^{r} e^{-r}; here r is positive.
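The closed form for the mean deviation can be checked by quadrature; the sketch below uses an illustrative r.

```python
import numpy as np
from scipy import integrate, special

r = 2.7  # illustrative shape parameter
mean = r

def f(x):
    # pdf x^(r-1) e^(-x) / Gamma(r)
    return x**(r - 1) * np.exp(-x) / special.gamma(r)

numeric, _ = integrate.quad(lambda x: abs(x - mean) * f(x), 0, np.inf)
closed = 2 * r**r * np.exp(-r) / special.gamma(r)
print(numeric, closed)  # the two values agree
```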
3.1.5.9: Probability
If \frac{d}{dx} F(x) = f(x) and f(x) \ge 0, then
P(a \le x \le b) = F(b) - F(a), \quad\text{or}\quad P(a \le x \le b) = \int_a^b f(x)\,dx = \int_a^b x^{r-1} f(x)\,dx,
where in the last form f(x) = \frac{e^{-x}}{\Gamma(r)} and x^{r-1} is the Mellin kernel. For the Mellin integral
transform,
P(0 \le x < \infty) = \int_0^{\infty} x^{r-1} f(x)\,dx, \quad\text{where } f(x) = \frac{e^{-x}}{\Gamma(r)} \text{ is the continuous function; then}
P(0 \le x < \infty) = \int_0^{\infty} \frac{1}{\Gamma(r)}\, x^{r-1} e^{-x}\,dx = 1.
If we can determine the joint density function f_{XZ}(x, z), then the marginal density function is
f_Z(z) = \int_{\Re} f_{XZ}(x, z)\,dx = \int_{\Re} f_{XY}\big(\psi^{-1}(x, z)\big)\,dx = \int_{\Re} f_{XY}(x, z - x)\,dx = \int_{\Re} f_X(x)\, f_Y(z - x)\,dx,
where X and Y are independent,
= (f_X * f_Y)(z).
The next-to-last line above is intuitive: it says that we find the density of Z = X + Y by integrating the joint
density of X, Y over all points where X + Y = Z, i.e. Y = Z − X.
For the density f(x) = \frac{1}{\Gamma(r)}\, x^{r-1} e^{-x}, the Fourier integral transform is
\int_0^{\infty} \frac{1}{\Gamma(r)}\, x^{r-1} e^{-x} e^{-2\pi i\xi x}\,dx = \frac{1}{\Gamma(r)}\, \frac{\Gamma(r)}{(1 + 2\pi i\xi)^{r}} = \frac{1}{(1 + 2\pi i\xi)^{r}}.
By using the Laplace transform,
\frac{1}{\Gamma(r)} \int_0^{\infty} x^{r-1} e^{-x} e^{-\xi x}\,dx = \frac{1}{\Gamma(r)} \int_0^{\infty} x^{r-1} e^{-(1+\xi)x}\,dx.
Substitute (1+\xi)x = q, x = \frac{q}{1+\xi}, dx = \frac{dq}{1+\xi}; then
= \frac{1}{\Gamma(r)} \int_0^{\infty} \Big(\frac{q}{1+\xi}\Big)^{r-1} e^{-q}\, \frac{dq}{1+\xi} = \frac{1}{\Gamma(r)}\, \frac{1}{(1+\xi)^{r}} \int_0^{\infty} q^{r-1} e^{-q}\,dq = \frac{1}{\Gamma(r)}\, \frac{\Gamma(r)}{(1+\xi)^{r}} = \frac{1}{(1+\xi)^{r}}.
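The transform value 1/(1+\xi)^{r} can be confirmed numerically; the parameter values in the sketch below are arbitrary illustrative choices.

```python
import numpy as np
from scipy import integrate, special

r, xi = 3.5, 0.8  # illustrative parameter values

def f(x):
    # the density x^(r-1) e^(-x) / Gamma(r)
    return x**(r - 1) * np.exp(-x) / special.gamma(r)

numeric, _ = integrate.quad(lambda x: f(x) * np.exp(-xi * x), 0, np.inf)
print(numeric, (1 + xi)**(-r))  # both ~ 1/(1+xi)^r
```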
The general notion of an algebra is a collection of elements closed under operations that "look like" addition and
multiplication of numbers. In the context of function spaces (in particular L¹(ℝ), which is where probability
density functions live), the functions are the elements; addition and multiplication by scalars have the obvious
definitions, and we add an operation that multiplies functions.
For linear function spaces that are complete with respect to a norm, the most important flavour of algebra is a
Banach algebra, with the following properties (∘ is the multiplication operator, undefined for the moment, λ is a
scalar, and ‖·‖ is the norm on the space):
(2) f∘(g + h) = f∘g + f∘h
(3) (f + g)∘h = f∘h + g∘h
(5) ‖f∘g‖ ≤ ‖f‖ ‖g‖
Since L¹(ℝ) is not closed under ordinary multiplication of functions, we need a different multiplication operation,
the convolution
(f∘g)(y) = \int_{\Re} f(y - x)\, g(x)\,dx.
Then
\|f∘g\| = \int_{\Re} \Big|\int_{\Re} f(y - x)\, g(x)\,dx\Big|\,dy \le \int_{\Re}\int_{\Re} |f(y - x)|\,|g(x)|\,dx\,dy
= \int_{\Re} \Big[\int_{\Re} |f(y - x)|\,dy\Big]\,|g(x)|\,dx \quad\text{(by Fubini's theorem)}
= \int_{\Re} \Big[\int_{\Re} |f(z)|\,dz\Big]\,|g(x)|\,dx = \int_{\Re} \|f\|\,|g(x)|\,dx = \|f\|\,\|g\|,
so \|f∘g\| \le \|f\|\,\|g\|. This verifies property (5), the norm condition, and is sometimes called Young's
inequality. Similarly \|g∘f\| \le \|g\|\,\|f\|, and the convolution algebra is commutative: f*g = g*f.
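Property (5) can be illustrated on a grid; the sketch below approximates the convolution of two integrable test functions (arbitrary choices) with numpy.convolve and compares the L¹ norms.

```python
import numpy as np

dx = 0.001
x = np.arange(-20, 20, dx)

f = np.exp(-np.abs(x))                 # an arbitrary L1 test function
g = np.where(np.abs(x) < 1, 0.5, 0.0)  # another L1 test function (a box)

conv = np.convolve(f, g, mode="same") * dx  # (f o g)(y) = int f(y-x) g(x) dx on the grid

norm_f = np.sum(np.abs(f)) * dx
norm_g = np.sum(np.abs(g)) * dx
norm_conv = np.sum(np.abs(conv)) * dx
print(norm_conv, norm_f * norm_g)  # norm of the convolution <= product of norms (equality here, both nonnegative)
```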
For computing the pdf of a product of random variables, the key result will be that the Mellin integral transform
of a Mellin convolution is the product of the Mellin integral transforms of the convolved functions:
M[f∘g](s) = \int_0^{\infty} \Big[\int_0^{\infty} f\Big(\frac{z}{w}\Big)\, g(w)\,\frac{dw}{w}\Big] z^{s-1}\,dz = \int_0^{\infty} \Big[\int_0^{\infty} f\Big(\frac{z}{w}\Big)\, z^{s-1}\,dz\Big] g(w)\,\frac{dw}{w}.
Put y = \frac{z}{w}, dy = \frac{dz}{w}, dz = w\,dy; then
= \int_0^{\infty} \Big[\int_0^{\infty} f(y)\,(yw)^{s-1}\, w\,dy\Big] g(w)\,\frac{dw}{w}
= \int_0^{\infty}\!\int_0^{\infty} f(y)\, y^{s-1} w^{s-1} g(w)\,dy\,dw
= \int_0^{\infty} y^{s-1} f(y)\,dy \int_0^{\infty} w^{s-1} g(w)\,dw = M[f](s)\, M[g](s),
so that
M[f∘g](s) = M[f](s)\, M[g](s).
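This product theorem can be checked by direct quadrature; in the sketch below the two exponential-type test functions and the transform argument s are arbitrary illustrative choices.

```python
import numpy as np
from scipy import integrate

f = lambda x: np.exp(-x)      # test function on (0, inf)
g = lambda x: x * np.exp(-x)  # another test function on (0, inf)
s = 2.3                       # illustrative transform argument

def mellin(h, s):
    # M[h](s) = int_0^inf x^(s-1) h(x) dx
    val, _ = integrate.quad(lambda x: x**(s - 1) * h(x), 0, np.inf)
    return val

def mellin_conv(z):
    # (f o g)(z) = int_0^inf f(z/w) g(w) dw/w
    val, _ = integrate.quad(lambda w: f(z / w) * g(w) / w, 0, np.inf)
    return val

lhs = mellin(mellin_conv, s)       # M[f o g](s)
rhs = mellin(f, s) * mellin(g, s)  # M[f](s) M[g](s)
print(lhs, rhs)                    # the two values agree
```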
The algebra properties follow from this product formula. For the triple convolution,
M[f∘(g∘h)](s) = M[f](s)\, M[g∘h](s) = M[f](s)\, M[g](s)\, M[h](s) = M[(f∘g)∘h](s),
so the Mellin convolution is associative. Likewise
M[f∘(g + h)](s) = M[f∘g](s) + M[f∘h](s), \qquad M[\lambda(f∘g)](s) = M[(\lambda f)∘g](s) = M[f∘(\lambda g)](s) = \lambda\, M[f∘g](s).
3.1.5.12: The Mellin Integral Transform and its Relation with the Laplace Transform
If f ∈ M_c(ℝ) for all c ∈ [a, b], we say that f ∈ M_{[a,b]}(ℝ); then we define the Mellin integral transform of f
with argument s as
F(s) = M[f(u); s] = \int_0^{\infty} u^{s-1} f(u)\,du, \qquad a \le \mathrm{Re}(s) \le b.
The condition for the inverse to exist is that F(s)\,x^{-s} be analytic in the strip (a, b) \times (-\infty, \infty),
with c ∈ [a, b]. The Mellin integral transform is derived from the Laplace integral transform as follows:
L[f(t); s] = \int_{-\infty}^{\infty} e^{-st} f(t)\,dt;
substitute x = e^{-t}, t = -\log x, dt = -\frac{dx}{x}; if t = -\infty then x = \infty, and if t = \infty then x = 0.
L[f(t); s] = \int_{-\infty}^{\infty} (e^{-t})^{s} f(t)\,dt = \int_{\infty}^{0} x^{s} f(-\log x)\Big(\frac{-dx}{x}\Big) = \int_0^{\infty} x^{s-1} f(-\log x)\,dx = M[f(-\log x); s];
this is the Mellin integral transform with the Mellin kernel x^{s-1}, s > 0 the parameter.
The inverse Mellin integral transform is
f(x) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} x^{-s} F(s)\,ds, \quad\text{whenever this integral exists.}
The same technique is used to obtain the Mellin inversion theorem from the Laplace inversion formula: with
t = -\log y, the inversion integral becomes
f(-\log y) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} F(s)\, y^{-s}\,ds.
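The change of variable x = e^{-t} linking the two transforms can be checked numerically; in the sketch below the test function f(t) = \exp(-e^{-t}) and the argument s = 1.7 are arbitrary illustrative choices (both integrals then equal Γ(1.7)).

```python
import numpy as np
from scipy import integrate

f = lambda t: np.exp(-np.exp(-t))  # arbitrary test function of t
g = lambda x: f(-np.log(x))        # g(x) = f(-log x) = exp(-x)
s = 1.7                            # any s in the strip of convergence

laplace, _ = integrate.quad(lambda t: np.exp(-s * t) * f(t), -np.inf, np.inf)  # two-sided Laplace transform
mellin, _ = integrate.quad(lambda x: x**(s - 1) * g(x), 0, np.inf)             # Mellin transform of f(-log x)
print(laplace, mellin)  # the two values agree (both equal Gamma(1.7))
```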
For the product Z = XY, write (x, y) = \psi^{-1}(x, z) = \big(x, \frac{z}{x}\big); for x \ne 0, \psi is injective,
and the Jacobian of \psi^{-1} is
J = \begin{vmatrix} \frac{\partial \psi_1^{-1}}{\partial x} & \frac{\partial \psi_2^{-1}}{\partial x} \\ \frac{\partial \psi_1^{-1}}{\partial z} & \frac{\partial \psi_2^{-1}}{\partial z} \end{vmatrix} = \begin{vmatrix} 1 & -\frac{z}{x^2} \\ 0 & \frac{1}{x} \end{vmatrix} = \frac{1}{x}.
Then, using the multivariate change-of-variable theorem, the marginal density of Z is computed from the joint
density of X and Y as
f_Z(z) = \int_{\Re} f_{XZ}(x, z)\,dx
= \int_{\Re} f_{XY}\big(\psi^{-1}(x, z)\big)\,\frac{1}{x}\,dx
= \int_{\Re} f_{XY}\Big(x, \frac{z}{x}\Big)\frac{1}{x}\,dx
= \int_{\Re} f_X(x)\, f_Y\Big(\frac{z}{x}\Big)\frac{1}{x}\,dx, \quad\text{by the independence of X and Y,}
= (f_X * f_Y)(z).
This is precisely the Mellin convolution of f_X and f_Y. In principle, this plus the extensibility result (1)
produces a way of finding product densities for arbitrary numbers of random variables.
3.1.5.14: Examples
As a simple illustration of the use of the Mellin transform, we use the belt and pulley example. Recall that
X ~ uniform(1.95, 2.05), Y ~ uniform(1.45, 1.55), and we seek the pdf of the product XY.
The problem can be simplified by using the fact that a uniform(α, β) random variable can be expressed as
α + (β − α)U, where U is a uniform(0,1) random variable with pdf I_{(0,1)}(x). In this case X = 1.95 + 0.1U_1 and
Y = 1.45 + 0.1U_2, so that XY = 2.8275 + 0.195U_2 + 0.145U_1 + 0.01U_1U_2. Since we already know how to compute
sums, the problem reduces to finding the pdf of the product of two uniform(0,1) random variables.
For Z = U_1 U_2, the Mellin convolution evaluates to
f_Z(z) = \int_{\Re} f_X(x)\, f_Y\Big(\frac{z}{x}\Big)\frac{1}{x}\,dx = \int_z^1 \frac{1}{x}\,dx = \big[\log x\big]_z^1 = -\log z = \log\frac{1}{z}, \qquad 0 < z \le 1.
The bounds of integration come from x \le 1 and y = \frac{z}{x} \le 1 \Rightarrow x \ge z.
The result can also be obtained as M^{-1}\big\{\big(M[f_U](s)\big)^2\big\}(z), where f_U is the pdf of U. We have
M[f_U](s) = \int_0^1 x^{s-1}\,dx = \frac{1}{s},
so we need
M^{-1}\Big[\frac{1}{s^2}\Big](z) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} \frac{z^{-s}}{s^2}\,ds.
In this simple case of the product of two uniform(0,1) RVs it is easier to compute the Mellin convolution directly,
but the use of the Mellin transform allows computation of the pdf of the product of n uniform(0,1) RVs almost as
easily, yielding
f_Z(z) = \frac{\big(\log z^{-1}\big)^{n-1}}{(n-1)!}, \qquad 0 < z \le 1.
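This closed form can be checked against simulation; the sketch below compares a histogram-style density estimate of simulated products with the formula for n = 2 and n = 3 at an arbitrary point z = 0.3.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)

def product_pdf(z, n):
    # pdf of the product of n independent uniform(0,1) RVs: (log(1/z))^(n-1) / (n-1)!
    return (np.log(1.0 / z))**(n - 1) / factorial(n - 1)

for n in (2, 3):
    samples = rng.uniform(size=(1_000_000, n)).prod(axis=1)
    z, h = 0.3, 0.01
    empirical = np.mean(np.abs(samples - z) < h / 2) / h  # density estimate at z
    print(n, empirical, product_pdf(z, n))
```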
3.1.6. Remarks
1. Probability background and terminology for the MIT is given.
2. The MIT and continuous random variables are defined.
3. The probability density function of a continuous random variable is defined.
4. The continuous distribution function is defined.
5. Probabilities from the distribution function F(x) of a CRV.
6. The derivative of F(x) is defined.
7. Expectation and moments about the origin.
8. Moments about the mean, variance, skewness and kurtosis.
9. Measures of skewness and kurtosis.
10. Mode, median, quartiles, deciles, percentiles, QD, coefficient of QD, and Bowley's and Karl Pearson's methods
for the coefficient of skewness.
11. Mean deviation from the mean.
12. The MIT for the sum of random variables.
13. The MIT for the product of random variables.
14. The MIT and its relation with the Laplace transform.
15. Product of random variables.
16. Illustrated by example.
CONCLUSION
We have presented some background on statistics and probability theory and motivated the computation of probability
density functions for sums and products of continuous random variables. The use of the Laplace transform to
evaluate the convolution integral for the pdf of a sum is relatively simple. The use of the Mellin integral
transform to evaluate the convolution integral for the pdf of a product is known in the theory of integral
transforms.
The use of the Laplace integral transform for sums of random variables is widely used and explained in most
advanced statistics texts; a brief theory of the Mellin integral transform for statistics and probability is given
in this paper. We expect that statisticians, mathematicians and engineers will take an interest in developing the
Mellin transform further in statistics and probability.