Assignment - Bayesian

This document discusses Bayesian statistics and credibility theory. It provides examples of Bayesian updating of distributions based on observed data. Several questions are answered related to parameter estimation and Bayesian modeling under different prior assumptions.

Assignment - Bayesian statistics and credibility theory

answer 1)

X : number of email messages received each day

X_bar = (x1 , x2 , ... , xn)

X|lambda ~ P(lambda)

and prior dist : lambda ~ exp(1/mu) , so that E(lambda) = mu

i) a)
f(x|lambda) = ( exp(-lambda) * lambda^x ) / x!
L(X_bar|lambda) = product(1,n): f(xi|lambda)
= ( exp(-lambda)*lambda^x1 )/x1! * ( exp(-lambda)*lambda^x2 )/x2! * ..... * ( exp(-lambda)*lambda^xn )/xn!
L(X_bar|lambda) = exp(-n*lambda) * lambda^(sum(1,n):xi) * constant

f(lambda) = (1/mu) * exp(-lambda/mu)
= exp(-lambda/mu) * constant

f(lambda|X_bar) ∝ exp(-n*lambda) * lambda^(sum(1,n):xi) * exp(-lambda/mu)

f(lambda|X_bar) ∝ exp( -lambda*(n+(1/mu)) ) * lambda^(sum(1,n):xi)

lambda|X_bar ~ gamma( (sum(1,n):xi)+1 , n+(1/mu) )

b)
under quadratic loss : lambda^hat = mean of posterior distribution
= ( (sum(1,n):xi) + 1 ) / ( n + (1/mu) )

in the form of a credibility estimate : Z*X_bar + (1-Z)*mu

lambda^hat = ( (sum(1,n):xi)/n ) * ( n/(n+(1/mu)) ) + mu * ( (1/mu)/(n+(1/mu)) )

here , X_bar = (sum(1,n):xi)/n and prior mean = mu

Z = n / (n+(1/mu))
1-Z = (1/mu) / (n+(1/mu))

so , Z = n / (n+(1/mu))

c)
sum(1,n):xi = 550
mu = 50
n = 10
lambda^hat = ( (sum(1,n):xi) + 1 ) / ( n + (1/mu) ) = 551/10.02

lambda^hat = 54.99
ii)
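As a quick numerical check (not part of the original solution), the posterior mean from part (b) and the credibility-weighted form can be compared directly in plain Python, using the figures from part (c):

```python
# Check of answer 1: Poisson(lambda) data with an Exp(1/mu) prior gives a
# Gamma(sum(xi) + 1, n + 1/mu) posterior; its mean equals the credibility
# estimate Z*X_bar + (1-Z)*mu with Z = n/(n + 1/mu).
sum_x, n, mu = 550, 10, 50          # data from part (c)

alpha_post = sum_x + 1              # posterior Gamma shape
rate_post = n + 1 / mu              # posterior Gamma rate
lam_hat = alpha_post / rate_post    # posterior mean (quadratic loss)

Z = n / (n + 1 / mu)                # credibility factor
cred = Z * (sum_x / n) + (1 - Z) * mu

print(round(lam_hat, 2))            # 54.99
```

The two expressions agree exactly, confirming the credibility decomposition algebra above.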

answer 2 )

X|lambda ~ Exp(lambda)

X_bar = (X1 , X2, ... Xn)

here , n= 20 and sum(1,n):xi = 24

we know , for the gamma(alpha , lambda) prior :

prior mean : alpha / lambda = 1

alpha = lambda ...(A)

and prior variance : alpha / lambda^2 = 1/2

from (A) ,

lambda / lambda^2 = ½

lambda = 2

so , alpha = 2

therefore ,

prior distribution : lambda ~ gamma ( 2,2 )

L(X|lambda) = product(1,20): f(x|lambda)

= lambda* exp(-lambda*x1) * lambda* exp(-lambda*x2) * ... * lambda* exp(-lambda*xn)

= lambda^20 * exp(-lambda*sum(1,20):xi)

f(lambda) = lambda* exp(-2*lambda) * constant


f(lambda|X_bar) is proportional to lambda^20 * exp(-lambda*sum(1,20):xi) *lambda* exp(-
2*lambda)

f(lambda|X_bar) is proportional to lambda^21 * exp(-lambda*(2+ sum(1,20):xi) )

lambda|X_bar ~ gamma(22 , 26)

answer 3 )

X|lambda ~ Exp(lambda)

lambda ~ gamma( alpha' , lambda' )

L(lambda) = product(1,n):f(X|lambda)

= lambda^n * exp(-lambda*sum(1,n):xi)

f(lambda) = lambda^(alpha’ -1 ) * exp(-lambda’ * lambda)

f(lambda|X_bar) ∝ lambda^( n+alpha’ -1) * exp( -lambda*( sum(1,n): (xi) + lambda’))

lambda|X_bar ~ gamma( n + alpha' , sum(1,n):xi + lambda' )

answer 4)

i)

theta ~ beta( alpha, beta)

given – mean = 0.2 , variance = 0.25^2

0.2 = alpha / (alpha +beta)

alpha + beta = 5*alpha

beta = 4*alpha ......(A)

0.25^2 = (alpha*beta) / ( (alpha+beta)^2 *( alpha +beta +1) )

0.25^2 / 0.2 = (4*alpha)/ (5*alpha *(5*alpha +1 ))

0.25^2 / 0.16 = 1/ (5*alpha +1)

5*alpha +1 = 2.56

alpha = 0.312
putting value of alpha in equation A :

beta = 1.248

ii )

X|theta ~ bin(50, theta)

L(theta) = 50 choose 12 * theta^12 * (1-theta)^38

= theta^12 * (1-theta)^38 * constant

f(theta)= theta^(alpha – 1 ) * (1-theta)^(beta-1)

f(theta|X_bar) ∝ theta^(12+alpha-1) * (1-theta)^(38+beta-1)

∝ theta^(11 + alpha) *(1-theta)^(37+beta)

theta|X_bar ~ beta(12 + alpha , beta +38)

substituting the values of alpha and beta :

theta|X_bar ~ beta(12.312 , 39.248)

E(theta|X_bar) = 12.312 / (12.312+39.248) = 0.2387
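A short sketch (plain Python, no external libraries) reproducing parts (i) and (ii): the prior parameters are recovered from the stated mean and standard deviation, then updated with the 12 successes out of 50:

```python
# Answer 4 check: Beta prior fitted by moments (mean 0.2, sd 0.25),
# then a conjugate update with x = 12 successes out of n = 50 trials.
mean, var = 0.2, 0.25 ** 2

# From mean = a/(a+b) and var = a*b / ((a+b)^2 * (a+b+1)):
# a + b = mean*(1-mean)/var - 1, split in proportion mean : (1-mean)
total = mean * (1 - mean) / var - 1
a = total * mean          # alpha = 0.312
b = total * (1 - mean)    # beta  = 1.248

x, n = 12, 50
a_post, b_post = a + x, b + n - x          # Beta posterior parameters
post_mean = a_post / (a_post + b_post)     # Bayes estimate under quadratic loss
```

`post_mean` evaluates to about 0.2388, matching E(theta|X_bar) above.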

iii )

theta|X_bar ~ beta ( x+ alpha , n-x+beta)

mean =( x+ alpha)/ (n+alpha+beta)

= (x/n) * ( n/(n+alpha+beta) ) + ( alpha/(alpha+beta) ) * ( (alpha+beta)/(n+alpha+beta) )

here ,

z = n / ( n+ alpha + beta )

iv) a)

Z = 0.9697

as Z tends to 1 , more credibility is placed on the direct data (own risk experience)

b)

1) as the prior standard deviation increases , the prior is more variable , so confidence in the prior decreases and more weight is given to the direct data ; Z increases .
2) as n increases , more credibility is placed on the direct data ; Z increases

c)

the limiting value of Z is 1 as sigma (or n) increases , meaning we rely increasingly on the direct data .

answer 6 )

X : number of claims from drivers

i)
X1 ~ P(lambda) and X2 ~ P(2*lambda)
L = ( (exp(-lambda) * lambda^n1) / n1! ) * ( (exp(-2*lambda) * (2*lambda)^n2) / n2! )
L = exp(-3*lambda) * lambda^(n1+n2) * constant
log(L) = -3*lambda + (n1+n2)*ln(lambda) + constant
d log(L) / d lambda = -3 + (n1+n2)/lambda
putting d log(L)/d lambda = 0
(n1+n2)/lambda = 3
lambda^hat = (n1 + n2) / 3
ii)
a) lambda~ Exp(v)
f(lambda) = v* exp(-v*lambda) = exp(-v*lambda) * constant
f( lambda | X_bar) ∝ exp(-lambda(v+3)) * lambda^(n1+n2)

lambda|X_bar ~ gamma ( n1 + n2 + 1 , v+3 )

under quadratic loss : lambdahat = mean of posterior distribution


= (n1 + n2 + 1) / (v+3)

credibility premium:
((n1+n2) / 3) * (3/(v+3)) + (1/v) * (v/(v+3))

so , Z = 3/(v+3)

answer 7 )

L(lambda, d) = (lambda – d)^2 + d^2

E(L(lambda,d)) = E( (lambda - d)^2 + d^2 )

= E( lambda^2 - 2*lambda*d + 2*d^2 )

= E(lambda^2) - 2*d*E(lambda) + 2*d^2

= V(lambda) + (E(lambda))^2 - 2*d*E(lambda) + 2*d^2
= alpha/beta^2 + (alpha/beta)^2 - 2*d*(alpha/beta) + 2*d^2
E(L(lambda,d)) = (alpha*(alpha+1))/beta^2 - (2*d*alpha)/beta + 2*d^2

d(E(L(lambda,d))) / d(d) = -2*alpha/beta + 4*d = 0

4*d = 2*alpha/beta

d^hat = alpha/(2*beta)

d^2(E(L(lambda,d))) / d(d)^2 = 4 > 0

so E(L(lambda,d)) is minimised at d^hat = alpha/(2*beta)
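The stationary point can be sanity-checked numerically; the sketch below uses illustrative values alpha = 3, beta = 2 (my own, not from the question) and minimises E[L(lambda, d)] over a grid of d:

```python
# Answer 7 check: E[L(lambda,d)] = alpha*(alpha+1)/beta^2 - 2*d*alpha/beta + 2*d^2
# should be minimised at d = alpha/(2*beta).
alpha, beta = 3.0, 2.0                        # illustrative prior parameters

def expected_loss(d):
    return alpha * (alpha + 1) / beta**2 - 2 * d * alpha / beta + 2 * d**2

grid = [i / 10000 for i in range(0, 20001)]   # crude grid search, d in [0, 2]
d_best = min(grid, key=expected_loss)

d_closed_form = alpha / (2 * beta)            # 0.75
```

The grid minimum agrees with the closed-form d^hat = alpha/(2*beta).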

answer 8)

i) a prior distribution is conjugate when the prior and posterior distributions belong to the same family


ii) f(x) = lambda*exp(-lambda*x)

L= lambda^n *exp(-lambda*sum(1,n):xi)

lambda~ gamma (alpha’ , lambda’)


f(lambda) = lambda^(alpha’ – 1 ) * exp(-lambda*lambda’)

f(lambda|X_bar) ∝ lambda^(alpha' + n - 1) * exp( -lambda*( lambda' + sum(1,n):xi ) )

lambda|X_bar ~ gamma( alpha' + n , lambda' + sum(1,n):xi )

iii) a)
lambda ~ gamma( alpha , s )
E(1/lambda) = INT(0,inf): (1/lambda) * ( s^alpha * lambda^(alpha-1) * exp(-lambda*s) / (alpha-1)! ) d(lambda)
= ( s^alpha / (alpha-1)! ) * INT(0,inf): lambda^(alpha-2) * exp(-s*lambda) d(lambda)
= ( s^alpha / (alpha-1)! ) * ( (alpha-2)! / s^(alpha-1) )
= ( s^alpha * (alpha-2)! ) / ( (alpha-1)! * s^(alpha-1) )
E(1/lambda) = s / (alpha-1)

b)
X|lambda ~ exp(lambda)
lambda~ gamma ( alpha , s)
lamda|X_bar ~ gamma (n + alpha , s+ sum(1,n):xi)
E(1/lambda) =( s+ sum(1,n):xi ) / (n + alpha – 1)

credibility premium :
((sum(1,n)xi)/n)* (n/(n+alpha-1)) + (s/(alpha-1)) * ((alpha-1)/(n+alpha-1))

so ,
Z= n/(n+alpha-1)
iv) s_A / (alpha_A - 1) = 3 , so s_A^2 / (alpha_A - 1)^2 = 9
V(1/lambda) = s_A^2 / ( (alpha_A - 1)^2 * (alpha_A - 2) ) = 9 / (alpha_A - 2) = 0.5^2
9 = 0.25 * (alpha_A - 2)
alpha_A - 2 = 36
alpha_A = 38
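Part (iii)(a)'s identity E(1/lambda) = s/(alpha - 1) can also be verified by direct numerical integration of the gamma density; the values alpha = 3, s = 4 below are illustrative, not from the question:

```python
import math

# Answer 8(iii) check: for lambda ~ Gamma(alpha, s),
# E(1/lambda) = INT (1/lam) * s^alpha * lam^(alpha-1) * exp(-s*lam) / Gamma(alpha) d(lam)
# should equal s / (alpha - 1).
alpha, s = 3, 4

def integrand(lam):
    return (1 / lam) * s**alpha * lam**(alpha - 1) * math.exp(-s * lam) / math.gamma(alpha)

# trapezoidal rule on (0, 10]; the integrand vanishes at both ends for alpha = 3
h = 1e-4
xs = [h * i for i in range(1, 100001)]
approx = h * (sum(integrand(x) for x in xs) - integrand(xs[-1]) / 2)

exact = s / (alpha - 1)   # = 2
```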

answer 9 )

X|theta ~ N(theta , sigma^2)

L(theta) = product(1,n): f(xi|theta)

= product(1,n): exp( -(xi - theta)^2 / (2*sigma^2) ) / (sigma*sqrt(2*pi))

= exp( (-1/(2*sigma^2)) * sum(1,n):(xi^2 + theta^2 - 2*theta*xi) ) * constant

= exp( (-1/(2*sigma^2)) * (n*theta^2 - 2*theta*sum(1,n):xi) ) * constant

for prior :

theta ~ N(mu , sigma2^2)

f(theta) = (1/(sigma2*sqrt(2*pi))) * exp( (-1/2) * ((theta-mu)/sigma2)^2 )

f(theta) = exp( (-1/(2*sigma2^2)) * (theta - mu)^2 ) * constant

f(theta) = exp( (-1/(2*sigma2^2)) * (theta^2 - 2*theta*mu) ) * constant

f(theta|X_bar) ∝ exp[ {(-1/(2*sigma^2)) * (n*theta^2 - 2*theta*sum(1,n):xi)} - {(1/(2*sigma2^2)) * (theta^2 - 2*theta*mu)} ]

f(theta|X_bar) ∝ exp[ (-1/2) * ( (n/sigma^2) + (1/sigma2^2) ) * ( theta^2 - 2*theta*( ((sum(1,n):xi)/sigma^2) + (mu/sigma2^2) ) / ( (n/sigma^2) + (1/sigma2^2) ) ) ]

theta|X_bar ~ N( ( ((sum(1,n):xi)/sigma^2) + (mu/sigma2^2) ) / ( (n/sigma^2) + (1/sigma2^2) ) , 1 / ( (n/sigma^2) + (1/sigma2^2) ) )

ii ) under quadratic loss : theta^hat

= ( (( sum(1,n):xi )/ sigma^2 ) + (mu / sigma2^2) )/ ( (1/sigma2^2) + (n/sigma^2) )

iii ) theta^hat= [ ((sum(1,n):xi)*sigma2^2)+ (mu*sigma^2) ] / [ (n*sigma2^2) + (sigma^2) ]

credibility premium:
[ (n*sigma2^2) / ( (n*sigma2^2) +sigma^2 ) ] * [ (sum(1,n):xi ) / n ] + [ sigma^2 /
( (n*sigma2^2) + ( sigma^2) ) ] * (mu)

here , z= (n*sigma2^2) / ( (n*sigma2^2) +sigma^2 )

company A :

Z= (5*800) /( 5* 800+500) = 0.8889

credibility premium : ( (0.8889*439) + (0.1111*400) ) * 1.25 = 543.33

Company B:

Z = (5*600)/(5*600+350) = 0.8955

credibility premium : (0.8955 * 356 + 0.1045*300) *1.25 = 437.685
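A quick recomputation of part (iii)'s figures, applying the same credibility formula to both companies (a sketch, assuming the figures quoted above: n = 5 years, own means 439 and 356, prior means 400 and 300, a 25% loading):

```python
# Answer 9(iii) check: Z = n*sigma2^2 / (n*sigma2^2 + sigma^2) and
# premium = (Z * own_mean + (1 - Z) * prior_mean) * loading.
def credibility_premium(n, sigma2_sq, sigma_sq, own_mean, prior_mean, loading=1.25):
    Z = n * sigma2_sq / (n * sigma2_sq + sigma_sq)
    return Z, (Z * own_mean + (1 - Z) * prior_mean) * loading

Z_A, prem_A = credibility_premium(5, 800, 500, 439, 400)
Z_B, prem_B = credibility_premium(5, 600, 350, 356, 300)
```

With the same formula for both, `prem_A` evaluates to about 543.33 and `prem_B` to about 437.69.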

answer 11 )

ii)

X|p ~ bin(N , p)

p ~ U(0,1)

a) because there is no information about the prior distribution , we take a uniform prior
L(X|p) = (N choose m) * p^m * (1-p)^(N-m)
L(X|p) = p^m * (1-p)^(N-m) * constant

f(p) = 1 / (1-0) = 1

f(p|X_bar) ∝ p^m * (1-p)^(N-m)

p|X_bar ~ beta( m+1 , N-m+1 )

b) under all or nothing loss , p^hat = mode of posterior dist

p^hat = (m+1-1) / ( (m+1) + (N-m+1) - 2 )
p^hat = m/N
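Part (b)'s mode can be checked by maximising the Beta(m+1, N-m+1) kernel numerically; the sample figures m = 3, N = 10 are illustrative, not from the question:

```python
# Answer 11 check: with a U(0,1) prior, the posterior is Beta(m+1, N-m+1),
# whose mode is m/N -- the same as the maximum likelihood estimate.
m, N = 3, 10                                  # illustrative data

def kernel(p):
    return p**m * (1 - p)**(N - m)            # posterior kernel under a flat prior

grid = [i / 10000 for i in range(1, 10000)]   # p in (0, 1)
p_mode = max(grid, key=kernel)
```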

answer 12 )

P(X=k) = theta^k * (1-theta) ; k = 0 ,1 , 2 ....... ; 0<theta<1

f(theta) ∝ (theta^(alpha -1 ) )* (1-theta) ^(alpha-1) ; alpha>0

i) a) L(theta) = product(1,n): ( theta^xi * (1-theta) ) = theta^(sum(1,n):xi) * (1-theta)^n

logL(theta) = (sum(1,n):xi) * ln(theta) + n * ln(1-theta)
d(logL(theta)) / d(theta) = ((sum(1,n):xi)/theta) - (n/(1-theta))

putting d(logL(theta)) / d(theta) = 0

(sum(1,n):xi) - ((sum(1,n):xi) * theta) - n*theta = 0
sum(1,n):xi = theta * ((sum(1,n):xi) + n)
theta^hat = (sum(1,n):xi) / ((sum(1,n):xi) + n)

b) f(theta|X_bar) ∝ theta^(sum(1,n):xi) * (1-theta)^n * theta^(alpha-1) * (1-theta)^(alpha-1)

theta|X_bar ~ beta( alpha + sum(1,n):xi , n + alpha )

c)

answer 14 )

X: claim amount
X|mu ~ N(mu , 50^2)
mu~ N(300,20^2)

i) P( mu < 270 ) = P( (mu-300)/20 < (270-300)/20 )

= P( Z < -1.5 ) = P( Z > 1.5 )
= 1 - P( Z <= 1.5 )
= 0.06681 [ from tables ]

ii) a)
mu | X_bar ~ N ( [ ((2700/50^2) + (300/20^2)) / ((10/50^2) + (1/20^2)) ] ,
[ 1 / ((10/50^2) + (1/20^2)) ] )
mu | X_bar ~ N( 281.54 , 12.40^2 )

b)
P( mu < 270 | X_bar ) = P( Z < (270 - 281.54)/12.40 )
= P( Z < -0.93 ) = 1 - P( Z <= 0.93 )
= 0.1762

answer 16 )

0.015 = a / (a+b)

(a+b) = 66.67 * a

b = 65.67 * a ..... (A)

and we have ,

0.005^2 = (a*b) / ( (a+b)^2 * (a+b+1) )

substituting values from above ,

0.005^2 = (0.015 * 65.67*a) / ( (66.67*a) * (66.67*a + 1) )

0.00169 = 1 / (66.67*a + 1)

66.67*a + 1 = 591.01

a = 8.85

putting value of a in (A)

b = 581.18

X|q ~ bin(n , q) , with 58 claims observed

L(X|q) ∝ q^58 * (1-q)^4442

q ~ beta( 8.85 , 581.18 )

f(q) = q^7.85 * (1-q)^580.18 * constant

f(q|X_bar) ∝ q^(58 + 7.85) * (1-q)^(4442 + 580.18)

q|X_bar ~ beta( 66.85 , 5023.18 )

answer 18 )

i) X_bar|p ~ bin( n , p )
and p ~ beta(alpha,beta)

f(X_bar | p ) = (n choose k) * p^k * (1-p)^(n-k)


L = p^k * (1-p)^(n-k) * constant

f(p) = p^(alpha-1) * (1-p)^(beta-1)

f(p |X_bar) ∝ p^k * (1-p) ^ (n-k) *p^(alpha -1 ) * (1-p) ^(beta-1)

p|X_bar ~ beta ( k + alpha , n-k+beta)

ii) X ~ beta( a , b )

E(1/X) = INT(0,1): (1/x) * x^(a-1) * (1-x)^(b-1) * ( (a+b-1)! / ((a-1)! * (b-1)!) ) dx

= ( (a+b-1)! / ((a-1)! * (b-1)!) ) * INT(0,1): x^(a-2) * (1-x)^(b-1) dx

= ( (a+b-1)! / ((a-1)! * (b-1)!) ) * ( (a-2)! * (b-1)! ) / (a+b-2)!

E(1/X) = (a+b-1) / (a-1)

answer 19 )

P(p=0.4) = 0.6 , P(p=0.75) = 0.4

X_bar | p ~ bin(6, p )

P(X=x | p) = (6 choose x) * p^x * (1-p)^(6-x)

P(X=4 | p=0.4) = (6 choose 4) * 0.4^4 * 0.6^2 = 0.1382

P(X=4 | p=0.75) = (6 choose 4) * 0.75^4 * 0.25^2 = 0.2966

P(p=0.4 | X=4) = ( 0.6 * 0.1382 ) / ( 0.6*0.1382 + 0.4*0.2966 ) = 0.4114

P(p=0.75 | X=4) = 1 - 0.4114 = 0.5886
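The two-point posterior can be reproduced exactly with a small sketch using only the standard library:

```python
from math import comb

# Answer 19 check: two-point prior on p, binomial likelihood with X = 4 of 6.
prior = {0.4: 0.6, 0.75: 0.4}
x, n = 4, 6

like = {p: comb(n, x) * p**x * (1 - p)**(n - x) for p in prior}
evidence = sum(prior[p] * like[p] for p in prior)
posterior = {p: prior[p] * like[p] / evidence for p in prior}
```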

answer 20 )

X: no . of claims per day

X_bar | mu ~ P(mu)

and

mu ~ gamma( 11 , 0.22 )

L(mu) = product(1,n): ( exp(-mu) * mu^xi / xi! )

L(mu) = exp(-mu*n) * mu^(sum(1,n):xi) * constant

f(mu) = mu^10 * exp(-0.22*mu) * constant

f(mu|X_bar) ∝ mu^(10 + sum(1,n):xi) * exp(-mu*(n+0.22))

mu|X_bar ~ gamma( 11 + sum(1,n):xi , n+0.22 ) ~ gamma( 641 , 10.22 )

under all or nothing loss , mu^hat = mode of posterior dist
mu^hat = (641-1)/10.22 = 62.622

answer 21 )

i) Y | theta ~ U(0 , theta) , theta > 0

f(theta) = alpha * (beta^alpha) * theta^(-(1+alpha)) , theta > beta
= 0 , otherwise

L(theta) = 1/theta , for theta > y1

f(theta) = theta^(-(1+alpha)) * constant

f(theta | Y_bar) ∝ ( theta^(-(1+alpha)) ) / theta , for theta > max(beta , y1)
f(theta | Y_bar) ∝ theta^(-(1+alpha+1)) , for theta > max(beta , y1)

ii) f(theta | Y_bar) ∝ theta^(-(1+alpha+n)) , theta > max(beta , y1 , y2 , .... , yn)

answer 22)

X_bar | p ~ bin( 8 , p )

P( p = 0.5 ) = 0.8 ; P( p ~ U(0.5 , 1) ) = 0.2

P(X=x | p) = (8 choose x) * p^x * (1-p)^(8-x)

P(X=7 | p=0.5) = (8 choose 7) * 0.5^7 * 0.5 = 8 * 0.5^8 = 0.03125

P(X=7 | p ~ U(0.5,1)) = INT(0.5,1): (8 choose 7) * p^7 * (1-p) * 2 dp = 0.2179
[ the factor 2 is the density of U(0.5,1) ]

P( p=0.5 | X=7 ) = (0.8 * 0.03125) / ( 0.8*0.03125 + 0.2*0.2179 ) = 0.365

answer 23)

X: claim amount per annum

X_bar | theta ~ N( theta , 200^2 )

theta ~ N( 600 , 50^2 )

L = product(1,n): f(xi|theta) = product(1,n): exp( -(xi - theta)^2 / 80000 ) / (200*sqrt(2*pi))

= exp( -( sum(1,n):xi^2 + n*theta^2 - 2*theta*sum(1,n):xi ) / 80000 )

= exp( -( n*theta^2 - 2*theta*sum(1,n):xi ) / 80000 ) * constant

f(theta) = exp( -(theta - 600)^2 / (2*50^2) ) * constant

answer 25 )

X: no of claims in a year

X_bar | mu ~ P(mu)

mu ~ gamma( 10 , 2 )

L(mu) = product(1,n): ( (exp(-mu) * mu^xi) / xi! ) = exp(-n*mu) * mu^(sum(1,n):xi) * constant

f(mu) = mu^9 * exp(-2*mu) * constant

f(mu|X_bar) ∝ mu^(9 + sum(1,n):xi) * exp(-mu*(2+n))

mu|X_bar ~ gamma( sum(1,n):xi + 10 , 2+n )

~ gamma (18,3)

ANSWER 26 )

i) X : no of claims per month

X_bar | lambda ~ P(lambda)
lambda ~ exp(5)

L = exp(-n*lambda) * lambda^(sum(1,n):xi) * constant

f(lambda) = exp(-5*lambda) * constant

f(lambda|X_bar) ∝ exp(-lambda*(n+5)) * lambda^(sum(1,n):xi)

lambda|X_bar ~ gamma( sum(1,n):xi + 1 , n+5 )

ii) a) Bayesian estimate under quadratic loss = mean of posterior


= (sum xi + 1 ) / ( n+ 5)
b ) Bayesian estimate under all or nothing loss = mode of posterior dist
= sum xi / (n+5)

iii) n = 5 , sum(1,n):xi = 1
lambda|X_bar ~ gamma( 2 , 10 ) ⇒ 20*lambda | X_bar ~ chi-squared_4

under absolute error loss , lambda^hat = median of posterior : P(lambda <= M) = P(lambda > M) = 0.5
20*M = 3.357 [ median of chi-squared_4 , from tables ]
M = 0.168
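The median of the Gamma(2, 10) posterior (equivalently chi-squared_4 / 20) can be found by bisection on its closed-form CDF, 1 - e^(-10m)(1 + 10m):

```python
import math

# Answer 26(iii) check: for lambda ~ Gamma(2, 10) the CDF is
# F(m) = 1 - exp(-10*m) * (1 + 10*m); the median solves F(m) = 0.5.
def cdf(m):
    return 1 - math.exp(-10 * m) * (1 + 10 * m)

lo, hi = 0.0, 2.0
for _ in range(60):          # bisection to machine-level accuracy
    mid = (lo + hi) / 2
    if cdf(mid) < 0.5:
        lo = mid
    else:
        hi = mid
median = (lo + hi) / 2
```

This agrees with the tables-based value 3.357/20 ≈ 0.168.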

answer 27 )

X: no of claims registered per week

X_Bar | lambda ~ P(lambda)

P( lambda = 1 ) = 0.4 ; P( lambda = 2 ) = 0.6

P(X=x | lambda) = ( exp(-lambda) * lambda^x) / x!

P(X=3 | lambda=1) = ( exp(-1) * 1^3) / 3! = 0.061

P(X=3 | lambda=2) = ( exp(-2) * 2^3) / 3! = 0.18

P(lambda = 1 | x=3) = ( 0.4 * 0.061 ) / ( 0.4*0.061 + 0.6*0.18 ) = 0.184

P(lambda =2 | x=3) = 1- 0.184 = 0.816

lambda :            1       2
P(lambda | x=3) :   0.184   0.816
cdf :               0.184   1

squared error loss :mean : 1.816

zero one loss : mode : 2

absolute loss : median : 2

answer 28 )

i) X: no of claims arising each month


X_bar | lambda ~ P(lambda)

prior mean : alpha' / lambda' = 250

prior variance : alpha' / lambda'^2 = 45

on solving these equations , we get

lambda' = 250/45 = 5.556
alpha' = 250 * 5.556 = 1388.89

lambda ~ gamma( 1388.89 , 5.556 )

lambda|X_bar ~ gamma( sum(1,n):xi + alpha' , n + lambda' )
lambda|X_bar ~ gamma( 11888.89 , 55.556 )

under quadratic loss : lambda^hat = 11888.89 / 55.556

lambda^hat = 214.0

ii) Y : no of claims per day


Y_bar | lambda ~ P(lambda/30)

P(lambda = 230) = 0.2 , P(lambda = 250) = 0.5 , P(lambda = 270) = 0.3

P(Y=y | lambda) = ( exp(-lambda/30) * (lambda/30)^y ) / y!

P(Y=7 | lambda=230) = 0.144
P(Y=7 | lambda=250) = 0.133
P(Y=7 | lambda=270) = 0.117

P(lambda=230 | Y=7) = (0.2*0.144) / ( 0.2*0.144 + 0.5*0.133 + 0.3*0.117 ) = 0.221

P(lambda=250 | Y=7) = 0.509
P(lambda=270 | Y=7) = 0.270

lambda :            230     250     270
P(lambda | Y=7) :   0.221   0.509   0.270
cdf :               0.221   0.730   1

under quadratic loss : lambda^hat = 230*0.221 + 250*0.509 + 270*0.270 = 250.98
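Part (ii)'s table can be reproduced directly in plain Python:

```python
from math import exp, factorial

# Answer 28(ii) check: three-point prior on the monthly rate lambda,
# Poisson(lambda/30) likelihood for the daily count, observed y = 7.
prior = {230: 0.2, 250: 0.5, 270: 0.3}
y = 7

like = {lam: exp(-lam / 30) * (lam / 30)**y / factorial(y) for lam in prior}
evidence = sum(prior[lam] * like[lam] for lam in prior)
posterior = {lam: prior[lam] * like[lam] / evidence for lam in prior}

lam_hat = sum(lam * posterior[lam] for lam in prior)   # posterior mean
```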

answer 29)

P(L) = 1/3

P(L’) = 2/3

A= late to work by more than 20 mins

P(A|L) = P(X > 20) = exp(-20/15) = 0.2636

P(A|L') = P(Y > 20) = (25-20) / (25-0) = 0.2

P(L|A) = ( (1/3) * 0.2636 ) / ( (1/3)*0.2636 + (2/3)*0.2 )

P(L|A) = 0.397

answer 30)
X_bar | p ~ bin( n, p)

p~ beta( alpha , beta)

L= (n choose k) * ( p^k ) * ( ( 1-p) ^ ( n-k) )

L = ( p^k ) * ( ( 1-p) ^ ( n-k) )* constant

f( p ) = p ^ ( alpha -1 ) * (1-p) ^ ( beta -1 )

f(p|X_bar) ∝ p^ ( k+alpha-1) * ( 1-p) ^ (n-k+beta-1)

p|X_bar ~ beta ( k+alpha , n-k+beta )

under all or nothing loss = phat = ( k+alpha-1 ) /(n+beta + alpha -2)

credibility premium :

( ( k /n) * ( n/( n+beta+alpha-2)) ) +( (( alpha-1)/(beta+alpha-2))*( (beta+alpha-2)/(n+beta+alpha -2)) )

so, Z = n/( n+beta+alpha-2)

answer 31 )

X : heights of adult males

X_bar|mu ~ N ( mu , 15^2)

mu ~ N( 187,10^2)

i)

P(mu > 180)

= P( Z > (180-187)/10 )

= P( Z > -0.7 ) = P( Z < 0.7 )

= 0.75804

ii) mu|X_bar ~ N( 66.5812/0.3656 , 1/0.3656 )

mu|X_bar ~ N( 182.14 , 2.735 )

P(mu > 180 | X_bar) = P( Z > (180 - 182.14)/sqrt(2.735) )

= P( Z > -1.29 )
= P( Z < 1.29 ) = 0.90147

answer 32

X|theta ~ bin(n,theta)
theta ~ beta ( alpha , beta )

mean = mu , var= sigma^2

i) doubt
ii) a)
f(X_bar |theta) = (n choose d)* ( theta^d) * ( 1-theta)^(n-d)
f(theta) = theta ^(alpha -1) *(1-theta)^(beta-1) * constant
f(theta|X_bar) ∝ theta ^( d+alpha -1) *(1-theta)^(n-d+beta-1)

theta|X_bar ~ beta ( d+ alpha , n-d+beta)

b) mean = (d+alpha) / (n+alpha+beta)

credibility premium :
[ (d/n) * ( n/(alpha+n+beta) ) ] + [ ( alpha/(alpha+beta) ) * ( (alpha+beta)/(alpha+n+beta) ) ]

so , here
Z= n/(alpha+n+beta)

answer 33 )

X: random number calls every day

X_bar | beta ~ P( beta)

we have , prior mean = 200 and prior variance = 50^2

so , on solving , we get

beta~ gamma ( 16,0.08)

L(Beta , X_bar ) = exp( - beta) * beta^240 * constant

f( beta) = beta ^ 15 * exp( -beta*0.08)

f(beta|X_bar ) ∝ exp(-beta*0.08) * beta^(255)

Beta|X_bar ~ beta( 265, 1.08)

under quadratic loss : betahat = (266) / 1.08

= 237.037

answer 34)

X_bar | p ~ bin( n , p )

f(p) ∝ [p(1-p)]^a

i) f(X_bar , p) = p^(sum xi) * (1-p)^(nm - sum xi) * constant
f(p) = [p(1-p)]^a

f(p|X_bar) ∝ p^(sum xi + a) * (1-p)^(a + mn - sum xi)

p|X_bar ~ beta( sum xi + a + 1 , mn - sum xi + a + 1 )

ii) L(p , X_bar) = p^(sum xi) * (1-p)^(nm - sum xi) * constant

log(L) = (sum xi)*log(p) + (mn - sum xi)*log(1-p)
d(logL) / d(p) = (sum xi)/p - (mn - sum xi)/(1-p)
on equating with zero ,
p^hat = sum xi / (m*n)

p|X_bar ~ beta( sum xi + a + 1 , mn - sum xi + a + 1 )


L=
doubt

answer 35)

X_bar |p ~ bin( 100,p)

p~ beta ( 2,8)

f(X_bar|p) = ( (100 choose 13) * p^13 * (1-p)^87 ) * ( ((100+g) choose 20) * p^20 * (1-p)^(80+g) )

L(p , X_bar) = p^33 * (1-p)^(167+g) * constant

f(p) = p * (1-p) ^7

f(p|X_bar) ∝ (p^34) * ((1-p) ^(174+g) )

p|X_bar ~ beta ( 35 , 175+g)

under quadratic loss : mean of posterior = p^hat


p^hat = 35/(210+g)

under all or nothing loss : mode of posterior = p^hat

p^hat = 34/(208+g)

note : p^hat under all or nothing loss ≠ p^hat under quadratic loss

answer 36)

theta ~ U ( 0,1 )

f(X_bar|theta) = ( (10 choose 5) * theta^5 * (1-theta)^5 ) * ( (20 choose 11) * theta^11 * (1-theta)^9 )

L(theta , X_bar) = theta^16 * (1-theta)^14 * constant


f(theta) = 1

f(theta|X_bar) ∝ theta ^16 *(1-theta) ^ 14

theta|X_bar ~ beta ( 17,15)

i) theta hat = 17/(17+15)


ii) f(theta) = (theta ^(alpha -1)) * (1-theta)^(beta-1)
= (theta*(1-theta))^(alpha-1)

f(theta|X_bar) ∝ theta^(alpha+15) *(1-theta)^(alpha+13)


theta|X_bar~ beta ( alpha + 16 , alpha + 14)

under all or nothing loss :

theta^hat = (alpha+15) / (2*alpha + 28)

answer 37)
X: no of claims

X_bar|theta ~ N(theta , tau^2)

theta ~ N( mu , sigma^2)

i) posterior : see the standard normal-normal result in the book (as derived in answer 9)

ii) under quadratic loss : a basic question , not worked here

answer 38)

P(p = 2/6) = 1/3 ; P(p = 1/6) = 2/3

P( X=4 |p) = (10choose 4) * p^4 *(1-p)^6

P( X= 4 |p = 2/6) = (10 choose 4) * (2/6) ^4 * ( 4/6)^6 = 0.2276

P( X= 4 |p = 1/6) = (10 choose 4) * (1/6) ^4 * ( 5/6)^6= 0.0542
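The solution stops after the two likelihoods; as a sketch of the remaining step (not in the original), the posterior probabilities follow from Bayes' theorem exactly as in answers 19 and 27, assuming the two stated values of p are exhaustive:

```python
from math import comb

# Answer 38, final step: P(p | X=4) = P(X=4|p) * P(p) / evidence.
prior = {2 / 6: 1 / 3, 1 / 6: 2 / 3}
x, n = 4, 10

like = {p: comb(n, x) * p**x * (1 - p)**(n - x) for p in prior}
evidence = sum(prior[p] * like[p] for p in prior)
posterior = {p: prior[p] * like[p] / evidence for p in prior}
```

This gives P(p = 2/6 | X=4) ≈ 0.677 and P(p = 1/6 | X=4) ≈ 0.323.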
