Module 2.1 Slides PDF

This document provides an overview of parameter estimation techniques, including the method of moments and maximum likelihood estimation. It defines an estimator as a statistic used to estimate unknown population parameters based on a sample. The method of moments matches sample moments to their corresponding population moments to estimate parameters. Maximum likelihood estimation selects the parameters that maximize the likelihood function, defined as the joint probability density of the sample. Examples are provided for both techniques applied to binomial and normal distributions.


Parameter Estimation

Module 2.1: Estimation Techniques


© University of New South Wales
School of Risk and Actuarial Studies

Parameter Estimation

Parameter estimation
  Definition of an estimator
  The method of moments
  Example & exercise

Maximum likelihood estimator
  Maximum likelihood estimation
  Examples: MME and MLE

Estimator III: Bayesian estimator
  Introduction
  Bayesian estimation
  Examples: Bayesian estimation

Parameter estimation

Definition of an Estimator

- Problem of statistical estimation: a population has some characteristics that can be described by a r.v. $X$ with density $f_X(\cdot \mid \theta)$.

- The density has an unknown parameter (or set of parameters) $\theta$.

- We observe values of the random sample $X_1, X_2, \ldots, X_n$ from the population $f_X(\cdot \mid \theta)$. Denote the observed sample values by $x_1, x_2, \ldots, x_n$.

- We then estimate the parameter (or some function of the parameter) based on this random sample.

- Any statistic, i.e., a function $T(X_1, X_2, \ldots, X_n)$ of observable random variables whose values are used to estimate $\tau(\theta)$, where $\tau(\theta)$ is some function of the parameter $\theta$, is called an estimator of $\tau(\theta)$.

- The value $\hat{\theta}$ of the statistic evaluated at the observed sample values $x_1, x_2, \ldots, x_n$ is called a (point) estimate.

- For example: $T(X_1, X_2, \ldots, X_n) = \bar{X} = \frac{1}{n}\sum_{j=1}^{n} X_j$ is an estimator, and $\hat{\theta} = 0.23$ is a point estimate.

- Note: $\theta$ can be a vector, in which case the estimator is a set of equations.

The Method of Moments


Let $X_1, X_2, \ldots, X_n$ be a random sample from the population with density $f_X(\cdot \mid \theta)$, which we will assume has $k$ parameters, say $\theta = [\theta_1, \theta_2, \ldots, \theta_k]^\top$.

The method of moments estimator (MME) procedure is:

1. Equate the first $k$ sample moments to the corresponding $k$ population moments;

2. Equate the $k$ population moments to the parameters of the distribution;

3. Solve the resulting system of simultaneous equations.

The method of moments point estimates ($\hat{\theta}$) are the values of the estimator evaluated at the data set.


- Denote the sample moments by:

  $$m_1 = \frac{1}{n}\sum_{j=1}^{n} x_j, \quad m_2 = \frac{1}{n}\sum_{j=1}^{n} x_j^2, \quad \ldots, \quad m_k = \frac{1}{n}\sum_{j=1}^{n} x_j^k,$$

- and the population moments by:

  $$\mu_1(\theta_1, \ldots, \theta_k) = E[X], \quad \mu_2(\theta_1, \ldots, \theta_k) = E\left[X^2\right], \quad \ldots, \quad \mu_k(\theta_1, \ldots, \theta_k) = E\left[X^k\right].$$

- The system of equations to solve for $(\theta_1, \theta_2, \ldots, \theta_k)$ is given by:

  $$m_j = \mu_j(\theta_1, \theta_2, \ldots, \theta_k), \quad \text{for } j = 1, 2, \ldots, k.$$

  Solving this provides the point estimate $\hat{\theta}$.

Example: MME & Binomial distribution


Suppose $X_1, X_2, \ldots, X_n$ is a random sample from a $\text{Bin}(n, p)$ distribution, with known parameter $n$.

Question: Use the method of moments to find a point estimator of $\theta = p$.

1. Solution: Equate the population moment to the sample moment:

   $$E[X] = \frac{1}{n}\sum_{j=1}^{n} x_j = \bar{x}.$$

2. Equate the population moment to the parameter:

   $$E[X] = n\,p.$$

3. Then the method of moments estimator is (i.e., solving it):

   $$\bar{x} = n\,p \quad \Longrightarrow \quad \hat{p} = \bar{x}/n.$$
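
As a quick numerical illustration (my own sketch, not part of the original slides; the seed, the sample size of 500, the binomial parameter and the true $p$ are arbitrary choices), the estimator $\hat{p} = \bar{x}/n$ can be checked on simulated data:

```python
import numpy as np

rng = np.random.default_rng(2024)              # arbitrary seed for reproducibility
n_trials = 10                                  # the known binomial parameter n
p_true = 0.3                                   # true parameter p (unknown in practice)
x = rng.binomial(n_trials, p_true, size=500)   # observed sample x_1, ..., x_500

# Method of moments: match E[X] = n * p with the sample mean x-bar.
p_hat = x.mean() / n_trials
print(p_hat)                                   # should be close to 0.3
```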

Exercise: MME & Normal distribution


Suppose $X_1, X_2, \ldots, X_n$ is a random sample from a $N(\mu, \sigma^2)$ distribution.

Question: Use the method of moments to find point estimators of $\mu$ and $\sigma^2$.

1. Solution: Equate the population moments to the sample moments:

   $$\underbrace{E[X]}_{\text{population moment}} = \underbrace{\frac{1}{n}\sum_{j=1}^{n} x_j}_{\text{sample moment}} = \bar{x}, \qquad \underbrace{E\left[X^2\right]}_{\text{population moment}} = \underbrace{\frac{1}{n}\sum_{j=1}^{n} x_j^2}_{\text{sample moment}}.$$


2. Equate the population moments to the parameters:

   $$E[X] = \mu \quad \text{and} \quad E\left[X^2\right] = \text{Var}(X) + E[X]^2 = \sigma^2 + \mu^2.$$

3. The method of moments estimators are:

   $$\hat{\mu} = \bar{x}, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{j=1}^{n} x_j^2 - \bar{x}^2 = \frac{1}{n}\sum_{j=1}^{n}(x_j - \bar{x})^2 = \frac{n-1}{n}\,s^2,$$

   * using $s^2 = \frac{\sum_{j=1}^{n}(x_j - \bar{x})^2}{n-1}$, the sample variance.

   Note: $E\left[\hat{\sigma}^2\right] \neq \sigma^2$ (biased estimator); more on this later.
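
A similar sketch for the normal case (again my own illustration, not from the slides; the true $\mu = 5$, $\sigma = 2$ and the sample size are arbitrary), showing that the MME of $\sigma^2$ equals the biased $\frac{n-1}{n}s^2$:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(loc=5.0, scale=2.0, size=1000)   # simulated sample; true mu = 5, sigma = 2

mu_hat = x.mean()                               # MME of mu: the first sample moment
sigma2_hat = (x**2).mean() - x.mean()**2        # MME of sigma^2: m_2 - m_1^2 (biased)

# Relation to the usual sample variance s^2: sigma2_hat = (n-1)/n * s^2
n = len(x)
s2 = x.var(ddof=1)
print(mu_hat, sigma2_hat, (n - 1) / n * s2)     # last two values coincide
```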

Maximum likelihood estimator

Maximum Likelihood function

- Another (and the most widely used) estimator is the maximum likelihood estimator.

- First, we need to define the likelihood function.

- If $x_1, x_2, \ldots, x_n$ are drawn from a population with a parameter $\theta$ (where $\theta$ could be a vector of parameters), then the likelihood function is given by:

  $$L(\theta; x_1, x_2, \ldots, x_n) = f_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n),$$

- where $f_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n)$ is the joint probability density of the random variables $X_1, X_2, \ldots, X_n$.

Maximum Likelihood Estimation

- Let $L(\theta) = L(\theta; x_1, x_2, \ldots, x_n)$ be the likelihood function for $X_1, X_2, \ldots, X_n$.

- The set of parameters $\hat{\theta} = \hat{\theta}(x_1, x_2, \ldots, x_n)$ (note: a function of the observed values) that maximizes $L(\theta)$ is the maximum likelihood estimate of $\theta$.

- The random variable $\hat{\theta}(X_1, X_2, \ldots, X_n)$ is called the maximum likelihood estimator.

Likelihood function

When $X_1, X_2, \ldots, X_n$ is a random sample from $f_X(x \mid \theta)$, the likelihood function is (using the i.i.d. property):

$$L(\theta; x_1, x_2, \ldots, x_n) = \prod_{j=1}^{n} f_X(x_j \mid \theta).$$

This is just the product of the densities evaluated at each of the observations in the random sample.


Maximum Likelihood Estimation

- If the likelihood function contains $k$ parameters, so that:

  $$L(\theta_1, \theta_2, \ldots, \theta_k; \boldsymbol{x}) = f_X(x_1 \mid \theta)\, f_X(x_2 \mid \theta) \cdots f_X(x_n \mid \theta),$$

  then (under certain regularity conditions) the point where the likelihood is a maximum is a solution of the $k$ equations:

  $$\frac{\partial L(\theta_1, \theta_2, \ldots, \theta_k; \boldsymbol{x})}{\partial \theta_1} = 0, \quad \frac{\partial L(\theta; \boldsymbol{x})}{\partial \theta_2} = 0, \quad \ldots, \quad \frac{\partial L(\theta; \boldsymbol{x})}{\partial \theta_k} = 0.$$

- Normally, the solutions to this system of equations give the global maximum, but to make sure you should check the second derivative (or Hessian) conditions and the boundary conditions for a global maximum.

- Consider the case of estimating two parameters, say $\theta_1$ and $\theta_2$.

- Define the gradient vector:

  $$D(L) = \begin{bmatrix} \dfrac{\partial L}{\partial \theta_1} \\[2mm] \dfrac{\partial L}{\partial \theta_2} \end{bmatrix}$$

  and define the Hessian matrix:

  $$H(L) = \begin{bmatrix} \dfrac{\partial^2 L}{\partial \theta_1^2} & \dfrac{\partial^2 L}{\partial \theta_1 \partial \theta_2} \\[2mm] \dfrac{\partial^2 L}{\partial \theta_1 \partial \theta_2} & \dfrac{\partial^2 L}{\partial \theta_2^2} \end{bmatrix}.$$

- From calculus we know that the maximizing choice of $\theta_1$ and $\theta_2$ should satisfy not only:

  $$D(L) = \mathbf{0},$$

- but also that $H$ should be negative definite, which means:

  $$\begin{bmatrix} h_1 & h_2 \end{bmatrix} \begin{bmatrix} \dfrac{\partial^2 L}{\partial \theta_1^2} & \dfrac{\partial^2 L}{\partial \theta_1 \partial \theta_2} \\[2mm] \dfrac{\partial^2 L}{\partial \theta_1 \partial \theta_2} & \dfrac{\partial^2 L}{\partial \theta_2^2} \end{bmatrix} \begin{bmatrix} h_1 \\ h_2 \end{bmatrix} < 0,$$

  for all $[h_1, h_2] \neq \mathbf{0}$.

Log-Likelihood function
- Generally, maximizing the log-likelihood function is easier.

- Not surprisingly, we define the log-likelihood function as:

  $$\ell(\theta_1, \theta_2, \ldots, \theta_k; \boldsymbol{x}) = \log\left(L(\theta_1, \theta_2, \ldots, \theta_k; \boldsymbol{x})\right) = \log\left(\prod_{j=1}^{n} f_X(x_j \mid \theta)\right) = \sum_{j=1}^{n} \log\left(f_X(x_j \mid \theta)\right).$$

- Maximizing the log-likelihood function is equivalent to maximizing the likelihood function (log is monotonically increasing).

MLE procedure

The general procedure to find the ML estimator is:

1. Determine the likelihood function $L(\theta_1, \theta_2, \ldots, \theta_k; \boldsymbol{x})$;

2. Determine the log-likelihood function $\ell(\theta_1, \theta_2, \ldots, \theta_k; \boldsymbol{x}) = \log\left(L(\theta_1, \theta_2, \ldots, \theta_k; \boldsymbol{x})\right)$;

3. Equate the derivatives of $\ell(\theta_1, \theta_2, \ldots, \theta_k; \boldsymbol{x})$ w.r.t. $\theta_1, \theta_2, \ldots, \theta_k$ to zero ($\Rightarrow$ global/local minimum/maximum);

4. Check whether the second derivative is negative (maximum), and check the boundary conditions.
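
In practice, steps 1-4 are often carried out numerically by minimizing the negative log-likelihood with a general-purpose optimizer. A minimal sketch of this (my own illustration, not from the slides, using normally distributed data with assumed true values $\mu = 1.5$, $\sigma = 0.8$):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=1.5, scale=0.8, size=200)    # simulated data; true mu = 1.5, sigma = 0.8

def neg_log_likelihood(params):
    mu, log_sigma = params                       # optimize log(sigma) so that sigma stays positive
    sigma = np.exp(log_sigma)
    return -np.sum(norm.logpdf(x, loc=mu, scale=sigma))

res = minimize(neg_log_likelihood, x0=[0.0, 0.0])   # quasi-Newton (BFGS) by default
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)    # close to the closed-form MLEs x.mean() and x.std(ddof=0)
```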


Example: MLE and Poisson

1. Suppose $X_1, X_2, \ldots, X_n$ are i.i.d. Poisson($\lambda$). The likelihood function is given by:

   $$L(\lambda; \boldsymbol{x}) = \prod_{j=1}^{n} f_X(x_j \mid \lambda) = \left(\frac{e^{-\lambda}\lambda^{x_1}}{x_1!}\right)\left(\frac{e^{-\lambda}\lambda^{x_2}}{x_2!}\right)\cdots\left(\frac{e^{-\lambda}\lambda^{x_n}}{x_n!}\right) = e^{-n\lambda}\,\frac{\lambda^{x_1}}{x_1!}\cdot\frac{\lambda^{x_2}}{x_2!}\cdots\frac{\lambda^{x_n}}{x_n!}.$$

2. So that, taking the log of both sides, we get:

   $$\ell(\lambda; \boldsymbol{x}) = \sum_{j=1}^{n} \log\left(f_X(x_j \mid \lambda)\right) = -n\lambda + \log(\lambda)\sum_{k=1}^{n} x_k - \sum_{k=1}^{n} \log(x_k!).$$


Now we need to maximize this log-likelihood function with respect to the parameter $\lambda$.

3. Taking the first-order condition (FOC) with respect to $\lambda$ we have:

   $$\frac{\partial}{\partial \lambda}\,\ell(\lambda) = 0 \quad \Longleftrightarrow \quad -n + \frac{1}{\lambda}\sum_{k=1}^{n} x_k = 0.$$

   This gives the maximum likelihood estimate (MLE):

   $$\hat{\lambda} = \frac{1}{n}\sum_{k=1}^{n} x_k = \bar{x},$$

   which equals the sample mean.

4. Check the second derivative condition to ensure a global maximum.
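
A quick check of this closed-form result on simulated data (an illustration under an assumed true value $\lambda = 4.2$, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.poisson(lam=4.2, size=1000)   # simulated counts; true lambda = 4.2

lambda_hat = x.mean()                 # MLE of lambda: the sample mean
print(lambda_hat)
# The second derivative of the log-likelihood is -sum(x)/lambda^2 < 0, confirming a maximum.
```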

Examples: MME and MLE

Example: MLE and Normal


- Suppose $X_1, X_2, \ldots, X_n$ are i.i.d. Normal($\mu, \sigma^2$), where both parameters are unknown.

- The p.d.f. is given by:

  $$f_X(x) = \frac{1}{\sigma\sqrt{2\pi}}\,\exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right).$$

1. Thus the likelihood function is given by:

   $$L(\mu, \sigma; \boldsymbol{x}) = \prod_{k=1}^{n} \frac{1}{\sigma\sqrt{2\pi}}\,\exp\left(-\frac{1}{2}\left(\frac{x_k-\mu}{\sigma}\right)^{2}\right).$$

- Question: Find the MLE of $\mu$ and $\sigma^2$.



Exercise: MLE and Normal

Solution:

2. Its log-likelihood function is:

   $$\ell(\mu, \sigma; \boldsymbol{x}) = \sum_{k=1}^{n} \log\left(\frac{1}{\sigma\sqrt{2\pi}}\,\exp\left(-\frac{1}{2}\left(\frac{x_k-\mu}{\sigma}\right)^{2}\right)\right) = -n\log(\sigma) - \frac{n}{2}\log(2\pi) - \frac{1}{2\sigma^2}\sum_{k=1}^{n}(x_k - \mu)^2.$$

   * using $\log(1/a) = \log(a^{-1}) = -\log(a)$ with $a = \sigma$, and $\log(1/\sqrt{b}\,) = \log(b^{-0.5}) = -0.5\log(b)$ with $b = 2\pi$.

   Take the derivatives w.r.t. $\mu$ and $\sigma$ and set them equal to zero.


$$\frac{\partial}{\partial \mu}\,\ell(\mu, \sigma; \boldsymbol{x}) = \frac{1}{\sigma^2}\sum_{k=1}^{n}(x_k - \mu) = 0 \;\Longleftrightarrow\; \sum_{k=1}^{n} x_k - n\mu = 0 \;\Longleftrightarrow\; \hat{\mu} = \bar{x}$$

$$\frac{\partial}{\partial \sigma}\,\ell(\mu, \sigma; \boldsymbol{x}) = -\frac{n}{\sigma} + \frac{\sum_{k=1}^{n}(x_k - \mu)^2}{\sigma^3} = 0 \;\Longleftrightarrow\; n = \frac{\sum_{k=1}^{n}(x_k - \mu)^2}{\sigma^2} \;\Longleftrightarrow\; \hat{\sigma}^2 = \frac{1}{n}\sum_{k=1}^{n}(x_k - \bar{x})^2.$$

(See 9.7 and 9.8 of W+ (7ed) for further details.)
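
The closed-form MLEs can be verified directly and compared against scipy's built-in normal fit, which also maximizes the likelihood (a sketch with assumed true values, not from the slides):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=1.5, scale=0.8, size=200)

mu_hat = x.mean()                          # closed-form MLE of mu
sigma2_hat = ((x - x.mean())**2).mean()    # closed-form MLE of sigma^2 (divide by n, not n-1)

loc, scale = norm.fit(x)                   # scipy's maximum likelihood fit returns (mu, sigma)
print(mu_hat, sigma2_hat, loc, scale**2)   # the two approaches agree
```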



Example: MME & MLE and Gamma


- You may not always obtain closed-form solutions for the parameter estimates with the maximum likelihood method. An example of such a problem is estimating the parameters of the Gamma distribution by MLE.

- As we will see in the next slides, MLE yields one parameter estimate in closed form, but not the second.

- To find the MLE, the remaining estimate must be obtained numerically by solving a non-linear equation. This can be done by employing an iterative numerical approximation (e.g., Newton-Raphson).

In such cases an initial value may be needed, so another means of estimation, such as the method of moments, may be applied first; its estimate is then used as the starting value.

Question: Consider $X_1, X_2, \ldots, X_n$ i.i.d. Gamma($\alpha, \beta$). Find the MME of the Gamma distribution.

$$f_X(x) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\,x^{\alpha-1} e^{-\beta x}; \qquad E[X^r] = \frac{\Gamma(\alpha + r)}{\beta^{r}\,\Gamma(\alpha)};$$

$$M_X(t) = E\left[e^{tX}\right] = \left(\frac{\beta}{\beta - t}\right)^{\alpha}; \qquad \text{Var}(X) = \frac{\alpha}{\beta^{2}}.$$

Solution:
1. Equate sample moments to population moments:

   $$\mu_1 = \left.M_X^{(1)}(t)\right|_{t=0} = E[X] = \bar{x}, \qquad \mu_2 = \left.M_X^{(2)}(t)\right|_{t=0} = E\left[X^2\right] = \frac{1}{n}\sum_{i=1}^{n} x_i^2.$$

2. Equate population moments to the parameters:

   $$\mu_1 = \frac{\alpha}{\beta}, \qquad \mu_2 = \frac{\alpha(\alpha+1)}{\beta^2} = \frac{\alpha^2}{\beta^2}\cdot\frac{\alpha+1}{\alpha} = \mu_1^{2}\left(1 + \frac{1}{\alpha}\right).$$


3. Therefore, the method of moments estimates are given by:

   $$\frac{\mu_2}{\mu_1^2} = 1 + \frac{1}{\alpha} \;\Longleftrightarrow\; \frac{1}{\alpha} = \frac{\mu_2 - \mu_1^2}{\mu_1^2} \;\Longleftrightarrow\; \alpha = \frac{\mu_1^2}{\mu_2 - \mu_1^2}.$$

   Using $\mu_1$ and $\mu_2$ from step 1, we have:

   $$\mu_2 - \mu_1^2 = \frac{1}{n}\sum_{i=1}^{n} x_i^2 - \bar{x}^2 = \hat{\sigma}^2.$$

   So the estimators are:

   $$\hat{\alpha} = \frac{\bar{x}^2}{\hat{\sigma}^2} \quad\text{and}\quad \hat{\beta} = \frac{\bar{x}}{\hat{\sigma}^2}.$$


Question: Find the maximum likelihood estimates.

1. Solution: Now, $X_1, X_2, \ldots, X_n$ are i.i.d. Gamma($\alpha, \beta$), so the likelihood function is:

   $$L(\alpha, \beta; \boldsymbol{x}) = \prod_{i=1}^{n} \frac{\beta^{\alpha}}{\Gamma(\alpha)}\,x_i^{\alpha-1} e^{-\beta x_i}.$$

2. The log-likelihood function is then:

   $$\ell(\alpha, \beta; \boldsymbol{x}) = -n\log\left(\Gamma(\alpha)\right) + n\alpha\log(\beta) + (\alpha - 1)\sum_{i=1}^{n}\log(x_i) - \beta\sum_{i=1}^{n} x_i.$$


3. Maximizing this:

   $$\frac{\partial}{\partial \alpha}\,\ell(\alpha, \beta; \boldsymbol{x}) = -n\,\frac{\Gamma'(\alpha)}{\Gamma(\alpha)} + n\log(\beta) + \sum_{i=1}^{n}\log(x_i) = 0$$

   $$\frac{\partial}{\partial \beta}\,\ell(\alpha, \beta; \boldsymbol{x}) = \frac{n\alpha}{\beta} - \sum_{i=1}^{n} x_i = 0.$$

   The second equation is easy to solve:

   $$\hat{\beta} = \frac{n\,\hat{\alpha}}{\sum_{i=1}^{n} x_i},$$

   but numerical (iterative) techniques are needed to solve the first equation.

Example: MLE and Uniform


- Suppose $X_1, X_2, \ldots, X_n$ are i.i.d. $U[0, \theta]$, i.e., $f_X(x) = \frac{1}{\theta}$ for $0 \le x \le \theta$, and zero otherwise. Here the range of $x$ depends on the parameter $\theta$.

- The likelihood function can be expressed as:

  $$L(\theta; \boldsymbol{x}) = \left(\frac{1}{\theta}\right)^{n} \prod_{k=1}^{n} I_{\{0 \le x_k \le \theta\}},$$

  where $I_{\{0 \le x_k \le \theta\}}$ is an indicator function taking the value 1 if $x_k \in [0, \theta]$ and zero otherwise.

- Question: How do we find the maximum of this likelihood function?


Let us first graph the likelihood function.

(Figure: plot of the likelihood function $L(\theta; \boldsymbol{x})$ against $\theta$ for a small sample, with the observations marked on the horizontal axis; $L$ is zero for $\theta < x_{(n)}$ and decreasing in $\theta$ thereafter.)


Solution: Because of the non-linearity of the indicator function, we cannot use calculus to maximize this function, i.e., setting the FOC equal to zero.

You can maximize it by looking at its properties:

- $\prod_{k=1}^{n} I_{\{0 \le x_k \le \theta\}}$ can only take the values 0 and 1; note that it takes the value 0 if $\theta < x_{(n)}$ and 1 otherwise;

- $(1/\theta)^n$ is a decreasing function of $\theta$;

- Hence, the likelihood function is maximized at the lowest value of $\theta$ for which $\prod_{k=1}^{n} I_{\{0 \le x_k \le \theta\}} = 1$, i.e.:

  $$\hat{\theta} = \max\{x_1, x_2, \ldots, x_n\} = x_{(n)}.$$

Estimator III: Bayesian estimator

Introduction
We have seen:

- Method of moments estimator:
  Idea: the first $k$ moments of the fitted distribution and of the sample are the same.

- Maximum likelihood estimator:
  Idea: the probability of the sample, given a class of distributions, is highest for this set of parameters.

Warning: Bayesian estimation is hard to understand, partly due to the non-standard notation used in Bayesian estimates.

Pure Bayesian interpretation: suppose you have, a priori, a prior belief about a distribution; then you observe data $\Rightarrow$ more information about the distribution.

Example, frequentist interpretation: let $X_i \sim \text{Ber}(\theta)$ indicate whether individual $i$ lodges a claim with the insurer:

- $\sum_{i=1}^{T} X_i = Y \sim \text{Bin}(T, \theta)$ is the number of car accidents;

- The probability of an insured having a car accident depends on adverse selection;

- A new insurer does not know the amount of adverse selection in its pool;

- Now, let $\theta \sim \pi$, with $\pi$ the $\text{Beta}(a, b)$ distribution of the risk $\theta$ among individuals (i.e., representing adverse selection);

- Use this for estimating the parameter $\Rightarrow$ what is our prior for $\theta$?

Similar idea: Bayesian updating, in the case of time-varying parameters:

- Prior: last year's estimated claim distribution;
- Data: this year's claims;
- Posterior: revised estimated claim distribution.

Notation for Bayesian estimation


- Under this approach, we assume that $\theta$ is a random quantity with density $\pi(\theta)$, called the prior density. (This is the usual notation, rather than $f_\Theta(\theta)$.)

- A sample $\boldsymbol{X} = \boldsymbol{x}$ $(= [x_1, x_2, \ldots, x_T]^\top)$ is taken from its population, and the prior density is updated using the information drawn from this sample by applying Bayes' rule. This updated prior is called the posterior density, which is the conditional density of $\theta$ given the sample $\boldsymbol{X} = \boldsymbol{x}$, written $\pi(\theta \mid \boldsymbol{x})$ $(= f_{\Theta \mid \boldsymbol{X}}(\theta \mid \boldsymbol{x}))$.

  So we are using a conditional r.v., $\theta \mid \boldsymbol{X}$, associated with the multivariate distribution of $\theta$ and $\boldsymbol{X}$.

- Use, for example, the posterior mean $E[\theta \mid \boldsymbol{x}]$ as the Bayesian estimator.

Bayesian estimation, theory

- First, let us define a loss function $L(\hat{\theta}; \theta)$, where $\hat{\theta} = T$ is an estimator of $\tau(\theta)$, with:

  $$L(\hat{\theta}; \theta) \ge 0, \ \text{for every } \hat{\theta}; \qquad L(\hat{\theta}; \theta) = 0, \ \text{when } \hat{\theta} = \theta.$$

  Interpretation of the loss function: for reasonable functions, a lower value of the loss function $\Rightarrow$ a better estimator.

- Examples of loss functions:

  - Mean squared error: $L(\hat{\theta}, \theta) = (\hat{\theta} - \theta)^2$ (most commonly used);
  - Absolute error: $L(\hat{\theta}, \theta) = |\hat{\theta} - \theta|$.

- Next, we define the risk function, the expected loss:

  $$R_{\hat{\theta}}(\theta) = E_{\hat{\theta}}\left[L(\hat{\theta}; \theta)\right] = \int L\big(\hat{\theta}(\boldsymbol{x}); \theta\big)\, f_{\boldsymbol{X} \mid \theta}(\boldsymbol{x} \mid \theta)\, d\boldsymbol{x}.$$

  Note: the estimator is a random variable (e.g., $T = \hat{\mu} = \bar{X}$, $\tau(\theta) = \theta = \mu$) depending on the observations.

  Interpretation of the risk function: the loss function is a random variable; taking the expectation returns a number, given $\theta$.

  Note: $R_{\hat{\theta}}(\theta)$ is a function of $\theta$ (of which we only know the prior density).

Define the Bayes risk under prior $\pi$ as:

$$B_\pi(\hat{\theta}) = E_\pi\left[R_{\hat{\theta}}(\theta)\right] = \int_{\Theta} R_{\hat{\theta}}(\theta)\, \pi(\theta)\, d\theta.$$

Goal: minimize the Bayes risk.

Bayesian estimation, estimators


Rewriting the Bayes risk we have:

$$B_\pi(\hat{\theta}) = \int_{\Theta} R_{\hat{\theta}}(\theta)\,\pi(\theta)\,d\theta = \int_{\Theta} \int L\big(\hat{\theta}(\boldsymbol{x}), \theta\big)\, f_{\boldsymbol{X}\mid\theta}(\boldsymbol{x}\mid\theta)\,d\boldsymbol{x}\;\pi(\theta)\,d\theta$$

$$= \int_{\Theta} \int L\big(\hat{\theta}(\boldsymbol{x}), \theta\big)\, f_{\boldsymbol{X}}(\boldsymbol{x})\,\pi(\theta \mid \boldsymbol{x})\,d\boldsymbol{x}\,d\theta = \int \underbrace{\left[\int_{\Theta} L\big(\hat{\theta}(\boldsymbol{x}), \theta\big)\,\pi(\theta \mid \boldsymbol{x})\,d\theta\right]}_{r(\hat{\theta}\mid\boldsymbol{x})} f_{\boldsymbol{X}}(\boldsymbol{x})\,d\boldsymbol{x}$$

$$= \int r(\hat{\theta}\mid\boldsymbol{x})\, f_{\boldsymbol{X}}(\boldsymbol{x})\,d\boldsymbol{x},$$

using $f_{\boldsymbol{X}\mid\theta}(\boldsymbol{x}\mid\theta)\,\pi(\theta) = f_{\boldsymbol{X}}(\boldsymbol{x})\,\pi(\theta\mid\boldsymbol{x})$.

Implying: minimizing $B_\pi(\hat{\theta})$ is equivalent to minimizing $r(\hat{\theta}\mid\boldsymbol{x})$ for all $\boldsymbol{x}$.


- For the squared error loss function we have, for all $\boldsymbol{x}$:

  $$\min_{\hat{\theta}}\, B_\pi(\hat{\theta}) \;\Longleftrightarrow\; \min_{\hat{\theta}}\, r(\hat{\theta}\mid\boldsymbol{x}) \;\Longleftrightarrow\; \frac{\partial\, r(\hat{\theta}\mid\boldsymbol{x})}{\partial \hat{\theta}} = 0 \;\Longleftrightarrow\; \int_{\Theta} -2\big(\theta - \hat{\theta}(\boldsymbol{x})\big)\,\pi(\theta\mid\boldsymbol{x})\,d\theta = 0$$

  $$\Longleftrightarrow\; \hat{\theta}_B(\boldsymbol{x}) = \int_{\Theta} \theta\,\pi(\theta\mid\boldsymbol{x})\,d\theta \;\Longleftrightarrow\; \hat{\theta}_B(\boldsymbol{x}) = E_{\theta\mid\boldsymbol{x}}[\theta].$$

  Interpretation: the Bayesian estimator under the squared error loss function is the mean of the posterior density, i.e., $\hat{\theta}_B = E[\theta\mid\boldsymbol{x}]$!

Bayesian estimation, derivation


- The posterior density (i.e., $f_{\Theta\mid\boldsymbol{X}}(\theta\mid\boldsymbol{x})$) is derived as:

  $$\pi(\theta \mid \boldsymbol{x}) = \frac{f_{\boldsymbol{X}\mid\theta}(x_1, x_2, \ldots, x_T \mid \theta)\,\pi(\theta)}{\int f_{\boldsymbol{X}\mid\theta}(x_1, x_2, \ldots, x_T \mid \theta)\,\pi(\theta)\,d\theta} = \frac{f_{\boldsymbol{X}\mid\theta}(x_1, x_2, \ldots, x_T \mid \theta)\,\pi(\theta)}{f_{\boldsymbol{X}}(x_1, x_2, \ldots, x_T)} \qquad (1)$$

  * Using Bayes' formula: $\Pr(A_i \mid B) = \frac{\Pr(B \mid A_i)\Pr(A_i)}{\sum_{j=1}^{n}\Pr(B \mid A_j)\Pr(A_j)}$, with $A_1, \ldots, A_n$ a complete partition of $\Omega$.

  ** Using the law of total probability: $\Pr(A) = \sum_{i=1}^{n}\Pr(A \mid B_i)\Pr(B_i)$.

- Hence, the denominator is the marginal density of $\boldsymbol{X} = [x_1, x_2, \ldots, x_T]^\top$ (a constant given the observations!).

Notation: $\propto$ means "is proportional to", i.e., $f(x) \propto g(x) \Longleftrightarrow f(x) = c \cdot g(x)$.

- We have that the posterior is given by:

  $$\pi(\theta \mid \boldsymbol{x}) \propto f_{\boldsymbol{X}\mid\theta}(x_1, x_2, \ldots, x_T \mid \theta)\,\pi(\theta). \qquad (2)$$

- Either use equation (1) (difficult/tedious integral!) or (2). Equation (2) can be used to find the posterior density by:

  I. finding $c$ such that $c\int f_{\boldsymbol{X}\mid\theta}(x_1, x_2, \ldots, x_T \mid \theta)\,\pi(\theta)\,d\theta = 1$; or

  II. finding a (special) distribution that is proportional to $f_{\boldsymbol{X}\mid\theta}(x_1, x_2, \ldots, x_T \mid \theta)\,\pi(\theta)$ (fastest way, if possible!).

Estimation procedure:

1. Find the posterior density using (1) (difficult/tedious integral!) or (2).

2. Compute the Bayesian estimator (using the posterior) under a given loss function (under the mean squared loss function: take the expectation of the posterior distribution).

Examples: Bayesian estimation

Example Bayesian estimation: Bernoulli-Beta


Let $X_1, X_2, \ldots, X_T$ be i.i.d. Bernoulli($\theta$), i.e., $(X_i \mid \Theta = \theta) \sim \text{Bernoulli}(\theta)$. Assume the prior density of $\theta$ is Beta($a, b$), so that:

$$\pi(\theta) = \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\,\theta^{a-1}(1-\theta)^{b-1}.$$

We know that the conditional density (the density conditional on the true value of $\theta$) of our data is given by:

$$f_{\boldsymbol{X}\mid\theta}(\boldsymbol{x} \mid \theta) = \theta^{x_1}(1-\theta)^{1-x_1}\,\theta^{x_2}(1-\theta)^{1-x_2}\cdots\theta^{x_T}(1-\theta)^{1-x_T} = \theta^{\sum_{j=1}^{T} x_j}\,(1-\theta)^{T - \sum_{j=1}^{T} x_j} = \theta^{s}(1-\theta)^{T-s}.$$

This is just the likelihood function.

* Simplifying notation, let $s = \sum_{j=1}^{T} x_j$.

Easy method: the posterior density, the density of $\theta$ given $\boldsymbol{X} = \boldsymbol{x}$, is proportional to:

$$\pi(\theta \mid \boldsymbol{x}) \propto f_{\boldsymbol{X}\mid\theta}(x_1, x_2, \ldots, x_T \mid \theta)\,\pi(\theta) = \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\,\theta^{(a+s)-1}(1-\theta)^{(b+T-s)-1}. \qquad (3)$$

I. The posterior density is also obtainable by finding $c$ such that:

$$c\int \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\,\theta^{(a+s)-1}(1-\theta)^{(b+T-s)-1}\,d\theta = 1;$$

the posterior density is then $c \cdot f_{\boldsymbol{X}\mid\theta}(x_1, x_2, \ldots, x_T \mid \theta)\,\pi(\theta)$.

II. However, we observe that (3) is proportional to the p.d.f. of Beta($a+s,\ b+T-s$).

1. Tedious method: to find the posterior density, we first need to find the marginal density of $\boldsymbol{X}$:

   $$f_{\boldsymbol{X}}(\boldsymbol{x}) = \int_{0}^{1} f_{\boldsymbol{X}\mid\theta}(\boldsymbol{x} \mid \theta)\,\pi(\theta)\,d\theta = \int_{0}^{1} \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\,\theta^{(a+s)-1}(1-\theta)^{(b+T-s)-1}\,d\theta = \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\cdot\frac{\Gamma(a+s)\,\Gamma(b+T-s)}{\Gamma(a+b+T)}.$$

   We can use the marginal $f_{\boldsymbol{X}}(\boldsymbol{x})$ to find the posterior density.

Using the marginal from the previous slide, we can derive the posterior density:

$$\pi(\theta \mid \boldsymbol{x}) = \frac{f_{\boldsymbol{X}\mid\theta}(\boldsymbol{x} \mid \theta)\,\pi(\theta)}{f_{\boldsymbol{X}}(\boldsymbol{x})} = \frac{\theta^{s}(1-\theta)^{T-s}\cdot\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\,\theta^{a-1}(1-\theta)^{b-1}}{\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\cdot\frac{\Gamma(a+s)\Gamma(b+T-s)}{\Gamma(a+b+T)}} = \frac{\Gamma(a+b+T)}{\Gamma(a+s)\,\Gamma(b+T-s)}\,\theta^{(a+s)-1}(1-\theta)^{(b+T-s)-1},$$

which is the density of a Beta distribution, Beta($a+s,\ b+T-s$).


2. The mean of the r.v. with the above posterior density then gives the Bayesian estimator of $\theta$:

   $$\hat{\theta}_B = E[\theta \mid \boldsymbol{X} = \boldsymbol{x}] = E[\Theta], \quad \Theta \sim \text{Beta}(a+s,\ b+T-s), \quad\text{i.e.,}\quad \hat{\theta}_B = \frac{a+s}{a+b+T}.$$

   We note that we can write the Bayesian estimator as a weighted average of the prior mean (which is $a/(a+b)$) and the sample mean (which is $s/T$) as follows:

   $$\hat{\theta}_B = \underbrace{\left(\frac{T}{a+b+T}\right)}_{\text{weight sample}}\;\underbrace{\left(\frac{s}{T}\right)}_{\text{sample mean}} \;+\; \underbrace{\left(\frac{a+b}{a+b+T}\right)}_{\text{weight prior}}\;\underbrace{\left(\frac{a}{a+b}\right)}_{\text{prior mean}}.$$
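
A short sketch of the whole Bernoulli-Beta updating step (my own illustration; the prior parameters $a = 2$, $b = 8$, the true $\theta$ and the sample size $T$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(11)
a, b = 2.0, 8.0                            # prior Beta(a, b); illustrative choice
theta_true, T = 0.3, 40
x = rng.binomial(1, theta_true, size=T)    # Bernoulli(theta) claim indicators
s = x.sum()

# Posterior is Beta(a + s, b + T - s); under squared error loss the Bayesian
# estimator is the posterior mean.
theta_bayes = (a + s) / (a + b + T)

# Equivalent weighted-average form: weight on the sample mean vs. the prior mean.
w = T / (a + b + T)
print(theta_bayes, w * (s / T) + (1 - w) * (a / (a + b)))   # the two values coincide
```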
