
Point Estimation

Statistical Methods In Economics-II


Lesson: Point Estimation
Lesson Developer: Kamlesh Aggarwal and Nidhi Aggarwal
College/Department: Department of Economics, SPM College and Mata Sundari College, University of Delhi

Institute of Lifelong Learning, University of Delhi



TABLE OF CONTENTS

Learning Objectives

1. General Concepts of Point Estimation

2. Desirable Properties of Point Estimators
2.1 Unbiased Estimators
2.2 Efficient Estimators
2.3 Consistent Estimators

3. Precision of the Estimate

4. Methods of Point Estimation
4.1 The Method of Moments
4.2 The Method of Maximum Likelihood

Practice Questions


Reference: Jay L. Devore, Probability and Statistics for Engineering and the Sciences, 8th Edition.


POINT ESTIMATION
Learning Objectives

The need to obtain estimates of relevant population parameters in business and economics can be illustrated with numerous examples: a marketing organization may be interested in an estimate of average income in a metropolitan area; a production department may desire an estimate of the percentage of defective articles produced by a new production process; or a bank may want an estimate of average interest rates on mortgages in a certain section of the country. In all of these cases, it is very costly or simply impossible to study the complete universe to get the required information. Further, in such cases exact accuracy is not required, and estimates derived from sample data would probably provide appropriate information to meet the demands of the practical situation. After completing this chapter you will be familiar with the statistical estimation procedures that provide the means of obtaining estimates of population parameters with a desired degree of precision. You will be able to choose the most appropriate value of a parameter (or the values of several parameters) for a given situation from a set of possible alternatives, as we will discuss various desirable properties of estimators and develop the concepts of the sampling distribution of a statistic and the standard error.
Two different types of estimates of population parameters are of interest: 'point estimates' and 'interval estimates'. If we say that the average height of female students in XYZ College is 5.28 feet, we are giving a point estimate. If, on the other hand, we say that the height is 5.28 ± 0.02 feet, that is, that the height lies between 5.26 and 5.30 feet, we are giving an interval estimate. In this chapter we will concentrate on point estimates.

1. General Concepts of Point Estimation


When we use the value of a statistic to estimate a population parameter, we call this point estimation, and we refer to the value of the statistic as a point estimate of the parameter. Correspondingly, we refer to the statistics themselves as point estimators. For example, the sample mean X̄ may be used as a point estimator of the population mean µ, in which case the computed value x̄ is a point estimate of this parameter. Similarly, the sample variance S² may be used as a point estimator of the population variance σ², in which case the computed value s² is a point estimate of this parameter. These estimates are called point estimates because in each case a single number, or a single point on the real axis, is used to estimate the parameter.


Now we will explain that estimators themselves are random variables. Usually we describe a sample of size n by the values x₁, x₂, ..., xₙ of the random variables X₁, X₂, ..., Xₙ. If sampling is with replacement, X₁, X₂, ..., Xₙ would be independent, identically distributed random variables having probability distribution f(x). Their joint distribution would then be

P(X₁ = x₁, X₂ = x₂, ..., Xₙ = xₙ) = f(x₁) f(x₂) ... f(xₙ)

Now we can use the sample values x₁, x₂, ..., xₙ to compute some statistic (mean, variance, etc.) and use this as an estimate of the population parameter. Algebraically, a statistic for a sample of size n can be defined as a function of the random variables X₁, X₂, ..., Xₙ, i.e., g(X₁, X₂, ..., Xₙ). The function g(X₁, X₂, ..., Xₙ), that is, any statistic, is another random variable, whose values can be represented by g(x₁, x₂, ..., xₙ). The same holds true if we have more than one sample. Suppose we take two samples of heights of m male students and n female students at a particular university. We represent the sample values by x₁, x₂, ..., xₘ and y₁, y₂, ..., yₙ respectively. The difference between the two sample mean heights is X̄ - Ȳ, and this is the sensible statistic for estimating µ₁ - µ₂, the difference between the two population mean heights. Now the statistic X̄ - Ȳ is a linear combination of the random variables X₁, X₂, ..., Xₘ and Y₁, Y₂, ..., Yₙ and so is itself a random variable.

Since estimators are random variables, one of the key problems of point estimation is to study their sampling distributions in order to make a comparison among different estimators. For instance, when we estimate the variance of a population on the basis of a random sample, we can hardly expect that the value of S² we get will actually equal σ², but it would be reassuring, at least, to know whether we can expect it to be close. Similarly, suppose we draw a random sample of size n from a normal population with mean value µ. The sample arithmetic mean X̄ is a natural statistic for estimating µ. However, the population median, the average of the two extreme observations in the population and the k% trimmed mean are also equal to µ, since normal distributions are symmetric. So we can consider any of the following estimators for µ:

(a) Estimator θ̂₁ = X̄ = the arithmetic mean

(b) Estimator θ̂₂ = X̃ = the sample median

(c) Estimator θ̂₃ = [min(Xᵢ) + max(Xᵢ)]/2 = the average of the two extreme observations in the sample


(d) Estimator θ̂₄ = X̄tr(10) = the 10% trimmed mean (discard the smallest and largest 10% of the sample and then average the rest)

Each of the estimators (a) to (d) is a reasonable point estimator of µ. Since each uses a different measure of the center of the sample to estimate µ, each estimator will generally give a different estimate for µ.

Example 1: Consider the accompanying 20 observations on the weights of six-year-old children.
24.46 25.61 26.25 26.42 26.66 27.15 27.31 27.54 27.74 27.94
27.98 28.04 28.28 28.49 28.50 28.87 29.11 29.13 29.50 30.88
We assume that the distribution of weights is normal with mean µ. So we can consider θ̂₁ = X̄, θ̂₂ = X̃, θ̂₃ = [min(Xᵢ) + max(Xᵢ)]/2 and θ̂₄ = X̄tr(10) as point estimators for µ. The estimates are 27.793, 27.960, 27.670 and 27.838 respectively, so each estimator gives a different estimate for µ.
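The four estimates can be reproduced directly from the data. A minimal Python sketch (the helper functions and the use of the statistics module are our own illustration choices, not part of the lesson):

from statistics import mean, median

weights = [24.46, 25.61, 26.25, 26.42, 26.66, 27.15, 27.31, 27.54, 27.74, 27.94,
           27.98, 28.04, 28.28, 28.49, 28.50, 28.87, 29.11, 29.13, 29.50, 30.88]

def midrange(data):
    # average of the two extreme observations in the sample
    return (min(data) + max(data)) / 2

def trimmed_mean(data, pct=0.10):
    # discard the smallest and largest pct of the sample, then average the rest
    data = sorted(data)
    k = int(len(data) * pct)
    return mean(data[k:len(data) - k])

print(round(mean(weights), 3))          # 27.793  -- estimator (a)
print(round(median(weights), 3))        # 27.96   -- estimator (b)
print(round(midrange(weights), 3))      # 27.67   -- estimator (c)
print(round(trimmed_mean(weights), 3))  # 27.838  -- estimator (d)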

Which of these estimates is closest to the true value? We cannot answer this without knowing the true value of µ (in which case estimation is unnecessary). Questions that can be answered are: which estimator, when used on other samples of X's, will tend to produce estimates closest to the true value? Which will expose us to the smallest risk? Which will give us the most information at the lowest cost? And so forth. To decide which estimator is most appropriate in a given situation, various statistical properties of estimators can be used.

2. Desirable Properties of Point Estimators


The particular properties of estimators that we shall discuss are unbiasedness,
efficiency and consistency.

2.1 Unbiased Estimators


In the real world there are no perfect estimators that always give the right answer. Thus, it would seem reasonable that an estimator should do so at least on the average, i.e., its expected value should equal the parameter that it is supposed to estimate. If this is the case, the estimator is said to be unbiased; otherwise, it is said to be biased. In other words, if we repeatedly draw random samples from the same population and calculate the same statistic for each sample, the value of the statistic will differ from sample to sample owing to sampling fluctuations, but the expected or mean value of this statistic should be equal to the true parameter value.


Definition: Suppose we denote a point estimator by θ̂. Then θ̂ is an unbiased estimator of the true parameter value θ if the expected value of θ̂ is equal to θ for every possible value of θ. If this does not hold, then θ̂ is a biased estimator of θ. The difference E(θ̂) - θ between the expected value of θ̂ and θ is called the bias of θ̂. It should be noted that the expected value here means the arithmetic mean of the distribution of θ̂, and not any other measure of central value such as its median or mode.
In figure 1 below we picture the distributions of a biased and an unbiased estimator.

Figure 1. The pdf's of a biased estimator and an unbiased estimator for a parameter θ.

In figure 1, the sampling distribution of θ̂₁ is centered at the true parameter value θ, i.e., E(θ̂₁) = θ, while the sampling distribution of θ̂₂ is centered at some other value θ', i.e., E(θ̂₂) = θ' ≠ θ. So θ̂₁ is an unbiased estimator of θ, θ̂₂ is a biased estimator of θ, and the bias of θ̂₂ is (θ' - θ).

One may feel that it is necessary to know the true parameter value to see whether an estimator is biased or unbiased. This is not usually the case, because unbiasedness is a general property of the estimator's sampling distribution (where it is centered) which typically does not depend on any particular parameter value. The following examples will illustrate this:

Example 2: If X, the number of sample successes, is a binomial random variable with parameters n and p, then the sample proportion p̂ = X/n is an unbiased estimator of p, irrespective of the true value of p.

Proof: E(p̂) = E(X/n) = (1/n) E(X) = (1/n)(np) = p

Hence the distribution of the estimator p̂ will be centered at the true value p.


Example 3: Let X₁, X₂, ..., Xₙ be a random sample from a normal population with mean µ and variance σ². Then the estimator X̄ = (1/n) Σ Xᵢ is an unbiased estimator of µ, while σ̂² = (1/n) Σ (Xᵢ - X̄)² is a biased estimator of σ².

Proof: Since X₁, X₂, ..., Xₙ are random variables having the same distribution as the population, which has mean µ, we have

E(Xᵢ) = µ for i = 1, 2, ..., n

Then, since the sample mean is defined as X̄ = (1/n) Σ Xᵢ,

we have, as required,

E(X̄) = (1/n) Σ E(Xᵢ) = (1/n)(nµ) = µ

Hence X̄ is an unbiased estimator of µ irrespective of the true value of µ.


However,

E(σ̂²) = E[(1/n) Σ (Xᵢ - X̄)²]

= (1/n) E[Σ Xᵢ² - n X̄²] = (1/n)[Σ E(Xᵢ²) - n E(X̄²)]

Then, since E(Xᵢ²) = σ² + µ² and E(X̄²) = Var(X̄) + µ² = σ²/n + µ²,

it follows that E(σ̂²) = (1/n)[n(σ² + µ²) - n(σ²/n + µ²)] = ((n - 1)/n) σ²,

which is very nearly σ² only for large values of n (say, n ≥ 30). The desired unbiased estimator is defined by

S² = (1/(n - 1)) Σ (Xᵢ - X̄)², so that E(S²) = σ².

Again, S² is an unbiased estimator of σ² irrespective of the true parameter value.

It can be noted that we have divided the sum of squared deviations by (n - 1) instead of n. The reason is that, by definition, we should have taken deviations from µ rather than from X̄. But we do not know the value of µ, so we have to take deviations from X̄. Since Σ (xᵢ - x̄)² is never larger than Σ (xᵢ - µ)², as shown below, the sum of squared deviations about x̄ underestimates the true sum of squared deviations about µ.


Proof:
Denote Σ (xᵢ - c)² by L. Now L will be minimised when its first derivative with respect to c is zero and its second derivative with respect to c is positive. Differentiating with respect to c we get

dL/dc = 2 Σ (xᵢ - c)(-1) = 0

⇒ Σ xᵢ = nc ⇒ c = (1/n) Σ xᵢ = x̄

d²L/dc² = 2n > 0

Hence Σ (xᵢ - c)² is minimum for c = x̄. So if x̄ ≠ µ, then Σ (xᵢ - µ)² > Σ (xᵢ - x̄)².

In order to correct for this underestimation we divide by (n - 1) rather than n.
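A small simulation makes the underestimation visible. The sketch below (the population parameters, sample size and replication count are arbitrary choices for illustration, assuming a normal population) averages the divide-by-n and divide-by-(n - 1) estimators over many samples:

import random

random.seed(1)
mu, sigma2, n, reps = 10.0, 4.0, 5, 20000
sum_biased = sum_unbiased = 0.0

for _ in range(reps):
    x = [random.gauss(mu, sigma2 ** 0.5) for _ in range(n)]
    xbar = sum(x) / n
    ss = sum((xi - xbar) ** 2 for xi in x)   # sum of squared deviations about xbar
    sum_biased += ss / n                     # estimator that divides by n
    sum_unbiased += ss / (n - 1)             # estimator that divides by n - 1

print(sum_biased / reps)    # close to ((n - 1) / n) * sigma2 = 3.2
print(sum_unbiased / reps)  # close to sigma2 = 4.0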

Now we will discuss two basic difficulties associated with the concept of unbiasedness. The first is that unbiasedness may not be retained under functional transformations, i.e., if θ̂ is an unbiased estimator of θ, it does not necessarily follow that g(θ̂) is an unbiased estimator of g(θ). For example, although S² is an unbiased estimator of σ², S is not an unbiased estimator of σ: taking the square root destroys the property of unbiasedness. The second difficulty is that unbiased estimators are not necessarily unique. The following example will illustrate this:

Example 4: Suppose Y is approximately proportional to x, that is, Y ≈ βx for some value β. So for any fixed x, Y is a random variable having mean value βx. That is, we assume that the mean value of Y is related to x by a line passing through (0, 0), but that the observed value of Y will typically deviate from this line. Given observations (x₁, Y₁), ..., (xₙ, Yₙ), we can consider any of the following three estimators of β:

(1) β̂₁ = Σ Yᵢ / Σ xᵢ

(2) β̂₂ = (1/n) Σ (Yᵢ / xᵢ)

(3) β̂₃ = Σ xᵢYᵢ / Σ xᵢ²

We can show that all three are unbiased.

(1) E(β̂₁) = E(Σ Yᵢ / Σ xᵢ) = (1/Σ xᵢ) Σ E(Yᵢ) = (1/Σ xᵢ) Σ βxᵢ = β


(2) E(β̂₂) = E[(1/n) Σ (Yᵢ / xᵢ)] = (1/n) Σ (1/xᵢ) E(Yᵢ) = (1/n) Σ (1/xᵢ)(βxᵢ) = (1/n)(nβ) = β

(3) E(β̂₃) = E(Σ xᵢYᵢ / Σ xᵢ²) = (1/Σ xᵢ²) Σ xᵢ E(Yᵢ) = (1/Σ xᵢ²) Σ βxᵢ² = β

Similarly, if X₁, X₂, ..., Xₙ is a random sample from a normal distribution with mean µ, then X̄, X̃ and the trimmed mean with any trimming percentage are all unbiased estimators of µ.
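To see numerically that unbiased estimators need not be unique, the following sketch (the xᵢ values, the error distribution and the replication count are invented for illustration) simulates data with E(Yᵢ) = βxᵢ and averages the three estimators of Example 4 over many samples; all three averages come out close to the true β:

import random

random.seed(2)
beta = 2.5
x = [1.0, 2.0, 3.0, 4.0, 5.0]
reps = 20000
totals = [0.0, 0.0, 0.0]

for _ in range(reps):
    y = [beta * xi + random.gauss(0, 1) for xi in x]
    b1 = sum(y) / sum(x)                                                  # estimator (1)
    b2 = sum(yi / xi for xi, yi in zip(x, y)) / len(x)                    # estimator (2)
    b3 = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)  # estimator (3)
    totals = [t + b for t, b in zip(totals, (b1, b2, b3))]

print([t / reps for t in totals])  # each average is close to beta = 2.5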

So the principle of unbiasedness (preferring an unbiased estimator to a biased one)


cannot be invoked to select an estimator. What we now need is a criterion for choosing
among unbiased estimators.

θ̂ is said to be an unbiased estimator of θ if E(θ̂) = θ

2.2 Efficient Estimators


Suppose there is more than one unbiased estimator of θ. Then the question arises: which one is the best? To answer this question we look at the spreads of the distributions of the various unbiased estimators about θ and select the one which has the least spread. In figure 2 below we show the probability density functions (pdf's) of two estimators θ̂₁ and θ̂₂.

Figure 2: Graphs of the pdf's of two different unbiased estimators

It can be seen that both θ̂₁ and θ̂₂ are unbiased estimators of θ, as the pdf of each is centered at θ, but θ̂₂ has more spread than θ̂₁. So we select θ̂₁. θ̂₁ is also called the minimum variance unbiased estimator (MVUE) of θ, as it has the least variance among all unbiased estimators of θ.

Example 5: For a normal population, the sampling distributions of the mean and median
both have the same mean, namely, the population mean. So both are unbiased estimators.


However, the variance of the sampling distribution of the mean is equal to σ²/n, which is smaller than the variance of the sampling distribution of the median, which for large n is approximately (π/2)(σ²/n) ≈ 1.5708 σ²/n.

Therefore, the mean provides a more efficient estimate than the median, and the efficiency of the median relative to the mean is approximately

(σ²/n) / (1.5708 σ²/n) = 2/π ≈ 0.637,

or about 64%. This means that the mean requires only about 64% as many observations as the median to estimate µ with the same reliability.
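The 64% figure can be checked by simulation. In the sketch below (the sample size and the replication count are arbitrary choices), the ratio of the sampling variance of the mean to that of the median for normal samples comes out close to 2/π:

import random
from statistics import mean, median, pvariance

random.seed(3)
n, reps = 101, 20000
means, medians = [], []

for _ in range(reps):
    x = [random.gauss(0, 1) for _ in range(n)]
    means.append(mean(x))
    medians.append(median(x))

# relative efficiency of the median = Var(sample mean) / Var(sample median)
print(pvariance(means) / pvariance(medians))  # close to 2 / pi, i.e. about 0.64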

However, if a choice is to be made among different estimators on the basis of the efficiency criterion, it is quite possible that a biased estimator is sometimes preferable to the MVUE, as, for example, in figure 3 below.

Figure 3: A biased estimator that is preferable to the MVUE.

Here we choose θ̂₁, a biased estimator, as it has a much smaller spread than θ̂₂, which is the MVUE.
However, if θ̂ is not an unbiased estimator of a given parameter θ, we judge its merits and make efficiency comparisons on the basis of the expected or mean squared error (MSE), E[(θ̂ - θ)²], instead of the variance of θ̂. If θ̂ is unbiased, then MSE(θ̂) = V(θ̂), but in general MSE(θ̂) = V(θ̂) + [bias(θ̂)]². So if θ̂₁ is a biased estimator of θ and θ̂₂ is an unbiased estimator of θ, we should compare MSE(θ̂₁) with V(θ̂₂) to make efficiency comparisons.

Question 1: Show that σ̂² is a biased but more efficient estimator of the population variance σ² than S², where σ̂² = (1/n) Σ (Xᵢ - X̄)² and S² = (1/(n - 1)) Σ (Xᵢ - X̄)².


Solution: We have already proved that σ̂² is a biased estimator of the population variance while S² is an unbiased estimator. Now, to make efficiency comparisons, we have to compare MSE(σ̂²) with MSE(S²). Since S² is an unbiased estimator,
MSE(S²) = Var(S²), while MSE(σ̂²) = Var(σ̂²) + [bias(σ̂²)]².
It can be shown that, for a normal population,

Var(S²) = 2σ⁴/(n - 1)

Since by definition σ̂² = ((n - 1)/n) S², it means that

Var(σ̂²) = ((n - 1)/n)² Var(S²) = ((n - 1)/n)² · 2σ⁴/(n - 1) = 2(n - 1)σ⁴/n²

Now the bias of σ̂² is E(σ̂²) - σ² = ((n - 1)/n)σ² - σ² = -σ²/n

Hence MSE(σ̂²) = 2(n - 1)σ⁴/n² + σ⁴/n² = (2n - 1)σ⁴/n²

Comparing the MSEs of the two estimators we get

MSE(σ̂²) - MSE(S²) = (2n - 1)σ⁴/n² - 2σ⁴/(n - 1) = -(3n - 1)σ⁴/[n²(n - 1)] < 0 (since the numerator is always negative and the denominator is always positive).


So MSE(S²) > MSE(σ̂²). This means that although σ̂² is a biased estimator of the population variance, it is more efficient than the unbiased estimator S². Hence, whether we choose σ̂² or S² as an estimator of σ² will depend on whether unbiasedness or efficiency is more important in the particular situation.

Among all estimators of θ, the efficient estimator is the one that has minimum mean squared error (MSE), E[(θ̂ - θ)²]. If V(θ̂₁) ≤ V(θ̂₂), where E(θ̂₁) = E(θ̂₂) = θ, then θ̂₁ is efficient relative to θ̂₂.
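The MSE comparison of Question 1 can also be tabulated numerically using the formulas derived above for a normal population, MSE(σ̂²) = (2n - 1)σ⁴/n² and MSE(S²) = 2σ⁴/(n - 1). A minimal sketch (the values of n and σ² are arbitrary choices):

def mse_biased(n, sigma2):
    # MSE of the divide-by-n estimator: (2n - 1) * sigma^4 / n^2
    return (2 * n - 1) * sigma2 ** 2 / n ** 2

def mse_unbiased(n, sigma2):
    # MSE (= variance) of the divide-by-(n - 1) estimator: 2 * sigma^4 / (n - 1)
    return 2 * sigma2 ** 2 / (n - 1)

for n in (5, 10, 30, 100):
    print(n, mse_biased(n, 1.0), mse_unbiased(n, 1.0))
# for every n the biased (divide-by-n) estimator has the smaller MSE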

2.3 Consistent Estimators


Clearly, one would in practice prefer to have estimators that are both efficient and unbiased, but this is not always possible. So the general practice is to consider all unbiased and asymptotically unbiased estimators of θ and to select the one that has minimum variance among them. The reason is that we often want to be assured that, at least for large n, the estimator will take on values which are very close to the parameter it estimates. This idea of closeness is formalized in the following definition of consistency.


Definition: If θ̂ is an unbiased or asymptotically unbiased estimator of the parameter θ, and Var(θ̂) → 0 as n → ∞, then θ̂ is a consistent estimator of θ. Informally, the definition says that when n is sufficiently large, we can be practically certain that the error made with a consistent estimator will be less than any small pre-assigned positive constant.

Figure 4: Var(θ̂) → 0 as n → ∞

Note that consistency is an asymptotic property, namely, a limiting property of an estimator. There may be more than one unbiased estimator which is consistent, but there can be only one minimum variance unbiased estimator.

Question 2: Show that p̂ = X/n is a consistent estimator of the binomial parameter p.

Solution: Since p̂ is an unbiased estimator of p, it remains only to be shown that Var(p̂) → 0 as n → ∞.

Var(p̂) = Var(X/n) = (1/n²) Var(X) = npq/n² = pq/n, which tends to zero as n → ∞, as desired.

Question 3: Show that X̄ is a consistent estimator of the mean µ of a normal population which has a finite variance.

Solution: Since we have already shown that X̄ is an unbiased estimator of µ, it remains only to be shown that Var(X̄) → 0 as n → ∞.

By definition, Var(X̄) = E[(X̄ - µ)²] = σ²/n (as already used in Example 3), which tends to zero as n → ∞, as desired.

The statistic θ̂ is a consistent estimator of the parameter θ if


and only if, for each positive constant c, lim(n→∞) P(|θ̂ - θ| ≥ c) = 0
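Consistency of X̄ can be illustrated numerically: the sampling variance of X̄ shrinks like σ²/n as the sample size grows. A minimal sketch (the population parameters, sample sizes and replication count are arbitrary choices):

import random
from statistics import pvariance

random.seed(4)
mu, sigma, reps = 5.0, 2.0, 5000

for n in (10, 100, 1000):
    xbars = [sum(random.gauss(mu, sigma) for _ in range(n)) / n for _ in range(reps)]
    print(n, pvariance(xbars), sigma ** 2 / n)  # simulated Var(Xbar) vs sigma^2 / n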

Having discussed the desirable properties of point estimators, it is also important to know the main factors which decide whether an estimator possesses these properties. The most important factor is the sampling distribution of the estimator. However, the sampling distribution of the estimator depends on the distribution of the population from which the sample is drawn. In particular,

1) If we draw a random sample from a normal population, then X̄ is the best among the four estimators (the mean, the median, the average of the two extremes, and the trimmed mean), since its variance is least among all unbiased estimators.
2) If we draw a random sample from a Cauchy distribution,

Figure 5: Cauchy Distribution

then X̄ and the average of the two extremes are bad estimators for µ, while X̃ is reasonably good. X̄ is bad because it is very sensitive to extreme observations, and due to the heavy tails of the Cauchy distribution it is very likely that a few such observations appear in any sample.

3) If we draw a random sample from a uniform distribution, then the average of the two extreme observations is the best estimator. This estimator is very sensitive to extreme observations, but such observations are unlikely to appear in any sample, as the uniform distribution does not have any tails.

4) The trimmed mean is not the best in any of these three situations, but it is quite good in all three. Hence the trimmed mean with a small trimming percentage is called a robust estimator, i.e., one that performs reasonably well for a wide variety of population distributions.


So both the distribution of the population and the sampling distribution of the estimator are important in deciding which estimator is best for a given situation.

3. Precision of the Estimate


Whenever we make an inference about a population parameter on the basis of a sample statistic, we are also interested in how reliable it is. The best indicator is the standard error of the relevant estimator, which we denote by σ_θ̂. It is the size of a typical or average deviation between the estimator θ̂ and the parameter θ. If we have to use estimated values of some unknown parameters to compute it, we call it the estimated standard error and denote it by σ̂_θ̂ or by s_θ̂.

Example 6: Let X₁, X₂, ..., Xₙ be a random sample from a normal population. Then the standard error of X̄ = (1/n) Σ Xᵢ is σ_X̄ = σ/√n. If we do not know the value of σ, we can substitute the estimate σ̂ = s into σ_X̄ to obtain the estimated standard error s_X̄ = s/√n.


Question 4: Find the standard error of the sample proportion p̂ = X/n, where X is a binomial random variable with parameters n and p.

Solution: σ_p̂ = √Var(X/n) = √[Var(X)/n²] = √(npq/n²) = √(pq/n). Since p and q are unknown, we substitute p̂ = x/n and q̂ = 1 - x/n into σ_p̂, yielding the estimated standard error s_p̂ = √(p̂q̂/n).

We can also use the standard error of the estimator to convert a point estimate into an interval estimate.

Suppose the sample size is large. Then the distribution of the point estimator θ̂ will be approximately normal, and we can be reasonably confident that the true value of θ lies within approximately two standard errors of θ̂. Thus the point estimate θ̂ translates into the interval estimate θ̂ ± 2σ_θ̂.

If θ̂ is unbiased but its distribution is not normal, then we can be reasonably confident that the true value of θ lies within approximately four standard errors of θ̂. In summary, the standard error tells us roughly within what distance of θ̂ we can expect the true value of θ to lie.
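As a concrete illustration, the sketch below computes the estimated standard error s/√n for a small made-up sample and the corresponding rough interval x̄ ± 2s_X̄, and does the same for a sample proportion (all numbers are invented for illustration only):

from math import sqrt
from statistics import mean, stdev

# estimated standard error of the sample mean: s / sqrt(n)
x = [27.1, 28.4, 26.9, 27.8, 28.0, 27.5]
n = len(x)
se_mean = stdev(x) / sqrt(n)
print(mean(x) - 2 * se_mean, mean(x) + 2 * se_mean)   # rough interval estimate for mu

# estimated standard error of a sample proportion: sqrt(p_hat * q_hat / n)
successes, trials = 37, 120
p_hat = successes / trials
se_prop = sqrt(p_hat * (1 - p_hat) / trials)
print(p_hat - 2 * se_prop, p_hat + 2 * se_prop)       # rough interval estimate for p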

4. Methods of Point Estimation


As we have seen in this chapter, there can be many different ways (estimators) of estimating a parameter of a population. Further, different estimators possess the various desirable properties to varying degrees. It would therefore be useful to have some general methods that yield estimators with reasonably desirable properties. Here we will discuss two such methods: the method of moments, which is historically one of the oldest methods, and the method of maximum likelihood. Although maximum likelihood estimators are generally preferable to moment estimators because of certain efficiency properties, they often require significantly more computation than moment estimators.

4.1 The Method of Moments


Let X₁, X₂, ..., Xₙ be a random sample from a pmf or pdf f(x). For k = 1, 2, 3, ..., the kth population moment, or kth moment of the distribution f(x), is E(Xᵏ). The kth sample moment is denoted by mₖ'; symbolically, mₖ' = (1/n) Σ Xᵢᵏ.

Thus the first population moment is E(X) = µ, and the first sample moment is m₁' = X̄.

The second population and sample moments are E(X²) and m₂' = (1/n) Σ Xᵢ² respectively.

The method of moments consists of equating the first few moments of the population to the corresponding moments of the sample, thus getting as many equations as are needed to solve for the unknown parameters of the population.

Thus the method of moments consists of solving the system of equations

mₖ' = E(Xᵏ),  k = 1, 2, ..., p

for the p parameters of the population.

Example 7: If we want to estimate the parameter p of the binomial distribution when n is known, then the equation we have to solve is m₁' = E(X).

Since E(X) = np, we get m₁' = np.

Hence p̂ = m₁'/n = X̄/n.

If both n and p are unknown, then the system of equations we shall have to solve is

m₁' = E(X) and m₂' = E(X²)

Since E(X) = np and E(X²) = npq + n²p²,

we get

m₁' = np and m₂' = npq + n²p²


and solving these two equations for n and p, we find the estimates of the two parameters of the binomial distribution.

Since m₂' = npq + n²p² = npq + (m₁')²,

we have (m₂' - (m₁')²)/m₁' = npq/(np) = q = 1 - p,

so that

p̂ = 1 - (m₂' - (m₁')²)/m₁'

Similarly, since m₁' = np,

n̂ = m₁'/p̂ = m₁' / [1 - (m₂' - (m₁')²)/m₁']

Question 5: Given a random sample of size n from a uniform population with β = 1, use the method of moments to obtain a formula for estimating the parameter α.

Solution: The equation that we shall have to solve is m₁' = E(X), where m₁' = X̄ and E(X) = (α + β)/2 = (α + 1)/2. Thus X̄ = (α̂ + 1)/2, and we can write the estimate of α as α̂ = 2X̄ - 1.
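The moment estimator of Question 5 is easy to check by simulation: draw a sample from a uniform population on (α, 1) and apply α̂ = 2X̄ - 1. A minimal sketch (the true α, the sample size and the seed are arbitrary choices):

import random

random.seed(5)
alpha, n = 0.3, 200                       # uniform population on (alpha, 1)
x = [random.uniform(alpha, 1.0) for _ in range(n)]
alpha_hat = 2 * sum(x) / n - 1            # method-of-moments estimate: 2 * xbar - 1
print(alpha_hat)                          # should be close to the true alpha = 0.3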

4.2 The Method of Maximum Likelihood


The method of maximum likelihood looks at the values of a random sample and
then chooses as our estimate of the unknown population parameter, the value for which the
probability of obtaining the observed data is a maximum. The principle on which the method
of maximum likelihood is based can be understood with the following example.

Example 8: Suppose Mr X receives five letters on some particular day, but unfortunately
one of them gets misplaced before he has a chance to open it. If among the remaining four
letters three contain credit-card billings and the other one does not, what might be a good
estimate of k, the total number of credit-card billings among the five letters received?


Clearly k must be three or four. Assuming that each letter had the same chance of being misplaced, we find that the probability of the observed data (three billings and one non-billing among the four letters that remain) is

C(3,3) C(2,1) / C(5,4) = 2/5 for k = 3

and

C(4,3) C(1,1) / C(5,4) = 4/5 for k = 4

Therefore, if we choose as our estimate of k the value that maximizes the probability of getting the observed data, we obtain k = 4. We call this estimate a maximum likelihood estimate, and the method by which it was obtained is called the method of maximum likelihood.

In the general case, if the observed sample values are x₁, x₂, ..., xₙ, we can write, in the discrete case,

P(X₁ = x₁, X₂ = x₂, ..., Xₙ = xₙ) = f(x₁, x₂, ..., xₙ; θ)

which is just the value of the joint probability distribution of the random variables X₁, X₂, ..., Xₙ at the sample point (x₁, x₂, ..., xₙ). Since the sample values have been observed and are therefore fixed numbers, we regard f(x₁, x₂, ..., xₙ; θ) as the value of a function of the parameter θ, referred to as the likelihood function L(θ). A similar definition applies when the random sample comes from a continuous population, but in that case f(x₁, x₂, ..., xₙ; θ) is the value of the joint probability density at the sample point (x₁, x₂, ..., xₙ). The method of maximum likelihood consists of maximizing the likelihood function with respect to θ, and we refer to the value of θ which maximizes the likelihood function as the maximum likelihood estimate of θ. To maximize L(θ) = f(x₁, x₂, ..., xₙ; θ) we take the derivative of L(θ) with respect to θ and set it equal to zero.

The method is capable of generalization. In case there are several parameters, we take the partial derivatives with respect to each parameter, set them equal to zero, and solve the resulting equations simultaneously. Moreover, if we draw a large sample from a population which has a well-specified distribution function, then the maximum likelihood estimate of any parameter will be approximately the MVUE, i.e., it will be approximately unbiased and will have approximately the least variance.

Question 6: Given x "successes" in n trials, find the maximum likelihood estimator of the parameter p of the binomial distribution.

Solution: To find the value of p which maximizes

L(p) = b(x; n, p) = C(n, x) pˣ (1 - p)ⁿ⁻ˣ, it will be convenient to make use of the fact that the value of p which maximizes L(p) will also maximize

ln L(p) = ln C(n, x) + x ln p + (n - x) ln(1 - p)

Thus we get d[ln L(p)]/dp = x/p - (n - x)/(1 - p)

and, equating this derivative to 0 and solving for p, we find that the likelihood function has a maximum at p = x/n. Hence the maximum likelihood estimator of the parameter p of the binomial distribution is p̂ = X/n.
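The closed-form answer p̂ = x/n can be verified by maximizing the log-likelihood numerically. The sketch below uses a simple grid search over p (the particular x, n and grid resolution are arbitrary illustration choices):

from math import comb, log

def log_likelihood(p, x, n):
    # ln L(p) = ln C(n, x) + x ln p + (n - x) ln(1 - p)
    return log(comb(n, x)) + x * log(p) + (n - x) * log(1 - p)

x, n = 7, 20
grid = [i / 1000 for i in range(1, 1000)]            # p values strictly between 0 and 1
p_mle = max(grid, key=lambda p: log_likelihood(p, x, n))
print(p_mle, x / n)                                  # both are 0.35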
Question 7: Suppose that n observations x₁, x₂, ..., xₙ are made from a normally distributed population. Find
(a) the maximum likelihood estimate of the mean if the variance is known but the mean is unknown;
(b) the maximum likelihood estimate of the variance if the mean is known but the variance is unknown.
Solution:

(a) Since f(xᵢ, µ) = (1/√(2πσ²)) exp[-(xᵢ - µ)²/(2σ²)]

we have

(1) L = f(x₁, µ) ... f(xₙ, µ) = (2πσ²)^(-n/2) exp[-Σ (xᵢ - µ)²/(2σ²)]

Therefore,

(2) ln L = -(n/2) ln(2πσ²) - Σ (xᵢ - µ)²/(2σ²)

Taking the partial derivative with respect to µ yields

(3) ∂(ln L)/∂µ = Σ (xᵢ - µ)/σ²

Setting ∂(ln L)/∂µ = 0 gives

(4) Σ (xᵢ - µ) = 0, i.e., Σ xᵢ - nµ = 0

or

(5) µ̂ = (1/n) Σ xᵢ = x̄

Therefore the maximum likelihood estimate of the mean is the sample mean.

(b) Since f(xᵢ, σ²) = (1/√(2πσ²)) exp[-(xᵢ - µ)²/(2σ²)]


we have

(1) L = f(x₁, σ²) ... f(xₙ, σ²) = (2πσ²)^(-n/2) exp[-Σ (xᵢ - µ)²/(2σ²)]

Therefore,

(2) ln L = -(n/2) ln(2πσ²) - Σ (xᵢ - µ)²/(2σ²)

Taking the partial derivative with respect to σ² yields

(3) ∂(ln L)/∂σ² = -n/(2σ²) + Σ (xᵢ - µ)²/(2σ⁴)

Setting ∂(ln L)/∂σ² = 0 gives

σ̂² = (1/n) Σ (xᵢ - µ)²

Therefore the maximum likelihood estimate of the variance, when the mean µ is known, is the average of the squared deviations from µ.
Question 8: Prove that the maximum likelihood estimate of the parameter θ of a population having density function

f(x; θ) = 2(θ - x)/θ²,  0 < x < θ,

for a sample of unit size is 2x, x being the sample value. Show also that the estimate is biased.

Solution: A sample of unit size means n = 1.

Likelihood function L(θ) = f(x; θ) = 2(θ - x)/θ²

log L(θ) = log 2 - log θ² + log(θ - x)

= log 2 - 2 log θ + log(θ - x)

Differentiating w.r.t. θ we get

d[log L(θ)]/dθ = -2/θ + 1/(θ - x)

d²[log L(θ)]/dθ² = 2/θ² - 1/(θ - x)²

For a maximum or minimum, d[log L(θ)]/dθ = 0:

-2/θ + 1/(θ - x) = 0 ⇒ 2(θ - x) = θ ⇒ θ = 2x

When θ = 2x, d²[log L(θ)]/dθ² = 2/(4x²) - 1/x² = -1/(2x²) < 0, so this is a maximum.

The maximum likelihood estimator of θ is therefore θ̂ = 2X.

E(θ̂) = E(2X) = 2 ∫₀^θ x · 2(θ - x)/θ² dx = 2(θ/3) = 2θ/3

Since E(θ̂) ≠ θ, θ̂ = 2X is not an unbiased estimate of θ.
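The bias found in Question 8 can be checked by simulation. For f(x; θ) = 2(θ - x)/θ² the cdf is F(x) = 1 - (1 - x/θ)², so X = θ(1 - √(1 - U)) with U uniform on (0, 1) has this density; this inverse-cdf sampler and the parameter values below are our own additions for illustration:

import random

random.seed(6)
theta, reps = 3.0, 200000

def draw(theta):
    # inverse-cdf sampling from f(x; theta) = 2(theta - x) / theta**2, 0 < x < theta
    u = random.random()
    return theta * (1 - (1 - u) ** 0.5)

avg_estimate = sum(2 * draw(theta) for _ in range(reps)) / reps
print(avg_estimate)   # close to 2 * theta / 3 = 2.0, not theta = 3.0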


Practice Questions:
Q.1 Assuming that the population is normal, give examples of estimators (or estimates)
which are
(a) unbiased and efficient
(b) unbiased and inefficient
(c) biased and inefficient.
Q.2 Show that X̄ is a minimum variance unbiased estimator of the mean µ of a normal population.
Q.3 If θ̂ is an estimator of a parameter θ, its bias is given by b = E(θ̂) - θ. Show that

E[(θ̂ - θ)²] = V(θ̂) + b².

Q.4 If θ̂₁ and θ̂₂ are unbiased estimators of the same parameter θ, what condition must be imposed on the constants k₁ and k₂ so that k₁θ̂₁ + k₂θ̂₂ is also an unbiased estimator of θ?
Q.5 Suppose that we use the largest value of a random sample of size n to estimate the parameter θ of the population

f(x; θ) = 1/θ for 0 < x < θ

= 0 otherwise.

Check whether this estimator is (a) unbiased and (b) consistent.
Q.6 Show that, for a random sample from a normal population, the sample variance S² is a consistent estimator of σ², where S² = (1/(n - 1)) Σ (Xᵢ - X̄)².

Q.7 In estimating the mean µ of a normal population on the basis of a random sample of size 2n + 1, what is the efficiency of the median relative to the mean?

Q.8 If x₁, x₂, ..., xₙ are the values of a random sample of size n from a population having the density

f(x; θ) = ...

= 0 otherwise,

find an estimator for θ by the method of moments.
Q.9 Let X₁, ..., Xₙ be a random sample from a gamma distribution with parameters α and β.

a. Derive the equations whose solutions yield the maximum likelihood estimators of α and β. Do you think they can be solved explicitly?

b. Show that the mle of µ = αβ is µ̂ = X̄.
Q.10 Among N independent random variables having identical binomial distributions with the parameters θ and n = 2, suppose n₀ take on the value zero, n₁ take on the value one, and n₂ take on the value two. Find an estimate of θ using


(a) the method of moments


(b) the method of maximum likelihood.
