
Experiment-1

Aim of the experiment- To implement maximum likelihood estimation (MLE).


Software Required- MATLAB
Theory –
Maximum likelihood estimation (MLE) is a method of estimating the parameters of
an assumed probability distribution, given some observed data. By maximizing a
likelihood function, the parameter values under which the observed data are most
probable are determined within the assumed statistical model. The maximum
likelihood estimate is the point in the parameter space that maximizes the
likelihood function. The method's intuitive and adaptable logic has established
it as a prominent approach for statistical inference. If we assume that the
likelihood function is differentiable, then we can use the derivative test to
identify the maxima. Analytically solving the first-order conditions of the
likelihood function is also possible in certain cases. For instance, in a linear
regression model, if all observed outcomes are assumed to have normal
distributions with the same variance, then the maximum likelihood is achieved by
the ordinary least squares estimator.
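This equivalence between least squares and Gaussian maximum likelihood can be checked numerically. The following is a minimal sketch with made-up data (the variable names and values are illustrative, not part of the experiment): the ordinary least squares solution x\y is compared against a grid search over the Gaussian log-likelihood.

```matlab
% Sketch: for a linear model with Gaussian noise of common variance,
% the OLS estimator is also the maximum likelihood estimator.
rng(0);
x = (1:20)';                     % regressor (illustrative data)
beta_true = 2.5;
y = beta_true*x + randn(20,1);   % observations with unit-variance noise

beta_ols = x\y;                  % ordinary least squares solution

% Grid search over the Gaussian log-likelihood (variance fixed at 1);
% maximizing it is equivalent to minimizing the sum of squared residuals.
betas = 2:0.001:3;
logL = arrayfun(@(b) -sum((y - b*x).^2)/2, betas);
[~, idx] = max(logL);
fprintf('OLS: %.3f, grid-search MLE: %.3f\n', beta_ols, betas(idx));
```

The two estimates agree to within the grid resolution, since both minimize the same sum of squared residuals.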
A set of observations is modeled as a random sample from an unknown joint
probability distribution that is expressed in terms of a set of parameters. The
objective of maximum likelihood estimation is to identify the parameters that
maximize the joint probability of the observed data. The parameters governing
the joint distribution are collected in a vector

\theta = [\theta_1, \theta_2, \ldots, \theta_k]^T

so that this distribution falls within a parametric family \{ f(\cdot\,;\theta) \mid \theta \in \Theta \}, where
\Theta is called the parameter space, a finite-dimensional subset of Euclidean
space. Evaluating the joint density at the observed data sample y = (y_1, \ldots, y_n)
gives a real-valued function,

L_n(\theta) = L_n(\theta; y) = f_n(y; \theta),

which is called the likelihood function. For independent and identically
distributed random variables, f_n(y; \theta) is the product of univariate density
functions:

f_n(y; \theta) = \prod_{i=1}^{n} f(y_i; \theta).

The goal of maximum likelihood estimation is to find the values of the model
parameters that maximize the likelihood function over the parameter space, that
is,

\hat{\theta} = \operatorname{arg\,max}_{\theta \in \Theta} L_n(\theta; y).

Intuitively, this selects the parameter values that make the observed data most
probable. The specific value \hat{\theta} that maximizes the likelihood function L_n is
called the maximum likelihood estimate.

In practice, it is often convenient to work with the natural logarithm of the
likelihood function, called the log-likelihood:

\ell(\theta; y) = \ln L_n(\theta; y).

If the log-likelihood is differentiable, the necessary conditions for the
occurrence of a maximum (or a minimum) are

\frac{\partial \ell}{\partial \theta_1} = 0, \quad \ldots, \quad \frac{\partial \ell}{\partial \theta_k} = 0,

known as the likelihood equations. For some models, these equations can be
explicitly solved for \hat{\theta}, but in general no closed-form solution to the
maximization problem is known or available, and an MLE can only be found via
numerical optimization. Another problem is that in finite samples there may
exist multiple roots of the likelihood equations.[9] Whether an identified root
\hat{\theta} of the likelihood equations is indeed a (local) maximum depends on whether
the matrix of second-order partial and cross-partial derivatives, the so-called
Hessian matrix

H(\hat{\theta}) = \left[ \frac{\partial^2 \ell}{\partial \theta_r \, \partial \theta_s} \right]_{\theta = \hat{\theta}},

is negative semi-definite at \hat{\theta}, as this indicates local concavity.
Conveniently, most common probability distributions – in particular the
exponential family – are logarithmically concave.
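As a worked instance of the likelihood equations, the binomial model used in the
coding section below admits a closed-form solution. With k successes in n
trials, the log-likelihood is

\ell(q) = \ln \binom{n}{k} + k \ln q + (n-k)\ln(1-q),

and setting its derivative to zero gives

\frac{d\ell}{dq} = \frac{k}{q} - \frac{n-k}{1-q} = 0 \quad\Rightarrow\quad \hat{q} = \frac{k}{n},

so for k = 490 correct bits out of n = 900, the likelihood peaks at \hat{q} = 490/900 \approx 0.544.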
In this experiment we have used the binary symmetric channel (BSC) model for finding the maximum likelihood estimate.
Properties
Under suitable regularity conditions, sequences of maximum likelihood estimators have these properties:

• Consistency: the sequence of MLEs converges in probability to the value being estimated.
• Functional invariance: if \hat{\theta} is the maximum likelihood estimator for \theta and g(\theta) is
any transformation of \theta, then the maximum likelihood estimator for \alpha = g(\theta) is
\hat{\alpha} = g(\hat{\theta}).
• Efficiency: the MLE achieves the Cramér–Rao lower bound when the sample size tends to
infinity. This means that no consistent estimator has lower asymptotic mean squared
error than the MLE (or other estimators attaining this bound); the MLE is also
asymptotically normal.
• Second-order efficiency after correction for bias.
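The consistency property can be checked empirically. The following is a small sketch (not part of the original procedure) using the Gaussian-mean setup from the coding section, where the MLE of the mean is the sample mean; the estimation error typically shrinks as the sample size N grows.

```matlab
% Sketch: consistency of the MLE of a Gaussian mean (the sample mean).
rng(1);
A = 1.3;                          % true parameter, as in the experiment
for N = [10 100 10000]
    x = A + randn(1, N);          % unit-variance Gaussian samples
    fprintf('N = %5d: A_hat = %.4f\n', N, mean(x));   % estimate approaches A
end
```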

Coding-

% Maximum Likelihood Estimation using a Binary Symmetric Channel (BSC)

d = 410;           % Number of bits in error
n = 90*10;         % Total number of bits sent
k = n - d;         % Number of bits NOT in error
q = 0:0.002:1;     % Range of success probabilities to test

y = binopdf(k, n, q);   % Likelihood, assuming a binomial distribution

figure(1);
plot(q, y);
xlabel('Probability q');
ylabel('Likelihood');
title('Maximum Likelihood Estimation');

[maxY, maxIndex] = max(y);   % Find the maximum and its index

fprintf('MLE of q is %f\n', q(maxIndex));   % Print the probability corresponding to max(y)
% MLE of the unknown mean A of Gaussian samples in unit-variance noise
A = 1.3;
N = 10;             % Number of samples to collect
x = A + randn(1, N);

s = 1;              % Assume standard deviation s = 1

rangeA = -2:0.1:5;  % Range of values of the estimation parameter to test

L = zeros(1, length(rangeA));   % Placeholder for likelihoods

for i = 1:length(rangeA)
    % Calculate the likelihood for each parameter value in the range.
    % The constant term (1/(sqrt(2*pi)*s))^N is neglected, as it would
    % pull the likelihood value towards zero for increasing N.
    L(i) = exp(-sum((x - rangeA(i)).^2)/(2*s^2));
end

[maxL, index] = max(L);   % Select the parameter value with maximum likelihood

disp('Maximum likelihood estimate of A:');
disp(rangeA(index));

% Plotting commands
figure(2);
plot(rangeA, L); hold on;
stem(rangeA(index), L(index), 'r');   % Mark the maximum likelihood estimate
displayText = ['\leftarrow Likelihood of A=' num2str(rangeA(index))];
title('Maximum Likelihood Estimation of unknown Parameter A');
xlabel('\leftarrow A');
ylabel('Likelihood');
text(rangeA(index), L(index)/3, displayText, 'HorizontalAlignment', 'left');

figure(3);
plot(rangeA, log(L)); hold on;
YL = ylim; YMIN = YL(1);
plot([rangeA(index) rangeA(index)], [YMIN log(L(index))], 'r');   % Mark the maximum likelihood estimate
title('Log Likelihood Function');
xlabel('\leftarrow A');
ylabel('Log Likelihood');
text(rangeA(index), YMIN/2, displayText, 'HorizontalAlignment', 'left');

Output-

Conclusion- From the experiment we learn that Maximum Likelihood Estimation (MLE) is a
powerful statistical method for estimating the parameters of a probability
distribution by maximizing the likelihood function. In this experiment, we used
MLE to estimate the crossover probability of a binary symmetric channel and the
mean of a normal distribution from simulated data. The MLE method was able to
accurately estimate these parameters, and we also observed that the accuracy of
the estimates improves as the sample size increases.

Submitted by: Sourav Pal

Regd. No.: 2205070004


Course: M.Tech
Semester: 2nd
Branch: ETC
Specialization: CSE
