UNIT-IV - Gaussian Mixture and EM

The Expectation-Maximization (EM) algorithm iteratively estimates missing data and updates parameter values using observed data. It consists of two main steps: the E-step, where missing data is estimated and log-likelihood is computed, and the M-step, where parameters are updated to maximize the log-likelihood. This process continues until convergence is achieved, indicated by stable parameter values or log-likelihood changes below a predefined threshold.


The essence of the Expectation-Maximization algorithm is to use the available observed data of the dataset to estimate the missing data, and then to use those estimates to update the values of the parameters. Let us understand the EM algorithm in detail.
Initialization:
First, an initial set of parameter values is chosen. A set of incomplete observed data is given to the system, with the assumption that the observed data comes from a specific model.
E-Step (Expectation Step): In this step, we use the observed data to estimate the values of the missing or incomplete data; it is essentially the step that updates the latent variables. Specifically (a code sketch follows this list):
• Compute the posterior probability, or responsibility, of each latent variable given the observed data and the current parameter estimates.
• Estimate the missing or incomplete data values using the current parameter estimates.
• Compute the log-likelihood of the observed data based on the current parameter estimates and the estimated missing data.
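For a Gaussian mixture model, the E-step has an explicit form: the responsibility of component k for a point is its weighted Gaussian density divided by the total mixture density at that point. Below is a minimal Python sketch; the names X, weights, means, and covs are our own illustration, not from the slides.

import numpy as np
from scipy.stats import multivariate_normal

def e_step(X, weights, means, covs):
    # X: (n, d) data; weights: (K,); means: (K, d); covs: (K, d, d)
    n, K = X.shape[0], weights.shape[0]
    resp = np.zeros((n, K))
    for k in range(K):
        # mixing weight times Gaussian density of every point under component k
        resp[:, k] = weights[k] * multivariate_normal.pdf(X, means[k], covs[k])
    total = resp.sum(axis=1, keepdims=True)  # per-point mixture density
    log_likelihood = np.log(total).sum()     # observed-data log-likelihood
    resp /= total                            # normalize into responsibilities
    return resp, log_likelihood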
M-Step (Maximization Step): In this step, we use the complete data generated in the preceding Expectation step to update the values of the parameters; it is essentially the step that updates the hypothesis. Specifically (a code sketch follows this list):
• Update the model parameters by maximizing the expected complete-data log-likelihood obtained from the E-step.
• This typically involves solving an optimization problem to find the parameter values that maximize the log-likelihood.
• The specific optimization technique depends on the nature of the problem and on the model being used.
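For a Gaussian mixture, this maximization has a closed-form solution, so no numerical optimizer is needed. A sketch continuing the e_step example above:

def m_step(X, resp):
    # resp: (n, K) responsibilities from the E-step
    n, d = X.shape
    K = resp.shape[1]
    Nk = resp.sum(axis=0)                    # effective number of points per component
    weights = Nk / n                         # new mixing weights
    means = (resp.T @ X) / Nk[:, None]       # responsibility-weighted means
    covs = np.zeros((K, d, d))
    for k in range(K):
        diff = X - means[k]
        covs[k] = (resp[:, k, None] * diff).T @ diff / Nk[k]
        covs[k] += 1e-6 * np.eye(d)          # small jitter for numerical stability
    return weights, means, covs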
Convergence: In this step, we check whether the values are converging; if they are, we stop, otherwise we repeat the Expectation and Maximization steps until convergence occurs. Concretely (a full driver loop is sketched after this list):
• Check for convergence by comparing the change in the log-likelihood, or in the parameter values, between iterations.
• If the change is below a predefined threshold, stop and consider the algorithm converged.
• Otherwise, go back to the E-step and repeat the process until convergence is achieved.
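Putting the pieces together, here is a minimal driver loop reusing e_step and m_step from above. The initialization (uniform weights, randomly chosen data points as means, the overall sample covariance for every component) and the tolerance are illustrative choices, not prescribed by the slides:

def fit_gmm(X, K, max_iter=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    weights = np.full(K, 1.0 / K)                  # uniform mixing weights
    means = X[rng.choice(n, K, replace=False)]     # K random points as initial means
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(K)])
    prev_ll = -np.inf
    for _ in range(max_iter):
        resp, ll = e_step(X, weights, means, covs)   # E-step
        weights, means, covs = m_step(X, resp)       # M-step
        if abs(ll - prev_ll) < tol:                  # log-likelihood change below threshold
            break
        prev_ll = ll
    return weights, means, covs, ll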
Example
A standard illustration is the two-coin experiment: several batches of tosses are observed, but which coin (A or B) produced each batch is hidden. In the E-step, the heads in a batch are credited to each coin in proportion to the posterior probability that the coin produced that batch:
• Expected heads from coin A = (no. of heads) × P(A | batch)
The M-step then re-estimates each coin's bias from these expected counts, and the process is continued until you get stable values.
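A sketch of this coin-flipping EM, reusing the numpy import above; the batch data (five batches of ten tosses) and the starting biases are hypothetical values chosen for illustration:

from scipy.stats import binom

def coin_em(heads, n_tosses, theta_a=0.6, theta_b=0.5, n_iter=20):
    for _ in range(n_iter):
        # E-step: posterior probability that each batch came from coin A
        like_a = binom.pmf(heads, n_tosses, theta_a)
        like_b = binom.pmf(heads, n_tosses, theta_b)
        p_a = like_a / (like_a + like_b)
        p_b = 1.0 - p_a
        # expected counts: heads (and tails) credited to each coin
        heads_a, heads_b = p_a * heads, p_b * heads
        tails_a, tails_b = p_a * (n_tosses - heads), p_b * (n_tosses - heads)
        # M-step: new bias = expected heads / expected total tosses
        theta_a = heads_a.sum() / (heads_a.sum() + tails_a.sum())
        theta_b = heads_b.sum() / (heads_b.sum() + tails_b.sum())
    return theta_a, theta_b

heads = np.array([5, 9, 8, 4, 7])   # hypothetical heads counts per batch of 10
print(coin_em(heads, n_tosses=10))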
