Session No: CO2-1
Session Topic: Motion Analysis
DIGITAL VIDEO PROCESSING




(Course code: 19cs3278)
Prepared & Presented by: Dr. Lambodar Jena
Session Objective
To understand:
• Motion analysis
• The Gaussian Mixture Model (GMM)
Poll Question-01
• A video consists of a sequence of
A. Frames
B. Signals
C. Packets
D. Slots
Key Concepts
• Introduction to Motion Analysis
• How motion analysis works
• Uses of motion analysis
Motion Analysis

• Motion analysis is a measuring technique used in computer vision, image processing and high-speed photography applications to detect movement.
Objectives of motion analysis

The objectives of motion analysis are to:
• detect motion within an image
• track an object’s motion over time
• group objects that move together
• identify the direction of motion
Specific techniques
Specific techniques for implementing motion or movement analysis include:
• electromyography,
• background segmentation, and
• differential equation models.
How motion analysis works

• The basic function of motion analysis is to compare two or more consecutive images captured by sensors or cameras to return information on the apparent motion in the images.
• This is usually done by programming the recording device to produce binary images based on movement.
• All of the image points, or pixels, that correspond with motion are set to a value of 1, while stationary pixels are set to 0.
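A minimal sketch of this thresholded frame-differencing step, assuming grayscale frames stored as NumPy arrays (the threshold value is an illustrative choice, not a standard):

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Compare two consecutive grayscale frames and return a binary
    image: pixels that changed (motion) are 1, stationary pixels 0."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy example: a bright 2x2 "object" moves one pixel to the right.
prev = np.zeros((5, 5), dtype=np.uint8)
curr = np.zeros((5, 5), dtype=np.uint8)
prev[1:3, 1:3] = 200
curr[1:3, 2:4] = 200
mask = motion_mask(prev, curr)  # 1 where the object appeared/disappeared
```

Pixels covered by the object in both frames cancel out in the difference, so only the leading and trailing edges of the moving object show up as 1.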
How motion analysis works cont…

• The resulting image can be processed even further to remove noise, label objects and group neighboring values of 1 into a singular object.
• The data produced by motion analysis tools often correlates to a
specific image at a specific point in time based on its position in the
sequence.
• Therefore, the motion capture data is time-dependent, which is a
crucial component in most tracking applications.
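The grouping step above (neighboring 1-pixels merged into labeled objects) can be sketched with a simple flood fill; production systems usually call a library connected-component routine instead:

```python
import numpy as np

def label_components(binary):
    """Group neighboring 1-pixels (4-connectivity) into objects.
    Returns a label image (0 = background) and the object count."""
    labels = np.zeros_like(binary, dtype=int)
    current = 0
    h, w = binary.shape
    for i in range(h):
        for j in range(w):
            if binary[i, j] == 1 and labels[i, j] == 0:
                current += 1                      # start a new object
                stack = [(i, j)]
                while stack:                      # flood fill its pixels
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w \
                            and binary[y, x] == 1 and labels[y, x] == 0:
                        labels[y, x] = current
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return labels, current

motion = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 1],
                   [0, 0, 0, 1]])
labels, n_objects = label_components(motion)  # two separate moving objects
```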
Uses of motion analysis
• Motion analysis is used in a variety of fields and applications,
including:
1. Manufacturing- Motion analysis can be applied to the
manufacturing process through software that monitors and
analyzes supply chains for inefficiencies or malfunctions. Similarly,
motion analysis can be used by manufacturers to conduct product
safety, collision or efficiency tests.
2. Video surveillance- Human activity recognition through motion
analysis is commonly used for security monitoring and surveillance
purposes.
Uses of motion analysis cont…
3. Healthcare and physical therapy- Motion analysis can help
healthcare providers track muscle activity, perform gait analysis and
diagnose potential mobility issues. Treatment for patients with
cerebral palsy, spina bifida, muscular dystrophy and joint issues can
involve regular motion analysis tests in a laboratory.
4. Autonomous vehicles- Motion analysis systems can be used in 
self-driving cars to aid with traffic navigation and obstruction
identification.
5. Biological sciences- Specific motion analysis software can be used
to count and track tiny particles such as bacteria and viruses.
Why Gaussian Mixture Model (GMM) in
Image Processing?
• In Machine Learning, there are two main areas: supervised learning and unsupervised learning.
• The two differ in the approach used to solve a problem and in the data each approach uses.
• Unsupervised learning includes a technique called clustering, in which we find clusters of data points that share common characteristics.
Example:

Fig. : Datapoints

• Clustering means finding sets of points that are close together compared with the other data points. Here there are two such sets, i.e. two clusters of data points that lie close together; the two sets are shown in blue and red.

Fig. : Clusters
Why Gaussian Mixture Model cont…
• In the above image there is one more notation: the centroids of the two clusters, which act as the parameters identifying each cluster.
• There are multiple methods or approaches that are used to do the
clustering. They are-
K-means Clustering
Gaussian Mixture Model
Hierarchical Clustering
etc….
Why Gaussian Mixture Model cont…
• K-means is quite a popular clustering algorithm that updates the parameters of each cluster by an iterative approach. Basically, it calculates the centroid (mean) of each cluster and then calculates its distance to each of the data points. This process is repeated until some stopping criterion is fulfilled.
• K-means is a hard clustering method, which means that it associates each point with one and only one cluster. The limitation of this method is that there is no probability telling us how strongly a data point is associated with a cluster.
• This is where the GMM (Gaussian Mixture Model) comes into the picture.
• Let’s recall types of clustering methods:
• Hard clustering: clusters do not overlap (element either belongs to
cluster or it does not) — e.g. K-means, K-Medoid.
• Soft clustering: clusters may overlap (strength of association between
clusters and instances) — e.g. mixture model using Expectation-
Maximization algorithm.
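The hard-clustering loop described above (assign each point to its nearest centroid, then recompute the centroids) can be sketched as follows; the data, k, and fixed iteration count are illustrative assumptions:

```python
import numpy as np

def kmeans(points, k=2, iters=20, seed=0):
    """Minimal K-means sketch. Hard clustering: every point is
    assigned to exactly one cluster, with no association probability."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # distance from every point to every centroid
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)          # nearest centroid per point
        for c in range(k):                 # move centroids to cluster means
            if np.any(assign == c):
                centroids[c] = points[assign == c].mean(axis=0)
    return centroids, assign

pts = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
centroids, assign = kmeans(pts, k=2)
```

Note that `assign` holds a single cluster index per point; a GMM would instead give each point a probability of belonging to each cluster.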
GMM (Gaussian Mixture Model)
• The core idea of this model is that it models the dataset as a mixture of multiple Gaussian distributions.
What is Gaussian Mixture ?
• A Gaussian mixture is a function composed of multiple Gaussians, one for each of the clusters formed.
• Each Gaussian in the mixture carries some parameters:
A mean, which defines the center.
A covariance, which defines the width.
A mixing probability (weight), which defines how large the component is.
Fig. : GMM clusters
• Here it can be seen that there are three clusters, which means three Gaussian functions.
• Each Gaussian explains the data present in one of the clusters.
• Since there are three (k = 3) clusters, the probability density is defined as a linear combination of the densities of all k distributions.
• Because we do not know which of the n sample points belongs to which of the k clusters, the parameters cannot be estimated in closed form.
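The linear combination of k densities mentioned above can be written directly; the weights, means, and variances below are made-up values for illustration (1-D case for simplicity):

```python
import math

def gaussian_pdf(x, mean, var):
    """Density of a 1-D Gaussian with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_pdf(x, weights, means, variances):
    """GMM density: a weighted (linear) combination of k Gaussian
    densities. The weights must sum to 1."""
    return sum(w * gaussian_pdf(x, m, v)
               for w, m, v in zip(weights, means, variances))

# Hypothetical 3-component mixture (k = 3), mirroring the slide.
weights = [0.5, 0.3, 0.2]
means = [0.0, 4.0, 8.0]
variances = [1.0, 1.0, 2.0]
p = mixture_pdf(0.0, weights, means, variances)
```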
• Now question is that how will you find out the missing or hidden data
points? 

• So the answer is the Expectation-Maximization(EM) algorithm.


Expectation-Maximization(EM) Algorithm
• What Expectation-Maximization(EM) algorithm does and how is it
helpful for us?
• It finds the maximum-likelihood estimates for model parameters when the
data is missing or incomplete or has some variables hidden.
• This algorithm chooses some random values for the missing data points and uses them to estimate a new set of parameters. These estimates are then used to fill in the missing points again, and the process repeats until the values converge on the best fit.
• This is how we will get all the data points of the cluster and the clusters will
be formed.
• So we can form the clusters using the Gaussian Mixture Model.
• EM alternates between performing an expectation E-step, which
computes an expectation of the likelihood by including the latent
variables as if they were observed, and a maximization M-step, which
computes the maximum likelihood estimates of the parameters by
maximizing the expected likelihood found on the E-step.
• The parameters found on the M-step are then used to begin another
E-step, and the process is repeated until convergence.
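The E-step/M-step alternation above can be sketched for a 1-D two-component mixture; the min/max initialization and fixed iteration count (instead of a convergence test) are simplifying assumptions:

```python
import math

def em_gmm_1d(data, iters=50):
    """Sketch of EM for a 1-D, two-component Gaussian mixture.
    E-step: compute each point's responsibility (soft assignment) to
    each component. M-step: re-estimate weights, means and variances
    from those responsibilities."""
    k = 2
    means = [min(data), max(data)]        # simple deterministic init
    variances = [1.0] * k
    weights = [1.0 / k] * k

    def pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(iters):
        # E-step: resp[i][j] = P(component j | point i)
        resp = []
        for x in data:
            num = [weights[j] * pdf(x, means[j], variances[j]) for j in range(k)]
            total = sum(num)
            resp.append([n / total for n in num])
        # M-step: maximum-likelihood updates given the responsibilities
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(data)
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            variances[j] = sum(r[j] * (x - means[j]) ** 2
                               for r, x in zip(resp, data)) / nj
            variances[j] = max(variances[j], 1e-6)  # avoid collapse
    return weights, means, variances

# Two well-separated groups of points
data = [0.0, 0.2, -0.1, 0.1, 10.0, 10.2, 9.9, 10.1]
weights, means, variances = em_gmm_1d(data)
```

Each iteration provably does not decrease the likelihood, which matches the guarantee listed under the advantages below, but the result is only a local optimum and depends on initialization.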
Applications of EM Algorithm
• The latent variable model has several real-life applications in Machine
learning:
• Used to estimate the parameters of a Gaussian density function.
• Helpful for filling in missing data in a sample.
• Used for finding the values of latent variables.
• Used in image reconstruction in the field of Medicine and Structural
Engineering.
• It finds plenty of use in different domains such as Natural Language
Processing (NLP), Computer Vision, etc.
• Used for estimating the parameters of the Hidden Markov Model (HMM) and
also for some other mixed models like Gaussian Mixture Models, etc.
Advantages of EM algorithm

• The two basic steps of the EM algorithm, i.e., the E-step and the M-step, are often quite easy to implement for many machine learning problems.
• The solution to the M-step often exists in closed form.
• It is always guaranteed that the value of the likelihood will increase after each iteration.
Disadvantages of EM algorithm

• It has slow convergence.
• It is sensitive to the starting point and converges only to a local optimum.
• It cannot discover K (the likelihood keeps growing with the number of clusters).
• It takes both forward and backward probabilities into account. This is in contrast to numerical optimization, which considers only forward probabilities.