

Image Segmentation using Gaussian Mixture
Models
Rahman Farnoosh, Gholamhossein Yari and Behnam Zarpak
Department of Applied Mathematics, University of Science and Technology, 16844, Narmak, Tehran, Iran

Abstract. Recently, stochastic models such as mixture models, graphical models, Markov random
fields and hidden Markov models have played a key role in probabilistic data analysis. Image
segmentation means dividing a picture into different types of classes or regions; for example, a
picture of geometric shapes has classes with different colors such as 'circle', 'rectangle', 'triangle'
and so on. We can therefore suppose that each class has a normal distribution with a specific mean
and variance, so that, in general, a picture can be modeled as a Gaussian mixture. In this paper, we fit
a Gaussian mixture model to the pixels of an image treated as training data, and the parameters of the
model are learned by the EM algorithm. Meanwhile, pixel labeling corresponding to each pixel of the
true image is done by Bayes' rule. This hidden, or labeled, image is constructed while the EM algorithm
runs. In fact, we introduce a new numerical method for finding the maximum a posteriori estimate using
the EM algorithm and Gaussian mixture models, which we call the EM-MAP algorithm.
In this algorithm, we construct a sequence of priors and posteriors that converges to a posterior
probability called the reference posterior probability. The maximum a posteriori estimate can then be
determined from this reference posterior probability, which yields the labeled image. This labeled
image shows our segmented image with reduced noise. The method is demonstrated in several experiments.
Keywords: Bayesian Rule, Gaussian Mixture Model (GMM), Maximum a Posteriori (MAP),
Expectation-Maximization (EM) Algorithm, Reference Analysis.
PACS: 29.85.+c, 87.58.Mj, 07.05.Tp

INTRODUCTION
Automatic image processing by machine involves image segmentation (i.e. dividing an image into
different types of regions or classes), object recognition, edge detection, and so on. All of these
tasks rely on first segmenting the picture, so image segmentation is one of the most important image
problems. In addition, noise removal and noise reduction have always been important classical image
problems. In this paper, we address both segmentation and noise reduction with a probabilistic approach.
There are many mathematical and statistical methods for image problems, but this paper focuses on the
GMM as a general Gaussian distribution, the EM algorithm and Bayes' rule. The Bayesian framework
usually presents many difficulties because the posterior probability has a complex form, so Markov
Chain Monte Carlo algorithms or variational methods, with high computational cost, must be used to
find the MAP estimate. These methods have worked well over the last decades [1, 5].
In this paper, a new numerical EM-MAP algorithm based on Bayes' rule is constructed. We apply
Bernardo's theory of reference analysis [2] in practice to image segmentation: in reference analysis,
a sequence of priors and posteriors is made that converges to a posterior probability called the
reference posterior probability. We use this idea and modify the EM algorithm for our image
segmentation. After finding the reference posterior probability, MAP estimation and pixel labeling
can easily produce the segmented image.
This paper is organized as follows. It first reviews the GMM and its properties. Then we introduce
the EM-MAP algorithm for learning the parameters of a given image used as training data. Choosing the
initial values of the EM algorithm is discussed in the next section; the EM-MAP algorithm cannot
converge without suitable starting points, and this initial information is obtained from the histogram
of the image. Finally, we show some experiments with simulated images.

GAUSSIAN MIXTURE MODELS


An image is a matrix in which each element is a pixel. The value of a pixel is a number that
represents the intensity or color of the image. Let X be a random variable that takes these values.
To determine a probability model, we can suppose a mixture of Gaussian distributions of the
following form:
f(x) = \sum_{i=1}^{k} p_i \, N(x \mid \mu_i, \sigma_i^2)   (1)

where k is the number of regions and p_i > 0 are weights such that \sum_{i=1}^{k} p_i = 1, and

N(x \mid \mu_i, \sigma_i^2) = \frac{1}{\sigma_i \sqrt{2\pi}} \exp\left( -\frac{(x - \mu_i)^2}{2\sigma_i^2} \right)   (2)

where \mu_i and \sigma_i are the mean and standard deviation of class i. For a given image X, the
lattice data are the pixel values and the GMM is our pixel-based model. The parameters are
\theta = (p_1, \ldots, p_k, \mu_1, \ldots, \mu_k, \sigma_1^2, \ldots, \sigma_k^2), and we can guess
the number of regions in the GMM from the histogram of the lattice data. This is shown in the experiments.
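The mixture density in equations (1) and (2) is straightforward to evaluate numerically. The following is a minimal sketch, not taken from the paper; the function names and the convention that the sigma2 argument holds variances are illustrative assumptions.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma2):
    """Normal density N(x | mu, sigma^2), as in equation (2)."""
    return np.exp(-(x - mu) ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)

def gmm_pdf(x, p, mu, sigma2):
    """Mixture density f(x) = sum_i p_i N(x | mu_i, sigma_i^2), as in equation (1).

    x may be a scalar or an array of pixel intensities; p, mu, sigma2 are
    length-k sequences of weights, means, and variances.
    """
    x = np.asarray(x, dtype=float)
    return sum(p[i] * gaussian_pdf(x, mu[i], sigma2[i]) for i in range(len(p)))
```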

EM-MAP ALGORITHM
A popular EM algorithm for GMMs appears in several papers [3, 6]. We modify it into the following
algorithm, which we call the EM-MAP algorithm.
1. Input: the observed image as a vector x_j, j = 1, 2, \ldots, n, and the label set i = 1, 2, \ldots, k.
2. Initialize: \theta^{(0)} = (p_1^{(0)}, \ldots, p_k^{(0)}, \mu_1^{(0)}, \ldots, \mu_k^{(0)}, \sigma_1^{2(0)}, \ldots, \sigma_k^{2(0)}).
3. (E-step)

   p_{ij}^{(r+1)} = P^{(r+1)}(i \mid x_j) = \frac{p_i^{(r)} N(x_j \mid \mu_i^{(r)}, \sigma_i^{2(r)})}{f(x_j)}   (3)

4. (M-step)

   \hat{p}_i^{(r+1)} = \frac{1}{n} \sum_{j=1}^{n} p_{ij}^{(r+1)}   (4)

   \hat{\mu}_i^{(r+1)} = \frac{\sum_{j=1}^{n} p_{ij}^{(r+1)} x_j}{n \hat{p}_i^{(r+1)}}   (5)

   \hat{\sigma}_i^{2(r+1)} = \frac{\sum_{j=1}^{n} p_{ij}^{(r+1)} (x_j - \hat{\mu}_i^{(r+1)})^2}{n \hat{p}_i^{(r+1)}}   (6)

5. Iterate steps 3 and 4 until a specified error is reached, i.e. \sum_i e_i^2 < \varepsilon.
6. Compute

   l_j = \arg\max_i \, p_{ij}^{(\mathrm{final})}, \quad j = 1, 2, \ldots, n   (7)

7. Construct the labeled image corresponding to each pixel of the true image.
This EM-MAP algorithm is a pixel-labeling method in which the labeled image shows each segment or
object by a different type of label. Note that formula (3) is Bayes' rule: p_i^{(r)} is the discrete
prior probability at stage r and p_{ij}^{(r+1)} is the discrete posterior probability at the next stage.
In this algorithm, we make a sequence of priors and then posteriors until convergence is reached. The
labeled image is chosen by MAP from the final posterior.
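A compact NumPy sketch of steps 1-7 is given below, assuming the pixel values have been flattened into a one-dimensional vector. The function name em_map, the tolerance eps and the max_iter cap are assumptions for illustration and not part of the paper.

```python
import numpy as np

def em_map(x, p, mu, sigma2, eps=1e-4, max_iter=100):
    """EM-MAP sketch for a 1-D pixel vector x (length n) and k classes.

    p, mu, sigma2 are the initial weights, means, and variances (length k).
    Returns the final parameters and the per-pixel labels (argmax of the
    final posterior), following steps 1-7 of the algorithm above.
    """
    x = np.asarray(x, dtype=float)
    p, mu, sigma2 = (np.asarray(a, dtype=float).copy() for a in (p, mu, sigma2))
    for _ in range(max_iter):
        # E-step, equation (3): posterior P(i | x_j) by Bayes' rule.
        dens = np.exp(-(x[None, :] - mu[:, None]) ** 2 / (2.0 * sigma2[:, None])) \
               / np.sqrt(2.0 * np.pi * sigma2[:, None])          # shape (k, n)
        num = p[:, None] * dens
        post = num / num.sum(axis=0, keepdims=True)              # p_{ij}^{(r+1)}

        # M-step, equations (4)-(6): update weights, means, variances.
        p_new = post.mean(axis=1)
        mu_new = post.dot(x) / post.sum(axis=1)
        sigma2_new = (post * (x[None, :] - mu_new[:, None]) ** 2).sum(axis=1) / post.sum(axis=1)

        # Step 5: stop when the squared parameter change is small.
        err = ((p_new - p) ** 2).sum() + ((mu_new - mu) ** 2).sum() + ((sigma2_new - sigma2) ** 2).sum()
        p, mu, sigma2 = p_new, mu_new, sigma2_new
        if err < eps:
            break

    # Steps 6-7: MAP pixel labeling from the final posterior, equation (7).
    labels = post.argmax(axis=0)
    return p, mu, sigma2, labels
```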

CHOOSING A PRIOR WITH MAXIMUM ENTROPY PROPERTY


In the EM-MAP algorithm, there are some difficulties. How can we choose?
• the number of classes
• the weights
• the means
• the variances

A practical way to guess the prior of the parameters is to draw the histogram of the observed image.
Not only does the image histogram give us the four parameters above for use as initial values in the
EM algorithm, but this extracted information usually also has maximum entropy. This claim is
illustrated in the experiments. Moreover, the final posterior probability reaches a stable entropy.
We also compute the number of misclassifications in the results, which shows how well our algorithm
performs.
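As a rough sketch of this histogram heuristic (the exact peak-picking rule is an assumption; the paper reads the values off the histogram by hand), one might use the k tallest histogram bins, together with the Shannon entropy used to monitor how the weights stabilize over the iterations.

```python
import numpy as np

def init_from_histogram(x, k, bins=256):
    """Heuristic initial values from the image histogram (an assumption, not
    the paper's exact recipe): the k tallest histogram bins give the initial
    means, their relative masses give the weights, and one broad variance is
    shared by all classes."""
    x = np.asarray(x, dtype=float)
    counts, edges = np.histogram(x, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    peaks = np.argsort(counts)[-k:]                    # indices of the k tallest bins
    order = np.argsort(centers[peaks])
    mu0 = centers[peaks][order]
    p0 = (counts[peaks] / counts[peaks].sum())[order]
    sigma2_0 = np.full(k, ((x.max() - x.min()) / (4.0 * k)) ** 2)
    return p0, mu0, sigma2_0

def entropy(p):
    """Shannon entropy of a discrete probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```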

EXPERIMENTS
In the first example, we draw three boxes in an image and add white noise to it. The observed image
and its histogram are shown in Figure 1.
FIGURE 1. a) The observed image of boxes; b) Histogram of the observed image.

FIGURE 2. a) Labeled image with reduced noise; b) Entropy curve at each iteration.

The information extracted from the histogram is k = 4, p^{(0)} = (0.0320, 0.1344, 0.0576, 0.7760) as
the empirical probability, with entropy 0.7411, \mu^{(0)} = (40, 75, 210, 220) and
\sigma^{(0)} = (100, 100, 100, 100). The stopping time occurs when the L2 norm of the absolute error
becomes very small. After running EM-MAP, the algorithm took ten iterations; Figure 2 shows the
entropy of each iteration, which approaches a stable (maximum) value. In the segmented image,
blue = 1, cyan = 2, yellow = 3 and red = 4. The misclassification rate is 0.0008; only one pixel is
wrongly labeled red instead of yellow. In this example, pixel labeling and noise removal work well.
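The snippet below is a hypothetical reconstruction of this first experiment, reusing the em_map sketch given earlier. Only the number of classes and the initial values come from the text; the box layout, image size, noise level, and the treatment of the sigma values as variances are assumptions.

```python
import numpy as np

# Hypothetical setup: three boxes on a bright background with added white noise,
# segmented into k = 4 classes using the initial values read off the histogram.
rng = np.random.default_rng(0)
img = np.full((25, 50), 220.0)            # background
img[3:10, 3:15] = 40.0                    # first box
img[3:10, 20:35] = 75.0                   # second box
img[14:22, 10:40] = 210.0                 # third box
img += rng.normal(0.0, 5.0, img.shape)    # white noise (level is an assumption)

p0 = [0.0320, 0.1344, 0.0576, 0.7760]
mu0 = [40.0, 75.0, 210.0, 220.0]
s0 = [100.0] * 4                          # sigma values from the text, taken as variances here
p, mu, s2, labels = em_map(img.ravel(), p0, mu0, s0)
labeled_image = labels.reshape(img.shape)  # segmented image with reduced noise
```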
In the second example, different shapes such as a circle, a triangle, a rectangle, etc. are
considered. The observed image and its histogram are shown in Figure 3. To apply EM-MAP, we need to
get the initial values from the histogram of this image.
FIGURE 3. a) The observed image of the circle; b) Histogram of the observed image.

FIGURE 4. a) Labeled image with reduced noise; b) Entropy curve at each iteration.

We choose k = 3, p^{(0)} = (0.08, 0.34, 0.58) as the empirical probability (relative frequency),
\mu^{(0)} = (38, 63, 88) and \sigma^{(0)} = (12, 12, 12), with an error norm of less than 0.01.
In Figure 4, we obtain 3 classes: blue = 1, green = 2 and red = 3. There are only 25 misclassified
pixels, a rate of 0.005. If, in the EM-MAP algorithm, we compute the entropy of the posterior
probability at each iteration, this entropy decreases until it reaches a stable form.
The third example is more complex: the number of components is large and, in addition, there is
dependent noise in the image, which makes noise reduction more difficult. The true image and the
observed image are shown in Figure 5. Again, the initial information can be extracted by drawing the
histogram of the observed image; it gives the following values:
FIGURE 5. a) The true image of the partition; b) The observed image of the partition.

FIGURE 6. a) Histogram of the image; b) The segmented image of the partition.

• k = 10
• p = (0.0977, 0.1377, 0.0625, 0.0693, 0.0361, 0.0361, 0.1182, 0.1904, 0.1572, 0.0947)
• \mu = (1, 2, 2.5, 3.5, 4.5, 5.5, 6, 7, 7.5, 8.5)
• \sigma = (0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5)

The results after 20 iterations are shown in Figure 6. Figure 7 shows the results after 50 iterations,
where the entropy has reached a stable state.
FIGURE 7. a) Labeled image with reduced noise; b) Entropy curve at each iteration.

CONCLUSIONS
In this paper, we construct a new numerical EM-GMM-MAP algorithm for image segmentation and noise
reduction. The paper uses Bernardo's idea about the sequence of priors and posteriors in reference
analysis. We take the known EM-GMM algorithm and add a numerical MAP estimation. We also suggest
obtaining the initial values from the histogram of the image, which leads to convergence of the EM-MAP
method. After convergence of our algorithm, the entropy becomes stable. The EM algorithm is a
first-order iterative algorithm [3], so convergence is slow. We used convergence acceleration, such as
Steffensen's algorithm, to obtain second-order convergence, but we later noted that in the EM-MAP
method the number of classes reduces to the real classes of the image. Finally, since the EM algorithm
is a linear iterative method, our method is suitable for simple images. It is important to note that
"for segmentation of real images, the results depend critically on the features and feature models
used" [4], which is not the focus of this paper.

ACKNOWLEDGMENTS
We thank Prof. Mohammad-Djafari for his excellent ideas.

REFERENCES
1. C. Andrieu, N. de Freitas, A. Doucet, M. I. Jordan, An Introduction to MCMC for Machine Learning,
Machine Learning, 2003, 50, pp. 5-43.
2. J. M. Bernardo and A. F. M. Smith, Bayesian Theory, John Wiley & Sons, 2000.
3. L. Xu, M. I. Jordan, On Convergence Properties of the EM Algorithm for Gaussian Mixtures, Neural
Computation, 8, 1996, pp. 129-151.
4. M. A. T. Figueiredo, Bayesian Image Segmentation Using Gaussian Field Priors, EMMCVPR 2005,
LNCS 3757, pp. 74-89.
5. M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, L. K. Saul, An Introduction to Variational Methods for
Graphical Models, Machine Learning, 1999, 37, pp. 183-233.
6. R. Farnoosh, B. Zarpak, Image Restoration with Gaussian Mixture Models, WSEAS Trans. on
Mathematics, 2004, 4, 3, pp. 773-777.
