Joint NDT Image Restoration and Segmentation Using Gauss–Markov–Potts Prior Models and Variational Bayesian Computation
Abstract—In this paper, we propose a method to simultaneously restore and segment piecewise homogeneous images degraded by a known point spread function (PSF) and additive noise. For this purpose, we propose a family of nonhomogeneous Gauss–Markov fields with Potts region labels as a model for the images, to be used in a Bayesian estimation framework. The joint posterior law of all the unknowns (the unknown image, its segmentation (hidden variable), and all the hyperparameters) is approximated by a separable probability law via the variational Bayes technique. This approximation makes it possible to obtain a practically implementable joint restoration and segmentation algorithm. We present some preliminary results and a comparison with an MCMC Gibbs-sampling-based algorithm. We may note that the prior models proposed in this work are particularly appropriate for images of scenes or objects that are composed of a finite set of homogeneous materials. This is the case for many images obtained in nondestructive testing (NDT) applications.

Index Terms—Bayesian estimation, image restoration, segmentation, variational Bayes approximation.

I. INTRODUCTION

$$g(r) = \int h(r - r')\, f(r')\, \mathrm{d}r' + \epsilon(r), \quad r \in \mathcal{R} \qquad (1)$$

where $g$ is the observed image, $h$ is a known point spread function, $f$ is the unknown image, and $\epsilon$ is the measurement error; equivalently, $g = Hf + \epsilon$, where $g$, $f$, and $\epsilon$ are vectors containing samples of $g(r)$, $f(r)$, and $\epsilon(r)$, respectively, and $H$ is a huge matrix whose elements are determined using the samples of $h$. $\mathcal{R}$ is the whole space of the image surface.

In a Bayesian framework for such an inverse problem, one starts by writing the expression of the posterior law

$$p(f \mid g, \theta, \mathcal{M}) = \frac{p(g \mid f, \theta, \mathcal{M})\, p(f \mid \theta, \mathcal{M})}{p(g \mid \theta, \mathcal{M})} \qquad (2)$$

where $\theta$ denotes the hyperparameters; $p(g \mid f, \theta, \mathcal{M})$, called the likelihood, is obtained using the forward model (1) and the assigned probability law of the errors; $p(f \mid \theta, \mathcal{M})$ is the assigned prior law for the unknown image; and

$$p(g \mid \theta, \mathcal{M}) = \int p(g \mid f, \theta, \mathcal{M})\, p(f \mid \theta, \mathcal{M})\, \mathrm{d}f \qquad (3)$$

is the evidence of the model $\mathcal{M}$. Assigning Gaussian priors

$$p(g \mid f) = \mathcal{N}\big(Hf, \sigma_\epsilon^2 I\big) \qquad (4a)$$

$$p(f) = \mathcal{N}\big(f_0, \Sigma_f\big) \qquad (4b)$$

it is easy to show that the posterior law is also Gaussian,

$$p(f \mid g) = \mathcal{N}\big(\hat{f}, \hat{\Sigma}\big) \qquad (5a)$$

with

$$\hat{\Sigma} = \big(H^t H / \sigma_\epsilon^2 + \Sigma_f^{-1}\big)^{-1}, \qquad \hat{f} = f_0 + \hat{\Sigma}\, H^t (g - H f_0)/\sigma_\epsilon^2 \qquad \text{(5b), (5c)}$$

which can also be obtained as the solution that minimizes

$$J(f) = \|g - Hf\|^2/\sigma_\epsilon^2 + (f - f_0)^t\, \Sigma_f^{-1}\, (f - f_0) \qquad (6)$$

where we can see the link with the classical regularization theory [1].

For more general cases, using the MAP estimate

$$\hat{f} = \arg\max_f\; p(f \mid g, \theta) \qquad (7)$$

we have

$$\hat{f} = \arg\min_f\; \big\{ J(f) = \|g - Hf\|^2 + \lambda\, \Omega(f) \big\} \qquad (8)$$

where $\lambda$ is a regularization parameter and $\Omega(f)$ is a penalty term derived from the prior. Two families of priors could be distinguished:

Manuscript received February 27, 2009; revised October 12, 2009; accepted March 09, 2010. First published April 08, 2010; current version published August 18, 2010. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Eero P. Simoncelli.
The authors are with the Laboratoire des Signaux et Systèmes, Unité mixte de recherche 8506 (Univ Paris-Sud — CNRS — SUPELEC), Supélec, Plateau de Moulon, 91192 Gif-sur-Yvette, France (e-mail: [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIP.2010.2047902
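For the quadratic case of (6)–(8) with a circulant (periodic) blur, the minimizer has a closed form in the Fourier domain. The following is a minimal illustrative sketch only, not the paper's algorithm; the function name `map_restore`, the box PSF, and the value of the regularization parameter are our assumptions for the example.

```python
import numpy as np

def map_restore(g, h, lam):
    """Tikhonov/MAP restoration for g = h * f + noise with a circulant
    blur: minimizes ||g - H f||^2 + lam ||f||^2 in the Fourier domain."""
    H = np.fft.fft2(h, s=g.shape)          # transfer function of the PSF
    G = np.fft.fft2(g)
    F = np.conj(H) * G / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(F))

# toy usage: blur a flat image with a 3x3 box PSF and restore it
f = np.ones((16, 16))
psf = np.ones((3, 3)) / 9.0
g = np.real(np.fft.ifft2(np.fft.fft2(psf, s=f.shape) * np.fft.fft2(f)))
f_hat = map_restore(g, psf, lam=1e-3)
```

As lam → 0 this tends to inverse filtering; larger lam trades fidelity for stability, which is the regularization link mentioned above.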
Fig. 1. Proposed a priori model for the images: the image pixels f(r) are assumed to be classified in K classes, and z(r) represents those classes (segmentation). In the MIG prior (a), we assume the image pixels in each class to be independent, while in the MGM prior (b), the image pixels are considered dependent. In both cases, the hidden field values follow a Potts model.
Fig. 2. Hierarchical prior model where our variable of interest f can be modeled by a mixture of Gaussians (MIG) or a mixture of Gauss–Markov (MGM) with mean values m and variances v. The hidden field z prior follows an external field Potts model with α and γ as hyperparameters. Meanwhile, the error prior is supposed Gaussian with unknown variance. Conjugate priors were chosen for m, v, the error variance, and α, while γ is chosen as a fixed value.

class and the elements of the hidden variables z. In this paper, we assign a Potts model for the hidden field in order to obtain more homogeneous classes in the image. Meanwhile, we present two models for the image pixels: the first is independent, while the second is a Gauss–Markov model. In the following, we give the prior probability of the image pixels and the hidden field elements for the two models.
$$p(z \mid \alpha, \gamma) \propto \exp\Big\{ \sum_{r \in \mathcal{R}} \alpha_{z(r)} + \gamma \sum_{r \sim r'} \delta\big(z(r) - z(r')\big) \Big\} \qquad (17)$$

where $\alpha_k$ is the energy of the singleton cliques and $\gamma$ is the Potts constant. The hyperparameters of the model are the class means $m_k$, the variances $v_k$, and finally the singleton clique energies $\alpha_k$. The graphical model of the observation generation mechanism assumed here is given in Fig. 2.

B. Mixture of Gauss–Markovs (MGM)

In the MIG model, the pixels of the image in different regions are assumed independent. Furthermore, all the pixels inside a region are also assumed conditionally independent. Here, we relax this last assumption by considering the pixels in a region Markovian with the four nearest neighbors.

III. BAYESIAN RECONSTRUCTION AND SEGMENTATION
To estimate jointly the unknown image and its hidden field, we use the joint posterior law

$$p(f, z \mid \theta, g) \propto p(g \mid f, \theta)\, p(f \mid z, \theta)\, p(z \mid \theta) \qquad (18)$$

This requires the knowledge of the priors $p(f \mid z, \theta)$ and $p(z \mid \theta)$, which we have already provided in the previous section, and the model likelihood $p(g \mid f, \theta)$, which depends on the error model. Classically, it is chosen as a zero-mean Gaussian with variance $\sigma_\epsilon^2$, which is given by

$$p(g \mid f, \sigma_\epsilon^2) = \mathcal{N}\big(Hf, \sigma_\epsilon^2 I\big) \qquad (19)$$

In fact, the previous calculation assumes that the hyperparameter values are known, which is not the case in many practical applications. Consequently, these parameters have to be estimated jointly with the unknown image. This is possible using the Bayesian framework: we need to assign a prior model for each hyperparameter and write the joint posterior law

$$p(f, z, \theta \mid g) \propto p(g \mid f, \theta)\, p(f \mid z, \theta)\, p(z \mid \theta)\, p(\theta) \qquad (20)$$

where $\theta$ groups all the unknown hyperparameters, which are the means $m$, the variances $v$, the singleton energies $\alpha$, and the error inverse variance $\tau_\epsilon$. Meanwhile, the Potts constant $\gamma$ is chosen to be fixed due to the difficulty of finding a conjugate prior for it. We choose an Inverse Gamma prior for the error variance, a Gaussian for the means $m$, an Inverse Gamma for the variances $v$, and finally a Dirichlet for $\alpha$:

(21a)–(21d)

where the parameters of these priors are fixed for a given problem. The previous choice of conjugate priors is very helpful for the calculation that follows in the next section.

IV. BAYESIAN COMPUTATION

In the previous section, we found the necessary ingredients to obtain the expression of the joint posterior law. However, neither the joint maximum posterior (JMAP) estimate nor the posterior means (PM) can be obtained in an analytical form. Therefore, we explore two approaches to solve this problem. The first is the Monte Carlo technique and the second is the variational Bayes approximation.

A. Numerical Exploration and Integration via Monte Carlo Techniques

This method solves the previous problem by generating a great number of samples representing the posterior law and then calculating the desired estimators numerically from these samples. The main difficulty comes from the generation of these samples. Markov chain Monte Carlo (MCMC) samplers are generally used in this domain, and they are of great interest because they explore the entire space of the probability density. The major drawback of this nonparametric approach is the computational cost: a great number of iterations is needed to reach convergence, and many samples are required to obtain good estimates of the parameters.

To apply this method to our problem, we use a Gibbs sampler. The basic idea in this approach is to generate samples from the posterior law (20) using the following general algorithm:

$$f \sim p(f \mid z, \theta, g) \qquad (22a)$$

$$z \sim p(z \mid f, \theta, g) \qquad (22b)$$

and

$$\theta \sim p(\theta \mid f, z, g) \qquad (22c)$$

We have the expressions for all the necessary probability laws on the right-hand side of the three conditional laws mentioned previously, so we are able to sample from them. Indeed, it is easy to show that the first one is a Gaussian, which is easy to handle. The second is a Potts field, for which many fast methods exist to generate samples [31]. The last one is also separable in its components, and due to the conjugacy property, it is easy to see that the posterior laws are either Inverse Gamma, Inverse Wishart, Gaussian, or Dirichlet, for which there are standard sampling schemes [28], [32].

B. Variational or Separable Approximation Techniques

One of the main difficulties in obtaining an analytical expression for the estimator is the posterior dependence between the unknown parameters. For this reason, we propose, for this method, a separable form of the joint posterior law, and then we try to find the closest posterior to the original posterior under this constraint. The idea of approximating a joint probability law by a separable law is not new [33]–[36]. The selection of parametric families for which the computations can be easily done has been addressed recently for data mining and classification problems [37]–[44]. However, their use in Bayesian computations for inverse problems in general, and for image restoration in particular, using this class of prior models, is one of the contributions of this work.

We consider the problem of approximating a joint pdf $p(f, z, \theta \mid g)$ by a separable one $q(f, z, \theta) = q_1(f)\, q_2(z)\, q_3(\theta)$. The first
Authorized licensed use limited to: to IEEExplore provided by University Libraries | Virginia Tech. Downloaded on February 21,2025 at 02:05:09 UTC from IEEE Xplore. Restrictions apply.
AYASSO AND MOHAMMAD-DJAFARI: JOINT NDT IMAGE RESTORATION AND SEGMENTATION 2269
step for this approximation is to choose a criterion. A natural criterion is the Kullback–Leibler divergence

$$\mathrm{KL}(q : p) = \int q \ln \frac{q}{p} = \Big\langle \ln \frac{q}{p} \Big\rangle_q \qquad (23)$$

where $\langle \cdot \rangle_q$ is the expectation w.r.t. $q$. So, the main mathematical problem is finding the $q$ which minimizes $\mathrm{KL}(q : p)$. We first make two points:
1) the optimal solution without any constraint is the trivial solution $q = p$;
2) the optimal solution with the constraint $\langle \ln p \rangle_q = c$, where $c$ is a given constant, is the one which maximizes the entropy and is obtained using the properties of the exponential family. This functional optimization problem can be solved, and the general solution is

$$q_j \propto \exp\Big( \big\langle \ln p(f, z, \theta \mid g) \big\rangle_{\prod_{i \neq j} q_i} \Big) \qquad (24)$$

up to the normalizing factors [16], [45].

However, we may note, first, that the expression of each factor depends on the expressions of the others; thus, this computation can be done only in an iterative way. The second point is that in order to compute these solutions we must compute the involved expectations. The only families for which these computations are easily done are the conjugate exponential families. At this point, we see the importance of our choice of priors in the previous section.

The first step is to choose a separable form that is appropriate for our problem. In fact, there is no rule for choosing the appropriate separation; nevertheless, this choice must conserve the strong dependences between variables and break the weak ones, keeping in mind the computational complexity of the posterior law. In this work, we propose a strongly separated posterior, where only the dependence between the image pixels and the hidden field is conserved. This posterior is given by (25) and (26a)–(26g), where the shaping parameters of these laws are mutually dependent. So, an iterative method should be applied to obtain the optimal values. In the following, we give the expression of each shaping parameter at one iteration as a function of the previous iteration. We can unify both priors by means of a contour variable, which is set to 1 in the MIG prior and to the value defined in (16) in the MGM case.

We start with the conditional posterior of the image in (26a), whose mean and variance are given by relations (27a)–(27h), which distinguish the MIG and MGM cases; the remaining expressions are given later in (29). The expressions for the posterior law of the hidden field in (26c) are given by relations (28b) and (28c).
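To illustrate how such mutually dependent shaping parameters are updated iteratively, here is a minimal mean-field sketch for a much simpler conjugate model (a scalar Gaussian with unknown mean and precision under Normal and Gamma priors). This toy model and all its variable names are our assumptions for the example, not the paper's image model.

```python
import numpy as np

def vb_gaussian(x, mu0=0.0, lam0=1.0, a0=1.0, b0=1.0, n_iter=50):
    """Mean-field VB for x_i ~ N(mu, 1/tau) with priors
    mu ~ N(mu0, 1/(lam0*tau)) and tau ~ Gamma(a0, b0).
    q(mu, tau) = q(mu) q(tau); each factor's shaping parameters
    depend on expectations under the other, so we iterate."""
    N, xbar, sx2 = len(x), np.mean(x), np.sum(x ** 2)
    a_N = a0 + (N + 1) / 2.0          # fixed across iterations
    b_N = b0                          # initial guess
    for _ in range(n_iter):
        e_tau = a_N / b_N             # <tau> under q(tau)
        mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
        lam_N = (lam0 + N) * e_tau
        e_mu, e_mu2 = mu_N, mu_N ** 2 + 1.0 / lam_N
        b_N = b0 + 0.5 * (sx2 - 2 * e_mu * N * xbar + N * e_mu2
                          + lam0 * (e_mu2 - 2 * mu0 * e_mu + mu0 ** 2))
    return mu_N, lam_N, a_N, b_N

rng = np.random.default_rng(0)
x = rng.normal(2.0, 0.5, size=500)
mu_N, lam_N, a_N, b_N = vb_gaussian(x)
```

The fixed-point structure (each update uses expectations from the other factor) mirrors the shaping-parameter iteration described above, only in a scalar setting.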
2270 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 9, SEPTEMBER 2010
TABLE I
HYPER-HYPERPARAMETERS VALUES

TABLE II
MODEL GENERATED IMAGE PROPERTIES

Fig. 3. Restoration results from the MIG model: (a) original image, (b) distorted image, (c) original segmentation, (d) MIG segmentation, (e) VB MIG reconstruction, and (f) VB MGM reconstruction.

The remaining shaping parameters are given by (29a)–(29g).
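The label-sampling step (22b) of the Gibbs sampler in Section IV-A can be sketched for a plain Potts field. This is an illustrative sketch only: it assumes a 4-neighbor interaction and omits the data (likelihood) term, so it is not the paper's exact sampler.

```python
import numpy as np

def gibbs_potts_sweep(z, K, gamma, rng):
    """One single-site Gibbs sweep over a K-class Potts field:
    p(z_r = k | neighbors) ∝ exp(gamma * #{4-neighbors equal to k})."""
    H, W = z.shape
    for i in range(H):
        for j in range(W):
            counts = np.zeros(K)
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    counts[z[ni, nj]] += 1
            p = np.exp(gamma * counts)
            p /= p.sum()
            z[i, j] = rng.choice(K, p=p)
    return z

# toy usage: a few sweeps smooth an initially random label field
rng = np.random.default_rng(1)
z = rng.integers(0, 3, size=(32, 32))
for _ in range(20):
    z = gibbs_potts_sweep(z, K=3, gamma=1.5, rng=rng)
```

With gamma above the ordering threshold, repeated sweeps produce the homogeneous label regions that motivate the Potts prior for segmentation.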
Fig. 4. Restoration results from the MGM model: (a) original image, (b) distorted image, (c) original segmentation, (d) MGM segmentation, (e) VB MIG restoration, and (f) VB MGM restoration.

Several observations can be made from these results. The most important is that the problem of probability-law optimization has turned into a simple parametric computation, which significantly reduces the computational burden. Indeed, even with our choice of a strong separation, the posterior mean-value dependence between image pixels and hidden field elements is still present in the equations, which justifies the use of a spatially dependent prior model with this independent approximated posterior. On the other hand, the iterative nature of the solution requires the choice of a stopping criterion. We have chosen to use the variation of the negative free energy

$$\Delta F = F\big(q^{(t)}\big) - F\big(q^{(t-1)}\big) \qquad (30)$$

to decide the convergence of the variables. This seems natural since the negative free energy can be expressed as the difference between the log-evidence of the model and the Kullback–Leibler divergence

$$F(q) = \ln p(g \mid \mathcal{M}) - \mathrm{KL}\big(q : p(f, z, \theta \mid g)\big) \qquad (31)$$

We can find the expression of the free energy using the shaping parameters calculated previously at almost no extra cost. Furthermore, its value can be used as a criterion for model selection. We present in the next section some restoration results using our method.

V. NUMERICAL EXPERIMENT RESULTS AND DISCUSSION

In this section, we show several restoration results using our method. We start by defining the values of the hyper-hyperparameters that were used during the different experiments. Then, we apply the proposed methods to a synthesized restoration problem for images generated from our model. Afterwards, the method is applied to real images. Finally, we compare the performance of our method to some other ones, and especially to a restoration method based upon the same prior model but with an MCMC estimator.

We choose the values of the hyper-hyperparameters in a way that keeps our priors as noninformative as possible. However, for the Potts parameter, we fixed the value that worked best for us.

A. Model Generated Image Restoration

We start the test of our method by applying it to two simple images generated by our prior models (MIG and MGM). Then, a box triangle convolution kernel is applied and white Gaussian noise is added. The chosen values are given in Table II.
Fig. 5. Comparison between different histograms of the test images: (a) MIG, (b) VB MIG restoration for the MIG image, (c) VB MIG restoration for the MGM image, (d) MGM, (e) VB MGM restoration for the MIG image, and (f) VB MGM restoration for the MGM image.

TABLE III
MIG IMAGE RESULTS SUMMARY

TABLE IV
MGM IMAGE RESULTS SUMMARY

TABLE V
TEXT RESTORATION EXPERIMENT CONDITIONS

We can see from Fig. 3 that our method was able to restore the image with a small error (detailed results are available in Table III). However, we can see that the quality of reconstruction of VB MIG is better than that of VB MGM, as expected, since in the MIG the pixels in the same class are modeled as Gaussian, while in the MGM the Gaussian property is imposed on the derivative (Markovian property). Similar results are found in the case of the MGM model generated image (see Table IV): the VB MGM method has a better restoration performance than the VB MIG method since it is more adaptive.

We can see this property more clearly by comparing the histograms of the images. From Fig. 5, the histograms of the MIG image and of the VB MIG restoration are very similar. The same observation can be made for the MGM image and the VB MGM restored image.

Fig. 6. Text restoration results: (a) original image, (b) distorted image, (c) original segmentation, (d) initial segmentation, (e) VB MIG segmentation, and (f) VB MIG restoration.

B. Testing Against “Real” Images

Herein, we show that our algorithm does not only work for images generated from the prior model, but also works for images resulting from several real applications. We start with a text restoration problem (see Table V). We apply the VB
Fig. 7. Text restoration hyperparameter evolution vs. iteration: (a) error precision, (b) class means (z = 1, 2, 3), (c) precision of the class means (z = 1, 2, 3), (d) class variance parameter b, (e) class variance parameter c, (f) singleton energy parameter (z = 1, 2, 3), (g) negative free energy, and (h) RMS of the error.
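Table VI below compares methods by a normalized L distance; the exact exponent and normalization are garbled in this excerpt, so the following sketch assumes the common convention ||f_hat - f||^2 / ||f||^2.

```python
import numpy as np

def normalized_l2(f_hat, f):
    """Normalized L2 distance ||f_hat - f||^2 / ||f||^2 between a
    restored image f_hat and the ground truth f (one common convention;
    the exact normalization used in the paper is not spelled out here)."""
    f_hat = np.asarray(f_hat, dtype=float)
    f = np.asarray(f, dtype=float)
    return np.sum((f_hat - f) ** 2) / np.sum(f ** 2)

# toy usage: a perfect restoration scores 0
f = np.ones((8, 8))
d_same = normalized_l2(f, f)
d_off = normalized_l2(1.1 * f, f)
```

Lower values indicate restorations closer to the ground truth, which is how such a table would be read.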
TABLE VI
NORMALIZED L DISTANCE FOR DIFFERENT METHODS. THE METHODS ARE: MATCHED FILTER (MF), WIENER FILTER (WF), LEAST SQUARES (LS), MCMC FOR A MIG PRIOR (MCMC), AND FINALLY OUR TWO METHODS, VB MIG (MIG) AND VB MGM (MGM)

Fig. 10. Cameraman restoration: (a) original image, (b) distorted image, (c) VB MIG restoration, and (d) VB MGM restoration.

Fig. 11. Comparison between different restoration methods: (a) matched filter (MF), (b) Wiener filtering (WF), (c) least squares (LS), (d) MCMC MIG, (e) VB MIG, and (f) VB MGM.

A. Gaussian

Let x be a random variable with a Gaussian distribution with mean $\mu$ and variance $v$. Then its distribution is given as

$$\mathcal{N}(x \mid \mu, v) = (2\pi v)^{-1/2} \exp\Big\{-\frac{(x-\mu)^2}{2v}\Big\} \qquad (35)$$

(36)

(37)

and

(38)

APPENDIX B
DERIVATION OF VARIATIONAL BAYES POSTERIORS

We present herein the derivation of the variational Bayes posteriors of our problem. For the sake of simplicity, we omit the iteration number.

(39a)

(39c)

(40b)

with

(40c)

(40d)

and

(40e)

(41a)
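The Gaussian density (35) of Appendix A can be sanity-checked numerically; in this small sketch the grid range and spacing are arbitrary choices of ours.

```python
import numpy as np

def gaussian_pdf(x, mu, v):
    """N(x | mu, v) = (2*pi*v)^(-1/2) exp(-(x - mu)^2 / (2*v))."""
    return np.exp(-((x - mu) ** 2) / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)

# numeric check of total mass and first moment on a fine grid
xs = np.linspace(-10.0, 10.0, 20001)
dx = xs[1] - xs[0]
p = gaussian_pdf(xs, mu=1.0, v=2.0)
mass = p.sum() * dx     # should be ~1
mean = (xs * p).sum() * dx  # should be ~mu
```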
(42a)

(44a)

(45a)

(45b)

(45c)

Summing the last two terms leads us to the expression given in (29a) and (29b), with

(46)

(47a)

(48a)

(48c)

With a similar technique to the one used for the model error posterior, we get the posterior with

(49)

F. Singleton Energy Posteriors

(50a)

(50b)

(50c)
Consequently, we obtain the same expression as the one given in (29g).

ACKNOWLEDGMENT

The authors would like to thank S. Fekih-Salem for her careful reading and revision of the paper.

REFERENCES

[1] A. N. Tikhonov, “Solution of incorrectly formulated problems and the regularization method,” Sov. Math., pp. 1035–1038, 1963.
[2] C. Bouman and K. Sauer, “A generalized Gaussian image model for edge-preserving MAP estimation,” IEEE Trans. Image Process., vol. 2, no. 3, pp. 296–310, Jul. 1993.
[3] P. J. Green, “Bayesian reconstructions from emission tomography data using a modified EM algorithm,” IEEE Trans. Med. Imag., vol. 9, no. 1, pp. 84–93, Mar. 1990.
[4] S. Geman and D. E. McClure, “Bayesian image analysis: Application to single photon emission computed tomography,” Amer. Statist. Assoc., pp. 12–18, 1985.
[5] G. Demoment, “Image reconstruction and restoration: Overview of common estimation structures and problems,” IEEE Trans. Acoust., Speech, Signal Process., vol. 37, no. 12, pp. 2024–2036, Dec. 1989.
[6] A. K. Katsaggelos, “Digital image restoration,” in Proc. Statist. Comput. Sect., Berlin, Germany, 1991, vol. 6, Lecture Notes in Mathematics 1832, pp. 12–18.
[7] A. K. Jain, Fundamentals of Digital Image Processing. Upper Saddle River, NJ: Prentice-Hall, 1989.
[8] S. Geman and D. Geman, Stochastic Relaxation, Gibbs Distributions and the Bayesian Restoration of Images. San Mateo, CA: Morgan Kaufmann, 1987.
[9] F. C. Jeng, J. W. Woods, and B. Morristown, “Compound Gauss–Markov random fields for image estimation,” IEEE Trans. Signal Process., vol. 39, no. 3, pp. 683–697, Mar. 1991.
[10] M. Nikolova, J. Idier, and A. Mohammad-Djafari, “Inversion of large-support ill-posed linear operators using a piecewise,” IEEE Trans. Image Process., vol. 7, no. 4, pp. 571–585, Apr. 1998.
[11] M. Nikolova, “Local strong homogeneity of a regularized estimator,” SIAM J. Appl. Math., vol. 61, no. 2, pp. 633–658, 2000.
[12] M. Nikolova, “Thresholding implied by truncated quadratic regularization,” IEEE Trans. Signal Process., vol. 48, no. 12, pp. 3437–3450, Dec. 2000.
[13] G. Wang, J. Zhang, and G. W. Pan, “Solution of inverse problems in image processing by wavelet expansion,” IEEE Trans. Image Process., vol. 4, no. 5, pp. 579–593, May 1995.
[14] J. M. Bioucas-Dias, “Bayesian wavelet-based image deconvolution: A GEM algorithm exploiting a class of heavy-tailed priors,” IEEE Trans. Image Process., vol. 15, no. 4, pp. 937–951, Apr. 2006.
[15] G. E. Hinton and D. van Camp, “Keeping the neural networks simple by minimizing the description length of the weights,” in Proc. 6th Annu. Conf. Computational Learning Theory, New York, 1993, pp. 5–13, ACM.
[16] D. J. C. MacKay, “Ensemble learning and evidence maximization,” in Proc. NIPS, 1995.
[17] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul, “An introduction to variational methods for graphical models,” Mach. Learn., vol. 37, no. 2, pp. 183–233, 1999.
[18] T. S. Jaakkola and M. I. Jordan, “Bayesian parameter estimation via variational methods,” Statist. Comput., vol. 10, no. 1, pp. 25–37, 2000.
[19] M. Opper and D. Saad, Eds., Advanced Mean Field Methods: Theory and Practice. Cambridge, MA: MIT Press, 2001.
[20] H. Attias, “Independent factor analysis,” Neural Computat., vol. 11, no. 4, pp. 803–851, 1999.
[21] A. C. Likas and N. P. Galatsanos, “A variational approach for Bayesian blind image deconvolution,” IEEE Trans. Signal Process., vol. 52, no. 8, pp. 2222–2233, Aug. 2004.
[22] R. Molina, J. Mateos, and A. K. Katsaggelos, “Blind deconvolution using a variational approach to parameter, image, and blur estimation,” IEEE Trans. Image Process., vol. 15, no. 12, pp. 3715–3727, Dec. 2006.
[23] S. D. Babacan, R. Molina, and A. K. Katsaggelos, “Parameter estimation in TV image restoration using variational distribution approximation,” IEEE Trans. Image Process., vol. 17, no. 3, p. 326, Mar. 2008.
[24] G. Chantas, N. Galatsanos, A. Likas, and M. Saunders, “Variational Bayesian image restoration based on a product of t-distributions image prior,” IEEE Trans. Image Process., vol. 17, no. 10, pp. 1795–1805, Oct. 2008.
[25] S. D. Babacan, R. Molina, and A. K. Katsaggelos, “Variational Bayesian blind deconvolution using a total variation prior,” IEEE Trans. Image Process., vol. 18, no. 1, pp. 12–26, Jan. 2009.
[26] D. G. Tzikas, A. C. Likas, and N. P. Galatsanos, “Variational Bayesian sparse kernel-based blind image deconvolution with Student’s-t priors,” IEEE Trans. Image Process., vol. 18, no. 4, pp. 753–764, Apr. 2009.
[27] H. Snoussi and A. Mohammad-Djafari, “Fast joint separation and segmentation of mixed images,” J. Electron. Imag., vol. 13, p. 349, 2004.
[28] O. Feron, B. Duchene, and A. Mohammad-Djafari, “Microwave imaging of inhomogeneous objects made of a finite number of dielectric and conductive materials from experimental data,” Inv. Prob., vol. 21, no. 6, p. 95, 2005.
[29] F. Humblot, B. Collin, and A. Mohammad-Djafari, “Evaluation and practical issues of subpixel image registration using phase correlation methods,” in Proc. Physics in Signal and Image Processing Conf., 2005, pp. 115–120.
[30] A. Mohammad-Djafari, “2D and 3D super-resolution: A Bayesian approach,” in Proc. AIP Conf., 2007, vol. 949, p. 18.
[31] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, Markov Chain Monte Carlo in Practice. London, U.K.: Chapman & Hall, 1996.
[32] A. Mohammad-Djafari, “Super-resolution: A short review, a new method based on hidden Markov modeling of HR image and future challenges,” Comput. J., 2008, 10.1093/comjnl/bxn005:126-141.
[33] Z. Ghahramani and M. Jordan, “Factorial hidden Markov models,” Mach. Learn., no. 29, pp. 245–273, 1997.
[34] W. Penny and S. Roberts, “Bayesian neural networks for classification: How useful is the evidence framework?,” Neural Netw., vol. 12, pp. 877–892, 1998.
[35] S. Roberts, D. Husmeier, W. Penny, and I. Rezek, “Bayesian approaches to Gaussian mixture modelling,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 11, pp. 1133–1142, Nov. 1998.
[36] W. Penny and S. Roberts, “Dynamic models for nonstationary signal segmentation,” Comput. Biomed. Res., vol. 32, no. 6, pp. 483–502, 1999.
[37] W. Penny and S. Roberts, “Bayesian multivariate autoregressive models with structured priors,” Proc. IEE Vis., Image Signal Process., vol. 149, no. 1, pp. 33–41, 2002.
[38] S. Roberts and W. Penny, “Variational Bayes for generalised autoregressive models,” IEEE Trans. Signal Process., vol. 50, no. 9, pp. 2245–2257, Sep. 2002.
[39] W. Penny and K. Friston, “Mixtures of general linear models for functional neuroimaging,” IEEE Trans. Med. Imag., vol. 22, no. 4, pp. 504–514, Apr. 2003.
[40] R. A. Choudrey and S. J. Roberts, “Variational mixture of Bayesian independent component analysers,” Neural Computat., vol. 15, no. 1, 2003.
[41] W. Penny, S. Kiebel, and K. Friston, “Variational Bayesian inference for fMRI time series,” NeuroImage, vol. 19, no. 3, pp. 727–741, 2003.
[42] N. Nasios and A. Bors, “A variational approach for Bayesian blind image deconvolution,” IEEE Trans. Signal Process., vol. 52, no. 8, pp. 2222–2233, Aug. 2004.
[43] N. Nasios and A. Bors, “Variational learning for Gaussian mixture models,” IEEE Trans. Syst., Man Cybern. B, Cybern., vol. 36, no. 4, pp. 849–862, Aug. 2006.
[44] K. Friston, J. Mattout, N. Trujillo-Barreto, J. Ashburner, and W. Penny, “Variational free energy and the Laplace approximation,” NeuroImage, 2006 (2006.08.035), available online.
[45] R. A. Choudrey, “Variational Methods for Bayesian Independent Component Analysis,” Ph.D. dissertation, Univ. Oxford, Oxford, U.K., 2002.
Hacheme Ayasso (S’08) was born in Syria in 1980. He received the engineer’s degree in electronic systems from the Higher Institute of Applied Science and Technology (ISSAT), Damascus, Syria, in 2002, and the M.S. degree in signal and image processing from the University of Paris-Sud 11, Orsay, France, in 2007. He is currently working toward the Ph.D. degree at Paris-Sud 11 University in the inverse problems group (GPI) and the Department of Electromagnetics (DRE), part of the Laboratory of Signals and Systems (L2S), Gif-sur-Yvette, France.
He was a research assistant in the electronic measurements group at ISSAT from 2003 to 2006, where he worked on non-destructive testing techniques. His research interests include the application of Bayesian inference techniques to inverse problems, X-ray, and microwave tomographic reconstruction.

Ali Mohammad-Djafari (M’02) was born in Iran. He received the B.Sc. degree in electrical engineering from Polytechnique of Teheran in 1975, the diploma degree (M.Sc.) from Ecole Supérieure d’Electricité (SUPELEC), Gif-sur-Yvette, France, in 1977, and the “Docteur-Ingénieur” (Ph.D.) degree and the “Doctorat d’Etat” in physics from the Université Paris Sud 11 (UPS), Orsay, France, in 1981 and 1987, respectively.
He was an Associate Professor at UPS for two years (1981–1983). Since 1984, he has held a permanent position at the Centre National de la Recherche Scientifique (CNRS) and works at the Laboratoire des Signaux et Systèmes (L2S) at SUPELEC. From 1998 to 2002, he was head of the Signal and Image Processing division at this laboratory. In 1997–1998, he was a visiting Associate Professor at the University of Notre Dame, South Bend, IN. Presently, he is “Directeur de Recherche,” and his main scientific interests are in developing new probabilistic methods based on Bayesian inference, information theory, and maximum entropy approaches for inverse problems in general, and more specifically for signal and image reconstruction and restoration. His recent research projects include blind source separation (BSS) for multivariate signals (satellite images, hyperspectral images), data and image fusion, superresolution, X-ray computed tomography, microwave imaging, and spatio-temporal positron emission tomography (PET) data and image processing. The main application domains of his interests are computed tomography (X-rays, PET, SPECT, MRI, microwave, ultrasound, and eddy current imaging), either for medical imaging or for non-destructive testing (NDT) in industry.