Markov Chain Monte Carlo (MCMC) Methods: Examples 11–13 (Matlab)

Overview: 1. Markov chain Monte Carlo (MCMC) methods such as the Gibbs sampler can be applied to image deblurring problems with certain priors. 2. Example 11 uses a Gibbs sampler to find a conditional mean estimate for constellation image deblurring with an $\ell_1$-norm prior, restricting the problem to a region of interest to speed up sampling. The conditional mean estimate is compared to the maximum a posteriori estimate from Example 7. 3. Example 12 uses a Gibbs sampler with a conditionally Gaussian prior model to find conditional mean estimates for the constellation image deblurring problem, using both gamma and inverse gamma hyperpriors. The conditional mean estimates are compared to the corresponding maximum a posteriori estimates from Examples 6 and 7.

Markov chain Monte Carlo (MCMC) methods

Gibbs Sampler
Example 11 (Matlab)
Consider again deblurring of the constellation image (b) in case (ii) of Example 7, in which the $\ell_1$-norm based prior $\pi_{\mathrm{pr}}(x) \propto \exp(-\alpha \sum_{i=1}^n |x_i|)$ was used, resulting in a posterior of the form
$$\pi(x \mid y) \propto \exp\Big( -\frac{1}{2\sigma^2} \|y - Ax\|^2 - \alpha \sum_{i=1}^n |x_i| \Big).$$
Using the Gibbs sampler, find a conditional mean (CM) estimate with $\sigma = 0.005$ and $\alpha = 20$, and compare the result to the MAP estimate computed in Example 7. In order to speed up the sampling procedure, first restrict the problem to a region of interest (ROI), referring here to a set in which the pixels seem a priori to be nonzero based on the blurred image.
Solution
The ROI consisted of seven $7 \times 7$ pixel sets concentric with the stars, one set per star. With the ROI, the dimension of the inverse problem was $7^3 = 343$, and without it $64^2 = 4096$.

[Figures: Exact | Blurred | ROI]
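As an illustration, such a ROI restriction could be formed in Matlab as sketched below; the star center coordinates and the stand-in blurring matrix are hypothetical, not the values of the actual example.

n1 = 64;                                  % the image is n1 x n1 pixels
A  = eye(n1^2);                           % stand-in for the actual blurring matrix
centers = [10 12; 15 40; 25 25; 33 50; 45 10; 50 33; 57 55];  % hypothetical [row col] star centers
roi = false(n1, n1);
for k = 1:size(centers, 1)
    roi(centers(k,1) + (-3:3), centers(k,2) + (-3:3)) = true;  % 7 x 7 block per star
end
roi_idx = find(roi(:));                   % 7^3 = 343 linear pixel indices
A_roi = A(:, roi_idx);                    % restrict the blurring matrix to the ROI pixels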


The challenge in implementing the Gibbs sampler was mainly in localizing the "essential" part of the conditional posterior density
$$\pi(x_i \mid y, x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_n) = \pi(x_i \mid \tilde{y}_i) \propto \exp\Big( -\frac{1}{2\sigma^2} \|\tilde{y}_i - a_i x_i\|^2 - \alpha x_i \Big),$$
where $a_i$ denotes the $i$-th column of $A = (a_1, a_2, \dots, a_n)$, $\tilde{y}_i = y - \sum_{j \neq i} a_j x_j$, and $|x_i| = x_i$, since pixel values must a priori be positive, $x_i \geq 0$, $i = 1, 2, \dots, n$. The maximum is located at the point satisfying
$$0 = \frac{d}{dx_i} \Big( -\frac{1}{2\sigma^2} \|\tilde{y}_i - a_i x_i\|^2 - \alpha x_i \Big) = \frac{d}{dx_i} \Big( -\frac{1}{2\sigma^2} \tilde{y}_i^T \tilde{y}_i + \Big( \frac{1}{\sigma^2} \tilde{y}_i^T a_i - \alpha \Big) x_i - \frac{1}{2\sigma^2} a_i^T a_i x_i^2 \Big)$$
$$= -\alpha + \frac{1}{\sigma^2} \tilde{y}_i^T a_i - \frac{1}{\sigma^2} a_i^T a_i x_i, \quad \text{i.e.} \quad x_i = \frac{\tilde{y}_i^T a_i - \alpha\sigma^2}{a_i^T a_i},$$
if $(\tilde{y}_i^T a_i - \alpha\sigma^2)/(a_i^T a_i) > 0$; otherwise the maximizer is $x_i = 0$. Denoting the maximum value of
$$p_i(x_i) = -\frac{1}{2\sigma^2} \tilde{y}_i^T \tilde{y}_i + \Big( \frac{1}{\sigma^2} \tilde{y}_i^T a_i - \alpha \Big) x_i - \frac{1}{2\sigma^2} a_i^T a_i x_i^2, \qquad x_i \geq 0,$$
by $M$, the essential part of the conditional posterior density can be defined as the interval in which $p_i(x_i) \geq M - c$ for some suitably chosen constant $c > 0$. Since $p_i(x_i)$ is a second degree polynomial with second-degree coefficient $-a_i^T a_i/(2\sigma^2)$ and maximizer $\max\{0, (\tilde{y}_i^T a_i - \alpha\sigma^2)/(a_i^T a_i)\}$, the interval in which $p_i(x_i) \geq M - c$ and $x_i > 0$ is given by
$$I_c = \bigg[ \max\Big\{ 0,\; \frac{\tilde{y}_i^T a_i - \alpha\sigma^2}{a_i^T a_i} - \sigma\sqrt{\frac{2c}{a_i^T a_i}} \Big\},\; \frac{\tilde{y}_i^T a_i - \alpha\sigma^2}{a_i^T a_i} + \sigma\sqrt{\frac{2c}{a_i^T a_i}} \bigg]$$
if $(\tilde{y}_i^T a_i - \alpha\sigma^2)/(a_i^T a_i) > 0$, and otherwise this interval is
$$I_c = \bigg[ 0,\; \frac{\tilde{y}_i^T a_i - \alpha\sigma^2}{a_i^T a_i} + \sqrt{\Big( \frac{\tilde{y}_i^T a_i - \alpha\sigma^2}{a_i^T a_i} \Big)^2 + \frac{2\sigma^2 c}{a_i^T a_i}} \bigg].$$
The interval $I_c$ chosen in this way contains the maximizer of the conditional posterior density, and all values of the density on the interval are greater than or equal to $\exp(-c)$ times its maximum value. If the constant $c$ is chosen appropriately, the Gibbs sampler is both numerically stable and produces unbiased results. Here, the choice was $c = 7$, and the conditional posterior was evaluated at 20 equally spaced points on $I_c$.
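One such component update could be sketched in Matlab as follows; this is an illustrative reconstruction under the notation above, not the original course code, and the function name sample_xi is made up.

function xi = sample_xi(A, y, x, i, sigma, alpha)
% Draw the i-th pixel from its conditional posterior under the ell_1 prior (x_i >= 0).
c  = 7;                                      % essential-part constant
ai = A(:, i);
yt = y - A*x + ai*x(i);                      % y~_i = y - sum_{j ~= i} a_j x_j
s  = ai'*ai;
xs = (yt'*ai - alpha*sigma^2)/s;             % unconstrained maximizer of p_i
if xs > 0
    I = [max(0, xs - sigma*sqrt(2*c/s)), xs + sigma*sqrt(2*c/s)];
else
    I = [0, xs + sqrt(xs^2 + 2*sigma^2*c/s)];
end
t = linspace(I(1), I(2), 20);                % 20 equally spaced points on I_c
p = (yt'*ai/sigma^2 - alpha)*t - (s/(2*sigma^2))*t.^2;  % p_i(t) up to an additive constant
w = exp(p - max(p));                         % unnormalized conditional density values
F = cumsum(w)/sum(w);                        % discrete approximation of the CDF
xi = interp1([0 F], [I(1) t], rand);         % inverse-CDF draw
end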
In the sampling process, a total of 10000 sample points were generated, the first 1000 of which were labeled as the burn-in sequence and excluded from the eventual sample ensemble yielding the CM estimate.

[Figures: Exact | MAP (Example 7) | CM]


The sampling history and sample-based marginal density were analyzed for two individual pixels, (A) and (B).

[Figures: (A) History | Marginal | Burn-in; (B) History | Marginal | Burn-in. Purple = exact, green = conditional mean, black = 95% credibility.]

Example 12 (Matlab)
Consider deblurring of the constellation image in the case of the conditionally Gaussian (hierarchical) prior model
$$\pi_{\mathrm{pr}}(x, \theta) = \pi(x \mid \theta)\,\pi_{\mathrm{hyper}}(\theta)$$
of Examples 6 and 7. Use the Gibbs sampler to find conditional mean (CM) estimates corresponding to both the gamma and the inverse gamma hyperprior $\pi_{\mathrm{hyper}}(\theta)$. Use the noise standard deviation $\sigma = 0.005$ and the shape and scaling parameter values $\beta = 1.5$ and $\theta_0 = 0.00125$ for the gamma and inverse gamma densities. Compare the CM estimates obtained to the corresponding MAP estimates (see Examples 6 and 7).
Solution
The sampling procedure was based on the conditional posterior densities $\pi(x \mid y, \theta)$ and $\pi(\theta \mid y, x)$, utilized according to the following algorithm:
1. Choose $x^{(0)}$ and $\theta^{(0)}$ and set $k = 1$.
2. Pick $x^{(k)}$ from $\pi(x \mid y, \theta^{(k-1)})$.
3. Pick $\theta^{(k)}$ from $\pi(\theta \mid y, x^{(k)})$.
4. If $k$ is smaller than the desired number of sample points, set $k = k + 1$ and go back to the second step.
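In Matlab, this loop can be organized, for instance, as in the sketch below; here draw_x and draw_theta are hypothetical helpers standing for the draws derived on the following pages, and x_init, theta_init, A, y, sigma, beta, and theta0 are assumed to be set up beforehand.

K  = 10000;                             % total number of sample points
x  = x_init;  theta = theta_init;       % step 1: initial values x^(0), theta^(0)
X  = zeros(numel(x_init), K);
for k = 1:K
    x      = draw_x(A, y, sigma, theta);     % step 2: x^(k) ~ pi(x | y, theta^(k-1))
    theta  = draw_theta(x, beta, theta0);    % step 3: theta^(k) ~ pi(theta | y, x^(k))
    X(:,k) = x;                              % step 4: store and continue
end
xCM = mean(X(:, 1001:end), 2);          % CM estimate from the post-burn-in samples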
The conditional density of the second step is of the form
$$\pi(x \mid y, \theta^{(k-1)}) \propto \exp\Big( -\frac{1}{2\sigma^2} \|y - Ax\|^2 - \frac{1}{2} x^T D_{\theta^{(k-1)}}^{-1} x \Big) = \exp\bigg( -\frac{1}{2} \bigg\| \begin{pmatrix} \sigma^{-1} A \\ D_{\theta^{(k-1)}}^{-1/2} \end{pmatrix} x - \begin{pmatrix} \sigma^{-1} y \\ 0 \end{pmatrix} \bigg\|^2 \bigg)$$
with $D_{\theta^{(k-1)}} = \mathrm{diag}(\theta_1^{(k-1)}, \theta_2^{(k-1)}, \dots, \theta_n^{(k-1)})$. By defining
$$M_\theta = \begin{pmatrix} \sigma^{-1} A \\ D_\theta^{-1/2} \end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix} \sigma^{-1} y \\ 0 \end{pmatrix},$$
we have $\pi(x \mid y, \theta) \propto \exp(-\frac{1}{2} \|M_\theta x - b\|^2)$, the mean and covariance matrix of which are given by $\bar{x}_\theta = (M_\theta^T M_\theta)^{-1} M_\theta^T b$ and $\Gamma_\theta = (M_\theta^T M_\theta)^{-1}$, respectively.
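As a concrete Matlab illustration, $M_\theta$, $b$, and these two quantities could be formed as below (a sketch assuming the matrix A, the data y, the noise level sigma, and the hyperparameter vector theta are given; variable names are not from the original material):

[m, n] = size(A);
Mth  = [A/sigma; spdiags(1./sqrt(theta), 0, n, n)];  % M_theta, stacked blocks
b    = [y/sigma; zeros(n, 1)];                       % right-hand side b
xbar = (Mth'*Mth) \ (Mth'*b);                        % conditional mean x_theta
Gam  = inv(Mth'*Mth);                                % covariance matrix Gamma_theta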
Let now $X$ satisfy the equation $M_\theta(X - \bar{x}_\theta) = Z$ in the least-squares sense, that is, $M_\theta^T M_\theta (X - \bar{x}_\theta) = M_\theta^T Z$. If $Z$ is zero-mean Gaussian white noise, $Z \sim N(0, I)$, i.e. $E[ZZ^T] = I$, then $X = (M_\theta^T M_\theta)^{-1} M_\theta^T Z + \bar{x}_\theta$ is also a Gaussian random variable with mean $\bar{x}_\theta$ and covariance matrix
$$E[(X - \bar{x}_\theta)(X - \bar{x}_\theta)^T] = E\Big[ (M_\theta^T M_\theta)^{-1} M_\theta^T Z \big( (M_\theta^T M_\theta)^{-1} M_\theta^T Z \big)^T \Big]$$
$$= E\Big[ (M_\theta^T M_\theta)^{-1} M_\theta^T Z Z^T M_\theta (M_\theta^T M_\theta)^{-1} \Big] = (M_\theta^T M_\theta)^{-1} M_\theta^T E[ZZ^T] M_\theta (M_\theta^T M_\theta)^{-1}$$
$$= (M_\theta^T M_\theta)^{-1} (M_\theta^T M_\theta)(M_\theta^T M_\theta)^{-1} = (M_\theta^T M_\theta)^{-1} = \Gamma_\theta,$$
that is, $X \sim N(\bar{x}_\theta, \Gamma_\theta)$.


Consequently, if $z$ is picked from the zero-mean Gaussian white noise distribution $N(0, I)$, then
$$x = (M_\theta^T M_\theta)^{-1} M_\theta^T z + \bar{x}_\theta = (M_\theta^T M_\theta)^{-1} M_\theta^T (z + b)$$
is distributed according to $\pi(x \mid y, \theta)$. If now $z = (z_1, z_2)$, then expanding the above equation for $\theta^{(k-1)}$ gives the formula
$$x = \big( \sigma^{-2} A^T A + D_{\theta^{(k-1)}}^{-1} \big)^{-1} \big( \sigma^{-2} A^T y + \sigma^{-1} A^T z_1 + D_{\theta^{(k-1)}}^{-1/2} z_2 \big) = \big( A^T A + \sigma^2 D_{\theta^{(k-1)}}^{-1} \big)^{-1} \big( A^T y + \sigma A^T z_1 + \sigma^2 D_{\theta^{(k-1)}}^{-1/2} z_2 \big),$$
which was used to pick $x^{(k)}$ from $\pi(x \mid y, \theta^{(k-1)})$ in the second step of the sampling procedure.
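A direct Matlab transcription of this formula could read as follows (a sketch; the helper name draw_x and the sparse-diagonal construction are illustrative choices, not taken from the original material):

function x = draw_x(A, y, sigma, theta)
% Draw x from pi(x | y, theta) via x = (A'A + s^2 D^-1) \ (A'y + s A'z1 + s^2 D^-1/2 z2).
[m, n] = size(A);
z1 = randn(m, 1);                              % white noise, data block
z2 = randn(n, 1);                              % white noise, prior block
Dinv  = spdiags(1./theta, 0, n, n);            % D_theta^{-1}
Dinvh = spdiags(1./sqrt(theta), 0, n, n);      % D_theta^{-1/2}
x = (A'*A + sigma^2*Dinv) \ (A'*y + sigma*(A'*z1) + sigma^2*(Dinvh*z2));
end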
In the third step, the Gibbs sampler was used to pick $\theta^{(k)}$ from $\pi(\theta \mid y, x^{(k)})$. If the gamma hyperprior is in question, then
$$\pi(\theta \mid y, x^{(k)}) \propto \exp\bigg[ -\sum_{i=1}^n \bigg( \frac{(x_i^{(k)})^2}{2\theta_i} + \frac{\theta_i}{\theta_0} - \Big( \beta - \frac{3}{2} \Big) \log \theta_i \bigg) \bigg],$$
and, again, if the inverse gamma hyperprior is used, then
$$\pi(\theta \mid y, x^{(k)}) \propto \exp\bigg[ -\sum_{i=1}^n \bigg( \frac{(x_i^{(k)})^2 + 2\theta_0}{2\theta_i} + \Big( \beta + \frac{3}{2} \Big) \log \theta_i \bigg) \bigg].$$
In both cases, each one-dimensional conditional density is of the form $\pi(\theta_i \mid y, x^{(k)}, \theta_1, \dots, \theta_{i-1}, \theta_{i+1}, \dots, \theta_n) \propto \exp[-f_i(\theta_i)]$. The essential part of $\exp[-f_i(\theta_i)]$ was sought by first finding the maximizer $\theta_i^{\max}$ of the density, i.e. the minimizer of $f_i$, satisfying $f_i'(\theta_i^{\max}) = 0$. After that, the quadratic approximation
$$f_i(t) \approx f_i(\theta_i^{\max}) + f_i'(\theta_i^{\max})(t - \theta_i^{\max}) + \frac{1}{2} f_i''(\theta_i^{\max})(t - \theta_i^{\max})^2$$
was utilized to estimate the solution of $f_i(t) = f_i(\theta_i^{\max}) + c$ by solving the equation
$$f_i'(\theta_i^{\max})(t - \theta_i^{\max}) + \frac{1}{2} f_i''(\theta_i^{\max})(t - \theta_i^{\max})^2 - c = 0.$$
The solution $t$ was then chosen as the initial iterate $t^{[0]}$ of Newton's iteration
$$t^{[\ell]} = t^{[\ell-1]} - \frac{f_i(t^{[\ell-1]}) - f_i(\theta_i^{\max}) - c}{f_i'(t^{[\ell-1]})}.$$
The third iterate $t^{[3]}$ was used as the final estimate. The interval $I_c = [\varepsilon, t^{[3]}]$, where $\varepsilon = 2.2204 \cdot 10^{-16}$ (machine epsilon in Matlab), was used to approximate the (essential) support of $\exp[-f_i(\theta_i)]$. This density was evaluated at 20 equally spaced points on $I_c$ with $c = 7$ (see the figures below). A total of 10000 sample points were generated, the first 1000 of which were labeled as the burn-in sequence and excluded from the eventual sample ensemble.
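For the gamma hyperprior, this $\theta_i$ update could be sketched in Matlab as follows; this is an illustrative reconstruction of the recipe above, the function name sample_thetai is made up, and the hypothetical draw_theta of the earlier loop sketch would apply it componentwise.

function ti = sample_thetai(xi, beta, theta0)
% Draw theta_i from exp(-f(t)) with f(t) = xi^2/(2t) + t/theta0 - (beta - 3/2)*log(t).
c   = 7;
b32 = beta - 3/2;
f   = @(t) xi^2./(2*t) + t/theta0 - b32*log(t);
fp  = @(t) -xi^2./(2*t.^2) + 1/theta0 - b32./t;
fpp = @(t) xi^2./t.^3 + b32./t.^2;
% Density maximizer = minimizer of f; f'(t) = 0 is quadratic in t, positive root:
tmax = theta0*(b32 + sqrt(b32^2 + 2*xi^2/theta0))/2;
% Quadratic estimate of the right end of the essential support: since
% f'(tmax) = 0, f(t) = f(tmax) + c gives t = tmax + sqrt(2c/f''(tmax)).
t = tmax + sqrt(2*c/fpp(tmax));               % assumes xi ~= 0 so that fpp(tmax) > 0
for l = 1:3                                   % three Newton steps for f(t) = f(tmax) + c
    t = t - (f(t) - f(tmax) - c)/fp(t);
end
tt = linspace(eps, t, 20);                    % I_c = [eps, t^[3]], 20 equally spaced points
w  = exp(-(f(tt) - f(tmax))) + realmin;       % density values; guard against exact zeros
F  = cumsum(w)/sum(w);                        % discrete CDF approximation
ti = interp1([0 F], [eps tt], rand);          % inverse-CDF draw
end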

[Figures: Gamma | Inv. G.]
Gamma Hyperprior

[Figures: Exact | MAP (Example 6) | CM]


[Figures: (A) History | Marginal | Burn-in; (B) History | Marginal | Burn-in]
Inverse Gamma Hyperprior

[Figures: Exact | MAP (Example 7) | CM]


[Figures: (A) History | Marginal | Burn-in; (B) History | Marginal | Burn-in]


Example 13 (Matlab)
Compare the MAP and CM estimates of Examples 6, 7, 11, and 12.
Solution

[Figures: Exact | MAP, $\ell_1$ | MAP, Gamma | MAP, Inv. G.]
[Figures: Exact | CM, $\ell_1$ | CM, Gamma | CM, Inv. G.]
