pnp_slides
1 UCLA Mathematics
2 Texas A&M Computer Science and Engineering
Image processing via optimization

Many image processing tasks are posed as

    minimize over x:  f(x) + γ g(x),

where
- x is the image,
- f(x) is the data fidelity (a posteriori knowledge),
- g(x) measures the noisiness of the image (a priori knowledge),
- γ ≥ 0 sets the relative importance of f and g.
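As a concrete illustration (an illustrative choice, not taken from the slides): for denoising an observation z corrupted by Gaussian noise, one may take f(x) = ½‖x − z‖² and let g be a hand-crafted prior such as total variation, giving

    minimize over x:  ½‖x − z‖² + γ TV(x).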
Image processing via ADMM

Split the variables, minimize γ g(x) + f(y) subject to x = y, and apply (scaled) ADMM with parameter α > 0, with σ² = αγ.
The resulting iteration is

x^{k+1} = Prox_{σ²g}(y^k − u^k)
y^{k+1} = Prox_{αf}(x^{k+1} + u^k)
u^{k+1} = u^k + x^{k+1} − y^{k+1}.
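A minimal Python (NumPy) sketch of this loop, assuming the two proximal operators are supplied as functions (the names prox_sigma2_g and prox_alpha_f are illustrative, not from the slides):

import numpy as np

def admm(prox_sigma2_g, prox_alpha_f, x0, num_iters=100):
    """Scaled ADMM iteration with user-supplied proximal operators (sketch)."""
    y = x0.copy()               # y^0: initial guess (e.g. the noisy observation)
    u = np.zeros_like(x0)       # scaled dual variable, u^0 = 0
    for _ in range(num_iters):
        x = prox_sigma2_g(y - u)   # x^{k+1} = Prox_{sigma^2 g}(y^k - u^k)
        y = prox_alpha_f(x + u)    # y^{k+1} = Prox_{alpha f}(x^{k+1} + u^k)
        u = u + x - y              # u^{k+1} = u^k + x^{k+1} - y^{k+1}
    return x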
Interpretations of ADMM subroutines

The x-update, Prox_{σ²g}(z) = argmin_x { σ² g(x) + ½‖x − z‖² }, is MAP denoising of z under Gaussian noise of level σ with prior g; the y-update, Prox_{αf}, enforces data fidelity.
Other denoisers

Many strong denoisers need not be the proximal operator of any explicit g: for example BM3D, or trained CNN denoisers such as DnCNN (both used in the experiments below).
Plug and play!

Replace Prox_{σ²g} with a generic denoiser H_σ:

x^{k+1} = H_σ(y^k − u^k)
y^{k+1} = Prox_{αf}(x^{k+1} + u^k)
u^{k+1} = u^k + x^{k+1} − y^{k+1}.
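The change from the ADMM sketch above is a single line: the proximal step for g becomes a call to the denoiser (denoiser stands for H_σ and is a placeholder name):

import numpy as np

def pnp_admm(denoiser, prox_alpha_f, x0, num_iters=100):
    """PnP-ADMM sketch: ADMM with Prox_{sigma^2 g} replaced by H_sigma."""
    y = x0.copy()
    u = np.zeros_like(x0)
    for _ in range(num_iters):
        x = denoiser(y - u)        # H_sigma replaces Prox_{sigma^2 g}
        y = prox_alpha_f(x + u)    # data-fidelity proximal step
        u = u + x - y              # dual update
    return x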
Example: Poisson denoising
Each PnP method below is a fixed-point iteration; we examine their fixed points. PNP-FBS (forward-backward splitting) iterates

x^{k+1} = H_σ(x^k − α∇f(x^k))    (PNP-FBS)

where α > 0. Its fixed points satisfy

x^⋆ = H_σ(I − α∇f)(x^⋆).
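A corresponding Python sketch of the PNP-FBS iteration (grad_f, denoiser, and alpha are supplied by the user; the names are illustrative):

def pnp_fbs(denoiser, grad_f, x0, alpha, num_iters=100):
    """PnP-FBS sketch: gradient (forward) step on f, then the denoiser H_sigma."""
    x = x0.copy()
    for _ in range(num_iters):
        x = denoiser(x - alpha * grad_f(x))   # x^{k+1} = H_sigma(x^k - alpha * grad f(x^k))
    return x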
PNP-ADMM iterates

x^{k+1} = H_σ(y^k − u^k)
y^{k+1} = Prox_{αf}(x^{k+1} + u^k)    (PNP-ADMM)
u^{k+1} = u^k + x^{k+1} − y^{k+1}

where α > 0.
Its fixed points satisfy

x^⋆ = H_σ(x^⋆ − u^⋆)
x^⋆ = Prox_{αf}(x^⋆ + u^⋆).
PNP-DRS iterates

x^{k+1/2} = Prox_{αf}(z^k)
x^{k+1} = H_σ(2x^{k+1/2} − z^k)    (PNP-DRS)
z^{k+1} = z^k + x^{k+1} − x^{k+1/2}

where α > 0.
Its fixed points satisfy

x^⋆ = Prox_{αf}(z^⋆)
x^⋆ = H_σ(2x^⋆ − z^⋆).
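And a sketch of PNP-DRS with the same ingredients:

def pnp_drs(denoiser, prox_alpha_f, z0, num_iters=100):
    """PnP-DRS sketch (Douglas-Rachford splitting with the denoiser H_sigma)."""
    z = z0.copy()
    for _ in range(num_iters):
        x_half = prox_alpha_f(z)          # x^{k+1/2} = Prox_{alpha f}(z^k)
        x = denoiser(2 * x_half - z)      # x^{k+1}   = H_sigma(2 x^{k+1/2} - z^k)
        z = z + x - x_half                # z^{k+1}   = z^k + x^{k+1} - x^{k+1/2}
    return x_half                         # Prox_{alpha f}(z) is returned as the estimate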
Theorem
Assume H_σ satisfies assumption (A) for some ε ≥ 0. Assume f is µ-strongly convex and differentiable, and ∇f is L-Lipschitz. Then

T = H_σ(I − α∇f)

satisfies an explicit Lipschitz bound (in terms of ε, α, µ, and L) that is strictly below 1 for a suitable range of α; see the main paper for the precise coefficient.
Theorem
Assume H_σ satisfies assumption (A) for some ε ≥ 0. Assume f is µ-strongly convex and differentiable. Then

T = (1/2) I + (1/2)(2H_σ − I)(2Prox_{αf} − I)

satisfies

‖T(x) − T(y)‖ ≤ (1 + ε + εαµ + 2ε²αµ) / (1 + αµ + 2εαµ) · ‖x − y‖

for all x, y ∈ R^d. The coefficient is less than 1 if

ε / ((1 + ε − 2ε²)µ) < α,    ε < 1.
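To see where this threshold comes from: the coefficient is below 1 exactly when

1 + ε + εαµ + 2ε²αµ < 1 + αµ + 2εαµ,

i.e., ε < αµ(1 + ε − 2ε²). Since 1 + ε − 2ε² = (1 + 2ε)(1 − ε) > 0 for ε < 1, this is equivalent to the stated condition ε / ((1 + ε − 2ε²)µ) < α.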
Corollary
Assume H_σ satisfies assumption (A) for some ε ∈ [0, 1). Assume f is µ-strongly convex. Then PNP-ADMM converges for

ε / ((1 + ε − 2ε²)µ) < α.
PNP-FBS and PNP-ADMM share the same fixed points: they are distinct methods for finding the same set of fixed points.
We use DnCNN [9], which learns the residual mapping with a 17-layer CNN (Conv + ReLU, then repeated Conv + BN + ReLU blocks, then a final Conv). We also use SimpleCNN, a smaller 4-layer network: three Conv + ReLU blocks followed by a final Conv.

[9] Zhang, Zuo, Chen, Meng, and Zhang, Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising, IEEE Transactions on Image Processing, 2017.
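A minimal PyTorch sketch of a DnCNN-style residual denoiser (the depth/width defaults follow the usual DnCNN recipe and are assumptions, not read off these slides):

import torch
import torch.nn as nn

class DnCNNLike(nn.Module):
    """DnCNN-style denoiser: the network predicts the noise residual R(y),
    and the denoised image is H_sigma(y) = y - R(y)."""
    def __init__(self, depth=17, channels=64, image_channels=1):
        super().__init__()
        layers = [nn.Conv2d(image_channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, image_channels, 3, padding=1))
        self.residual = nn.Sequential(*layers)

    def forward(self, y):
        return y - self.residual(y)   # denoised output

A SimpleCNN-style variant would correspond to depth 4 with the BatchNorm layers dropped.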
Note

(I − H_σ)(y) = y − H_σ(y) = R(y),

with denoiser H_σ, residual R, and identity I. Enforcing

‖(I − H_σ)(x) − (I − H_σ)(y)‖ ≤ ε‖x − y‖    (A)

is equivalent to constraining the Lipschitz constant of R. We propose a variant of spectral normalization for this purpose.
While this basic methodology suits our goal, Miyato et al.’s SN uses an
inexact implementation that underestimates the true spectral norm.
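A hedged PyTorch sketch of the idea: estimate the spectral norm of the convolution operator itself by power iteration with conv2d / conv_transpose2d (its adjoint), instead of power iteration on the reshaped kernel matrix. Shapes, padding, and the iteration count are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn.functional as F

def conv_operator_spectral_norm(weight, input_hw, n_power_iterations=10):
    """Power iteration on the convolution operator K (3x3 kernel, stride 1,
    padding 1 assumed) to estimate its largest singular value."""
    c_in = weight.shape[1]
    u = torch.randn(1, c_in, *input_hw)                 # vector in the input space of K
    for _ in range(n_power_iterations):
        v = F.conv2d(u, weight, padding=1)              # v ≈ K u
        v = v / (v.norm() + 1e-12)
        u = F.conv_transpose2d(v, weight, padding=1)    # u ≈ K^T v (adjoint of conv2d)
        u = u / (u.norm() + 1e-12)
    return F.conv2d(u, weight, padding=1).norm()        # ≈ largest singular value of K

Each convolution weight W could then be rescaled, e.g. W ← W · min(1, c/σ) with σ the estimate above, so that the residual mapping R has Lipschitz constant roughly bounded by the per-layer target c.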
Training data: 40 × 40 patches extracted from the (clean) BSD500 original images and corrupted with Gaussian noise.
On an Nvidia GTX 1080 Ti, DnCNN took 4.08 hours and realSN-DnCNN
took 5.17 hours to train, so the added cost of realSN is mild.
Experimental validation
Poisson denoising
The observations are generated pixelwise as

y_i ∼ Poisson((x_true)_i).

For further details of the experimental setup, see the main paper or [11].

[11] Rond, Giryes, and Elad, Poisson inverse problems by the plug-and-play scheme, Journal of Visual Communication and Image Representation, 2016.
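For concreteness, a PnP-FBS forward step for this model could use the Poisson negative log-likelihood f(x) = Σ_i (x_i − y_i log x_i); this particular fidelity choice is an assumption for illustration, not read off the slides.

import numpy as np

def grad_poisson_nll(x, y, eps=1e-8):
    """Gradient of f(x) = sum_i (x_i - y_i * log(x_i)): df/dx_i = 1 - y_i / x_i.
    (Illustrative Poisson data fidelity; x is clipped away from zero.)"""
    return 1.0 - y / np.maximum(x, eps)

# One PnP-FBS step with this fidelity, reusing the pnp_fbs sketch above:
# x_next = denoiser(x - alpha * grad_poisson_nll(x, y))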
Single photon imaging
The observation model is expressed in terms of K_j1, the number of ones in the j-th unit pixel, and K_j0, the number of zeros. For further details of the experimental setup, see the main paper or [12].

[12] Elgendy and Chan, Image reconstruction and threshold design for quanta image sensors, IEEE ICIP, 2016.
Single photon imaging
PnP-FBS, α = 0.005

Average PSNR (dB)   BM3D      RealSN-DnCNN   RealSN-SimpleCNN
Iteration 50        28.7933   27.9617        29.0062
Iteration 100       29.0510   27.9887        29.0517
Best Overall        29.5327   28.4065        29.3563

PnP-ADMM, α = 0.01

Average PSNR (dB)   BM3D      RealSN-DnCNN   RealSN-SimpleCNN
Iteration 50        30.0034   31.0032        29.2154
Iteration 100       30.0014   31.0032        29.2151
Best Overall        30.0474   31.0431        29.2155
Compressed sensing MRI
PnP is useful in medical imaging when we do not have enough data for end-to-end training: train the denoiser H_σ on natural images, and "plug" it into the PnP framework to be applied to medical images. The measurements are

y = F_p x_true + ε_e,

where F_p is the partially sampled Fourier (k-space) operator and ε_e is measurement noise. For further details of the experimental setup, see the main paper.
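For intuition, with the common fidelity choice f(x) = ½‖F_p x − y‖², where F_p is a unitary FFT followed by a 0/1 sampling mask (an assumption for illustration), Prox_{αf} is diagonal in the Fourier domain:

import numpy as np

def prox_alpha_f_csmri(v, y_zero_filled, mask, alpha):
    """Closed-form Prox_{alpha f} for f(x) = 0.5 * ||F_p x - y||^2 (sketch).
    y_zero_filled: measured k-space samples, zero elsewhere; mask: 0/1 array."""
    V = np.fft.fft2(v, norm="ortho")
    X = (alpha * y_zero_filled + V) / (alpha * mask + 1.0)   # elementwise solve
    return np.real(np.fft.ifft2(X, norm="ortho"))

This is the Prox_{αf} step that PnP-ADMM (or PnP-DRS) would call in this setting, under the stated assumptions.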
Compressed sensing MRI
PSNR (in dB) for 30% sampling with additive Gaussian noise σ_e = 15. RealSN generally improves the performance.

Sampling approach                Random            Radial            Cartesian
Image                            Brain    Bust     Brain    Bust     Brain    Bust
Zero-filling                      9.58     7.00     9.29     6.19     8.65     6.01
TV                               16.92    15.31    15.61    14.22    12.77    11.72
RecRF                            16.98    15.37    16.04    14.65    12.78    11.75
BM3D-MRI                         17.31    13.90    16.95    13.72    14.43    12.35
PnP-FBS    BM3D                  19.09    16.36    18.10    15.67    14.37    12.99
           DnCNN                 19.59    16.49    18.92    15.99    14.76    14.09
           RealSN-DnCNN          19.82    16.60    18.96    16.09    14.82    14.25
           SimpleCNN             15.58    12.19    15.06    12.02    12.78    10.80
           RealSN-SimpleCNN      17.65    14.98    16.52    14.26    13.02    11.49
PnP-ADMM   BM3D                  19.61    17.23    18.94    16.70    14.91    13.98
           DnCNN                 19.86    17.05    19.00    16.64    14.86    14.14
           RealSN-DnCNN          19.91    17.09    19.08    16.68    15.11    14.16
           SimpleCNN             16.68    12.56    16.83    13.47    13.03    11.17
           RealSN-SimpleCNN      17.77    14.89    17.00    14.47    12.73    11.88