Lecture 3: Image Restoration

B14 Image Analysis Michaelmas 2014 A. Zisserman

• Image degradations
• motion blur, focus blur, resolution

• The inverse filter

• The Wiener filter

• MAP formulation

In contrast to image enhancement, in image restoration the degradation is modelled. This enables the effects of the degradation to be (largely) removed.
Degradations

• original

• optical blur

• motion blur

• spatial quantization (discrete pixels)

• additive intensity noise


Overview – Deconvolution
The objective is to restore a degraded image to its original form.

An observed image can often be modelled as:

  g(x,y) = ∫∫ f(x',y') h(x − x', y − y') dx' dy' + n(x,y)

where the integral is a convolution, h is the point spread function of the imaging system, and n is additive noise.

The objective of image restoration in this case is to estimate the original image f from the observed degraded image g.
Degradation model
Model degradation as a convolution with a linear, shift-invariant filter h(x,y)
• Example: for out-of-focus blurring, model h(x,y) as a Gaussian

[Figure: original f(x,y), Gaussian PSF h(x,y), blurred g(x,y)]

i.e. g(x,y) = h(x,y) * f(x,y)

h(x,y) is the impulse response or point spread function (PSF) of the imaging system
The challenge: loss of information and noise
[Figure: an image convolved with a Gaussian of scale 3 pixels, shown together with the Fourier transforms of the image, the kernel and the result]
Blurring acts as a low pass filter and attenuates higher spatial frequencies
Definitions

• f(x,y) – image before degradation, ‘true image’


• g(x,y) – image after degradation, ‘observed image’
• h(x,y) – degradation filter
• f̂(x,y) – estimate of f(x,y) computed from g(x,y)
• n(x,y) – additive noise

[Block diagram: f(x,y) → degradation (h(x,y), n(x,y)) → g(x,y) → restoration → f̂(x,y)]

g(x,y) = h(x,y) * f(x,y) + n(x,y)  ⇔  G(u,v) = H(u,v) F(u,v) + N(u,v)
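As a concrete aside (not part of the lecture), the generative model above can be sketched in a few lines of numpy; the function names and default σ values are illustrative assumptions, chosen to mirror the Gaussian-blur example that follows:

import numpy as np

def gaussian_psf(shape, sigma):
    """Gaussian point spread function h(x,y), wrapped so its centre is at pixel (0,0)."""
    rows, cols = shape
    y = np.fft.fftfreq(rows) * rows          # pixel coordinates 0, 1, ..., -2, -1
    x = np.fft.fftfreq(cols) * cols
    yy, xx = np.meshgrid(y, x, indexing="ij")
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def degrade(f, sigma_blur=1.0, sigma_noise=0.3, rng=None):
    """g(x,y) = h(x,y) * f(x,y) + n(x,y), using circular convolution via the FFT."""
    rng = np.random.default_rng() if rng is None else rng
    H = np.fft.fft2(gaussian_psf(f.shape, sigma_blur))
    g = np.real(np.fft.ifft2(H * np.fft.fft2(f)))      # G = H F
    return g + rng.normal(0.0, sigma_noise, f.shape)   # + N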


The inverse filter
Start from the generative model
g(x,y) = h(x,y) * f(x,y) + n(x,y)  ⇔  G(u,v) = H(u,v) F(u,v) + N(u,v)
and for the moment ignore n(x,y), then an estimate of f(x,y) is
obtained from

F̂(u,v) = G(u,v) / H(u,v)

Restoration with an inverse filter

[Pipeline: g(x,y) → F.T. → G(u,v) → inverse filter → F̂(u,v) → I.F.T. → f̂(x,y)]
[Slides: 1D vector explanation and Fourier trick – details omitted]
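For illustration only (not the lecture's code), the inverse filter is a single division in the Fourier domain; this sketch reuses gaussian_psf and degrade from the earlier sketch, and eps only guards against dividing by exactly zero — it does nothing about the noise amplification discussed below:

import numpy as np

def inverse_filter(g, psf, eps=1e-12):
    """F_hat(u,v) = G(u,v) / H(u,v); eps avoids division by exactly zero."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(psf)
    return np.real(np.fft.ifft2(G / (H + eps)))

# e.g. f_hat = inverse_filter(degrade(f, 1.0, 0.3), gaussian_psf(f.shape, 1.0))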
Example : Deblurring (deconvolution)

Image blurred with Gaussian point spread function


h(x,y) = (1/2πσ²) exp(−(x² + y²)/2σ²),   n(x,y) = Normal distribution, mean zero

blur σ = 1.0 pixels
noise σ = 0.3 grey levels

[Figure: original f(x,y) and degraded g(x,y)]

Restoration with an inverse filter


F̂(u,v) = G(u,v) / H(u,v), where H(u,v) is the FT of the Gaussian

Deblurring with an inverse filter
noise σ = 0.3 grey levels,   F̂(u,v) = G(u,v) / H(u,v)

[Figure: blurred g(x,y) and restored f̂(x,y) for blur σ = 0.5, 1.0 and 1.5 pixels]
The problem of noise amplification

G(u,v) = H(u,v) F(u,v) + N(u,v)

F̂(u,v) = G(u,v) / H(u,v) = F(u,v) + N(u,v) / H(u,v)

Schematically: F(u,v) is multiplied by H(u,v) and noise N(u,v) is added, giving G(u,v); multiplying G(u,v) by 1/H(u,v) restores F(u,v) but boosts N(u,v) wherever H(u,v) is small, i.e. at high spatial frequencies.

[Figure: the restored f̂(x,y) for blur σ = 1.0 pixels is dominated by high spatial frequency sinusoids from the amplified noise]
The Wiener filter

F̂(u,v) = W(u,v) G(u,v),   where   W(u,v) = H*(u,v) / ( |H(u,v)|² + K )


Frequency behaviour

F̂(u,v) = W(u,v) G(u,v)

• If K = 0 then W(u,v) = 1 / H(u,v), i.e. an inverse filter

• If K >> |H(u,v)|² for large u,v, then high frequencies are attenuated

• Ideally K = |N(u,v)|² / |F(u,v)|² (see the derivation below); |F(u,v)| and |N(u,v)| are often known approximately, or

• K is set to a constant scalar which is determined empirically

• A Wiener filter minimizes the least-squares error between f(x,y) and the restored image

F̂(u,v) = W(u,v) G(u,v)
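In code the filter above is again one line in the Fourier domain; this sketch (an illustration, with K an empirically chosen constant) can be compared directly with inverse_filter above:

import numpy as np

def wiener_filter(g, psf, K=1e-3):
    """F_hat(u,v) = W(u,v) G(u,v) with W = H* / (|H|^2 + K), K a constant scalar."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(psf)
    W = np.conj(H) / (np.abs(H)**2 + K)
    return np.real(np.fft.ifft2(W * G))

# e.g. f_hat = wiener_filter(degrade(f, 1.5, 0.3), gaussian_psf(f.shape, 1.5), K=1e-3)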

Schematically:

G(u,v) = H(u,v) F(u,v) + N(u,v)

[Figure: as above, but G(u,v) is multiplied by W(u,v) instead of 1/H(u,v), so the noise is not amplified at high spatial frequencies]
Restoration with a Wiener filter
G(u,v) = H(u,v) F(u,v) + N(u,v)

F̂(u,v) = W(u,v) G(u,v)

[Pipeline: g(x,y) → F.T. → G(u,v) → Wiener filter → F̂(u,v) → I.F.T. → f̂(x,y)]
Example 1: Focus deblurring with a Wiener filter
blur σ = 1.5 pixels
noise σ = 0.3 grey levels,   F̂(u,v) = W(u,v) G(u,v)

[Figure: blurred g(x,y) and restorations f̂(x,y) for K = 1.0e-5, 1.0e-3 and 1.0e-1]

blur σ = 3.0 pixels
noise σ = 0.3 grey levels

[Figure: original f(x,y), blurred g(x,y) and restoration f̂(x,y) for K = 5.0e-4]
Wiener filter – sketch derivation

Choose W(u,v) to minimize the expected squared error between f and the restored f̂:

  E[ ∫∫ ( f(x,y) − f̂(x,y) )² dx dy ]  =  ∫∫ E[ |F(u,v) − F̂(u,v)|² ] du dv     (Parseval's Theorem)

Substituting F̂ = W G = W (H F + N) gives the integrand

  E[ |(1 − W H) F − W N|² ]  =  |1 − W H|² |F|² + |W|² |N|²

since f(x,y) and n(x,y) are uncorrelated (the cross terms vanish).

• Note, the integrand is a sum of two squares

The integral is minimized if the integrand is minimized for all (u,v); differentiating with respect to W and setting the result to zero gives

  W(u,v) = H* |F|² / ( |H|² |F|² + |N|² )  =  H* / ( |H|² + |N|²/|F|² )

NB: with K = |N|²/|F|² this is the filter given above.

Note: the filter is defined in the Fourier domain.


Example 2: Motion deblurring
Suppose there is blur only in the horizontal direction
e.g. camera pans as image is acquired

Degradation model

If the camera pans with constant velocity v during the exposure time T, then

  g(x,y) = (1/T) ∫ f(x − vt, y) dt,   integrated over 0 ≤ t ≤ T

Require H(u,v) for the Wiener filter: interchanging the order of spatial and temporal integration shows that this is a convolution of f(x,y) with a one-dimensional bar (box) filter along each row,

  h(x,y) = 1/ℓ for 0 ≤ x ≤ ℓ, and zero otherwise

where ℓ = vT pixels is the length of the blur.

The FT of the bar is a sinc:  H(u,v) = sin(π ℓ u) / (π ℓ u)

Note, H(u,v) has zeros – a problem for an inverse filter

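A quick numerical check of the last point (illustrative only, assuming a 256 × 256 grid and ℓ = 20 pixels): build the bar PSF and count the frequencies where H(u,v) is numerically zero.

import numpy as np

def motion_psf(shape, ell=20):
    """Horizontal bar of length ell pixels (wrapped at the origin)."""
    psf = np.zeros(shape)
    psf[0, :ell] = 1.0 / ell
    return psf

H = np.fft.fft2(motion_psf((256, 256), ell=20))
print(np.sum(np.abs(H) < 1e-6))   # hundreds of (near-)zero frequencies: 1/H blows up there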

Motion deblurring with a Wiener filter
blur = 20 pixels

1. Compute the FT of the blurred image


2. Multiply the FT by the Wiener filter: F̂(u,v) = W(u,v) G(u,v)
3. Compute the inverse FT
Application: Reading number plates

Algorithm
1. Rotate image so that blur is horizontal
2. Estimate length of blur
3. Construct a bar modelling the convolution
4. Compute and apply a Wiener filter
5. Optimize over values of K (a code sketch of steps 2–5 follows below)
[Figure: blurred number plate, the bar model h(x,y), and the restored plate; blur = 30 pixels]
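A hypothetical sketch of steps 2–5, reusing motion_psf and wiener_filter from the earlier sketches; the candidate bar lengths, K values and the gradient-based sharpness score are all placeholder assumptions — in practice the best K may simply be chosen by inspecting plate readability:

import numpy as np

def deblur_plate(g, lengths=(20, 25, 30), Ks=(1e-4, 1e-3, 1e-2)):
    best, best_score = None, -np.inf
    for ell in lengths:                              # steps 2-3: bar PSF of assumed length
        psf = motion_psf(g.shape, ell)
        for K in Ks:                                 # steps 4-5: Wiener filter, search over K
            f_hat = wiener_filter(g, psf, K)
            gy, gx = np.gradient(f_hat)
            score = np.mean(np.hypot(gx, gy))        # crude sharpness proxy (an assumption)
            if score > best_score:
                best, best_score = f_hat, score
    return best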
Maximum a posteriori (MAP) estimation
Generative model (forward process)

• original f(x,y)

• motion blur

• additive intensity noise

For an image with n pixels, write this process as

g = A f + n

where g and f are n-vectors, and A is an n × n matrix.
Inverse problem

• Estimate f(x,y) by optimizing a cost function:


  f̂ = arg min_f  ||g − A f||² + λ p(f)

where g is the observed image and A f is the generated image; the first term is the likelihood / loss function and λ p(f) is the prior / regularization.
Example

  p(f) = (∇f)²   to suppress high frequency noise
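For this quadratic prior the minimiser has a closed form in the Fourier domain — a regularised inverse in which λ(|Dx|² + |Dy|²) plays the role of a frequency-dependent Wiener constant. A minimal sketch (illustrative, assuming a known PSF and circular boundary conditions):

import numpy as np

def _pad_filter(k, shape):
    """Embed a small filter kernel in a zero image of the given shape."""
    out = np.zeros(shape)
    k = np.asarray(k, dtype=float)
    out[:k.shape[0], :k.shape[1]] = k
    return out

def map_deconvolve(g, psf, lam=1e-2):
    """Minimise ||g - h*f||^2 + lam ||grad f||^2 (circular convolution assumed)."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(psf)
    Dx = np.fft.fft2(_pad_filter([[1, -1]], g.shape))    # FTs of finite-difference filters
    Dy = np.fft.fft2(_pad_filter([[1], [-1]], g.shape))
    den = np.abs(H)**2 + lam * (np.abs(Dx)**2 + np.abs(Dy)**2)
    return np.real(np.fft.ifft2(np.conj(H) * G / den))

Setting lam = 0 recovers the inverse filter; increasing lam penalises high frequencies most, which is what "suppress high frequency noise" means in practice.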


Example 3: Super resolution
Suppose there are multiple images of the same scene
each displaced spatially …

After registration the samples are not coincident


and this may be used to defeat the Nyquist limit.
Intuitive model

Treat images as point samples

[Figure: low- and high-resolution sampling grids]

More images:
• increase resolution
• reduce noise


Generative Model

[Figure: a high-resolution image f is mapped by M1, M2, M3, M4 (registrations, lighting and blur) to the low-resolution images g1, g2, g3, g4]
Sketch solution (non-examinable)

• Estimate the super resolution image which minimizes the error between predicted and observed images.

Write the generative model for one image i as

  gi = Mi f + ηi

where Mi combines registration, lighting and down-sampling. The estimate minimizes a sum of these likelihood terms plus a prior:

  f̂ = arg min_f  Σi ||gi − Mi f||² + λ p(f)
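A minimal sketch of this least-squares formulation under strong simplifying assumptions (each Mi is an integer shift followed by 2× block-average down-sampling, no lighting change, gradient prior, plain gradient descent); real systems estimate sub-pixel registrations and use better optimisers:

import numpy as np

def M(f, shift, s=2):
    """Forward model: integer registration shift, then s x s block-average down-sampling."""
    f = np.roll(f, shift, axis=(0, 1))
    return f.reshape(f.shape[0] // s, s, f.shape[1] // s, s).mean(axis=(1, 3))

def Mt(g, shift, s=2):
    """Adjoint of M: up-sample by replication (scaled), then undo the shift."""
    f = np.repeat(np.repeat(g, s, axis=0), s, axis=1) / s**2
    return np.roll(f, (-shift[0], -shift[1]), axis=(0, 1))

def super_resolve(gs, shifts, shape, lam=1e-2, step=0.2, iters=200):
    """Gradient descent on  sum_i ||gi - Mi f||^2 + lam ||grad f||^2  (small step for stability)."""
    f = np.zeros(shape)
    for _ in range(iters):
        grad = sum(Mt(M(f, sh) - g, sh) for g, sh in zip(gs, shifts))
        gx = np.roll(f, -1, axis=1) - f                  # forward differences (circular)
        gy = np.roll(f, -1, axis=0) - f
        grad += lam * ((np.roll(gx, 1, axis=1) - gx) + (np.roll(gy, 1, axis=0) - gy))
        f -= step * grad
    return f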
Super resolution example I: Mars

25 JPEG images courtesy of the Mars lander


images are from different sweeps of a rotating camera
Super resolution result

[Figure: original frame, average image, and super-resolution result; 2× zoom from 25 JPEG images]


Super resolution example II: car sequence

rotating DV camera
Mosaic
Super-resolution result for ROI

85 JPEG images

[Figure: original ROI, 35 × 20 pixels, and the result at four times the resolution]
Super resolution example III: Run Lola Run
Input – low resolution
Super-resolution output
Blind deblurring (non-examinable)

So far we have assumed that we know the generative model, e.g.

  g = A(h) f,   i.e.   G = H F

[Figure: blurred image = blur kernel * sharp image]

i.e. that h(x,y) is known, so that given the observed image g(x,y), the original image f(x,y) can be estimated (restored).

Consider if only the observed image g(x,y) is known.


This is the problem of blind estimation.
Blind deblurring continued

• Estimate f(x,y) and h(x,y) by optimizing a cost function:

  min over f, h of   ||g − A(h) f||² + λ p_f(f) + μ p_h(h)

where g is the observed image and A(h) f is the generated image; the first term is the likelihood / loss function, p_f(f) is the image prior and p_h(h) is the blur prior.
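A deliberately naive alternating-minimisation sketch of this cost, reusing wiener_filter from above: fix h and solve for f, then fix f and solve for h with a quadratic penalty μ||h||². This is only an illustration; practical blind deblurring needs much stronger priors (e.g. sparse image gradients) to avoid the trivial no-blur solution.

import numpy as np

def blind_deblur(g, psf0, iters=10, K=1e-3, mu=1e-2):
    psf = psf0.copy()
    for _ in range(iters):
        f = wiener_filter(g, psf, K)                 # f-step: minimise over f with h fixed
        F, G = np.fft.fft2(f), np.fft.fft2(g)
        H = np.conj(F) * G / (np.abs(F)**2 + mu)     # h-step: min ||g - h*f||^2 + mu ||h||^2
        psf = np.real(np.fft.ifft2(H))
        psf = np.clip(psf, 0.0, None)                # crude projection: keep h a valid PSF
        psf /= psf.sum() + 1e-12
    return f, psf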
Example I: Blind deblurring

[Figure: blurred image, estimated blur filter, restored image]
More examples of blind deblurring

[Figure: blurry inputs and deblurred outputs]