Image Processing

This document provides an introduction to image processing and pattern recognition. It discusses what a digital image is, defined as a finite set of digital values representing image properties like gray levels or colors. Digital image processing focuses on improving images for human interpretation and processing images for storage, transmission, and machine analysis. The history of digital image processing is reviewed, from early cable transmission of images in the 1920s to uses in space missions and medical imaging starting in the 1960s-1970s.


Image Processing and Pattern Recognition (IPPR): Introduction (Lecture 1)

Sanjeeb Prasad Panday


[email protected]
Introduction

“One picture is worth more than ten thousand words”
Anonymous
References
“Digital Image Processing”, Rafael C. Gonzalez & Richard E. Woods, Addison-Wesley, 2002
– Much of the material that follows is taken from this book
Contents
This lecture will cover:
– What is a digital image?
– What is digital image processing?
– History of digital image processing
– State of the art examples of digital image
processing
– Key stages in digital image processing
What is a Digital Image?
A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels.

Images taken from Gonzalez & Woods, Digital Image Processing (2002)
What is a Digital Image? (cont…)
Pixel values typically represent gray levels, colours, heights, opacities etc.
Remember digitization implies that a digital image is an approximation of a real scene.
What is a Digital Image? (cont…)
Common image formats include:
– 1 sample per point (B&W or Grayscale)
– 3 samples per point (Red, Green, and Blue)
– 4 samples per point (Red, Green, Blue, and “Alpha”,
a.k.a. Opacity)

For most of this course we will focus on grey-scale images.
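As an illustrative sketch (the array values and variable names below are made up, not from the slides), the sample-per-point formats above can be pictured directly in code:

```python
# Hypothetical 3x3 grayscale image: 1 sample per point, 0 (black) to 255 (white).
gray = [
    [0, 64, 128],
    [64, 128, 192],
    [128, 192, 255],
]

# Colour formats carry more samples per point:
rgb_pixel = (255, 0, 0)        # 3 samples: red, green, blue
rgba_pixel = (255, 0, 0, 128)  # 4 samples: RGB plus "alpha" (opacity)

# Every grayscale sample lies in the valid 0-255 range.
assert all(0 <= v <= 255 for row in gray for v in row)
```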
What is Digital Image Processing?
Digital image processing focuses on two
major tasks
– Improvement of pictorial information for
human interpretation
– Processing of image data for storage,
transmission and representation for
autonomous machine perception
Some argument about where image
processing ends and fields such as image
analysis and computer vision start
What is DIP? (cont…)
The continuum from image processing to
computer vision can be broken up into low-,
mid- and high-level processes
Low Level Process – Input: Image; Output: Image. Examples: noise removal, image sharpening
Mid Level Process – Input: Image; Output: Attributes. Examples: object recognition, segmentation
High Level Process – Input: Attributes; Output: Understanding. Examples: scene understanding, autonomous navigation

In this course we will stop here.
History of Digital Image Processing
Early 1920s: One of the first applications of digital imaging was in the newspaper industry
– The Bartlane cable picture transmission service
– Images were transferred by submarine cable between London and New York
– Pictures were coded for cable transfer and reconstructed at the receiving end on a telegraph printer
(Figure: an early digital image)
History of DIP (cont…)
Mid to late 1920s: Improvements to the Bartlane system resulted in higher quality images
– New reproduction processes based on photographic techniques
– Increased number of tones in reproduced images
(Figures: early 15-tone digital image; improved digital image)
History of DIP (cont…)
1960s: Improvements in computing technology and the onset of the space race led to a surge of work in digital image processing
– 1964: Computers used to improve the quality of images of the moon taken by the Ranger 7 probe
– Such techniques were used in other space missions including the Apollo landings
(Figure: a picture of the moon taken by the Ranger 7 probe minutes before landing)
History of DIP (cont…)
1970s: Digital image processing begins to be used in medical applications
– 1979: Sir Godfrey N. Hounsfield & Prof. Allan M. Cormack share the Nobel Prize in medicine for the invention of tomography, the technology behind Computerised Axial Tomography (CAT) scans
(Figure: typical head slice CAT image)
History of DIP (cont…)
1980s - Today: The use of digital image
processing techniques has exploded and
they are now used for all kinds of tasks in all
kinds of areas
– Image enhancement/restoration
– Artistic effects
– Medical visualisation
– Industrial inspection
– Law enforcement
– Human computer interfaces
Examples: Image Enhancement
One of the most common uses of DIP techniques: improve quality, remove noise etc.
Examples: The Hubble Telescope
Launched in 1990, the Hubble telescope can take images of very distant objects.
However, an incorrectly ground mirror made many of Hubble’s images useless.
Image processing techniques were used to fix this.
Examples: Artistic Effects
Artistic effects are used to make images more visually appealing, to add special effects and to make composite images.
Examples: Medicine
Take a slice from an MRI scan of a canine heart, and find boundaries between types of tissue
– Image with gray levels representing tissue density
– Use a suitable filter to highlight edges
(Figures: original MRI image of a dog heart; edge detection image)
Examples: GIS
Geographic Information Systems
– Digital image processing techniques are used extensively to manipulate satellite imagery
– Terrain classification
– Meteorology
Examples: GIS (cont…)
Night-Time Lights of the World data set
– Global inventory of human settlement
– Not hard to imagine the kind of analysis that might be done using this data
Examples: Industrial Inspection
Human operators are expensive, slow and unreliable.
Make machines do the job instead.
Industrial vision systems are used in all kinds of industries.
Can we trust them?
Examples: PCB Inspection
Printed Circuit Board (PCB) inspection
– Machine inspection is used to determine that
all components are present and that all solder
joints are acceptable
– Both conventional imaging and x-ray imaging
are used
Examples: Law Enforcement
Image processing techniques are used extensively by law enforcers
– Number plate recognition for speed cameras/automated toll systems
– Fingerprint recognition
– Enhancement of CCTV images
Examples: HCI
Try to make human computer
interfaces more natural
– Face recognition
– Gesture recognition
Does anyone remember the
user interface from “Minority
Report”?
These tasks can be
extremely difficult
Key Stages in Digital Image Processing

(Diagram: starting from the problem domain, the key stages are image acquisition, image enhancement, image restoration, morphological processing, segmentation, object recognition, representation & description, image compression and colour image processing)
Key Stages in Digital Image Processing: Image Acquisition
Key Stages in Digital Image Processing: Image Enhancement
Key Stages in Digital Image Processing: Image Restoration
Key Stages in Digital Image Processing: Morphological Processing
Key Stages in Digital Image Processing: Segmentation
Key Stages in Digital Image Processing: Object Recognition
Key Stages in Digital Image Processing: Representation & Description
Key Stages in Digital Image Processing: Image Compression
Key Stages in Digital Image Processing: Colour Image Processing
(Each of these slides highlights the corresponding stage in the same key-stages diagram)
Summary
We have looked at:
– What is a digital image?
– What is digital image processing?
– History of digital image processing
– State of the art examples of digital image
processing
– Key stages in digital image processing
Contents
This lecture will cover:
The human visual system
Light and the electromagnetic spectrum
Image representation
Image sensing and acquisition
Sampling, quantisation and resolution
Human Visual System
The best vision model we have!
Knowledge of how images form in the eye
can help us with processing digital images
We will take just a whirlwind tour of the
human visual system
Structure Of The Human Eye
The lens focuses light from objects onto the retina.
The retina is covered with light receptors called cones (6-7 million) and rods (75-150 million).
Cones are concentrated around the fovea and are very sensitive to colour.
Rods are more spread out and are sensitive to low levels of illumination.
Blind-Spot Experiment
Draw an image similar to that below on a piece of paper (the dot and cross are about 6 inches apart).
Close your right eye and focus on the cross with your left eye.
Hold the image about 20 inches away from your face and move it slowly towards you.
The dot should disappear!
Image Formation In The Eye
Muscles within the eye can be used to change the shape of the lens, allowing us to focus on objects that are near or far away.
An image is focused onto the retina, causing rods and cones to become excited, which ultimately send signals to the brain.
Brightness Adaptation & Discrimination

The human visual system can perceive approximately 10^10 different light intensity levels.
However, at any one time we can only discriminate between a much smaller number – brightness adaptation.
Similarly, the perceived intensity of a region is related to the light intensities of the regions surrounding it.
Brightness Adaptation & Discrimination (cont…)
An example of Mach bands

Brightness Adaptation & Discrimination (cont…)
An example of simultaneous contrast

For more great illusion examples take a look at: http://web.mit.edu/persci/gaz/
Available here: http://www.lottolab.org/Visual%20Demos/Demo%2015.html
Optical Illusions

Our visual
systems play lots
of interesting
tricks on us
Light And The Electromagnetic Spectrum

Light is just a particular part of the electromagnetic spectrum that can be sensed by the human eye.
The electromagnetic spectrum is split up according to the wavelengths of different forms of energy.
Reflected Light
The colours that we perceive are determined by the nature of the light reflected from an object.
For example, if white light is shone onto a green object most wavelengths are absorbed, while green light is reflected from the object.
(Figure: white light in; most colours absorbed; green light reflected)
Sampling, Quantisation And Resolution

In the following slides we will consider what is involved in capturing a digital image of a real-world scene:
– Image sensing and representation
– Sampling and quantisation
– Resolution
Image Representation
Before we discuss image acquisition, recall that a digital image is composed of M rows and N columns of pixels, each storing a value.
Pixel values are most often grey levels in the range 0-255 (black-white).
We will see later on that images can easily be represented as matrices, indexed as f(row, col).
Image Acquisition
Images are typically generated by illuminating a scene and absorbing the energy reflected by the objects in that scene
– Typical notions of illumination and scene can be way off:
• X-rays of a skeleton
• Ultrasound of an unborn baby
• Electro-microscopic images of molecules
Image Sensing
Incoming energy lands on a sensor material responsive to that type of energy and this generates a voltage.
Collections of sensors are arranged to capture images.
(Figures: a single imaging sensor; a line of image sensors; an array of image sensors; using sensor strips and rings)
Image Sampling And Quantisation
A digital sensor can only measure a limited number of samples at a discrete set of energy levels.
Quantisation is the process of converting a continuous analogue signal into a digital representation of this signal.
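A minimal sketch of these two steps (the `quantise` helper and its parameters are hypothetical, not from the slides): sample a continuous signal at discrete points, then map each sample to one of 2^bits integer levels.

```python
# Hypothetical sketch of sampling and quantisation.
import math

def quantise(value, v_min, v_max, bits):
    """Map a continuous value in [v_min, v_max] to one of 2**bits levels."""
    levels = 2 ** bits
    step = (v_max - v_min) / levels
    return min(int((value - v_min) / step), levels - 1)

# Sampling: evaluate f(t) = sin(t) at 8 equally spaced points.
samples = [math.sin(2 * math.pi * t / 8) for t in range(8)]

# Quantisation: 2 bits per sample gives 4 discrete levels (0..3).
digital = [quantise(s, -1.0, 1.0, 2) for s in samples]
assert all(0 <= level <= 3 for level in digital)
```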
Image Sampling And Quantisation (cont…)
Remember that a digital image is always only an approximation of a real world scene.
Image Representation
(Figure-only slides)
Spatial Resolution
The spatial resolution of an image is determined by how sampling was carried out.
Spatial resolution simply refers to the smallest discernible detail in an image.
Vision specialists will often talk about pixel size; graphic designers will talk about dots per inch (DPI).
Spatial Resolution (cont…)
(Figure-only slides illustrating different spatial resolutions)
Intensity Level Resolution
Intensity level resolution refers to the number of intensity levels used to represent the image.
The more intensity levels used, the finer the level of detail discernible in an image.
Intensity level resolution is usually given in terms of the number of bits used to store each intensity level.

Number of Bits | Number of Intensity Levels | Examples
1  | 2      | 0, 1
2  | 4      | 00, 01, 10, 11
4  | 16     | 0000, 0101, 1111
8  | 256    | 00110011, 01010101
16 | 65,536 | 1010101010101010
Intensity Level Resolution (cont…)
(Figures: the same image at 256 grey levels (8 bits per pixel), 128 (7 bpp), 64 (6 bpp), 32 (5 bpp), 16 (4 bpp), 8 (3 bpp), 4 (2 bpp) and 2 grey levels (1 bpp))
Saturation & Noise
Resolution: How Much Is Enough?
The big question with resolution is always
how much is enough?
This all depends on what is in the image and what
you would like to do with it
Key questions include
• Does the image look aesthetically pleasing?
• Can you see what you need to see within the
image?
Resolution: How Much Is Enough? (cont…)

The picture on the right is fine for counting the number of cars, but not for reading the number plate.
Intensity Level Resolution (cont…)
(Figure-only slides: low detail, medium detail and high detail images)
Summary
We have looked at:
Human visual system
Light and the electromagnetic spectrum
Image representation
Image sensing and acquisition
Sampling, quantisation and resolution
Next time we start to look at techniques for
image enhancement
Image Processing and Pattern Recognition (IPPR): (Lecture 2)

Sanjeeb Prasad Panday
[email protected]
Digital image representation
Monochrome image (or simply image) refers to a 2-dimensional light intensity function f(x,y)
– x and y denote spatial coordinates
– the value of f(x,y) at (x,y) is proportional to the brightness (or gray level) of the image at that point
Digital image representation
A digital image is an image f(x,y) that has been
discretized both in spatial coordinates and in
brightness
• Considered as a matrix whose row and column
indices represent a point in the image
• The corresponding matrix element value
represents the gray level at that point
• The elements of such an array are referred to as:
– image elements
– picture elements (pixels or pels)
Steps in image processing
The problem domain in this example consists of pieces of mail
and the objective is to read the address on each piece
Step 1: image acquisition
– Acquire a digital image using an image sensor
• a monochrome or color TV camera: produces an entire image of the
problem domain every 1/30 second
• a line-scan camera: produces a single image line at a time, motion past the
camera produces a 2-dimensional image
– If not digital, an analog-to-digital conversion process is required
– The nature of the image sensor (and the produced image) are
determined by the application
• Mail reading applications rely greatly on line-scan cameras
• CCD and CMOS imaging sensors are very common in many applications
Steps in image processing
• Step 2: preprocessing
– Key function: improve the image in ways that increase the
chance for success of the other processes
– In the mail example, may deal with contrast enhancement,
removing noise, and isolating regions whose texture indicates a
likelihood of alphanumeric information
Steps in image processing
• Step 3: segmentation
– Broadly defined: breaking an image into its constituent parts
– In general, one of the most difficult tasks in image processing
• Good segmentation simplifies the rest of the problem
• Poor segmentation makes the task impossible
– Output is usually raw pixel data: may represent region boundaries,
points in the region itself, etc.
• Boundary representation can be useful when the focus is on
external shape characteristics (e.g. corners, rounded edges, etc.)
• Region representation is appropriate when the focus is on
internal properties (e.g. texture or skeletal shape)
– For the mail problem (character recognition) both representations
can be necessary
Steps in image processing
• Step 4: representation & description
– Representation: transforming raw data into a form
suitable for computer processing
– Description (also called feature extraction) deals with
extracting features that result in some quantitative
information of interest or features which are basic for
differentiating one class of objects from another
– In terms of character recognition, descriptors such as
lakes (holes) and bays help differentiate one part of the
alphabet from another
Steps in image processing
• Step 5: recognition & interpretation
– Recognition: The process which assigns a label to an object
based on the information provided by its descriptors
A may be the alphanumeric character A
– Interpretation: Assigning meaning to an ensemble of
recognized objects
35487-0286 may be a ZIP code
Steps in image processing
(Figure-only slides)
A Knowledge Base
• Knowledge about a problem domain is coded into an
image processing system in the form of a knowledge
database
– May be simple:
• detailing areas of an image expected to be of interest
– May be complex
• A list of all possible defects of a material in a vision inspection
system
– Guides operation of each processing module
– Controls interaction between modules
– Provides feedback through the system
Steps in an image processing system
• Not all image processing systems would require all
steps/processing modules
– Image enhancement for human visual perception may not go
beyond the preprocessing stage
• A knowledge database may not be required
• Processing systems which include recognition and interpretation are associated with image analysis systems in which the objective is autonomous (or at least partially automatic) operation
A simple imaging model
• An image is a 2-D light intensity function f(x,y)
• As light is a form of energy
0 < f(x,y) < ∞
• f(x,y) may be expressed as the product of 2 components
f(x,y)=i(x,y)r(x,y)
• i(x,y) is the illumination: 0 < i(x,y) < ∞
– Typical values: 9000 foot-candles sunny day, 100 office room, 0.01
moonlight
• r(x,y) is the reflectance: 0 < r(x,y) < 1
– r(x,y)=0 implies total absorption
– r(x,y)=1 implies total reflectance
– Typical values: 0.01 black velvet, 0.80 flat white paint, 0.93 snow
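The product model f(x,y) = i(x,y)r(x,y) can be sketched directly (the helper name and dictionaries below are illustrative only; the numeric values are the typical figures quoted above):

```python
# Sketch of the simple imaging model f(x,y) = i(x,y) * r(x,y).
# Illumination in foot-candles, reflectance as a fraction in [0, 1].
illumination = {"sunny day": 9000.0, "office": 100.0, "moonlight": 0.01}
reflectance = {"black velvet": 0.01, "flat white paint": 0.80, "snow": 0.93}

def intensity(i, r):
    """Image intensity as the product of illumination and reflectance."""
    assert i > 0 and 0.0 <= r <= 1.0
    return i * r

# A flat white wall in an office: f = 100 * 0.80
print(intensity(illumination["office"], reflectance["flat white paint"]))
```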
A simple imaging model
• The intensity of a monochrome image f at (x,y) is the gray level (l) of the image at that point: Lmin ≤ l ≤ Lmax
• In practice Lmin = i_min r_min and Lmax = i_max r_max
• As a guideline Lmin ≈ 0.005 and Lmax ≈ 100 for indoor image processing applications
• The interval [Lmin, Lmax] is called the gray scale
• Common practice is to shift the interval to [0, L], where l=0 is considered black and l=L is considered white. All intermediate values are shades of gray
Sampling and Quantization
• To be suitable for computer processing, an image f(x,y) must be digitized both spatially and in amplitude
• Digitizing the spatial coordinates is called image sampling
• Amplitude digitization is called gray-level quantization
• f(x,y) is approximated by equally spaced samples in the form of an N×M array where each element is a discrete quantity
Sampling and Quantization
• Common practice is to let N and M be powers of two: N=2^n and M=2^k
• And G=2^m, where G denotes the number of gray levels
• The assumption here is that gray levels are equally spaced in the interval [0,L]
• The number of bits, b, necessary to store the image is then b = N × M × m
• For example, a 128x128 image with 64 gray levels would require 98,304 bits of storage
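The storage formula can be checked with a short sketch (the `storage_bits` helper is hypothetical, not from the slides):

```python
# Bits needed to store an N x M image with G = 2**m gray levels: b = N * M * m.
def storage_bits(n_rows, n_cols, gray_levels):
    m = gray_levels.bit_length() - 1       # m such that 2**m == gray_levels
    assert 2 ** m == gray_levels, "gray levels must be a power of two"
    return n_rows * n_cols * m

# The example above: a 128x128 image with 64 gray levels.
print(storage_bits(128, 128, 64))  # 98304
```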
Sampling and Quantization

• How many samples and gray levels are required for a “good” approximation?
• The resolution (the degree of discernible detail) depends strongly on these two parameters
Effects of reducing spatial resolution
(Figure-only slide)

Effects of reducing gray levels
(Figure-only slide)

Basic relationships between pixels
– An image is denoted by: f(x,y)
– Lowercase letters (e.g. p, q) will denote individual pixels
– A subset of f(x,y) is denoted by S
– Neighbors of a pixel:
– A pixel p at (x,y) has 4 horizontal/vertical neighbors at
• (x+1, y), (x-1, y), (x, y+1) and (x, y-1)
• called the 4-neighbors of p: N4(p)
– A pixel p at (x,y) has 4 diagonal neighbors at
• (x+1, y+1), (x+1, y-1), (x-1, y+1) and (x-1, y-1)
• called the diagonal neighbors of p: ND(p)
– The 4-neighbors and the diagonal neighbors of p together are called the 8-neighbors of p: N8(p)
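These neighbourhoods can be sketched as simple set-valued helpers (the names `n4`, `nd` and `n8` are mine, not from the slides):

```python
# Sketch of the pixel neighbourhoods defined above.
def n4(x, y):
    """4-neighbors: horizontal and vertical."""
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    """Diagonal neighbors."""
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    """8-neighbors: union of the 4-neighbors and diagonal neighbors."""
    return n4(x, y) | nd(x, y)

assert len(n8(5, 5)) == 8
```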
Connectivity between pixels
• Connectivity is an important concept in establishing boundaries of objects and components of regions in an image
• When are two pixels connected?
– If they are adjacent in some sense (say they are 4-neighbors)
– and, if their gray levels satisfy a specified criterion of similarity (say
they are equal)
• Example: given a binary image (e.g. gray scale = [0,1]), two pixels
may be 4-neighbors but are not considered connected unless they
have the same value
Connectivity between pixels
• Let V be the set of values used to determine connectivity
– For example, in a binary image, V={1} for the connectivity of pixels
with a value of 1
– In a gray scale image, for the connectivity of pixels with a range of
intensity values of, say, 32 to 64, it follows that V={32,33,...,63,64}
– Consider three types of connectivity
• 4-connectivity: Pixels p and q with values from V are 4-connected if q is
in the set N4(p)
• 8-connectivity: Pixels p and q with values from V are 8-connected if q is
in the set N8(p)
• m-connectivity (mixed): Pixels p and q with values from V are m-connected if
– q is in the set N4(p), or
– q is in the set ND(p) and the set N4(p) ∩ N4(q) contains no pixels whose values are from V (this set consists of the pixels that are 4-neighbors of both p and q)
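The m-connectivity rule above (which subsumes a 4-connectivity check) can be sketched as follows. All helper names are mine; the image is assumed to be a list of lists indexed as img[row][col], and V is the set of values considered connected:

```python
# Hypothetical sketch of an m-connectivity test between two pixels.
def _n4(x, y):
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def _nd(x, y):
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def _in_v(img, v, pts):
    """Keep only in-bounds points whose pixel values are in V."""
    h, w = len(img), len(img[0])
    return {(x, y) for (x, y) in pts if 0 <= x < h and 0 <= y < w and img[x][y] in v}

def m_connected(img, p, q, v):
    """p and q are m-connected if q is in N4(p), or q is in ND(p) and
    N4(p) ∩ N4(q) contains no pixel with a value in V."""
    if img[p[0]][p[1]] not in v or img[q[0]][q[1]] not in v:
        return False
    if q in _n4(*p):
        return True
    return q in _nd(*p) and not (_in_v(img, v, _n4(*p)) & _in_v(img, v, _n4(*q)))
```

For a binary image with V={1}, two diagonal 1-pixels are m-connected only when they do not share a 4-neighbor that is also 1, which is what removes the multiple-path ambiguity of plain 8-connectivity.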
Pixel adjacencies and paths
• Pixel p is adjacent to q if they are connected
– We can define 4-, 8-, or m-adjacency depending on the
specified type of connectivity
• Two image subsets S1 and S2 are adjacent if some pixel in S1
is adjacent to S2
• A path from p at (x,y) to q at (s,t) is a sequence of distinct pixels with coordinates (x0,y0), (x1,y1), …, (xn,yn)
– where (x0,y0) = (x,y) and (xn,yn) = (s,t), and
– (xi,yi) is adjacent to (xi-1,yi-1) for 1 ≤ i ≤ n
– n is the length of the path
• If p and q are in S, then p is connected to q in S if there is a
path from p to q consisting entirely of pixels in S
Example paths
Connected components
• For any pixel p in S, the set of pixels connected to p forms a connected component of S
• Distinct connected components in S are said to be disjoint
Labeling 4-connected components
• Consider scanning an image pixel by pixel from left to right and
top to bottom

– Assume, for the moment, we are interested in 4-connected components
– Let p denote the pixel of interest, and r and t denote the
upper and left neighbors of p, respectively
– The nature of the scanning process assures that r and t have
been encountered (and labeled if 1) by the time p is
encountered
Labeling 4-connected components
• Consider the following procedure:
– if p = 0, continue to the next position
– if r = t = 0, assign a new label to p (Ln)
– if r = t = 1 and they have the same label, assign that label to p
– if only one of r and t is 1, assign its label to p
– if r = t = 1 and they have different labels, assign one of the labels to p and note that the two labels are equivalent (that is, r and t are connected through p)
• At the end of the scan, sort pairs of equivalent labels into equivalence classes and assign a different label to each class
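The procedure above can be sketched as a two-pass labeling routine. This is a hypothetical implementation (not from the slides) that records label equivalences with union-find instead of sorting pairs afterwards:

```python
# Two-pass 4-connected component labeling of a binary image (list of lists).
def label_4_connected(img):
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}                              # union-find over provisional labels

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    for x in range(h):
        for y in range(w):
            if img[x][y] == 0:
                continue                     # p = 0: move to the next position
            r = labels[x - 1][y] if x > 0 else 0   # upper neighbor
            t = labels[x][y - 1] if y > 0 else 0   # left neighbor
            if r == 0 and t == 0:            # new label
                labels[x][y] = next_label
                parent[next_label] = next_label
                next_label += 1
            elif r and t:                    # both labeled: record equivalence
                labels[x][y] = find(r)
                if find(r) != find(t):
                    parent[find(t)] = find(r)
            else:                            # exactly one labeled neighbor
                labels[x][y] = r or t

    # Second pass: replace provisional labels by their equivalence class.
    classes = {root: i + 1 for i, root in enumerate(sorted({find(l) for l in parent}))}
    return [[classes[find(l)] if l else 0 for l in row] for row in labels]
```

The second pass corresponds to the slide's final step: equivalent provisional labels collapse into one class per connected component.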
Labeling 4-connected components (example)
Labeling 8-connected components
• Proceed as in the 4-connected component labeling case, but
also examine two upper diagonal neighbors (q and s) of p
Labeling connected components in non-binary images
• The 4-connected or 8-connected labeling schemes can be
extended to gray level images
• The set V may be used to connect into a component only
those pixels within a specified range of pixel values
Distance measures
• Given pixels p, q, and z at (x,y), (s,t) and (u,v) respectively,
• D is a distance function (or metric) if:
– D(p,q) ≥ 0 (D(p,q)=0 iff p=q),
– D(p,q) = D(q,p), and
– D(p,z) ≤ D(p,q) + D(q,z).
• The Euclidean distance between p and q is given by:
De(p,q) = [(x-s)² + (y-t)²]^(1/2)
• The pixels having distance less than or equal to some value r from (x,y) are the points contained in a disk of radius r centered at (x,y)
33
of
90
Distance measures
• The D4 distance (also called the city block distance) between p and q is given by:
D4(p,q) = |x-s| + |y-t|
• The pixels having a D4 distance less than some r from (x,y) form a diamond centered at (x,y)
• Example: pixels where D4 ≤ 2
34
of
90
Distance measures
• The D8 distance (also called the chessboard distance) between p and q is given by:
D8(p,q) = max(|x-s|, |y-t|)
• The pixels having a D8 distance less than some r from (x,y) form a square centered at (x,y)
• Example: pixels where D8 ≤ 2
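The three metrics can be written directly from their definitions (a small sketch; the function names are mine):

```python
import math

def d_euclidean(p, q):
    # De(p,q) = [(x-s)^2 + (y-t)^2]^(1/2)
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def d4(p, q):
    # city-block distance: |x-s| + |y-t|
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    # chessboard distance: max(|x-s|, |y-t|)
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
```

Enumerating the pixels with D4 ≤ 2 around a point gives the 13-pixel diamond shown above; D8 ≤ 2 gives the full 5×5 square (25 pixels).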
35
of
90
Distance measures and connectivity
• The D4 distance between two points p and q is the shortest 4-
path between the two points
• The D8 distance between two points p and q is the shortest 8-
path between the two points
• D4 and D8 may be considered, regardless of whether a
connected path exists between them, because the definition of
these distances involves only the pixel coordinates
• For m-connectivity, the value of the distance (the length of the
path) between two points depends on the values of the pixels
along the path
36
of
90
Distance measures and m-connectivity

• Consider the given arrangement of pixels and assume
– p, p2 and p4 = 1
– p1 and p3 can be 0 or 1
• If V={1} and p1 and p3 are 0, the m-distance (p, p4) is 2
• If either p1 or p3 is 1, the m-distance (p, p4) is 3
• If p1 and p3 are 1, the m-distance (p, p4) is 4
37
of
90
M-connectivity example
38
of
90
Arithmetic & logic operations
• Arithmetic & logic operations on images used extensively in
most image processing applications
– May cover the entire image or a subset
• Arithmetic operations between pixels p and q are defined as:
– Addition: (p+q)
• Used often for image averaging to reduce noise
– Subtraction: (p-q)
• Used often for static background removal
– Multiplication: (p*q) (or pq, p×q)
• Used to correct gray-level shading
– Division: (p÷q) (or p/q)
• As in multiplication
39
of
90
Logic operations
• Logic operations between pixels p and q are defined as:
– AND: p AND q (also p⋅q)
– OR: p OR q (also p+q)
– COMPLEMENT: NOTq (also q’)
• Form a functionally complete set
• Applicable to binary images
• Basic tools in binary image processing, used for:
– Masking
– Feature detection
– Shape analysis
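On binary images (stored here as flat lists of 0/1 values purely for illustration), the three operations are one-liners:

```python
def img_and(a, b):
    # p AND q, pixel by pixel
    return [int(x and y) for x, y in zip(a, b)]

def img_or(a, b):
    # p OR q, pixel by pixel
    return [int(x or y) for x, y in zip(a, b)]

def img_not(a):
    # COMPLEMENT: NOT q
    return [1 - x for x in a]
```

AND with a mask image is exactly the "masking" use listed above: it keeps pixels only where the mask is 1.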
40
of
90
Examples of logic operations
41
of
90
Examples of logic operations
42
of
Neighborhood-oriented
90
operations
• Arithmetic and logical operations may take place on a
subset of the image
– Typically neighborhood oriented
• Formulated in the context of mask operations (also
called template, window or filter operations)
• Basic concept: let the value of a pixel be a function of its
(current) gray level and the gray level of its neighbors (in
some sense)
43
of
Neighborhood-oriented
90
operations
• Consider the following subset of pixels in an image
• Suppose we want to filter the image by replacing the value at
Z5 with the average value of the pixels in a 3x3 region centered
around Z5
• Perform an operation of the form:
z = (1/9)(z1 + z2 + ... + z9)
• and assign to z5 the value of z
44
of
Neighborhood-oriented
90
operations
• In the more general form, the operation may look like:
z = w1*z1 + w2*z2 + ... + w9*z9
• This equation is widely used in image processing
• Proper selection of coefficients (weights) allows for operations
such as
– noise reduction
– region thinning
– edge detection
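A direct (unoptimised) sketch of the general mask operation R = w1*z1 + ... + w9*z9, assuming images stored as lists of rows; border pixels are left unchanged here, which is just one of several possible edge strategies:

```python
def apply_mask(img, w):
    """Apply a 3x3 mask w to every interior pixel of img:
    each output value is the sum of products of mask coefficients
    and the corresponding neighbourhood pixel values."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]            # copy; borders stay as-is
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            out[y][x] = sum(w[i][j] * img[y - 1 + i][x - 1 + j]
                            for i in range(3) for j in range(3))
    return out
```

With all nine weights equal to 1/9 this is neighbourhood averaging (noise reduction); other coefficient choices give the thinning and edge-detection masks mentioned above.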
45
of
90
What Is Image Enhancement?
Image enhancement is the process of
making images more useful
The reasons for doing this include:
– Highlighting interesting detail in images
– Removing noise from images
– Making images more visually appealing
Images taken from Gonzalez & Woods, Digital Image Processing (2002)

of
90
46
Image Enhancement Examples
[Slides 46–49: example images]
50
of
90
Spatial & Frequency Domains
There are two broad categories of image
enhancement techniques
– Spatial domain techniques
• Direct manipulation of image pixels
– Frequency domain techniques
• Manipulation of Fourier transform or wavelet
transform of an image
For the moment we will concentrate on
techniques that operate in the spatial
domain
51
of
90
Basic Spatial Domain Image Enhancement
Most spatial domain enhancement operations
can be reduced to the form
g(x, y) = T[f(x, y)]
where f(x, y) is the input image, g(x, y) is the processed image, and T is some operator defined over some neighbourhood of (x, y)
52
of
90
Point Processing
The simplest spatial domain operations
occur when the neighbourhood is simply the
pixel itself
In this case T is referred to as a grey level
transformation function or a point processing
operation
Point processing operations take the form
s=T(r)
where s refers to the processed image pixel
value and r refers to the original image pixel
value
53
of
90
Point Processing Example: Negative Images
Negative images are useful for enhancing white or grey detail embedded in dark regions of an image
– Note how much clearer the tissue is in the
negative image of the mammogram below

s = 1.0 - r
Original Image | Negative Image
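For L grey levels the negative is s = (L-1) - r; a minimal sketch over a flat list of 8-bit pixel values (function name is mine):

```python
def negative(pixels, L=256):
    # s = (L-1) - r for each pixel value r
    return [(L - 1) - r for r in pixels]
```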
54
of
90
Point Processing Example: Negative Images (cont…)
Original Image | Enhanced Image
s = intensity_max - r
55
of
90
Point Processing Example: Thresholding
Thresholding transformations are particularly useful for segmentation in which we want to isolate an object of interest from a background

s = 1.0 if r > threshold
s = 0.0 if r <= threshold
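The two-case transformation above is trivial to sketch (function name is mine; pixels assumed normalised to [0.0, 1.0]):

```python
def threshold(pixels, t):
    # s = 1.0 if r > t, else 0.0  -- produces a binary image
    return [1.0 if r > t else 0.0 for r in pixels]
```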
56
of
90
Point Processing Example: Thresholding (cont…)
Original Image | Enhanced Image
s = 1.0 if r > threshold
s = 0.0 if r <= threshold
of
90
57
Intensity Transformations
58
of
90
Basic Grey Level Transformations
There are many different kinds of grey level transformations
Three of the most
common are shown
here
– Linear
• Negative/Identity
– Logarithmic
• Log/Inverse log
– Power law
• nth power/nth root
59
of
90
Logarithmic Transformations
The general form of the log transformation is
s = c * log(1 + r)
The log transformation maps a narrow range
of low input grey level values into a wider
range of output values
The inverse log transformation performs the
opposite transformation
Compresses the dynamic range of images
with large variations in pixel values
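The transformation s = c * log(1 + r) can be sketched directly (function name is mine; pixels assumed in [0.0, 1.0] as stated below):

```python
import math

def log_transform(pixels, c=1.0):
    # s = c * log(1 + r): expands dark values, compresses bright ones
    return [c * math.log(1.0 + r) for r in pixels]
```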
60
of
90
Logarithmic Transformations (cont…)

Log functions are particularly useful when the input grey level values may have an extremely large range of values
In the following example the Fourier
transform of an image is put through a log
transform to reveal more detail

s = log(1 + r)
61
of
90
Logarithmic Transformations (cont…)

Original Image | Enhanced Image
s = log(1 + r)

We usually set c to 1
Grey levels must be in the range [0.0, 1.0]
62
of
90
Power Law Transformations
Power law transformations have the following form:
s = c * r^γ
Map a narrow range
of dark input values
into a wider range of
output values or vice
versa
Varying  gives a whole
family of curves
63
of
90
Power Law Transformations (cont…)
Original Image | Enhanced Image
s = r^γ

We usually set c to 1
Grey levels must be in the range [0.0, 1.0]
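A minimal sketch of s = c * r^γ (function name is mine; pixels in [0.0, 1.0] as required above):

```python
def power_law(pixels, gamma, c=1.0):
    # s = c * r**gamma:  gamma < 1 brightens dark regions,
    # gamma > 1 darkens the image
    return [c * (r ** gamma) for r in pixels]
```

The same function also performs gamma correction: for a display with γ = 2.5 one would apply power_law(pixels, 1/2.5) before sending values to the device.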
64
of
90
Power Law Example
65
of
90
Power Law Example (cont…)

[Plot: transformation curve for γ = 0.6 — transformed vs. old intensities, both in the range 0 to 1]
66
of
90
Power Law Example (cont…)

[Plot: transformation curve for γ = 0.4 — transformed vs. original intensities, both in the range 0 to 1]
67
of
90
Power Law Example (cont…)

[Plot: transformation curve for γ = 0.3 — transformed vs. original intensities, both in the range 0 to 1]
68
of
90
Power Law Example (cont…)
The images to the right show a magnetic resonance (MR) image of a fractured human spine
s = r^0.6
s = r^0.4
Different curves
highlight different
detail
69
of
90
Power Law Example
70
of
90
Power Law Example (cont…)

[Plot: transformation curve for γ = 5.0 — transformed vs. original intensities, both in the range 0 to 1]
71
of
90
Power Law Transformations (cont…)
An aerial photo of a runway is shown
This time power law transforms are used to darken the image (s = r^3.0, s = r^4.0)
Different curves highlight different detail
72
of
90
Gamma Correction
Many devices used for image capture, display and printing
respond according to a power law
• The exponent in the power-law equation is referred to as
gamma
• The process of correcting for the power-law response is
referred to as gamma correction
• Example:
– CRT devices have an intensity-to-voltage response that is a power function (exponents typically range from 1.8 to 2.5)
– Gamma correction in this case could be achieved by applying the transformation s = r^(1/2.5) = r^0.4
73
of
90
Gamma Correction
Many of you might be familiar with gamma correction of computer monitors


Problem is that
display devices do
not respond linearly
to different
intensities
Can be corrected using a power-law (gamma) transform
of
90
74
More Contrast Issues
75
of
90
Piecewise Linear Transformation Functions
Rather than using a well-defined mathematical function we can use arbitrary user-defined transforms
The images below show a contrast stretching
linear transform to add contrast to a poor
quality image
76
of
90
Piecewise Linear Transformation Functions
• Rather than using a well defined mathematical function we can use
arbitrary user-defined transforms
• Contrast stretching expands the range of intensity levels in an image so it spans a given (full) intensity range
• Control points (r1,s1) and (r2,s2) control the shape of the transform T(r)
• r1=r2, s1=0 and s2=L-1 yields a thresholding function
The contrast stretched image shown in the previous slide is obtained using
the transformation obtained from the equation of the line having following
points
• (r1,s1)=(rmin,0) and (r2, s2)=(rmax,L-1)
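The (rmin, 0)–(rmax, L-1) stretch described above can be sketched as (function name is mine):

```python
def contrast_stretch(pixels, L=256):
    """Linearly map [rmin, rmax] of the input onto the full range [0, L-1]."""
    rmin, rmax = min(pixels), max(pixels)
    return [round((r - rmin) * (L - 1) / (rmax - rmin)) for r in pixels]
```

After the stretch the darkest input pixel becomes 0 and the brightest becomes L-1, so the full dynamic range is used.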
77
of
90
Gray Level Slicing
• Used to highlight a specific range of intensities in an image that might be of interest
•Two common approaches
– Set all pixel values within a range of interest to one
value (white) and all others to another value (black)
Produces a binary image
– Brighten (or darken) pixel values in a range of interest
and leave all others unchanged
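The two approaches can be sketched side by side (function names are mine; 8-bit pixels assumed):

```python
def slice_binary(pixels, lo, hi, L=256):
    # approach 1: range of interest -> white, everything else -> black
    return [(L - 1) if lo <= p <= hi else 0 for p in pixels]

def slice_highlight(pixels, lo, hi, value=255):
    # approach 2: brighten the range of interest, leave the rest unchanged
    return [value if lo <= p <= hi else p for p in pixels]
```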
78
of
90
Gray Level Slicing
Highlights a specific range of grey levels
– Similar to thresholding
– Other levels can be
suppressed or maintained
– Useful for highlighting features
in an image
79
of
90
Bit Plane Slicing
Often by isolating particular bits of the pixel values in an image we can highlight interesting aspects of that image
– Higher-order bits usually contain most of the
significant visual information
– Lower-order bits contain
subtle details
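Extracting a plane, and rebuilding an approximation from the higher-order planes, are simple bit operations (function names are mine; 8-bit pixels assumed):

```python
def bit_plane(pixels, plane):
    """Extract bit plane `plane` (0 = LSB, 7 = MSB) from 8-bit pixel values."""
    return [(p >> plane) & 1 for p in pixels]

def reconstruct(pixels, planes):
    """Rebuild an approximation of the image using only the given bit planes."""
    return [sum(((p >> b) & 1) << b for b in planes) for p in pixels]
```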
80
of
90
Bit Plane Slicing (cont…)
[10000000] [01000000]

[00100000] [00001000]

[00000100] [00000001]
81
of
90
Bit Plane Slicing (cont…)
[Slides 82–91: images of the individual bit planes]
92
of
90
Bit Plane Slicing (cont…)
Reconstructed image
using only bit planes 8 and
7

Reconstructed image
using only bit planes 8, 7
and 6

Reconstructed image
using only bit planes 7, 6
and 5
Image Processing and Pattern
Recognition (IPPR):
(Lecture 3)

Sanjeeb Prasad Panday


[email protected]
2
of
30
Image Histograms
The histogram of an image shows us the
distribution of grey levels in the image
Massively useful in image processing,
especially in segmentation
Frequencies

Grey Levels
3
of
30
Image Histograms
• The histogram of a digital image, f, (with intensities [0,L-1])
is a discrete function
h(rk) = nk
• Where rk is the kth intensity value and nk is the number of
pixels in f with intensity rk
• Normalizing the histogram is common practice – Divide the
components by the total number of pixels in the image –
Assuming an MxN image, this yields
p(rk) = nk/MN for k=0,1,2,...,L-1
– p(rk) is, basically, an estimate of the probability of occurrence of intensity level rk in an image
– Σ p(rk) = 1
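Both h(rk) = nk and the normalised p(rk) = nk/MN can be computed in a few lines (function names are mine; the image is treated as a flat list of 8-bit pixels):

```python
def histogram(pixels, L=256):
    # h(rk) = nk : count of pixels at each intensity level
    h = [0] * L
    for p in pixels:
        h[p] += 1
    return h

def normalized_histogram(pixels, L=256):
    # p(rk) = nk / MN : estimate of the probability of intensity rk
    n = len(pixels)
    return [c / n for c in histogram(pixels, L)]
```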
4
of
30
Uses for Histogram Processing
• Image enhancements
• Image statistics
• Image compression
• Image segmentation
• Simple to calculate in software
• Economic hardware implementations
– Popular tool in real-time image processing
• A plot of this function for all values of k provides a global
description of the appearance of the image (gives useful information
for contrast enhancement)
• Histograms commonly viewed in plots as
h(rk) = nk versus rk
p(rk) = nk /MN versus rk
5
of
30
Histogram Examples
[Slides 5–14: example images and their histograms]
15
of
30
Histogram Examples (cont…)
A selection of images and their histograms
Notice the relationships
between the images and
their histograms
Note that the high contrast
image has the most
evenly spaced histogram
16
of
30
Contrast Stretching
We can fix images that have poor contrast
by applying a pretty simple contrast
specification
The interesting part is how do we decide on
this transformation function?
17
of
30
Histogram Equalisation
• Histogram equalization is a process for increasing the
contrast in an image by spreading the histogram out to be
approximately uniformly distributed
• The gray levels of an image that has been subjected to
histogram equalization are spread out and always reach
white
– The increase of dynamic range produces an increase in
contrast
• For images with low contrast, histogram equalization has
the adverse effect of increasing visual graininess
18
of
30
Histogram Equalisation
• The intensity transformation function we are constructing
is of the form
s=T(r) 0 ≤ r ≤ L−1

• An output intensity level s is produced for every pixel in the input image having intensity r
• We assume
– T(r) is monotonically increasing in the interval 0≤ r ≤ L-1
– 0 ≤ Τ(r) ≤ L-1 for 0 ≤ r ≤ L-1
• If we define the inverse
r=T^−1(s) 0 ≤ s ≤ L−1
• Then T(r) should be strictly monotonically increasing
19
of
30
Histogram Equalisation
20
of
30
Histogram Equalisation
• Histogram equalization requires construction of a transformation function:
sk = T(rk) = Σ(j=0..k) nj/(MN)
• where rk is the kth gray level, nk is the number of pixels with that gray level, MxN is the number of pixels in the image, and k=0,1,...,L-1
• This yields an s with as many elements as the original
image’s histogram (normally 256 for our test images)
• The values of s will be in the range [0,1]. For
constructing a new image, s would be scaled to the range
[1,256]
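The cumulative-sum construction above, plus the final scaling to the output range, can be sketched as (function name is mine; 8-bit pixels assumed, scaled here to [0, L-1] rather than [1, 256]):

```python
def equalize(pixels, L=256):
    """Histogram equalisation: sk = sum over j<=k of nj/MN (in [0, 1]),
    then scaled to the displayable range [0, L-1]."""
    n = len(pixels)
    hist = [0] * L
    for p in pixels:
        hist[p] += 1
    cdf = []
    total = 0
    for c in hist:
        total += c
        cdf.append(total / n)            # running sum of p(rj)
    return [round(cdf[p] * (L - 1)) for p in pixels]
```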
21
of
30
Histogram Equalisation
22
of
30
Histogram Equalisation
Spreading out the frequencies in an image (or equalising the image) is
a simple way to improve dark or washed out images
The formula for histogram equalisation is given below, where
– rk: input intensity
– sk: processed intensity
– k: the intensity range (e.g. 0.0 – 1.0)
– nj: the frequency of intensity j
– n: the sum of all frequencies
sk = T(rk) = Σ(j=1..k) pr(rj) = Σ(j=1..k) nj/n
The transformation function in the next slide is obtained by applying the RHS formula to the original image in the next slide.
of
30
23
Equalisation Transformation Function
of
30
24

Equalisation Examples
25
of
30
Equalisation Transformation Functions

The functions used to equalise the images in the previous example

of
30
26

Equalisation Examples
27
of
30
Equalisation Transformation Functions

The functions used to equalise the images in the previous example

of
30
28

Equalisation Examples (cont…)
of
30
29

Equalisation Examples (cont…)
30
of
30
Equalisation Transformation Functions

The functions used to equalise the images in the previous examples
Image Processing and Pattern
Recognition (IPPR):
(Lecture 4)

Sanjeeb Prasad Panday


[email protected]
2
of
43
Spatial Filtering
• Use of spatial masks for filtering is called spatial filtering
– May be linear or nonlinear
• Linear filters
– Lowpass: attenuate (or eliminate) high frequency components
such as characterized by edges and sharp details in an image
• Net effect is image blurring
– Highpass: attenuate (or eliminate) low frequency components
such as slowly varying characteristics
• Net effect is a sharpening of edges and other details
– Bandpass: attenuate (or eliminate) a given frequency range
• Used primarily for image restoration (of little interest for image enhancement)
3
of
43
Spatial Filtering
• Filters in the frequency domain and corresponding spatial filters
• Basic approach is to sum products between mask coefficients and
pixel values
– R=w1z1 +w2z2 +....+w9z9
4
of
43
Order-Statistic nonlinear spatial filters
• Nonlinear spatial filters also operate on neighborhoods
• Their operation is based directly on pixel values in the neighborhood
under consideration
– They do not explicitly use coefficient values as in the linear spatial
filters
• Example nonlinear spatial filters
– Median filter: Computes the median gray-level value of the
neighborhood. Used for noise reduction.
– Max filter: Used to find the brightest points in an image
R=max{zk |k=1,2,...,9}
– Min filter: Used to find the dimmest points in an image
R=min{zk |k=1,2,...,9}
5
of
43
Neighbourhood Operations
Neighbourhood operations simply operate
on a larger neighbourhood of pixels than
point operations Origin x

Neighbourhoods are
mostly a rectangle
around a central pixel
(x, y)
Any size rectangle Neighbourhood

and any shape filter


are possible
y Image f (x, y)
6
of
43
Simple Neighbourhood Operations
Some simple neighbourhood operations
include:
Min: Set the pixel value to the minimum in the
neighbourhood
Max: Set the pixel value to the maximum in the
neighbourhood
Median: The median value of a set of numbers is
the midpoint value in that set (e.g. from the set [1,
7, 15, 18, 24] 15 is the median). Sometimes the
median works better than the average
7
of
43
Simple Neighbourhood Operations Example

Original Image x Enhanced Image x


123 127 128 119 115 130

140 145 148 153 167 172

133 154 183 192 194 191

194 199 207 210 198 195

164 170 175 162 173 151

y y
8
of
43
The Spatial Filtering Process
Origin x
a b c r s t
d
g
e
h
f
i
* u
x
v
y
w
z
Original Image Filter
Simple 3*3
e 3*3 Filter Pixels
Neighbourhood
eprocessed = v*e +
r*a + s*b + t*c +
u*d + w*f +
y Image f (x, y) x*g + y*h + z*i

The above is repeated for every pixel in the


original image to generate the filtered image
9
of
43
Spatial Filtering: Equation Form
g(x, y) = Σ(s=-a..a) Σ(t=-b..b) w(s, t) f(x+s, y+t)

Filtering can be given in equation form as shown above
Notations are based on the image shown to the left
10
of
43
Smoothing Spatial Filters
• The shape of the impulse response needed to implement a lowpass
(smoothing) filter indicates the filter should have all positive coefficients
• For a 3x3 mask, the simplest arrangement is to have all the coefficient
values equal to one (neighborhood averaging)
– The response would be the sum of all gray levels for the nine
pixels in the mask
– This could cause the value of R to be out of the valid gray-level
range
• The solution is to scale the result by dividing by 9
11
of
43
Smoothing Spatial Filters
One of the simplest spatial filtering
operations we can perform is a smoothing
operation
Simply average all of the pixels in a
neighbourhood around a central value
Especially useful in removing noise from images
Also useful for highlighting gross detail
1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9
Simple averaging filter
12
of
43
Smoothing Spatial Filtering
Origin x
104 100 108      1/9 1/9 1/9
 99 106  98   *  1/9 1/9 1/9
 95  90  85      1/9 1/9 1/9
Simple 3*3 Neighbourhood      3*3 Smoothing Filter
e = 1/9*106 +
    1/9*104 + 1/9*100 + 1/9*108 +
    1/9*99 + 1/9*98 +
    1/9*95 + 1/9*90 + 1/9*85
  = 98.3333
The above is repeated for every pixel in the
original image to generate the smoothed image
13
of
43
Image Smoothing Example
The image at the top left is an original image of size 500*500 pixels
The subsequent images
show the image after
filtering with an averaging
filter of increasing sizes
3, 5, 9, 15 and 35
Notice how detail begins
to disappear
[Slides 14–20: the image after averaging filters of increasing size]
21
of
43
Weighted Smoothing Filters
More effective smoothing filters can be
generated by allowing different pixels in the
neighbourhood different weights in the
averaging function
Pixels closer to the central pixel are more important
Often referred to as a weighted averaging
1/16 2/16 1/16
2/16 4/16 2/16
1/16 2/16 1/16
Weighted averaging filter
22
of
43
Another Smoothing Example
By smoothing the original image we get rid of lots of the finer detail which leaves only the gross features for thresholding
Original Image | Smoothed Image | Thresholded Image

of
43
23
Another Smoothing Example
24
of
43
Smoothing Filters
• One problem with the lowpass filter is it blurs edges and other sharp
details
• If the intent is to achieve noise reduction, one approach can be to use
median filtering
– The value of each pixel is replaced by the median pixel value in
the neighborhood (as opposed to the average)
– Particularly effective when the noise consists of strong, spike like
components and edge sharpness is to be preserved
• The median m of a set of values is such that half of the values are
greater than m and half are less than m
• To implement, sort the pixel values in the neighborhood, choose the
median and assign this value to the pixel of interest
• Forces pixels with distinct intensities to be more like their neighbors
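The sort-and-pick-the-middle procedure above is short to sketch (function name is mine; images as lists of rows, borders left unchanged for simplicity):

```python
def median_filter_3x3(img):
    """Replace each interior pixel by the median of its 3x3 neighbourhood."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]        # middle of the 9 sorted values
    return out
```

A single spike that disagrees with its eight neighbours is forced to the neighbourhood value, which is exactly why this filter works so well on salt-and-pepper noise.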
25
of
Averaging Filter Vs. Median Filter
43 Example
Original Image With Noise | Image After Averaging Filter | Image After Median Filter
Filtering is often used to remove noise from images
Sometimes a median filter works better than an averaging filter
[Slides 26–29: averaging filter vs. median filter examples]
30
of
43
Sharpening Filters (High Pass)
• The shape of the impulse response needed to implement a
high-pass (sharpening) filter indicates the filter should have
positive coefficients near its center and negative coefficients in
the outer periphery
• For a 3x3 mask, the simplest arrangement is to have the
center coefficient positive and all others negative
31
of
43
Sharpening Filters (High Pass)
• Note the sum of the coefficients is zero
– When the mask is over a constant or slowly varying region the
output is zero or very small
• This filter eliminates the zero-frequency term
• Eliminating this term reduces the average gray-level value in
the image to zero (will reduce the global contrast of the image)
• Result will be a some what edge-enhanced image over a dark
background
• Reducing the average gray-level value to zero implies some negative
gray levels
– The output should be scaled back into an appropriate range
[0, L-1]
32
of
43
Sharpening Filters (High Pass)
33
of
43
Sharpening Filters (High-boost)
• A high-pass filter may be computed as:
High-pass = Original - Lowpass
• Multiplying the original by an amplification factor yields a high-boost or
high-frequency-emphasis filter
High-boost = A(Original) − Lowpass
= ( A − 1)(Original ) + Original − Lowpass
= ( A − 1)(Original ) + High-pass
– If A>1, part of the original image is added to the high-pass result
(partially restoring low frequency components)
– Result looks more like the original image with a relative degree of
edge enhancement that depends on the value of A
– May be implemented with the center coefficient value w=9A-1
(A≥1)
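The relation High-boost = A*(Original) - Lowpass can be sketched directly (function name is mine; a 3x3 neighbourhood average, clamped at the borders, serves as the lowpass):

```python
def high_boost(img, A=1.2):
    """High-boost filtering: A*original - lowpass(original),
    with a 3x3 neighbourhood mean as the lowpass."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            window = [img[yy][xx]
                      for yy in range(max(0, y - 1), min(rows, y + 2))
                      for xx in range(max(0, x - 1), min(cols, x + 2))]
            lowpass = sum(window) / len(window)     # local average
            out[y][x] = A * img[y][x] - lowpass
    return out
```

With A = 1 this reduces to a pure high-pass (a flat region maps to 0); with A > 1 a scaled copy of the original is retained, as in the derivation above.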
34
of
43
Sharpening Filters (High-boost)
35
of
43
Simple Neighbourhood Operations Example

x
123 127 128 119 115 130

140 145 148 153 167 172

133 154 183 192 194 191

194 199 207 210 198 195

164 170 175 162 173 151

y
36
of
43
Strange Things Happen At The Edges!

At the edges of an image we are missing pixels to form a neighbourhood
Origin x
e e

e e e
y Image f (x, y)
37
of
43
Strange Things Happen At The Edges! (cont…)
There are a few approaches to dealing with
missing edge pixels:
Omit missing pixels
• Only works with some filters
• Can add extra code and slow down processing
Pad the image
• Typically with either all white or all black pixels
Replicate border pixels
Truncate the image
Allow pixels wrap around the image
• Can cause some strange image artefacts
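Three of the strategies above (padding with a constant, replicating border pixels, and wrap-around) can be sketched as one border-generating function (function name is mine; images as lists of rows):

```python
def pad1(img, mode):
    """Add a 1-pixel border using 'zero', 'replicate' or 'wrap' pixels."""
    rows, cols = len(img), len(img[0])

    def src(y, x):
        if 0 <= y < rows and 0 <= x < cols:
            return img[y][x]                 # inside the image
        if mode == 'zero':
            return 0                         # pad with black
        if mode == 'replicate':              # clamp to nearest border pixel
            return img[min(max(y, 0), rows - 1)][min(max(x, 0), cols - 1)]
        if mode == 'wrap':                   # wrap around to the opposite side
            return img[y % rows][x % cols]
        raise ValueError(mode)

    return [[src(y, x) for x in range(-1, cols + 1)]
            for y in range(-1, rows + 1)]
```

After padding, every original pixel has a full neighbourhood, so the filter can be applied uniformly.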
38
of
43
Simple Neighbourhood Operations Example

x
123 127 128 119 115 130

140 145 148 153 167 172

133 154 183 192 194 191

194 199 207 210 198 195

164 170 175 162 173 151

y
39
of
43
Strange Things Happen At The Edges! (cont…)
Original Image
Filtered Image: Zero Padding
Filtered Image: Replicate Edge Pixels
Filtered Image: Wrap Around Edge Pixels
[Slides 40–42: results of the different edge-handling strategies]
43
of
43
Correlation & Convolution
The filtering we have been talking about so
far is referred to as correlation with the filter
itself referred to as the correlation kernel
Convolution is a similar operation, with just
one subtle difference
a b c      r s t
d e f   *  u v w
g h i      x y z
Original Image Pixels      Filter
eprocessed = v*e + z*a + y*b + x*c + w*d + u*f + t*g + s*h + r*i

For symmetric filters it makes no difference


Image Processing and Pattern
Recognition (IPPR):
(Lecture 5)

Sanjeeb Prasad Panday


[email protected]
2
of
32
Contents
In this lecture we will look at more spatial
filtering techniques
– Spatial filtering refresher
– Sharpening filters
• 1st derivative filters
• 2nd derivative filters
– Combining filtering techniques
3
of
32
Spatial Filtering Refresher
Origin x
a b c r s t
d
g
e
h
f
i
* u
x
v
y
w
z
Original Image Filter
Simple 3*3
e 3*3 Filter Pixels
Neighbourhood
eprocessed = v*e +
r*a + s*b + t*c +
u*d + w*f +
y Image f (x, y) x*g + y*h + z*i

The above is repeated for every pixel in the


original image to generate the smoothed image
4
of
32
Sharpening Spatial Filters
Previously we have looked at smoothing
filters which remove fine detail
Sharpening spatial filters seek to highlight
fine detail
– Remove blurring from images
– Highlight edges
Sharpening filters are based on spatial
differentiation
5
of
32
Spatial Differentiation
Differentiation measures the rate of change of a function
Let’s consider a simple 1 dimensional
example
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
6
of
32

[Figure: example gray-level profile with points A and B marked]
Spatial Differentiation
7
of
32
1st Derivative
The formula for the 1st derivative of a
function is as follows:

∂f/∂x = f(x+1) - f(x)

It's just the difference between subsequent
values and measures the rate of change of
the function
8
of
32
1st Derivative (cont…)

f(x):  5 5 4 3 2 1 0 0 0 6 0 0 0 0 1 3 1 0 0 0 0 7 7 7 7

f'(x): 0 -1 -1 -1 -1 -1 0 0 6 -6 0 0 0 1 2 -2 -1 0 0 0 7 0 0 0
9
of
32
2nd Derivative
The formula for the 2nd derivative of a
function is as follows:

∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)

Simply takes into account the values both


before and after the current value
10
of
32
2nd Derivative (cont…)

f(x):   5 5 4 3 2 1 0 0 0 6 0 0 0 0 1 3 1 0 0 0 0 7 7 7 7

f''(x): -1 0 0 0 0 1 0 6 -12 6 0 0 1 1 -4 1 1 0 0 7 -7 0 0
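Both response profiles follow directly from the two definitions; a small pure-Python check (illustrative, not from the slides) reproduces them for the scan line above.

```python
# f'(x) = f(x+1) - f(x) and f''(x) = f(x+1) + f(x-1) - 2f(x)
# applied to the example scan line.

f = [5, 5, 4, 3, 2, 1, 0, 0, 0, 6, 0, 0, 0,
     0, 1, 3, 1, 0, 0, 0, 0, 7, 7, 7, 7]

d1 = [f[x + 1] - f[x] for x in range(len(f) - 1)]
d2 = [f[x + 1] + f[x - 1] - 2 * f[x] for x in range(1, len(f) - 1)]

# the isolated bright point (6) gives a single step pair in the 1st
# derivative but a strong double response (-12) in the 2nd derivative
assert d1[8:10] == [6, -6]
assert d2[7:10] == [6, -12, 6]
```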
11
of
Using Second Derivatives For Image
32 Enhancement
The 2nd derivative is more useful for image
enhancement than the 1st derivative
– Stronger response to fine detail
– Simpler implementation
– We will come back to the 1st order derivative
later on
The first sharpening filter we will look at is
the Laplacian
– Isotropic
– One of the simplest sharpening filters
– We will look at a digital implementation
12
of
32
The Laplacian
The Laplacian is defined as follows:

∇²f = ∂²f/∂x² + ∂²f/∂y²

where the partial 2nd order derivative in the x
direction is defined as follows:

∂²f/∂x² = f(x+1, y) + f(x-1, y) - 2f(x, y)

and in the y direction as follows:

∂²f/∂y² = f(x, y+1) + f(x, y-1) - 2f(x, y)
13
of
32
The Laplacian (cont…)
So, the Laplacian can be given as follows:

∇²f = [f(x+1, y) + f(x-1, y)
     + f(x, y+1) + f(x, y-1)]
     - 4f(x, y)

We can easily build a filter based on this:

0  1  0
1 -4  1
0  1  0
14
of
32
The Laplacian (cont…)
Applying the Laplacian to an image we get a
new image that highlights edges and other
discontinuities

[Figure: original image, Laplacian filtered image, and the Laplacian filtered image scaled for display]
15
of
32
But That Is Not Very Enhanced!
The result of Laplacian filtering
is not an enhanced image
We have to do more work in
order to get our final image
Subtract the Laplacian result
from the original image to
generate our final sharpened,
enhanced image:

g(x, y) = f(x, y) - ∇²f
16
of
32
Laplacian Image Enhancement

[Figure: original image - Laplacian filtered image = sharpened image]

In the final sharpened image edges and fine
detail are much more obvious
17
of
32
Laplacian Image Enhancement
18
of
32
Simplified Image Enhancement
The entire enhancement can be combined
into a single filtering operation
g(x, y) = f(x, y) - ∇²f
        = f(x, y) - [f(x+1, y) + f(x-1, y)
          + f(x, y+1) + f(x, y-1)
          - 4f(x, y)]
        = 5f(x, y) - f(x+1, y) - f(x-1, y)
          - f(x, y+1) - f(x, y-1)
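As a minimal sketch (the helper `sharpen_pixel` is illustrative, not from the slides), the combined operation g(x, y) = 5f(x, y) minus the four neighbours, evaluated at a single pixel:

```python
# One-step sharpening at a single pixel:
# g(x,y) = 5f(x,y) - f(x+1,y) - f(x-1,y) - f(x,y+1) - f(x,y-1)

def sharpen_pixel(f, x, y):
    return (5 * f[y][x] - f[y][x + 1] - f[y][x - 1]
            - f[y + 1][x] - f[y - 1][x])

img = [[10, 10, 10],
       [10, 20, 10],
       [10, 10, 10]]
# the bright centre pixel is pushed further from its neighbours...
assert sharpen_pixel(img, 1, 1) == 60     # 5*20 - 4*10
# ...while flat regions are left unchanged
flat = [[10] * 3 for _ in range(3)]
assert sharpen_pixel(flat, 1, 1) == 10    # 5*10 - 4*10
```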
19
of
32
Simplified Image Enhancement (cont…)

This gives us a new filter which does the
whole job for us in one step:

 0 -1  0
-1  5 -1
 0 -1  0
20
of
32
Simplified Image Enhancement (cont…)
21
of
32
Variants On The Simple Laplacian
There are lots of slightly different versions of
the Laplacian that can be used:

Simple Laplacian:     Variant of Laplacian:
0  1  0               1  1  1
1 -4  1               1 -8  1
0  1  0               1  1  1

One-step sharpening mask based on the variant:
-1 -1 -1
-1  9 -1
-1 -1 -1
22
of
32
1st Derivative Filtering
Implementing 1st derivative filters is difficult in
practice
For a function f(x, y) the gradient of f at
coordinates (x, y) is given as the column
vector:

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ
23
of
32
1st Derivative Filtering (cont…)
The magnitude of this vector is given by:

∇f = mag(∇f) = [Gx² + Gy²]^(1/2)
             = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)

For practical reasons this can be simplified as:

∇f ≈ |Gx| + |Gy|
24
of
32
1st Derivative Filtering (cont…)
There is some debate as to how best to
calculate these gradients but we will use:

∇f ≈ |(z7 + 2z8 + z9) - (z1 + 2z2 + z3)|
   + |(z3 + 2z6 + z9) - (z1 + 2z4 + z7)|

which is based on these coordinates:

z1 z2 z3
z4 z5 z6
z7 z8 z9
25
of
32
Sobel Operators
Based on the previous equations we can
derive the Sobel operators:

-1 -2 -1        -1  0  1
 0  0  0        -2  0  2
 1  2  1        -1  0  1

To filter an image it is filtered using both
operators, the results of which are added
together
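A pure-Python sketch of the Sobel response at one 3*3 neighbourhood, combining the two mask responses as |Gx| + |Gy| (illustrative code, not from the slides):

```python
# Sobel gradient at a single 3*3 neighbourhood z1..z9:
# filter with both masks and add the absolute responses.

GX = [[-1, -2, -1],
      [ 0,  0,  0],
      [ 1,  2,  1]]
GY = [[-1, 0, 1],
      [-2, 0, 2],
      [-1, 0, 1]]

def sobel(nbhd):
    gx = sum(GX[j][i] * nbhd[j][i] for j in range(3) for i in range(3))
    gy = sum(GY[j][i] * nbhd[j][i] for j in range(3) for i in range(3))
    return abs(gx) + abs(gy)

# a vertical edge (dark left, bright right) responds through one mask only
edge = [[0, 0, 9],
        [0, 0, 9],
        [0, 0, 9]]
assert sobel(edge) == 36          # 9 + 2*9 + 9 from the column mask
flat = [[5] * 3 for _ in range(3)]
assert sobel(flat) == 0           # no response in constant regions
```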
26
of
32
Sobel Example

An image of a
contact lens which
is enhanced in
order to make
defects (at four
and five o’clock in
the image) more
obvious

Sobel filters are typically used for edge


detection
27
of
32
1st & 2nd Derivatives
Comparing the 1st and 2nd derivatives we
can conclude the following:
– 1st order derivatives generally produce thicker
edges
– 2nd order derivatives have a stronger
response to fine detail e.g. thin lines
– 1st order derivatives have a stronger response
to a grey level step
– 2nd order derivatives produce a double
response at step changes in grey level
28
of
32
Summary
In this lecture we looked at:
– Sharpening filters
• 1st derivative filters
• 2nd derivative filters
– Combining filtering techniques
29
of
Combining Spatial Enhancement
32 Methods
Successful image
enhancement is typically not
achieved using a single
operation
Rather we combine a range
of techniques in order to
achieve a final result
This example will focus on
enhancing the bone scan to
the right
30
of
Combining Spatial Enhancement
32 Methods (cont…)
[Figure: (a) bone scan; (b) Laplacian filter of (a); (c) sharpened version of the bone scan obtained by combining (a) and (b); (d) Sobel filter of (a)]
31
of
Combining Spatial Enhancement
32 Methods (cont…)
[Figure: (e) image (d) smoothed with a 5*5 averaging filter; (f) the product of (c) and (e), used as a mask; (g) sharpened image, the sum of (a) and (f); (h) result of applying a power-law transformation to (g)]
32
of
Combining Spatial Enhancement
32 Methods (cont…)
Compare the original and final images
Image Processing and Pattern
Recognition (IPPR):
(Lecture 6)

Sanjeeb Prasad Panday


[email protected]
2
of
118
Contents
In this lecture we will look at image
enhancement in the frequency domain
– Jean Baptiste Joseph Fourier
– The Fourier series & the Fourier transform
– Image Processing in the frequency domain
• Image smoothing
• Image sharpening
– Fast Fourier Transform
3
of
118
Jean Baptiste Joseph Fourier
Fourier was born in Auxerre,
France in 1768
– Most famous for his work “La Théorie
Analitique de la Chaleur” published in
1822
– Translated into English in 1878: “The
Analytic Theory of Heat”
Nobody paid much attention when the work was
first published
It is now one of the most important mathematical
theories in modern engineering
4
of
118
The Big Idea

Any function that periodically repeats itself can be


expressed as a sum of sines and cosines of
different frequencies each multiplied by a different
coefficient – a Fourier series
5
of
118
The Big Idea (cont…)
Taken from www.tfh-berlin.de/~schwenk/hobby/fourier/Welcome.html

Notice how we get closer and closer to the


original function as we add more and more
frequencies
6
of
118
The Big Idea (cont…)
Frequency
domain signal
processing
example in Excel
7
of
118
Introduction to the Fourier Transform
8
of
118
The Fourier Transform (continued)
9
of
118
The Fourier Transform (continued)
10
of
118
The Fourier Transform (continued)
11
of
118
The Fourier Transform (continued)
12
of
118
The Fourier Transform (Example)
13
of
118
The Fourier Transform (Example)
14
of
118
The 2-D Fourier Transform
15
of
118
The 2-D Fourier Transform (Continued)
16
of
Sample 2-D function and its Fourier
118
spectrum
17
of
118
Example 2-D Fourier transform
18
of
Example 2-D functions and their
118
spectra
19
of
118
The discrete Fourier transform
20
of
118
Sampling a continuous function
21
of
118
The discrete Fourier transform pair
22
of
118
The 2-D discrete Fourier transform
23
of
The 2-D discrete Fourier transform
118
(continued)
24
of
118
Discrete Fourier transform example
25
of
Discrete Fourier transform example
118
(continued)
26
of
Discrete Fourier transform example
118
(continued)
27
of
Properties of the 2-D Fourier
118
transform
28
of
118
Separability
29
of
118
Separability (continued)
30
of
118
Translation
31
of
118
Translation (continued)
32
of
118
Matlab example
33
of
118
Matlab example (continued)
34
of
Example image and complete, scaled
118
Fourier spectrum plot
35
of
Example image and partial, scaled
118 Fourier spectrum plot (with shifted f(x,y))
36
of
118
Periodicity of the Fourier transform
37
of
Conjugate symmetry of the Fourier
118 transform
38
of
118
Implications of periodicity & symmetry
39
of
118
Periodicity properties
40
of
118
Periodicity properties: 2-D Example
41
of
118
Distributivity & Scaling
42
of
118
Average Value
43
of
118
The Laplacian
44
of
118
The Laplacian: Matlab example
45
of
118
Convolution & Correlation
46
of
118
1-D convolution example
47
of
118
1-D convolution example (continued)
48
of
118
1-D convolution example (continued)
49
of
118
Convolution and impulse functions
50
of
Convolution and impulse functions
118 (continued)
51
of
118
Convolution with an impulse function
52
of
Convolution with an impulse function
118 (continued)
53
of
118
Convolution and the Fourier transform
54
of
118
Frequency domain filtering
55
of
118
Lowpass frequency domain filtering
56
of
118
Lowpass frequency domain filtering
57
of
118
Ideal lowpass filter (ILPF)
58
of
118
Ideal lowpass filter (ILPF) (continued)
59
of
118
The Discrete Fourier Transform (DFT)

The Discrete Fourier Transform of f(x, y), for
x = 0, 1, 2…M-1 and y = 0, 1, 2…N-1, denoted
by F(u, v), is given by the equation:

F(u, v) = Σx=0..M-1 Σy=0..N-1 f(x, y) e^(-j2π(ux/M + vy/N))

for u = 0, 1, 2…M-1 and v = 0, 1, 2…N-1.
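The equation can be implemented directly as a slow pure-Python double sum, fine only for tiny images (illustrative, not from the slides); an impulse at the origin gives the expected flat spectrum.

```python
# Direct (slow, O(M^2 N^2)) evaluation of the 2-D DFT definition.
import cmath

def dft2(f):
    M, N = len(f), len(f[0])
    return [[sum(f[x][y] * cmath.exp(-2j * cmath.pi * (u * x / M + v * y / N))
                 for x in range(M) for y in range(N))
             for v in range(N)]
            for u in range(M)]

f = [[1, 0],
     [0, 0]]          # a single impulse at the origin
F = dft2(f)
# an impulse has a perfectly flat spectrum: every F(u,v) equals 1
assert all(abs(F[u][v] - 1) < 1e-12 for u in range(2) for v in range(2))
```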


60
of
118
DFT & Images
The DFT of a two dimensional image can be
visualised by showing the spectrum of the
image's component frequencies

DFT
61
of
118
DFT & Images
62
of
118
DFT & Images
63
of
118
DFT & Images (cont…)
[Figure: scanning electron microscope image of an integrated circuit, magnified ~2500 times, and the Fourier spectrum of the image]
64
of
118
DFT & Images (cont…)
65
of
118
DFT & Images (cont…)
66
of
118
The Inverse DFT
It is really important to note that the Fourier
transform is completely reversible
The inverse DFT is given by:

f(x, y) = (1/MN) Σu=0..M-1 Σv=0..N-1 F(u, v) e^(j2π(ux/M + vy/N))

for x = 0, 1, 2…M-1 and y = 0, 1, 2…N-1
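Reversibility can be checked on a small array with direct (slow) pure-Python implementations of the forward and inverse transforms (illustrative, not from the slides):

```python
# Forward and inverse 2-D DFT, evaluated directly from the definitions.
import cmath

def dft2(f):
    M, N = len(f), len(f[0])
    return [[sum(f[x][y] * cmath.exp(-2j * cmath.pi * (u * x / M + v * y / N))
                 for x in range(M) for y in range(N))
             for v in range(N)]
            for u in range(M)]

def idft2(F):
    M, N = len(F), len(F[0])
    return [[sum(F[u][v] * cmath.exp(2j * cmath.pi * (u * x / M + v * y / N))
                 for u in range(M) for v in range(N)) / (M * N)
             for y in range(N)]
            for x in range(M)]

f = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
g = idft2(dft2(f))
# the round trip recovers f(x, y) up to floating-point error
assert all(abs(g[x][y] - f[x][y]) < 1e-9
           for x in range(2) for y in range(4))
```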


67
of
118
The DFT and Image Processing

To filter an image in the frequency domain:
1. Compute F(u,v), the DFT of the image
2. Multiply F(u,v) by a filter function H(u,v)
3. Compute the inverse DFT of the result
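The three steps can be sketched end-to-end in pure Python with a deliberately extreme lowpass H(u,v) that keeps only the DC term, so the filtered image is the image mean everywhere (illustrative, not from the slides):

```python
# Frequency-domain filtering pipeline: DFT, multiply by H(u,v), inverse DFT.
import cmath

def dft2(f):
    M, N = len(f), len(f[0])
    return [[sum(f[x][y] * cmath.exp(-2j * cmath.pi * (u * x / M + v * y / N))
                 for x in range(M) for y in range(N))
             for v in range(N)]
            for u in range(M)]

def idft2(F):
    M, N = len(F), len(F[0])
    return [[sum(F[u][v] * cmath.exp(2j * cmath.pi * (u * x / M + v * y / N))
                 for u in range(M) for v in range(N)) / (M * N)
             for y in range(N)]
            for x in range(M)]

M, N = 2, 4
f = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
F = dft2(f)                                                    # step 1
H = [[1 if (u, v) == (0, 0) else 0 for v in range(N)] for u in range(M)]
G = [[H[u][v] * F[u][v] for v in range(N)] for u in range(M)]  # step 2
g = idft2(G)                                                   # step 3
mean = sum(map(sum, f)) / (M * N)
# keeping only the DC term yields a constant image at the mean value
assert all(abs(g[x][y] - mean) < 1e-9 for x in range(M) for y in range(N))
```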
68
of
118
Some Basic Frequency Domain Filters

Low Pass Filter

High Pass Filter

69
of
118
Some Basic Frequency Domain Filters
70
of
118
Some Basic Frequency Domain Filters
71
of
118
Smoothing Frequency Domain Filters

Smoothing is achieved in the frequency domain


by dropping out the high frequency components
The basic model for filtering is:
G(u,v) = H(u,v)F(u,v)
where F(u,v) is the Fourier transform of the
image being filtered and H(u,v) is the filter
transform function
Low pass filters – only pass the low frequencies,
drop the high ones
72
of
118
Ideal Low Pass Filter
Simply cut off all high frequency components that
are a specified distance D0 from the origin of the
transform
Changing the distance changes the behaviour of
the filter
73
of
118
Ideal Low Pass Filter (cont…)
The transfer function for the ideal low pass filter
can be given as:

H(u, v) = 1  if D(u, v) ≤ D0
          0  if D(u, v) > D0

where D(u, v) is given as:

D(u, v) = [(u - M/2)² + (v - N/2)²]^(1/2)
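A sketch of building this transfer function for a centred spectrum (the helper `ideal_lowpass` is illustrative, not from the slides):

```python
# Ideal lowpass transfer function H(u,v) for a centred (shifted) spectrum,
# with D(u,v) measured from the centre (M/2, N/2).
import math

def ideal_lowpass(M, N, D0):
    return [[1 if math.hypot(u - M / 2, v - N / 2) <= D0 else 0
             for v in range(N)]
            for u in range(M)]

H = ideal_lowpass(8, 8, 2)
assert H[4][4] == 1    # the centre (DC after shifting) always passes
assert H[0][0] == 0    # the corner, at distance sqrt(32), is cut off
```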
74
of
118
Ideal Low Pass Filter (cont…)
Above we show an image, its Fourier spectrum
and a series of ideal low pass filters of radius 5,
15, 30, 80 and 230 superimposed on top of it
75
of
118
Ideal Low Pass Filter (cont…)
76
of
118
Ideal Low Pass Filter (cont…)
77
of
118
Ideal Low Pass Filter (cont…)

Result of filtering
Original with ideal low pass
image filter of radius 5

Result of filtering Result of filtering


with ideal low pass with ideal low pass
filter of radius 15 filter of radius 30

Result of filtering
Result of filtering
with ideal low pass
with ideal low pass
filter of radius 230
filter of radius 80
78
of
118
Ideal Low Pass Filter (cont…)

Result of filtering
with ideal low pass
filter of radius 5
79
of
118
Ideal Low Pass Filter (cont…)

Result of filtering
with ideal low pass
filter of radius 15
80
of
118
Butterworth Lowpass Filters
The transfer function of a Butterworth lowpass
filter of order n with cutoff frequency at distance
D0 from the origin is defined as:

H(u, v) = 1 / (1 + [D(u, v)/D0]^(2n))
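Assuming the standard Butterworth lowpass definition H(u,v) = 1 / (1 + [D(u,v)/D0]^(2n)), a pure-Python sketch for a centred spectrum (illustrative, not from the slides):

```python
# Butterworth lowpass transfer function for a centred spectrum.
import math

def butterworth_lowpass(M, N, D0, n):
    H = [[0.0] * N for _ in range(M)]
    for u in range(M):
        for v in range(N):
            D = math.hypot(u - M / 2, v - N / 2)
            H[u][v] = 1 / (1 + (D / D0) ** (2 * n))
    return H

H = butterworth_lowpass(8, 8, 2, 2)
assert H[4][4] == 1.0                 # full pass at the centre
assert abs(H[4][6] - 0.5) < 1e-9      # H = 0.5 exactly at D(u,v) = D0
assert H[0][0] < 0.02                 # far frequencies heavily attenuated
```

Unlike the ideal filter there is no sharp discontinuity, which is why Butterworth filtering shows far less ringing.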
81
of
118
Butterworth Lowpass Filter (cont…)

Result of filtering
Original with Butterworth filter
image of order 2 and cutoff
radius 5

Result of filtering with Result of filtering


Butterworth filter of with Butterworth
order 2 and cutoff filter of order 2 and
radius 15 cutoff radius 30

Result of filtering
Result of filtering with
with Butterworth filter
Butterworth filter of
of order 2 and cutoff
order 2 and cutoff
radius 230
radius 80
82
of
118
Butterworth Lowpass Filter (cont…)

Original
image

Result of filtering
with Butterworth filter
of order 2 and cutoff
radius 5
83
of
118
Butterworth Lowpass Filter (cont…)

Result of filtering with


Butterworth filter of
order 2 and cutoff
radius 15
84
of
118
Gaussian Lowpass Filters
The transfer function of a Gaussian lowpass
filter is defined as:

H(u, v) = e^(-D²(u, v) / 2D0²)
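Assuming the standard Gaussian lowpass definition H(u,v) = exp(-D²(u,v) / (2·D0²)), a short sketch for a centred spectrum (illustrative, not from the slides); the transfer function decays smoothly, so the filtered image shows no ringing.

```python
# Gaussian lowpass transfer function for a centred spectrum.
import math

def gaussian_lowpass(M, N, D0):
    return [[math.exp(-((u - M / 2) ** 2 + (v - N / 2) ** 2)
                      / (2 * D0 ** 2))
             for v in range(N)]
            for u in range(M)]

H = gaussian_lowpass(8, 8, 2.0)
assert H[4][4] == 1.0                           # centre passes fully
assert abs(H[4][6] - math.exp(-0.5)) < 1e-12    # H = e^(-1/2) at D = D0
```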


85
of
118
Gaussian Lowpass Filters (cont…)

Result of filtering
Original with Gaussian filter
image with cutoff radius 5

Result of filtering Result of filtering


with Gaussian with Gaussian filter
filter with cutoff with cutoff radius 30
radius 15

Result of filtering Result of filtering


with Gaussian with Gaussian filter
filter with cutoff with cutoff radius
radius 85 230
86
of
118
Lowpass Filters Compared

Result of filtering
Result of filtering
with Butterworth
with ideal low pass
filter of order 2
filter of radius 15
and cutoff radius
15

Result of filtering
with Gaussian
filter with cutoff
radius 15
87
of
118
Lowpass Filtering Examples
A low pass Gaussian filter is used to connect
broken text
88
of
118
Lowpass Filtering Examples
89
of
118
Lowpass Filtering Examples (cont…)
Different lowpass Gaussian filters used to
remove blemishes in a photograph

90
of
118
Lowpass Filtering Examples (cont…)
91
of
118
Lowpass Filtering Examples (cont…)
[Figure: original image; Gaussian lowpass filter; spectrum of the original image; processed image]
92
of
118
Sharpening in the Frequency Domain

Edges and fine detail in images are associated


with high frequency components
High pass filters – only pass the high
frequencies, drop the low ones
High pass filters are precisely the reverse
of low pass filters, so:
Hhp(u, v) = 1 - Hlp(u, v)
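A sketch of this relationship using the ideal filters (illustrative helpers, not from the slides): any lowpass transfer function can be turned into the corresponding highpass one by subtracting it from 1.

```python
# Hhp(u,v) = 1 - Hlp(u,v), demonstrated with an ideal lowpass filter.
import math

def ideal_lowpass(M, N, D0):
    return [[1 if math.hypot(u - M / 2, v - N / 2) <= D0 else 0
             for v in range(N)]
            for u in range(M)]

def highpass_from_lowpass(Hlp):
    return [[1 - h for h in row] for row in Hlp]

Hlp = ideal_lowpass(8, 8, 2)
Hhp = highpass_from_lowpass(Hlp)
assert Hhp[4][4] == 0    # the highpass blocks the (shifted) DC centre
assert Hhp[0][0] == 1    # and passes the high-frequency corners
```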
93
of
118
Ideal High Pass Filters
The ideal high pass filter is given as:

H(u, v) = 0  if D(u, v) ≤ D0
          1  if D(u, v) > D0

where D0 is the cut off distance as before
94
of
118
Ideal High Pass Filters (cont…)

Results of ideal Results of ideal Results of ideal


high pass filtering high pass filtering high pass filtering
with D0 = 15 with D0 = 30 with D0 = 80
95
of
118
Butterworth High Pass Filters
The Butterworth high pass filter is given as:

H(u, v) = 1 / (1 + [D0/D(u, v)]^(2n))

where n is the order and D0 is the cut off
distance as before
96
of
118
Butterworth High Pass Filters (cont…)

Results of Results of
Butterworth Butterworth
high pass high pass
filtering of filtering of
order 2 with order 2 with
D0 = 15 D0 = 80

Results of Butterworth high pass


filtering of order 2 with D0 = 30
97
of
118
Gaussian High Pass Filters
The Gaussian high pass filter is given as:

H(u, v) = 1 - e^(-D²(u, v) / 2D0²)

where D0 is the cut off distance as before
98
of
118
Gaussian High Pass Filters (cont…)

Results of Results of
Gaussian Gaussian
high pass high pass
filtering with filtering with
D0 = 15 D0 = 80

Results of Gaussian high pass


filtering with D0 = 30
99
of
118
Highpass Filter Comparison

Results of ideal
high pass filtering
with D0 = 15
100
of
118
Highpass Filter Comparison

Results of Butterworth
high pass filtering of order
2 with D0 = 15
101
of
118
Highpass Filter Comparison

Results of Gaussian
high pass filtering with
D0 = 15
102
of
118
Highpass Filter Comparison

Results of ideal Results of Butterworth Results of Gaussian


high pass filtering high pass filtering of order high pass filtering with
with D0 = 15 2 with D0 = 15 D0 = 15
103
of
118
Highpass Filter Comparison

Results of ideal
high pass filtering
with D0 = 15
104
of
118
Highpass Filter Comparison

Results of Butterworth
high pass filtering of order
2 with D0 = 15
105
of
118
Highpass Filter Comparison

Results of Gaussian
high pass filtering with
D0 = 15
106
of
118
Highpass Filtering Example

[Figure: original image; highpass filtering result; high frequency emphasis result; result after histogram equalisation]
107
of
118
Highpass Filtering Example
108
of
118
Highpass Filtering Example
109
of
118
Highpass Filtering Example
110
of
118
Highpass Filtering Example
111
of
118
Laplacian In The Frequency Domain

[Figure: the Laplacian in the frequency domain; the inverse DFT of the Laplacian in the frequency domain; a zoomed section of the resulting 2-D image compared to the spatial Laplacian filter]
112
of
118
Frequency Domain Laplacian Example

[Figure: original image; Laplacian filtered image; Laplacian image scaled; enhanced image]
113
of
118
Fast Fourier Transform
The reason that Fourier based techniques have
become so popular is the development of the
Fast Fourier Transform (FFT) algorithm
Allows the Fourier transform to be carried out in
a reasonable amount of time
Reduces the amount of time required to perform
a Fourier transform by a factor of 100 – 600
times!
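A rough sketch of where that factor comes from (illustrative, not from the slides): the direct DFT of an N-point signal needs on the order of N² complex multiplications, while the FFT needs about N·log2(N), so the ratio grows with N.

```python
# Operation-count ratio between the direct DFT (~N^2) and the FFT
# (~N*log2(N)) for a transform of length N.
import math

def speedup(N):
    return (N * N) / (N * math.log2(N))

# for typical transform lengths the ratio falls in the hundreds,
# consistent with the 100-600x figure quoted above
assert 100 < speedup(1024) < 600
assert 100 < speedup(4096) < 600
```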
114
of
Frequency Domain Filtering & Spatial
118 Domain Filtering
Similar jobs can be done in the spatial and
frequency domains
Filtering in the spatial domain can be easier to
understand
Filtering in the frequency domain can be much
faster – especially for large images
Image Processing and Pattern
Recognition (IPPR):
(Lecture 7)

Sanjeeb Prasad Panday


[email protected]
2
of
12
The Fast Fourier Transform (FFT)
3
of
12
The Fast Fourier Transform (FFT)
4
of
12
The Fast Fourier Transform (FFT)
5
of
12
The Fast Fourier Transform (FFT)
6
of
12
The Fast Fourier Transform (FFT)
7
of
12
The Fast Fourier Transform (FFT)
8
of
12
Chapter 4: Image Enhancement in the Frequency Domain
9
of
12
Convolution and Padding
10
of
12
11
of
12
Convolution and Padding
12
of
12
13
of
12
14
of
12
2D Convolution and Padding
15
of
12
16
of
12
2D Convolution
17
of
12
2D Correlation
18
of
12
Image Processing and Pattern
Recognition (IPPR):
(Lecture 8)

Sanjeeb Prasad Panday


[email protected]
2
of
47
Why do we need image compression?
3
of
47
Image Compression
4
of
47
Lossy Vs. Lossless Compression
5
of
47
Image Compression
6
of
47
Image Compression
7
of
47
Coding Redundancy
8
of
47
Coding Redundancy
9
of
47
Coding Redundancy
10
of
47
Coding Redundancy
11
of
47
Huffman coding
12
of
47
Huffman coding
13
of
47
Huffman coding
14
of
47
Interpixel / Interframe Redundancy
15
of
47
Interpixel / Interframe Redundancy
16
of
47
Interpixel / Interframe Redundancy
17
of
47
Run Length Coding
18
of
47
Psychovisual Redundancy
19
of
47
Psychovisual Redundancy
20
of
47
Psychovisual Redundancy
21
of
47
Bit Plane Coding
22
of
47
Bit Plane Coding
23
of
47
Gray Coded Bit Planes
24
of
47
Gray Coded Bit Planes
25
of
47
Image Compression Models
26
of
Source Encoder and Decoder
47
Models
27
of
47
Error-Free Compression (Lossless)
28
of
47
Lossless Predictive Coding Model
29
of
47
Lossless Predictive Coding Model
30
of
47
Lossy Compression
31
of
47
Lossy Compression
32
of
47
Lossy Compression
33
of
47
Lossy Compression
34
of
47
Lossy Compression
35
of
47
Lossy Compression
36
of
47
Lossy Compression
37
of
47
Lossy Compression
38
of
47
Lossless Vs. Lossy Coding
39
of
47
Transform Coding
40
of
47
Transform Coding
41
of
47
Hadamard Transform
42
of
47
Discrete Cosine Transform
43
of
47
Transform Coding Example
