Lecture 02 DIP Fall 2019

This document provides an overview of digital image processing fundamentals. It discusses the human visual system and how eyes perceive light, color, brightness and depth. It explains how digital images are represented as matrices and sampled from continuous scenes. Effects of varying spatial resolution and number of intensity levels are demonstrated. Common image processing operations like arithmetic, interpolation and subtraction are introduced along with examples of applications in astronomy, remote sensing and medical imaging.

Uploaded by

Naimal Masood

Fundamentals of Digital Image Processing
Instructor: Engr Dr Muhammad Jamil Khan
Objectives
❑Understand some important functions and limitations of human vision.
❑Be familiar with the electromagnetic energy spectrum, including basic
properties of light.
❑Know how digital images are generated and represented.
❑Understand the basics of image sampling and quantization.
❑Be familiar with spatial and intensity resolution and their effects on
image appearance.
❑Understand basic geometric relationships between image pixels.
❑Be familiar with the principal mathematical tools used in digital image
processing.
❑Be able to apply a variety of introductory digital image processing
techniques.
Human Visual System
The eye is nearly a sphere (with a diameter
of about 20 mm) enclosed by three
membranes:
◼ Cornea and sclera (outer cover)
◼ Choroid
◼ Retina
The cornea is a tough, transparent tissue
that covers the anterior surface of the eye.
Continuous with the cornea, the sclera is an
opaque membrane that encloses the
remainder of the optic globe.
Human Visual System
The amount of light entering the eye is
controlled by the pupil, which dilates and
contracts accordingly. The cornea and lens,
whose shape is adjusted by the ciliary
body, focus the light on the retina, where
receptors convert it into nerve signals that
pass to the brain.
Human Visual System
◼ Elements of visual perception
◼ Cones
◼ 6 – 7 million in each eye
◼ Photopic or bright-light vision
◼ Highly sensitive to color

◼ Rods
◼ 75 – 150 million
◼ Not involved in color vision
◼ Sensitive to low level of illumination (scotopic or dim-light vision)

◼ An object that appears brightly colored in daylight will appear colorless
in moonlight (why?)
Human Visual System

Distribution of rods and cones in the retina.


Human Visual System

Human simultaneous luminance vision range (about 5 orders of magnitude),
shown on a log (cd/m²) scale.
Human Visual System
◼Image formation in the eye
◼ The distance between the center of the lens and the retina (the focal
length) varies between about 14 and 17 mm.
◼ Example: a 15 m tall object viewed from 100 m away forms a retinal image
of height h = 17 (mm) × (15/100) ≈ 2.55 mm.
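The arithmetic behind the similar-triangles formula above can be checked directly (the 15 m object at 100 m is the standard textbook scene assumed by the numbers in the formula):

```python
# Retinal image height by similar triangles:
# object_height / object_distance = image_height / focal_length.
object_height_m = 15.0
object_distance_m = 100.0
focal_length_mm = 17.0

h_mm = focal_length_mm * (object_height_m / object_distance_m)
print(f"Retinal image height: {h_mm:.2f} mm")
```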


Human Visual System
◼Brightness adaptation

◼HVS can adapt to a light intensity range on the order of 10^10

◼Subjective brightness is a logarithmic function of the light intensity
incident on the eye
Human Visual System
◼Brightness adaptation
The lambert (symbol L) is a unit of
luminance named for Johann Heinrich
Lambert (1728 - 1777), a German
mathematician, physicist and
astronomer. A related unit of luminance,
the foot-lambert, is used in the lighting,
cinema and flight simulation industries.
The SI unit is the candela per square
metre (cd/m²).

Source: Wikipedia
Human Visual System
◼Brightness adaptation

◼The HVS cannot operate over such a range (10 orders of magnitude)
simultaneously

◼It accomplishes this through (brightness) adaptation

◼The total range of intensities the HVS can discriminate simultaneously is
rather small in comparison (about 4 orders of magnitude)
Human Visual System
◼ Brightness adaptation
For a given observation condition, the current sensitivity level is called
the brightness adaptation level.
(Figure: sensitivity of the HVS for the given adaptation level.)
Anything below B_b will be perceived as indistinguishable blacks.
Human Visual System
◼ Brightness discrimination

◼ Perceivable changes at a given adaptation level


Human Visual System
◼ Brightness discrimination
Visual Perception
Perceived brightness is not a simple function of intensity:
◼ Mach bands: the visual system tends to undershoot or overshoot around
the boundary between regions of different intensities
◼ Simultaneous contrast: a region's perceived brightness does not depend
only on its intensity, but also on its background
◼ Optical illusions: the eye fills in non-existing information or wrongly
perceives geometrical properties of objects
Mach Band
Illustration of the Mach band
effect. Perceived intensity is not
a simple function of actual
intensity.
Simultaneous Contrast

Simultaneous contrast: a region's perceived brightness does not depend
only on its intensity, but also on its background
Some Optical Illusions

Optical illusions: the eye fills in non-existing information or wrongly
perceives geometrical properties of objects
Electromagnetic Spectrum
Color Lights
Properties of Lights
A Simple Image Model
• Two-dimensional light-intensity function

f(x, y) = l(x, y) · r(x, y)

l(x, y) – illumination component
r(x, y) – reflectance component
A Simple Image Model
◼ l(x,y) – illumination range

◼ r(x,y) – typical reflectance indices:
◼ black velvet (0.01)
◼ stainless steel (0.65)
◼ white paint (0.80)
◼ silver plate (0.90)
◼ snow (0.93)
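The image model f(x, y) = l(x, y) · r(x, y) can be sketched numerically, using the reflectance values listed above (the uniform illumination value below is illustrative, not a measured figure):

```python
import numpy as np

# l(x, y): illumination; r(x, y): reflectance in (0, 1).
l = np.full((2, 2), 1000.0)          # assumed uniform illumination level
r = np.array([[0.01, 0.65],
              [0.80, 0.93]])         # velvet, steel, white paint, snow
f = l * r                            # image intensity f(x, y) = l(x, y) * r(x, y)
```

Since r < 1, the recorded intensity is always a fraction of the incident illumination.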
Sampling
• Digitization of the spatial coordinates: sample (x, y) at discrete
values (0, 0), (0, 1), …

• f(x, y) is a 2-D array:

f(x, y) = [ f(0,0)      f(0,1)     ...  f(0,M-1)
            f(1,0)      f(1,1)     ...  f(1,M-1)
            ...
            f(N-1,0)    f(N-1,1)   ...  f(N-1,M-1) ]
Quantization
◼Digitization of the light intensity function

◼Each f(i,j) is called a pixel

◼The magnitude of f(i,j) is represented digitally


with a fixed number of bits - quantization
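A minimal sketch of quantization: requantizing an 8-bit image to k bits by mapping each pixel to the lower boundary of its level (one of several possible mapping conventions):

```python
import numpy as np

def quantize(img, k):
    """Requantize an 8-bit image to k bits (2**k intensity levels)."""
    levels = 2 ** k
    step = 256 // levels
    return ((img // step) * step).astype(np.uint8)  # snap to level boundary

img = np.arange(256, dtype=np.uint8).reshape(16, 16)  # a test ramp image
binary = quantize(img, 1)                             # only 2 levels survive
```

With k = 1 every pixel collapses to one of two values; with k = 8 the image is unchanged.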
Image Sensing and Acquisition
Images are generated by the combination of the illumination
source and the reflection or absorption of energy from that
source by elements of the scene being imaged
Image Sensing and Acquisition
Image Sensing and Acquisition
Image Sensing and Acquisition

A rotating X-ray source provides


illumination and the sensors opposite
the source collect the X-ray energy
that passes through the object
Image Acquisition
Image Acquisition
Sampling and Quantization
Sampling and Quantization
◼How many samples to take?

◼Number of pixels (samples) in the image


◼Nyquist rate

◼How many gray-levels to store?

◼At a pixel position (sample), number of levels of


color/intensity to be represented
Image Acquisition at Different Sampling Levels
FIGURE 2.23
Effects of reducing spatial
resolution. The images
shown are at:
(a) 930 dpi,
(b) 300 dpi,
(c) 150 dpi, and
(d) 72 dpi.
Image as a Matrix
Image as a Matrix
Image as a Matrix

Coordinate convention
used to represent digital
images. Because
coordinate values are
integers, there is a one-to-
one Correspondence
between x and y and the
rows (r) and columns (c)
of a matrix.
Saturation and Noise
Storage Vs Bits
Number of megabytes
required to store
images for various
values of N and k.
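The table's entries follow directly from the definition; a tiny helper computing megabytes for an N × N image with k bits per pixel (assuming 1 MB = 1024² bytes, as such tables usually do):

```python
def storage_megabytes(N, k):
    """MB needed to store an N x N image with k bits per pixel."""
    total_bits = N * N * k
    return total_bits / 8 / (1024 ** 2)   # bits -> bytes -> megabytes
```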
Spatial and Intensity Resolution
Spatial Operations

Representative
Iso-preference
curves for the
three types of
images in
Fig.
Neighbors of a Pixel
Neighbors of a Pixel
We consider three types of adjacency: 4-adjacency, 8-adjacency, and
m-adjacency (mixed adjacency).
Neighbors of a Pixel
Distance Measure
Distance Measure
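The slides above leave the measures unstated; the three standard distance measures between pixels p = (x, y) and q = (s, t) — Euclidean, city-block (D4), and chessboard (D8) — can be sketched as:

```python
import math

def d_euclidean(p, q):
    """Euclidean distance between pixels p and q."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    """City-block (D4) distance: |x - s| + |y - t|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """Chessboard (D8) distance: max(|x - s|, |y - t|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
```

Pixels at D4 distance 1 are the 4-neighbors of p; pixels at D8 distance 1 are its 8-neighbors.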
Spatial and Intensity Resolution
Spatial Resolution Example
FIGURE 2.23
Effects of reducing spatial
resolution. The images
shown are at:
(a) 930 dpi,
(b) 300 dpi,
(c) 150 dpi, and
(d) 72 dpi.
Variation of Number of Intensity Levels
• Reducing the number of bits from k = 7 to k = 1 while keeping the
image size constant
• An insufficient number of intensity levels in smooth areas of a
digital image leads to false contouring.
Effects of Varying the Number of Intensity Levels
Effects of Varying the Number of Intensity Levels
Image Interpolation Techniques
Zooming using Various Interpolation Techniques

FIGURE 2.27 (a) Image reduced to 72 dpi and zoomed back to its original 930 dpi using nearest
neighbor interpolation. This figure is the same as Fig. 2.23(d). (b) Image reduced to 72 dpi
and zoomed using bilinear interpolation. (c) Same as (b) but using bicubic interpolation.
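Nearest-neighbor interpolation, the simplest of the three methods compared in the figure, can be sketched in a few lines (a minimal NumPy version; bilinear and bicubic follow the same resampling pattern but weight several neighbors):

```python
import numpy as np

def nn_zoom(img, factor):
    """Zoom a 2-D image by `factor` using nearest-neighbor interpolation."""
    h, w = img.shape
    out_h, out_w = int(h * factor), int(w * factor)
    rows = (np.arange(out_h) / factor).astype(int)   # source row per output row
    cols = (np.arange(out_w) / factor).astype(int)   # source col per output col
    return img[rows[:, None], cols[None, :]]

small = np.array([[1, 2],
                  [3, 4]])
big = nn_zoom(small, 2)   # each pixel replicated into a 2x2 block
```

Pixel replication is what produces the blocky appearance of Fig. 2.27(a).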
Arithmetic Operations
Averaging of Images
Averaging K different
noisy images can
decrease noise. Used in
astronomy.

FIGURE 2.29 (a) Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise. (b)-(f) Result of
averaging 5, 10, 20, 50, and 100 noisy images, respectively. All images are of size 566 × 598 pixels, and all
were scaled so that their intensities would span the full [0, 255] intensity scale. (Original image courtesy of
NASA.)
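The noise reduction in Fig. 2.29 can be reproduced on synthetic data (all numbers below are made up for illustration): averaging K independent noisy copies reduces the noise standard deviation by roughly a factor of √K.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)          # hypothetical noise-free image

def average_of_noisy(K, sigma=20.0):
    """Average K independent copies corrupted by Gaussian noise."""
    stack = clean + rng.normal(0.0, sigma, size=(K,) + clean.shape)
    return stack.mean(axis=0)

residual_1 = (average_of_noisy(1) - clean).std()
residual_100 = (average_of_noisy(100) - clean).std()  # roughly 10x smaller
```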
Image Subtraction Application
Enhancement of difference between images using image
subtraction

FIGURE (a) Infrared image of the Washington, D.C. area. (b) Image resulting from setting to
zero the least significant bit of every pixel in (a). (c) Difference of the two images, scaled
to the range [0, 255] for clarity. (Original image courtesy of NASA.)
Image Subtraction Application
A vivid indication of image change as a function of resolution can be obtained
by displaying the differences between the original image and its various lower-
resolution counterparts. Figure 2.31(a) shows the difference between the 930 dpi
and 72 dpi images.

(a) Difference between the 930 dpi and 72 dpi images in Fig. 2.23. (b) Difference between
the 930 dpi and 150 dpi images. (c) Difference between the 930 dpi and 300 dpi images.
Image Subtraction Application
As a final illustration, we discuss
briefly an area of medical imaging
called mask mode radiography, a
commercially successful and highly
beneficial use of image subtraction.
Consider image differences of the
form
g(x, y) = f (x, y) − h(x, y)
Digital subtraction angiography.
(a) Mask image.
(b) A live image.
(c) Difference between (a) and (b).
(d) Enhanced difference image.
Shading correction by image multiplication
(and division)
An important application of image multiplication (and division) is
shading correction.
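A hypothetical sketch of the idea: model the acquired image as the true image multiplied by a smooth shading pattern, then divide by (an estimate of) that pattern to correct it. The pattern and values below are invented for illustration.

```python
import numpy as np

true_img = np.full((4, 4), 100.0)          # assumed shading-free image
col_gain = np.linspace(0.5, 1.0, 4)        # illumination fall-off across columns
shading = np.tile(col_gain, (4, 1))        # smooth multiplicative shading pattern

acquired = true_img * shading              # what the sensor records
corrected = acquired / shading             # shading correction by division
```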
Masking (ROI) using image multiplication

(a) Digital dental X-ray image.


(b) ROI mask for isolating teeth with fillings (white corresponds to 1 and black
corresponds to 0).
(c) Product of (a) and (b).
Arithmetic Operations
To guarantee that the full range of an arithmetic operation
between images is captured into a fixed number of bits, the
following approach is performed on image f:
fm = f – min(f)
which creates an image whose minimum value is 0. Then the
scaled image is
fs = K [ fm / max(fm)]
whose values are in the range [0, K].
Example: for an 8-bit image, setting K = 255 gives a scaled image
whose intensities span from 0 to 255.
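The two-step scaling above, as a small helper (the difference image below is an illustrative example, not from the slides):

```python
import numpy as np

def scale_intensity(f, K=255):
    """Shift and scale an arithmetic result into [0, K]."""
    fm = f - f.min()              # fm = f - min(f): minimum becomes 0
    return K * (fm / fm.max())    # fs = K * fm / max(fm): range becomes [0, K]

diff = np.array([[-30.0, 0.0],
                 [50.0, 120.0]])  # e.g. a subtraction result with negatives
scaled = scale_intensity(diff)    # now spans [0, 255]
```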
Set and Logical Operations
Elements of the sets are the coordinates of pixels (ordered pairs of
integers) representing regions (objects) in an image
– Union
– Intersection
– Complement
– Difference
Logical operations
– OR
– AND
– NOT
– XOR
Set Operations
Set Operations
Set operations involving gray-scale images

The union of two gray-scale sets is an array formed from the maximum
intensity between pairs of spatially corresponding elements
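The element-wise maximum rule above (and, by the same convention, the element-wise minimum for the intersection) is a one-liner in NumPy:

```python
import numpy as np

a = np.array([[10, 200],
              [30, 40]], dtype=np.uint8)
b = np.array([[50, 100],
              [30, 90]], dtype=np.uint8)

union = np.maximum(a, b)         # gray-scale union: element-wise max
intersection = np.minimum(a, b)  # gray-scale intersection: element-wise min
```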
Logical Operations
Logical Operations
Illustration of logical
operations involving
Foreground (white) pixels.
Black represents binary 0’s
and white binary 1’s. The
dashed lines are shown for
reference only. They are not
part of the result.
Spatial Operations
Single-pixel operations
– For example,
transformation to obtain the
negative of an 8-bit image
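The single-pixel negative transformation for an 8-bit image maps each intensity r to s = 255 − r; a minimal sketch:

```python
import numpy as np

def negative(img):
    """Negative of an 8-bit image: s = 255 - r, applied per pixel."""
    return 255 - img   # safe for uint8 input, since all values are <= 255

sample = np.array([[0, 255],
                   [100, 200]], dtype=np.uint8)
neg = negative(sample)   # black becomes white and vice versa
```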
Spatial Operations
Neighborhood operations
– For example, compute the average
value of the pixels in a rectangular
neighborhood of size m × n
centered on (x, y)
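The m × n neighborhood average can be sketched as follows (zero-padded borders are a design choice of this sketch, not specified in the slide):

```python
import numpy as np

def local_mean(img, m, n):
    """Mean of the m x n neighborhood centered on each pixel (zero-padded)."""
    pad_r, pad_c = m // 2, n // 2
    padded = np.pad(img.astype(float), ((pad_r, pad_r), (pad_c, pad_c)))
    out = np.zeros(img.shape, dtype=float)
    for dr in range(m):                  # accumulate shifted copies
        for dc in range(n):
            out += padded[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return out / (m * n)

flat = np.full((5, 5), 8.0)
smoothed = local_mean(flat, 3, 3)   # interior unchanged; borders darken
```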
Affine Transformation
Affine Transformation
Interpolation
Size of Rotated Image
The size of the spatial rectangle
needed to contain a rotated image is
larger than the rectangle of the
original image, as Figs. 2.41(a) and (b)
illustrate. We have two options for
dealing with this: (1) we can crop the
rotated image so that its size is equal
to the size of the original image, as in
Fig. 2.41(c), or we can keep the larger
image containing the full rotated
original, an Fig. 2.41(d).
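The bounding-box size mentioned above follows from the rotation geometry; a small helper using the standard formula (not given explicitly in the slides):

```python
import math

def rotated_size(w, h, theta_deg):
    """Bounding-box size of a w x h image rotated by theta degrees."""
    t = math.radians(theta_deg)
    new_w = abs(w * math.cos(t)) + abs(h * math.sin(t))
    new_h = abs(w * math.sin(t)) + abs(h * math.cos(t))
    return new_w, new_h
```

A 90° rotation simply swaps the dimensions; any other angle produces a strictly larger rectangle.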
Image Registration
• Used for aligning two or more images of the same scene
• Input and output images are available, but the specific transformation
that produced the output is unknown
• Estimate the transformation function and use it to register the two
images
➢ Input image – the image that we wish to transform
➢ Reference image – the image against which we want to register the
input
• Principal approach: use tie points (also called control points), which
are corresponding points whose locations are known precisely in
the input and reference images
Image Registration

FIGURE 2.42 Image registration.


a. Reference image.
b. Input (geometrically distorted
image). Corresponding tie points
are shown as small white squares
near the corners.
c. Registered (output) image (note
the errors in the border).
d. Difference between (a) and (c),
showing more registration errors
Vector and Matrix Operations
▪ RGB images
▪ Multispectral images
Image Transforms

▪ Transforming the images


▪ Carrying the specified task
in a transform domain
▪ Applying the inverse
transform
Image Transforms

a. Image corrupted by sinusoidal


interference.
b. Magnitude of the Fourier
transform showing the bursts of
energy caused by the Interference
(the bursts were enlarged for
display purposes).
c. Mask used to eliminate the
energy bursts.
d. Result of computing the inverse of
the modified Fourier transform.
(Original image courtesy of NASA.)
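The three-step workflow of the figure — transform, mask the interference bursts, inverse transform — can be reproduced on a small synthetic image (the image, the sinusoid frequency, and the mask positions below are all assumptions of this sketch, not the NASA example itself):

```python
import numpy as np

# Synthetic test scene: a bright square corrupted by a horizontal sinusoid.
n = 64
x = np.arange(n)
clean = np.zeros((n, n))
clean[24:40, 24:40] = 1.0                       # the "object"
noise = 0.5 * np.sin(2 * np.pi * 8 * x / n)     # 8 cycles across the image
corrupted = clean + noise[None, :]

# 1) transform, 2) zero the two energy bursts, 3) inverse transform.
F = np.fft.fftshift(np.fft.fft2(corrupted))
mask = np.ones((n, n))
mask[n // 2, n // 2 - 8] = 0.0                  # burst at -8 cycles/image
mask[n // 2, n // 2 + 8] = 0.0                  # burst at +8 cycles/image
restored = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

Because the interference is a pure sinusoid, zeroing just two frequency bins removes it almost completely.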
