
COLLEGE OF ENGINEERING ROORKEE

Established in 1998

Digital Image Processing (TCS-071)

Session: 2020-21

Digital Image Processing


COURSE OBJECTIVE

The aim of this course is to:

 Describe and explain basic principles of digital image processing.
 Design and implement algorithms that perform basic image processing (e.g. noise removal and image enhancement).
 Design and implement algorithms for advanced image analysis (e.g. image compression, image segmentation).
 Assess the performance of image processing algorithms and systems.



Unit 1
Introduction and Fundamentals:
Motivation and Perspective, Applications, Components of Image Processing
System, Element of Visual Perception, A Simple Image Model, Sampling and
Quantization.
Image Enhancement in Spatial Domain:
Introduction; Basic Gray Level Functions – Piecewise-Linear Transformation
Functions: Contrast Stretching; Histogram Specification; Histogram Equalization;
Local Enhancement; Enhancement using Arithmetic/Logic Operations – Image
Subtraction, Image Averaging; Basics of Spatial Filtering; Smoothing - Mean filter,
Ordered Statistic Filter; Sharpening – The Laplacian.



What is a Digital Image?
• A digital image is a representation of a two-dimensional image as a finite
set of digital values, called picture elements or pixels.
Key Stages in Digital Image Processing:
• Image Acquisition
• Image Enhancement
• Image Restoration
• Morphological Processing
• Segmentation
• Object Recognition
• Representation & Description
• Image Compression
• Colour Image Processing
(These stages form a processing chain starting from the problem domain.)
Image Sensing & Acquisition

How to transform illumination energy into digital images using sensing devices.
Image Sensing & Acquisition

• Image Acquisition using single sensor


Image Sensing & Acquisition

• Image Acquisition using sensor strips (e.g. flatbed scanners)
Image Sensing & Acquisition

• Image Acquisition using Sensor Array


Simple Image Formation Model

f(x,y) = i(x,y) r(x,y)

i(x,y): illumination (determined by the illumination source)
  0 < i(x,y) < ∞
  e.g. i ≈ 90,000 lm/m² on a clear day, 10,000 lm/m² on a cloudy day,
  0.1 lm/m² in the evening

r(x,y): reflectance (determined by the imaged object)
  0 < r(x,y) < 1
  e.g. 0.01 for black velvet, 0.65 for stainless steel

In a real situation:
  Lmin ≤ L = f(x,y) ≤ Lmax
  Lmin = imin · rmin
  Lmax = imax · rmax
  L : gray level
ELEMENTS OF VISUAL PERCEPTION

• ELEMENTS OF HUMAN VISUAL SYSTEMS



ELEMENTS OF HUMAN VISUAL SYSTEMS

• There are two types of receptors in the retina


– The rods are long slender receptors
– The cones are generally shorter and thicker in structure
• The rods and cones are not distributed evenly around
the retina.
• Rods and cones operate differently
– Rods are more sensitive to light than cones.
– At low levels of illumination the rods provide a visual response
called scotopic vision
– Cones respond to higher levels of illumination; their response
is called photopic vision



IMAGE FORMATION IN THE EYE



CONTRAST SENSITIVITY

• The response of the eye to changes in the intensity of illumination is
nonlinear.
• Consider a patch of light of intensity I + dI surrounded by a background
of intensity I, as shown in the following figure.
• Over a wide range of intensities, it is found that the ratio dI/I, called
the Weber fraction, is nearly constant at a value of about 0.02.
• This does not hold at very low or very high intensities.
• Furthermore, contrast sensitivity depends on the intensity of the
surround. Consider the second panel of the previous figure.



LIGHT

• Light sometimes exhibits properties that make it appear to consist of
particles; at other times, it behaves like a wave.
• Light is electromagnetic energy that radiates from a source of energy
(a source of light) in the form of waves.
• Visible light occupies the 400 nm – 700 nm range of the electromagnetic
spectrum.



INTENSITY OF LIGHT

– The strength of the radiation from a light source is measured using the unit
called the candela, or candle power. The total energy from the light source,
including heat and all electromagnetic radiation, is called radiance and is
usually expressed in watts.
– Luminance is a measure of the light strength that is actually perceived by the
human eye. Radiance is a measure of the total output of the source;
luminance measures just the portion that is perceived.
– Brightness is a subjective, psychological measure of perceived intensity. Brightness is
practically impossible to measure objectively. It is relative. For example, a burning
candle in a darkened room will appear bright to the viewer; it will not appear bright in
full sunshine.
– The strength of light diminishes in inverse square proportion to its distance
from its source. This effect accounts for the need for high intensity projectors
for showing multimedia productions on a screen to an audience. Human light
perception is sensitive but not linear
Reflected Light

• The colours that we perceive are determined by the nature of the light
reflected from an object.
• For example, if white light is shone onto a green object, most wavelengths
are absorbed, while green light is reflected from the object.

(Figure: white light strikes a green object; the other colours are absorbed
and green light is reflected.)
Image Sampling & quantization
• Both sounds and images can be considered as signals, in one
or two dimensions, respectively.
• Sound can be described as a fluctuation of the acoustic
pressure in time, while images are spatial distributions of
values of luminance or color, the latter being described in its
RGB or HSB components.
• Any signal, in order to be processed by numerical computing devices, has
to be reduced to a sequence of discrete samples, and each sample must be
represented using a finite number of bits. The first operation is called
sampling; the second is called quantization of the domain of real numbers.
Digital Image Processing
Image Sampling & quantization

A continuous image f(x,y) needs to be converted to digital form. Digitizing
the coordinate values is called sampling. Digitization must be performed in
both the coordinates and the amplitude.
Image Sampling
Image Sampling & quantization

Digitizing the amplitude values is called image quantization.

Sampling limits are established by the number of sensors, but quantization
limits by the number of color levels.

Image Quantization
Image Sampling & quantization

Digital Image Representation

Each element is called an image element, picture element, or pixel.
Image Sampling & quantization

The digitized image can be written as an M × N matrix:

         ⎡ f(0,0)      f(0,1)      ...  f(0,N-1)    ⎤
f(x,y) = ⎢ f(1,0)      f(1,1)      ...  f(1,N-1)    ⎥
         ⎢   .           .          .      .        ⎥
         ⎣ f(M-1,0)    f(M-1,1)    ...  f(M-1,N-1)  ⎦

The notation (0,1) is used to signify the second sample along the first row,
not the actual physical coordinates when the image was sampled.
Image Sampling & quantization

Consider an image which has:

M × N : size of the image
L : number of discrete gray levels in the image, L = 2^k, where k is a
positive integer

The total number of bits needed to store this image is:

b = M × N × k
If M = N, then b = N² × k
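The storage formula above can be checked with a short sketch (the function name is illustrative):

```python
def image_storage_bits(M, N, k):
    """Bits needed to store an M x N image with L = 2**k gray levels."""
    return M * N * k

# A 1024 x 1024 image with 256 gray levels (k = 8):
bits = image_storage_bits(1024, 1024, 8)
print(bits)        # 8388608 bits
print(bits // 8)   # 1048576 bytes (1 MB)
```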
Image Sampling & quantization

1. The dynamic range is the ratio of the maximum measurable intensity
(determined by saturation) to the minimum detectable intensity (limited
by noise).
2. Contrast is defined as the difference in intensity between the highest
and the lowest intensity levels in an image.
Image Sampling & quantization

The dynamic range of an image can be described as:

• High dynamic range: gray levels span a significant portion of the gray scale.
• Low dynamic range: a dull, washed-out gray look.
Image Sampling & quantization

• Spatial resolution:
  - Number of samples per unit length or area.
  - Lines and distance: line pairs per unit distance.

• Gray level resolution:
  - Number of bits per pixel (usually 8 bits).
  - A color image has 3 image planes, yielding 8 × 3 = 24 bits/pixel.
  - Too few levels may cause false contouring.
Image Sampling & quantization

The subsampling was accomplished by deleting the appropriate number of rows
and columns from the original image.

Spatial Image Resolutions

The number of gray levels (k) is constant (8-bit images), while the number
of samples (N) is reduced (fewer sensors).
Image Sampling & quantization

Comparison between all image sizes


Image Sampling & quantization

Gray Level Image Resolutions

The number of samples (N) is constant, but the number of gray levels (k)
decreases (false contouring).
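A minimal sketch of uniform requantization, assuming 8-bit input data (`requantize` is an illustrative helper, not from the course material):

```python
def requantize(pixel, k):
    """Map an 8-bit intensity to one of 2**k gray levels (uniform bins)."""
    step = 256 // (2 ** k)          # width of each quantization bin
    return (pixel // step) * step   # lowest value of the bin

# Reducing 8-bit data to 2 bits (4 levels) coarsens smooth ramps,
# which is what produces visible false contouring:
row = [0, 50, 100, 150, 200, 250]
print([requantize(p, 2) for p in row])   # [0, 0, 64, 128, 192, 192]
```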
Image Sampling & quantization

What is the effect of changing N and k?

Images with little, intermediate, and large amounts of detail


Image Sampling & quantization

Isopreference Curve
BASIC RELATIONSHIP BETWEEN PIXELS

PIXEL
The word pixel is based on a contraction of pix ("pictures") and el (for
"element"); similar formations with el for "element" include the
words: voxel and texel.
In digital imaging, a pixel (or picture element) is a single point in a raster
image. The pixel is the smallest addressable screen element; it is the
smallest unit of picture that can be controlled. Each pixel has its own
address. The address of a pixel corresponds to its coordinates.

In color image systems, a color is typically represented by three or four
component intensities, such as red, green, and blue, or cyan, magenta,
yellow, and black.
Bits per pixel

The number of distinct colors that can be represented by a pixel depends on the
number of bits per pixel (bpp). A 1 bpp image uses 1 bit for each pixel, so each
pixel can be either on or off. Each additional bit doubles the number of colors
available, so a 2 bpp image can have 4 colors, and a 3 bpp image can have 8 colors:

1 bpp, 2^1 = 2 colors (monochrome)
2 bpp, 2^2 = 4 colors
3 bpp, 2^3 = 8 colors
8 bpp, 2^8 = 256 colors
16 bpp, 2^16 = 65,536 colors ("Highcolor")
24 bpp, 2^24 ≈ 16.8 million colors ("Truecolor")

Digital Image Processing


Standard display resolutions

Name     Megapixels   Width × Height
CGA      0.064        320×200
EGA      0.224        640×350
VGA      0.3          640×480
SVGA     0.5          800×600
XGA      0.8          1024×768
SXGA     1.3          1280×1024
UXGA     1.9          1600×1200
WUXGA    2.3          1920×1200

Digital Image Processing


Relationship Between Pixels

1- Neighbors of a Pixel:

• The 4-neighbors of pixel p, denoted N4(p): the horizontal and vertical
  neighbors of p.
• The 4 diagonal neighbors of p, denoted ND(p).
• The 8-neighbors, denoted N8(p): the union of N4(p) and ND(p).
Relationship Between Pixels

Adjacency of Pixels:

Let V be the set of intensity values used to define adjacency; e.g.
V = {1} if we are referring to adjacency of pixels with value 1 in a
binary image with values 0 and 1.
In a gray-scale image, for the adjacency of pixels with a range of
intensity values of, say, 100 to 120, it follows that
V = {100, 101, 102, …, 120}.
We consider three types of adjacency:
 4-adjacency
 8-adjacency
 m-adjacency
Adjacency

4-adjacency
Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
Adjacency

8-adjacency
Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).

m-adjacency (mixed adjacency)
Two pixels p and q with values from V are m-adjacent if
(i) q is in N4(p), or
(ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values
are from V.
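The neighbourhood definitions above can be sketched in a few lines; for simplicity this ignores the value set V and only tests coordinates (all names are illustrative):

```python
def n4(p):
    """4-neighbours of pixel p = (x, y): horizontal and vertical."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    """Diagonal neighbours of p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def is_4_adjacent(p, q):
    return q in n4(p)

def is_8_adjacent(p, q):
    # N8(p) is the union of N4(p) and ND(p)
    return q in n4(p) or q in nd(p)

print(is_4_adjacent((1, 1), (1, 2)))   # True
print(is_8_adjacent((1, 1), (2, 2)))   # True
print(is_4_adjacent((1, 1), (2, 2)))   # False
```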
Path

A (digital) path (or curve) from pixel p at (x,y) to pixel q at (s,t) is a sequence of distinct
pixels with coordinates
(x0,y0), (x1,y1), …, (xn,yn)
where (x0,y0) = (x,y), (xn,yn) = (s,t), and pixels (xi,yi) and (xi-1,yi-1) are adjacent for 1 ≤ i ≤ n.
n is the length of the path.
If (x0,y0) = (xn,yn), the path is a closed path.
Paths can be defined as 4-, 8-, or m-paths depending on the adjacency type.
Connectivity

Let S be a subset of pixels in an image.


Two pixels p and q are said to be connected in S if
there exists a path between them consisting entirely
of pixels in S
For any pixel p in S, the set of pixels that are
connected to it in S is called a connected component
of S.
If it only has one connected component, then set S is
called a connected set.
Region

Let R be a subset of pixels in an image. We call R a region of the image
if R is a connected set.
Two regions are said to be adjacent if their union forms a connected set.
Ri and Rj are adjacent in the 8-adjacency sense;
they are not adjacent in the 4-adjacency sense.
Boundary

Suppose that an image contains K disjoint regions, Rk, k = 1, 2, ..., K, none of which touches
the image border.
Let Ru be the union of all K regions, and let (Ru)^c denote its complement.
We call the points in Ru the foreground, and all the points in (Ru)^c the background of the
image.
The inner boundary (border or contour) of a region R is the set of points that are adjacent
to points in the complement of R, i.e. the set of pixels in the region that have at least one
background neighbour.
Boundary

The circled point is not a member of the border of the 1-valued region if
4-connectivity is used.
As a rule, adjacency between points in a region and its background is
defined in terms of 8-connectivity.
Boundary

The outer border corresponds to the border in the background.
This definition guarantees the existence of a closed path.
Distance Measures

• For pixels p, q, and z with coordinates (x,y), (s,t), and (u,v)
respectively,

• D is a distance function, or metric, if
  (a) D(p,q) ≥ 0, with D(p,q) = 0 iff p = q
  (b) D(p,q) = D(q,p)
  (c) D(p,z) ≤ D(p,q) + D(q,z)
Distance Measures

1- The Euclidean distance between p and q is defined as:

  De(p,q) = [(x - s)² + (y - t)²]^(1/2)
Distance Measures

2- The D4 distance (city-block distance) between p and q is defined as:

  D4(p,q) = |x - s| + |y - t|
Distance Measures

3- The D8 distance (chessboard distance) between p and q is defined as:

  D8(p,q) = max(|x - s|, |y - t|)
Distance Measures

4- The Dm distance: the length of the shortest m-path between the points.
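The first three distance measures can be checked numerically (function names are illustrative):

```python
import math

def d_e(p, q):   # Euclidean distance
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def d_4(p, q):   # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):   # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_e(p, q))   # 5.0
print(d_4(p, q))   # 7
print(d_8(p, q))   # 4
```

Note that D4 ≥ De ≥ D8 always holds, since moving diagonally counts as two city-block steps but one chessboard step.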
Array Versus Matrix Operations

Array product (elementwise):

  ⎡a11 a12⎤ ⎡b11 b12⎤   ⎡a11·b11  a12·b12⎤
  ⎣a21 a22⎦ ⎣b21 b22⎦ = ⎣a21·b21  a22·b22⎦

The matrix product, by contrast, follows the usual row-by-column rule of
matrix multiplication.
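The distinction can be demonstrated with NumPy, where `*` is the array (elementwise) product and `@` is the matrix product:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])

print(a * b)   # elementwise product: [[ 5 12], [21 32]]
print(a @ b)   # matrix product:      [[19 22], [43 50]]
```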


Linear versus Nonlinear Operations

Consider a general operator H with

  H[f(x,y)] = g(x,y)

H is said to be a linear operator if

  H[ai·fi(x,y) + aj·fj(x,y)] = ai·H[fi(x,y)] + aj·H[fj(x,y)]
                             = ai·gi(x,y) + aj·gj(x,y)

For example, the sum operator Σ is linear:

  Σ[ai·fi(x,y) + aj·fj(x,y)] = Σ ai·fi(x,y) + Σ aj·fj(x,y)
                             = ai·Σ fi(x,y) + aj·Σ fj(x,y)
                             = ai·gi(x,y) + aj·gj(x,y)
Linear versus Nonlinear Operations

Let a1 = 1 , a2 =-1
0 2 6 5 
f1    and f2   
 2 3   4 7 
 0 2 6 5      6  3 
max (1)    (1)     max   
  2 3 4 7     2  4 
-2
 0 2   6 5  
(1)max     (1) max     3  (1)7
  2 3  4 7  
-4
Arithmetic Operations

Carried out between corresponding pixel pairs of two arrays of the same size:

  s(x,y) = f(x,y) + g(x,y)
  d(x,y) = f(x,y) - g(x,y)
  p(x,y) = f(x,y) × g(x,y)
  v(x,y) = f(x,y) ÷ g(x,y)

Let g(x,y) denote a corrupted image formed by the addition of noise η(x,y) to a noiseless
image f(x,y):
  g(x,y) = f(x,y) + η(x,y)
where the assumption is that at every pair of coordinates (x,y) the noise is uncorrelated
and has a zero average.
Arithmetic Operations

If the noise satisfies the constraints just stated, then an image ḡ(x,y) formed by
averaging K different noisy images,

  ḡ(x,y) = (1/K) Σ_{i=1..K} gi(x,y)

satisfies

  E{ḡ(x,y)} = f(x,y)
  σ²_ḡ(x,y) = (1/K) σ²_η(x,y)
  σ_ḡ(x,y)  = (1/√K) σ_η(x,y)

As K increases, the variability of the pixels at each location decreases.
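A small simulation, assuming zero-mean Gaussian noise, illustrates how averaging K noisy copies reduces the noise roughly by a factor of √K (image size, intensity, and σ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)   # noiseless image
sigma = 20.0                   # std of the additive noise eta

def noisy(K):
    """Average K independently corrupted copies g_i = f + eta_i."""
    stack = f + rng.normal(0.0, sigma, size=(K, 64, 64))
    return stack.mean(axis=0)

# The noise std of the average falls roughly as sigma / sqrt(K):
print(noisy(1).std())     # close to 20
print(noisy(100).std())   # close to 2
```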
Arithmetic Operations

Image subtraction is used in the enhancement of the differences between images:

  g(x,y) = f(x,y) – h(x,y)
Arithmetic Operations

An application of image multiplication and division is shading correction.

If we can model an acquired image as the product of a perfect image f(x,y) and a shading
function h(x,y), i.e. g(x,y) = f(x,y)h(x,y), and h(x,y) is known, we can recover the
perfect image f(x,y) by division.
Arithmetic Operations

Multiplication can also be used as masking.

To solve the range problem in arithmetic operations we can do the following.
Given an image f:

  fm = f – min(f)        creates an image whose minimum value is 0, then
  fs = K[fm / max(fm)]   creates a scaled image whose range is [0, K].

For an 8-bit image, set K = 255.
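The two-step scaling above can be sketched as follows (assuming a floating-point image; names are illustrative):

```python
import numpy as np

def scale_to_range(f, K=255):
    """Shift and scale image f so its values span [0, K]."""
    fm = f - f.min()             # fm = f - min(f): minimum becomes 0
    return K * (fm / fm.max())   # fs = K * fm / max(fm): maximum becomes K

f = np.array([[-30.0, 10.0],
              [50.0, 90.0]])
print(scale_to_range(f))   # values now span [0, 255]
```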
Logic Operations

Logic Operations
  AND: p AND q (p · q)
  OR:  p OR q  (p + q)
  COMPLEMENT: NOT q (q̄)
Spatial Operations

Performed directly on pixels. Three categories:

  • Single-pixel operations
  • Neighbourhood operations
  • Geometric spatial transformations

Single-pixel operations alter the values of individual pixels: s = T(z).
Spatial Operations

Neighbourhood operations
Neighbourhood processing generates a corresponding pixel at the same
coordinates in an output image, e.g. the local average

  g(x,y) = (1/mn) Σ_{(r,c) ∈ Sxy} f(r,c)

where r and c are the row and column of the pixels whose coordinates are
members of the set Sxy, a neighbourhood of size m × n.
Spatial Operations

Geometric spatial transformations

Modify the spatial relationship between pixels in an image. They consist of
two basic operations:
• Transformation of coordinates, expressed as (x,y) = T{(v,w)},
  e.g. (x,y) = T{(v,w)} = (v/2, w/2)
• The affine transformation (most commonly used):

  [x y 1] = [v w 1] T

            ⎡t11 t12 0⎤
  where T = ⎢t21 t22 0⎥
            ⎣t31 t32 1⎦

It can scale, rotate, translate, or shear a set of coordinate points,
depending on the matrix T.
Spatial Operations

The affine equation is used in two ways:

• Forward mapping
  - Scan the pixels of the input image.
  - At each location (v,w), compute the spatial location (x,y).
  - Two or more pixels in the input image may be transformed to the same
    location in the output image.
  - Some output locations may not be assigned a pixel at all.
• Inverse mapping (more efficient; used by commercial software such as MATLAB)
  - Scan the output pixel locations.
  - At each location (x,y), compute the spatial location (v,w) = T⁻¹(x,y).
  - Interpolate among the nearest input pixels to determine the intensity
    of the output pixel.
Image Registration

Used to align two or more images of the same scene.

The input and output images are available, but the specific transformation
function is not known.
• Input: the image that we wish to transform.
• Reference: the image against which we want to register the input.
One approach uses tie points (control points), whose locations are precisely
known in both the input and reference images.
A simple model is the bilinear approximation:
  x = c1·v + c2·w + c3·v·w + c4
  y = c5·v + c6·w + c7·v·w + c8
where (x,y) are reference-image tie points and (v,w) are input-image tie points.
Image Registration

Four pairs of corresponding points give 8 equations, enough to find the 8 unknown coefficients.
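A sketch of solving for the eight bilinear coefficients from four tie-point pairs (the tie-point values below are made up for illustration):

```python
import numpy as np

# Hypothetical tie points: (v, w) in the input image, (x, y) in the reference.
vw = np.array([(10.0, 10.0), (10.0, 50.0), (60.0, 10.0), (60.0, 50.0)])
xy = np.array([(12.0, 11.0), (13.0, 52.0), (63.0, 12.0), (65.0, 54.0)])

# Model: x = c1*v + c2*w + c3*v*w + c4 ; y = c5*v + c6*w + c7*v*w + c8
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
cx = np.linalg.solve(A, xy[:, 0])   # c1..c4
cy = np.linalg.solve(A, xy[:, 1])   # c5..c8

# The fitted model reproduces the reference coordinates of every tie point:
print(A @ cx)   # matches xy[:, 0]
print(A @ cy)   # matches xy[:, 1]
```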
Image Transformation

A 2-D transform, denoted T(u,v), can be expressed in the general form

  T(u,v) = Σ_{x=0..M-1} Σ_{y=0..N-1} f(x,y) r(x,y,u,v)

where r(x,y,u,v) is the forward transformation kernel, and

  f(x,y) = Σ_{u=0..M-1} Σ_{v=0..N-1} T(u,v) s(x,y,u,v)

where s(x,y,u,v) is the inverse transformation kernel.
Image Transformation

The forward transformation kernel is said to be separable if

  r(x,y,u,v) = r1(x,u) r2(y,v)

The kernel is said to be symmetric if r1 is functionally equal to r2, so that

  r(x,y,u,v) = r1(x,u) r1(y,v)

2-D Fourier transform:

  T(u,v) = Σ_{x=0..M-1} Σ_{y=0..N-1} f(x,y) e^{-j2π(ux/M + vy/N)}

  f(x,y) = (1/MN) Σ_{u=0..M-1} Σ_{v=0..N-1} T(u,v) e^{j2π(ux/M + vy/N)}
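The forward transform formula can be verified against a library FFT on a small image (a brute-force double sum, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random((4, 4))
M, N = f.shape

# Direct evaluation of T(u,v) = sum_x sum_y f(x,y) e^{-j2pi(ux/M + vy/N)}
x = np.arange(M).reshape(-1, 1)
y = np.arange(N).reshape(1, -1)
T = np.empty((M, N), dtype=complex)
for u in range(M):
    for v in range(N):
        T[u, v] = np.sum(f * np.exp(-2j * np.pi * (u * x / M + v * y / N)))

print(np.allclose(T, np.fft.fft2(f)))   # True: matches the library FFT
```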
Probabilistic Methods

Treating intensities as random quantities is the simplest way of applying
probability to image processing.
Let zi, i = 0, 1, 2, …, L-1, be the possible intensities.
Then the probability p(zk) of intensity level zk is

  p(zk) = nk / MN

so that

  Σ_{k=0..L-1} p(zk) = 1

The mean (average) intensity is

  m = Σ_{k=0..L-1} zk p(zk)

and the variance is

  σ² = Σ_{k=0..L-1} (zk - m)² p(zk)

The variance is a measure of the spread of the values of z about the mean,
so it is useful as a measure of image contrast.
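The histogram-based mean and variance can be computed directly (the sample image is illustrative):

```python
import numpy as np

img = np.array([[0, 1, 1, 2],
                [2, 2, 3, 3]])
L = 4            # number of gray levels
MN = img.size    # number of pixels

# p(z_k) = n_k / MN
counts = np.bincount(img.ravel(), minlength=L)
p = counts / MN
z = np.arange(L)

m = np.sum(z * p)                 # mean intensity
var = np.sum((z - m) ** 2 * p)    # variance (a contrast measure)

print(p.sum())   # 1.0
print(m)         # 1.75
print(var)       # 0.9375
```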
Probabilistic Methods

For standard deviations of 14.3, 31.6, and 49.2 intensity levels, the
corresponding variances are 204.3, 997.8, and 2424.9.
THANK YOU!
