Digital Image Acquisition: Sampling and Quantization
EM Waves
–A stream of massless particles, each travelling in a wave-like pattern, moving at the speed of light, and carrying a certain bundle of energy
–The electromagnetic spectrum is split into bands according to the energy per photon
EM spectrum
An image is typically generated by the combination of an "illumination" source and the reflection, absorption, or radiation of energy by a "scene".
Computers cannot handle continuous images, only arrays of digital numbers.
Thus images must be represented as two-dimensional arrays of points.
A point on the 2-D grid is called a pixel.
[Figure: projection of an object through a lens; projection onto a discrete sensor array (CCD) in a digital camera]
Sampling and Quantization
In the sampled image, sensors register the average color over each pixel region.
In quantization, continuous colors are mapped to a finite, discrete set of colors.
[Figure: a real image with continuous color input, shown sampled and then sampled & quantized]
Another Perspective
The basic idea behind converting an analog signal to a digital signal is to convert both of its axes into a digital format, since an image is continuous not just in its coordinates (x axis) but also in its amplitude (y axis).
The part that deals with the digitizing of co-ordinates is known as sampling.
The part that deals with digitizing the amplitude is known as quantization.
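These two steps can be sketched in a minimal Python example (illustrative only; the test signal f, the sample count, and the bit depth are assumptions, not from the slides):

```python
import numpy as np

# A hypothetical continuous signal f(t) = sin(2*pi*t), standing in for one
# scan line of a continuous image.
def f(t):
    return np.sin(2 * np.pi * t)

# Sampling: digitize the coordinate axis by evaluating f at discrete points.
num_samples = 8
t = np.linspace(0.0, 1.0, num_samples, endpoint=False)
samples = f(t)

# Quantization: digitize the amplitude axis by snapping each sample to one
# of L = 2^k evenly spaced levels spanning [-1, 1].
k = 3
L = 2 ** k
levels = np.linspace(-1.0, 1.0, L)
quantized = levels[np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)]

print(len(samples), L)
```

Sampling fixes how many values we keep; quantization fixes how precisely each kept value is stored.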
The concept of sampling is directly related to zooming.
The more samples you take, the more pixels you get.
Illustration of Sampling
Digital images consist of pixels. On a square grid, each pixel represents a square region of the image.
An image is a 2D projection of a 3D world
Image as a 2-D function f(x, y): R² → R
x and y are spatial coordinates on a rectangular grid (i.e., Cartesian coordinates)
f at (x, y) is the intensity value
A set of pixels (picture elements, pels)
Pixel means
–pixel coordinate
–pixel value
–or both
Binary images:
f(x,y) ∈ {0,1}
Color images:
Can be described mathematically as three grayscale images
Red, green, blue channels
fR(x,y) ∈ C, fG(x,y) ∈ C, fB(x,y) ∈ C
C = {0, …, 255}
Digital Image Acquisition
Sampling and Quantization
Image Representation
Spatial, Temporal and Gray-Level Resolution
Interpolation
Pixel Neighbourhood
Some Basic Relationships Between Pixels
Neighborhood Adjacency and connectivity
An M×N image is represented as a matrix
Space required to store each intensity value:
Number of discrete intensity levels, L
Number of bits used, k
L = 2^k
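As a quick illustration of the storage this implies (the example image size is an assumption; M, N, k, L follow the slide notation):

```python
# Storage for an M x N image with k bits per pixel (slide notation).
M, N, k = 512, 512, 8          # e.g. a 512x512 8-bit grayscale image
L = 2 ** k                     # number of discrete intensity levels
bits = M * N * k               # total bits required
bytes_required = bits // 8

print(L, bytes_required)       # 256 levels, 262144 bytes (256 KiB)
```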
24
Color images have 3 values per pixel; monochrome images have 1 value per pixel.
An image is a grid of squares, each of which contains a single color; each square is called a pixel (for picture element).
Spatial resolution:
The column (C) by row (R) dimensions of the image define the number of pixels used to cover the visual space captured by the image.
Relates to the sampling of the image information.
Temporal resolution:
For a continuous capture system such as video, this is the number of images captured in a given time period.
Measured in frames per second (fps); e.g., 30 fps for normal video.
Spatial and Gray-Level Resolution
[Figure: the same image reproduced at 256, 128, 64, 32, 16, 8, 4, and 2 levels]
Zooming: increasing the number of pixels in an image so that the image appears larger.
Zooming may be viewed as oversampling; shrinking may be viewed as undersampling.
Interpolation is a basic tool used extensively in tasks such as zooming, shrinking, rotation, and geometric corrections.
Interpolation is a method of constructing new data points within the range of a discrete set of known data points.
Nearest neighbour interpolation
Bilinear interpolation
Bicubic interpolation
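The first two methods can be sketched in a few lines of Python (an illustrative sketch, not a library implementation; the 2×2 test image and function names are assumptions):

```python
import numpy as np

# A tiny 2x2 "image" to interpolate within.
img = np.array([[10.0, 20.0],
                [30.0, 40.0]])

def nearest(img, x, y):
    # Nearest-neighbour: take the value of the closest pixel.
    return img[int(round(y)), int(round(x))]

def bilinear(img, x, y):
    # Bilinear: weighted average of the 4 surrounding pixels.
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    dx, dy = x - x0, y - y0
    top = img[y0, x0] * (1 - dx) + img[y0, x1] * dx
    bot = img[y1, x0] * (1 - dx) + img[y1, x1] * dx
    return top * (1 - dy) + bot * dy

print(nearest(img, 0.4, 0.4))   # 10.0 (closest pixel is (0, 0))
print(bilinear(img, 0.5, 0.5))  # 25.0 (average of all four pixels)
```

Bicubic interpolation extends the same idea to a 4×4 neighbourhood with cubic weights.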
Pixel replication
Pixel Decimation
Decimation by a factor of n: take every nth pixel in every nth row.
Nearest neighbour interpolation
Simple but produces undesired artefacts
Bilinear Interpolation
Contribution from 4 neighbours
Bicubic Interpolation
Contribution from 16 neighbours
The neighbors of a pixel are the pixels adjacent to it.
Pixel Neighbourhood
N4(p) 4-neighbors: the set of horizontal and vertical neighbors
ND(p) diagonal neighbors: the set of 4 diagonal neighbors
N8(p) 8-neighbors: union of 4-neighbors and diagonal neighbors
N4(p)      ND(p)      N8(p)
.  O  .    O  .  O    O  O  O
O  X  O    .  X  .    O  X  O
.  O  .    O  .  O    O  O  O
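The three neighbourhoods can be written as coordinate sets (an illustrative Python sketch; function names are assumptions, and boundary clipping is omitted):

```python
# Coordinate sets for the three neighbourhoods of a pixel p = (x, y).
def n4(x, y):
    # Horizontal and vertical neighbours.
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    # The four diagonal neighbours.
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    # Union of 4-neighbours and diagonal neighbours.
    return n4(x, y) | nd(x, y)

print(len(n4(5, 5)), len(nd(5, 5)), len(n8(5, 5)))   # 4 4 8
```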
Adjacency:
Two pixels that are neighbors are adjacent if they have
–the same gray level, or
–some other specified similarity criterion
Pixels can be 4-adjacent, diagonally adjacent, 8-adjacent, or m-adjacent.
4-connectivity: two pixels p and q are 4-connected if q ∈ N4(p) and their gray levels satisfy the similarity criterion.
8-connectivity: two pixels p and q are 8-connected if q ∈ N8(p) and their gray levels satisfy the similarity criterion.
m-connectivity (mixed connectivity): two pixels p and q whose gray levels satisfy the similarity criterion are m-connected if
–q ∈ N4(p), or
–q ∈ ND(p) and N4(p) ∩ N4(q) contains no pixel satisfying the criterion.
Note: mixed connectivity eliminates the multiple-path connections that often occur in 8-connectivity.
Connectivity in a subset S of an image:
Two pixels are connected if there is a path between them that lies completely within S.
Connected component of S:
The set of all pixels in S that are connected to a given pixel in S.
Region of an image
Boundary, border, or contour of a region
Edge:
A path of one or more pixels that separates two regions of significantly different gray levels.
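Extracting the 4-connected component containing a given pixel can be sketched with a breadth-first search (a pure-Python illustration under the assumption of a binary image; the toy array S is not from the slides):

```python
from collections import deque

# Binary image as a list of rows; 1 = foreground.
S = [[1, 1, 0, 0],
     [0, 1, 0, 1],
     [0, 0, 0, 1],
     [1, 0, 1, 1]]

def connected_component(S, start):
    # BFS over 4-neighbours, staying inside the foreground set.
    rows, cols = len(S), len(S[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and S[nr][nc] == 1 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

print(len(connected_component(S, (0, 0))))   # 3: (0,0), (0,1), (1,1)
```

Swapping the four offsets for eight gives the 8-connected version.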
In MATLAB: bwconncomp(BW)
Demo
4-Connectivity
Distance Measures
Distance function: a function D of two points, p and q, that satisfies three criteria:
(a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q
(b) D(p, q) = D(q, p)
(c) D(p, z) ≤ D(p, q) + D(q, z)
Let p have coordinates (x, y) and q have coordinates (s, t).
The Euclidean distance:
De(p, q) = √((x − s)² + (y − t)²)
The city-block (Manhattan) distance:
D4(p, q) = |x − s| + |y − t|
Pixels with D4 ≤ 2 from the center:

        2
     2  1  2
  2  1  0  1  2
     2  1  2
        2

The chessboard distance:
D8(p, q) = max(|x − s|, |y − t|)
Pixels with D8 ≤ 2 from the center:

  2  2  2  2  2
  2  1  1  1  2
  2  1  0  1  2
  2  1  1  1  2
  2  2  2  2  2
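The three measures are one-liners in Python (illustrative sketch; the sample points are an assumption):

```python
import math

# The three distance measures for p = (x, y) and q = (s, t).
def d_euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):                      # city-block (Manhattan)
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):                      # chessboard
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_euclidean(p, q), d4(p, q), d8(p, q))   # 5.0 7 4
```

Note D8 ≤ De ≤ D4 for any pair of points, which the example illustrates.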
Distance of each gray pixel from the closest white pixel.
Which distance measures are used in the figure?
Arithmetic and logic operations are often applied as preprocessing steps in image analysis in order to combine images in various ways.
Arithmetic Operations
Addition
Subtraction
Division
Multiplication
Logic Operations
AND
OR
NOT
Let x be the old gray value, y the new gray value, and c a positive constant.
Addition: y=x+c
Subtraction: y = x - c
Multiplication: y = cx
Division: y = x/c
Complement: y= 255 - x
To ensure that the results are integers in the range [0, 255], the following operations should be performed:
• Rounding the result to obtain an integer
• Clipping the result by setting
• y = 255 if y > 255
• y=0 if y < 0
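Rounding and clipping can be sketched in a few lines of Python (an illustrative sketch; the helper name and test values are assumptions):

```python
# Pixel-wise addition of a constant, with rounding and clipping to [0, 255].
def add_constant(x, c):
    y = round(x + c)            # round to the nearest integer
    return max(0, min(255, y))  # clip to the valid intensity range

print(add_constant(200, 100))   # 255 (clipped from 300)
print(add_constant(10, -30))    # 0   (clipped from -20)
print(add_constant(100, 28))    # 128
```

The same clipping applies after subtraction, multiplication, and division.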
MATLAB functions
Addition: imadd(x,y)
Add two images or add a constant to an image
Subtraction: imsubtract(x,y)
Subtract two images or subtract a constant from an image
Multiplication: immultiply(x,y)
Multiply two images or multiply an image by a constant
Division: imdivide(x,y)
Divide two images or divide an image by a constant
Complement: imcomplement(x)
Subtraction of two images is often used to detect motion.
Consider the case where nothing has changed in a scene; the image resulting from subtraction of two sequential images is filled with zeros: a black image.
If something has moved in the scene, subtraction produces a nonzero result at the location of movement.
Medical imaging often uses this type of operation to allow the doctor to more readily see changes, which are helpful in diagnosis.
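Motion detection by subtraction can be sketched with two toy frames (illustrative Python; the frame contents and threshold value are assumptions):

```python
import numpy as np

# Two toy frames; a "moving object" appears in the second one.
frame1 = np.zeros((4, 4), dtype=np.int16)
frame2 = frame1.copy()
frame2[1:3, 1:3] = 120          # the object occupies a 2x2 region in frame 2

# Subtraction: nonzero only where the scene changed.
diff = np.abs(frame2 - frame1)

# Threshold to suppress small differences (noise, slight misalignment).
threshold = 50
motion_mask = diff > threshold

print(int(motion_mask.sum()))   # 4 changed pixels
```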
Subtraction of two images
Change detection in Scene
Image Subtraction a) Original scene, b) same scene later, c) subtraction of scene a from scene b, d) the
subtracted image with a threshold of 50, e) the subtracted image with a threshold of 100, f) the subtracted image
with a threshold of 150.
Theoretically, only image elements that have moved should show up in the resultant image. Due to imperfect alignment between
the two images, other artifacts appear. Additionally, if an object that has moved is similar in brightness to the background it will
cause problems – in this example the brightness of the car is similar to the grass.
[Figure: image with 128 added; image with 128 subtracted]
Noise is common:
g(x,y) = f(x,y) + n(x,y)
Digital subtraction angiography is commonly used to visualize vessels inside the body
Multiplication and division are used to adjust the brightness of an image.
The operation is done on a pixel-by-pixel basis, and the options are to multiply or divide an image by a constant value, or by another image.
Multiplication of the pixel value by a value greater than one will brighten the image (as will division by a value less than 1), and division by a factor greater than one will darken the image (as will multiplication by a value less than 1).
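Brightness scaling with clipping can be sketched as follows (illustrative Python; the toy image and scale factors are assumptions):

```python
import numpy as np

img = np.array([[40, 100], [160, 220]], dtype=np.float64)

def scale(img, c):
    # Multiply by c, then clip to the valid range [0, 255].
    return np.clip(img * c, 0, 255).astype(np.uint8)

brighter = scale(img, 2.0)     # multiply by a value > 1 to brighten
darker = scale(img, 0.5)       # multiply by a value < 1 (divide by > 1) to darken

print(brighter.tolist())   # [[80, 200], [255, 255]]  (320 and 440 clipped)
print(darker.tolist())     # [[20, 50], [80, 110]]
```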
a) original image, b) image divided by a value less than 1 to brighten, c) image divided by a value greater than 1 to darken.
Image multiplication and division =>Shade Correction
[Figures: image multiplied by 2 vs. divided by 2; multiplication vs. addition; division vs. subtraction]
The logic operations AND, OR, and NOT operate in a bit-wise fashion on pixel data.
Example: performing a logic AND on two images. Two corresponding pixel values are 111 (decimal) in one image and 88 (decimal) in the second image. The corresponding bit strings are:
111 = 01101111
88  = 01011000
AND = 01001000 = 72
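Working through the bit-wise AND of the decimal pixel values 111 and 88 in Python:

```python
# Bit-wise AND of two 8-bit pixel values.
a, b = 111, 88
print(format(a, '08b'))   # 01101111
print(format(b, '08b'))   # 01011000
print(a & b)              # 72, i.e. 01001000
```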
AND and OR can be used as a simple method to extract a region of interest (ROI) from an image.
For example:
A white mask ANDed with an image will allow only the portion of the image coincident with the mask to appear in the output image, with the background turned black.
A black mask ORed with an image will allow only the part of the image corresponding to the black mask to appear in the output image, but will turn the rest of the image white.
This process is called image masking.
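Masking can be sketched with NumPy's bit-wise operators (an illustrative sketch; the toy image and mask are assumptions, with 255 as the white mask value for 8-bit pixels):

```python
import numpy as np

img = np.array([[200, 150], [100, 50]], dtype=np.uint8)

# White mask (all bits 1) over the region of interest, black (0) elsewhere.
mask = np.array([[255, 0], [0, 0]], dtype=np.uint8)

roi = img & mask               # AND: keep the ROI, background turns black
inverse = img | (255 - mask)   # OR with the NOT of the mask: background turns white

print(roi.tolist())       # [[200, 0], [0, 0]]
print(inverse.tolist())   # [[200, 255], [255, 255]]
```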
a) Original image,
b) image mask for AND operation,
c) resulting image from (a) AND (b),
d) image mask for OR operation, created by performing a NOT on mask (b),
e) resulting image from (a) OR (d).
Mask application
Create the negative image in MATLAB:
x = imread('filename.ext');
y = imcomplement(x);
Reference: Digital Image Processing (3rd Ed.) by Gonzalez, Chapter 2 (some sections)
Various contents in this presentation have been taken from different books, lecture notes, and the web. These solely belong to their owners and are used here only to clarify various educational concepts. No copyright infringement is intended.