# 03 Image Fundamentals and Mathematics Basics in DIP

Digital Image Processing
Lecture # 03
Image Fundamentals and Mathematics Basics in DIP
Agenda
►Relationships b/w Pixels
►Distance Measures
►Linear vs. Non-Linear Operations
►Arithmetic Operations
►Spatial Operations
►Geometric Transformations
►Image Registration
►Image Transforms

Digital Image Processing Techniques Lecture # 3 2


Basic Relationships Between Pixels

► Neighborhood

► Adjacency

► Connectivity

► Paths

► Regions and boundaries



Basic Relationships Between Pixels

► Neighbors of a pixel p at coordinates (x,y)

 4-neighbors of p, denoted by N4(p):


(x-1, y), (x+1, y), (x,y-1), and (x, y+1).

 4 diagonal neighbors of p, denoted by ND(p):


(x-1, y-1), (x+1, y+1), (x+1,y-1), and (x-1, y+1).

 8 neighbors of p, denoted N8(p)


N8(p) = N4(p) U ND(p)
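These neighbor sets can be sketched in Python (helper names are illustrative, and coordinates are not clipped to the image bounds here):

```python
def n4(x, y):
    """4-neighbors of p = (x, y): horizontal and vertical neighbors."""
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(x, y):
    """Diagonal neighbors of p = (x, y)."""
    return {(x - 1, y - 1), (x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1)}

def n8(x, y):
    """8-neighbors: the union N8(p) = N4(p) U ND(p)."""
    return n4(x, y) | nd(x, y)
```

A production version would also discard coordinates that fall outside the image.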



Basic Relationships Between Pixels

► Adjacency
Let V be the set of intensity values

 4-adjacency: Two pixels p and q with values from V are


4-adjacent if q is in the set N4(p).

 8-adjacency: Two pixels p and q with values from V are


8-adjacent if q is in the set N8(p).



Basic Relationships Between Pixels

► Adjacency
Let V be the set of intensity values

 m-adjacency: Two pixels p and q with values from V are


m-adjacent if

(i) q is in the set N4(p), or

(ii) q is in the set ND(p) and the set N4(p) ∩ N4(q) has no pixels whose
values are from V.
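The m-adjacency test can be sketched as follows; `img` is a 0-indexed list of rows and `V` the set of intensity values, so the slides' pixel (1,3) becomes (0, 2). All names are illustrative:

```python
def _n4(p):
    x, y = p
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def _nd(p):
    x, y = p
    return {(x - 1, y - 1), (x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1)}

def _in_v(img, p, V):
    """True if p is inside the image and its value is in V."""
    x, y = p
    return 0 <= x < len(img) and 0 <= y < len(img[0]) and img[x][y] in V

def is_m_adjacent(img, p, q, V):
    """m-adjacency: q in N4(p), or q in ND(p) with no shared 4-neighbor in V."""
    if not (_in_v(img, p, V) and _in_v(img, q, V)):
        return False
    if q in _n4(p):
        return True
    if q in _nd(p):
        shared = _n4(p) & _n4(q)
        return not any(_in_v(img, s, V) for s in shared)
    return False
```

On the example grid used in the next slides, this reproduces the m-path behavior: the diagonal pair (1,3)-(2,2) is not m-adjacent because they share the 4-neighbor (1,2) with a value in V.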



Basic Relationships Between Pixels

► Path
 A (digital) path (or curve) from pixel p with coordinates (x0, y0) to pixel
q with coordinates (xn, yn) is a sequence of distinct pixels with
coordinates

(x0, y0), (x1, y1), …, (xn, yn)

where (xi, yi) and (xi−1, yi−1) are adjacent for 1 ≤ i ≤ n.

 Here n is the length of the path.

 If (x0, y0) = (xn, yn), the path is a closed path.

 We can define 4-, 8-, and m-paths based on the type of


adjacency used.





Examples: Adjacency and Path
V = {1, 2}

  (1,1) (1,2) (1,3)      0 1 1
  (2,1) (2,2) (2,3)      0 2 0
  (3,1) (3,2) (3,3)      0 0 1

The 8-path from (1,3) to (3,3):      The m-path from (1,3) to (3,3):
(i)  (1,3), (1,2), (2,2), (3,3)      (1,3), (1,2), (2,2), (3,3)
(ii) (1,3), (2,2), (3,3)



Basic Relationships Between Pixels

► Connected in S
Let S represent a subset of pixels in an image. Two pixels
p with coordinates (x0, y0) and q with coordinates (xn, yn)
are said to be connected in S if there exists a path

(x0, y0), (x1, y1), …, (xn, yn)

Where i, 0  i  n,(x i , yi ) 

S
Basic Relationships Between Pixels

Let S represent a subset of pixels in an image

► For every pixel p in S, the set of pixels in S that are connected to p is


called a connected component of S.

► If S has only one connected component, then S is called a connected set.

► We call R a region of the image if R is a connected set

► Two regions, Ri and Rj are said to be adjacent if their union forms a


connected set.
► Regions that are not adjacent are said to be disjoint.
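A minimal sketch of "connected in S" using breadth-first search over 4-adjacency (S is a set of pixel coordinates; names are illustrative):

```python
from collections import deque

def connected_in_s(S, p, q):
    """True if p and q are connected in S, i.e. a 4-path in S joins them."""
    if p not in S or q not in S:
        return False
    seen, frontier = {p}, deque([p])
    while frontier:
        x, y = frontier.popleft()
        for nb in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if nb in S and nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return q in seen
```

After the search, `seen` is exactly the connected component of S containing p; S is a connected set when that component equals S.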



Basic Relationships Between Pixels
Connected Component



Basic Relationships Between Pixels

► Boundary (or border)

 The boundary of the region R is the set of pixels in the region that
have one or more neighbors that are not in R.
 If R happens to be an entire image, then its boundary is defined as the
set of pixels in the first and last rows and columns of the image.

► Foreground and background

 An image contains K disjoint regions, Rk, k = 1, 2, …, K. Let Ru denote


the union of all the K regions, and let (Ru)c denote its complement.
All the points in Ru are called the foreground;
all the points in (Ru)c are called the background.



Boundary and Foreground/Background



Distance Measures

► Given pixels p, q, and z with coordinates (x, y), (s, t), and (u, v)
respectively, the distance function D has the following properties:

a. D(p, q) ≥ 0 [D(p, q) = 0 iff p = q]

b. D(p, q) = D(q, p)

c. D(p, z) ≤ D(p, q) + D(q, z)



Distance Measures

The following are the different Distance measures:

a. Euclidean Distance:

   De(p, q) = [(x−s)² + (y−t)²]^(1/2)

b. City Block Distance:

   D4(p, q) = |x−s| + |y−t|

c. Chess Board Distance:

   D8(p, q) = max(|x−s|, |y−t|)
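The three measures translate directly to code (illustrative helpers, with p = (x, y) and q = (s, t)):

```python
def d_e(p, q):
    """Euclidean distance."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d_4(p, q):
    """City-block distance."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):
    """Chessboard distance."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
```

Note D8 ≤ De ≤ D4 for any pair of pixels, which matches the shapes of their "unit circles" (square, circle, diamond).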



Introduction to Mathematical Operations in
DIP
► Array vs. Matrix Operation

Consider two 2×2 matrices:

      | a11 a12 |        | b11 b12 |
  A = | a21 a22 |    B = | b21 b22 |

The array product (elementwise), denoted by the array product operator .* :

           | a11 b11   a12 b12 |
  A .* B = | a21 b21   a22 b22 |

The matrix product, denoted by the matrix product operator * :

          | a11 b11 + a12 b21   a11 b12 + a12 b22 |
  A * B = | a21 b11 + a22 b21   a21 b12 + a22 b22 |
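In NumPy, the two products look like this (a small numeric sketch, not from the slides):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

array_product = A * B    # elementwise, like MATLAB's A .* B
matrix_product = A @ B   # matrix multiplication, like MATLAB's A * B
```

Image arithmetic in DIP is always the elementwise kind: pixels at the same coordinates are combined.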
Introduction to Mathematical Operations in
DIP
► Linear vs. Nonlinear Operation

Consider a general operator H that produces an output image g(x, y) from an
input image f(x, y):

  H[f(x, y)] = g(x, y)

H is said to be a linear operator if, for any two images fi and fj and any
two scalars ai and aj,

  H[ai fi(x, y) + aj fj(x, y)] = ai H[fi(x, y)] + aj H[fj(x, y)]

(additivity and homogeneity). H is said to be a nonlinear operator if it
does not meet the above qualification.
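A quick numeric check of this definition: the sum operator satisfies the linearity condition, while the max operator does not (sample values are made up for illustration):

```python
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 4], [5, 3]])
a1, a2 = 5, -2

def H_sum(f):
    return f.sum()   # a linear operator

def H_max(f):
    return f.max()   # a nonlinear operator

# Left- and right-hand sides of the linearity condition for each operator.
lhs_sum = H_sum(a1 * f1 + a2 * f2)
rhs_sum = a1 * H_sum(f1) + a2 * H_sum(f2)

lhs_max = H_max(a1 * f1 + a2 * f2)
rhs_max = a1 * H_max(f1) + a2 * H_max(f2)
```

For the sum, both sides agree; for the max they differ, so a single counterexample is enough to establish nonlinearity.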
Arithmetic Operations

► Arithmetic operations between images are array


operations. The four arithmetic operations are
denoted as

s(x,y) = f(x,y) + g(x,y)


d(x,y) = f(x,y) – g(x,y)
p(x,y) = f(x,y) × g(x,y)
v(x,y) = f(x,y) ÷ g(x,y)



Image Addition

►Used to create double exposures or


composites



Example: Addition of Noisy Images for Noise Reduction

► In astronomy, imaging under very low light levels


frequently causes sensor noise to render single images
virtually useless for analysis.

► To reduce the effect of this noise, astronomers use such sensors to
observe the same scene over long periods of time, capturing many frames.
Image averaging is then used to reduce the noise.



Example: Addition of Noisy Images for Noise Reduction

Noiseless image: f(x,y)


Noise: n(x,y) (at every pair of coordinates (x,y), the noise is uncorrelated
and has zero average value)
Corrupted image: g(x,y)
g(x,y) = f(x,y) + n(x,y)

Reducing the noise by averaging a set of K noisy images, {gi(x,y)}:

  g̅(x, y) = (1/K) Σ(i=1..K) gi(x, y)
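A small simulation of this idea, with made-up values (a constant image corrupted by zero-mean Gaussian noise of standard deviation 10); averaging K frames reduces the noise standard deviation by roughly 1/√K:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                      # "noiseless" image
K = 100
noisy = [f + rng.normal(0, 10, f.shape) for _ in range(K)]

g_bar = sum(noisy) / K                            # averaged image

single_err = np.abs(noisy[0] - f).mean()          # error of one frame
avg_err = np.abs(g_bar - f).mean()                # error after averaging
```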




Image Subtraction

► Image subtraction : useful for finding changes b/w two images



An Example of Image Subtraction: Mask Mode Radiography

Mask h(x,y): an X-ray image of a region of a patient’s body

Live images f(x,y): X-ray images captured at TV rates after injection of


the contrast medium

Enhanced detail g(x,y)

g(x,y) = f(x,y) - h(x,y)

The procedure gives a movie showing how the contrast medium


propagates through the various arteries in the area being observed.



An Example of Image Multiplication



Set and Logical Operations



Set and Logical Operations
► Let A be the elements of a gray-scale image
The elements of A are triplets of the form (x, y, z), where
x and y are spatial coordinates and z denotes the
intensity at the point (x, y).

A {(x, y, z) | z  f (x, y)}


► The complement of A is denoted Ac

Ac {(x, y, K  z) | (x, y, z) 
K A}
2k 1; k is the number of intensity bits used to represent
z



Set and Logical Operations
► The union of two gray-scale images (sets) A and
B is defined as the set

A  B  {max(a,b) | a  A,b  B}
z
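For 8-bit images, both set operations reduce to simple elementwise NumPy expressions (sample values are illustrative):

```python
import numpy as np

A = np.array([[10, 200], [30, 90]], dtype=np.uint8)
B = np.array([[50, 100], [30, 120]], dtype=np.uint8)

union = np.maximum(A, B)   # A ∪ B: pointwise maximum intensity
complement = 255 - A       # Ac: K - z with K = 2^8 - 1 = 255
```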



Spatial Operations

Spatial operations are performed directly


on the pixels of a given image.

Three categories:
Single-pixel operations
Neighborhood operations
Geometric spatial transformations
Spatial Operations

► Single-pixel operations
Alter the values of an image’s pixels based on the
intensity.

s  T (z)

e.g.,
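As a sketch, the negative transformation applied elementwise to a small 8-bit image (sample values are illustrative):

```python
import numpy as np

L = 256
z = np.array([[0, 64], [128, 255]], dtype=np.uint8)

# s = T(z) = (L - 1) - z; cast to int to avoid uint8 wraparound surprises.
s = (L - 1) - z.astype(int)
```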



Spatial Operations

► Neighborhood operations
Neighborhood processing generates a corresponding pixel at the same
coordinates in an output (processed) image.

The value of this pixel is determined by a specified operation involving
the pixels in the input image with coordinates in Sxy.




Spatial Operations
For example, suppose that the specified operation is to compute the average
value of the pixels in a rectangular neighborhood of size m × n centered on
(x, y). The locations of the pixels in this region constitute the set Sxy,
and the operation is applied pixel by pixel across the image.

Using a neighborhood of size 41 × 41, the net effect is to perform local
blurring in the original image. This type of process is used to eliminate
small details and render blobs corresponding to the largest regions of an
image.
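A direct (deliberately explicit, unoptimized) sketch of this neighborhood average; border pixels use only the part of the neighborhood that falls inside the image:

```python
import numpy as np

def local_average(img, m, n):
    """m x n neighborhood average of a 2-D array, clipped at the borders."""
    rows, cols = img.shape
    out = np.zeros_like(img, dtype=float)
    for x in range(rows):
        for y in range(cols):
            x0, x1 = max(0, x - m // 2), min(rows, x + m // 2 + 1)
            y0, y1 = max(0, y - n // 2), min(cols, y + n // 2 + 1)
            out[x, y] = img[x0:x1, y0:y1].mean()   # average over S_xy
    return out
```

In practice one would use a library convolution routine; the double loop is kept here to mirror the pixel-by-pixel description above.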
Blob Detection
BLOB stands for Binary Large OBject and refers to a group of connected
pixels in a binary image.
The purpose of BLOB extraction is to isolate the BLOBs (objects) in a
binary image.
A BLOB consists of a group of connected pixels. Whether or not two pixels
are connected is defined by the connectivity, that is, which pixels are
neighbors and which are not. The two most often applied types of
connectivity are 4-connectivity and 8-connectivity.
8-connectivity is more accurate than 4-connectivity, but 4-connectivity is
often applied since it requires fewer computations and can therefore
process the image faster.
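BLOB extraction by connected-component labeling can be sketched with a breadth-first search using 4-connectivity (function name and layout are illustrative):

```python
from collections import deque

def label_blobs(img):
    """Label 4-connected BLOBs in a binary image (list of 0/1 rows)."""
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for i in range(rows):
        for j in range(cols):
            if img[i][j] == 1 and labels[i][j] == 0:
                current += 1                      # start a new BLOB
                labels[i][j] = current
                q = deque([(i, j)])
                while q:
                    x, y = q.popleft()
                    for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1)):
                        if (0 <= nx < rows and 0 <= ny < cols
                                and img[nx][ny] == 1 and labels[nx][ny] == 0):
                            labels[nx][ny] = current
                            q.append(nx and (nx, ny) or (nx, ny))
    return labels, current
```

Switching to 8-connectivity only means extending the neighbor tuple with the four diagonal offsets.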
Geometric Spatial Transformations

► Geometric transformation (rubber-sheet transformation)

(Printing an image on a sheet of rubber and then stretching the sheet
according to a predefined set of rules.) It consists of two basic
operations:

— A spatial transformation of coordinates:

  (x, y) = T{(v, w)}

  For example, T{(v, w)} = (v/2, w/2) shrinks the original image to half
  its size in both spatial directions.

— Intensity interpolation that assigns intensity values to the spatially
  transformed pixels.

► Affine transform

                          | t11  t12  0 |
  [x  y  1] = [v  w  1] * | t21  t22  0 |
                          | t31  t32  1 |
Intensity Assignment

► Forward Mapping

(x, y) = T{(v, w)}


Scanning the pixels of the input image and, at each location (v, w),
computing the spatial location (x, y) of the corresponding pixel in the
output image.
It is possible that two or more input pixels are transformed to the same
location in the output image, raising the question of how to combine
multiple output values into a single output pixel.
In addition, it is possible that some output pixels are not assigned a
value at all.



Intensity Assignment

Gaps and overlaps.


One way to overcome these problems is to map pixel
rectangles in input space to output space quadrilaterals.
Intensity Assignment

 Inverse Mapping

Scans the output pixel locations and, at each location (x, y), computes the
corresponding location in the input image using

  (v, w) = T⁻¹{(x, y)}

It then interpolates among the nearest input pixels to determine the
intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward


mappings.

Matlab uses this approach.
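A sketch of inverse mapping for a 90° rotation; for this particular transform the inverse-mapped locations are exact integers, so nearest-neighbor lookup is exact and every output pixel is assigned exactly once, with no gaps or overlaps:

```python
import numpy as np

def rotate90_inverse(img):
    """Rotate a 2-D array 90 degrees counterclockwise via inverse mapping."""
    rows, cols = img.shape
    out = np.zeros((cols, rows), dtype=img.dtype)
    for x in range(cols):              # scan output rows
        for y in range(rows):          # scan output cols
            v, w = y, cols - 1 - x     # (v, w) = T^{-1}{(x, y)}
            out[x, y] = img[v, w]
    return out
```

For a rotation by an arbitrary angle, (v, w) would be non-integer and the lookup would be replaced by interpolation among the nearest input pixels.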


Example: Image Rotation and Intensity
Interpolation

The number of blocks providing the intensity transition from light to dark
is larger than the corresponding number of blocks in (d), giving a sharper
edge.
The identity transformation, and translations whose increments are integer
numbers, do not require interpolation.
Image Registration

► Important application of DIP used to align two or more images of the


same scene.
 Input and output images are available, but the transformation function
is unknown.
 Goal: estimate the transformation function and use it to register the
two images.
 The input image is the image we want to transform; the reference image
is the image against which we want to register the input.



Image Registration

For example, it may be of interest to align (register) two or more images
taken at approximately the same time but using different imaging systems,
such as an MRI scanner and a PET (positron emission tomography) scanner.
Or the images were taken at different times using the same instrument, such
as satellite images of a given location taken several days, months, or even
years apart.
In either case, combining the images or performing
quantitative analysis and comparisons between them requires
compensating for geometric distortions caused by differences in
viewing angle, distance and orientation; sensor resolution; shift
in object positions and other factors.
Image Registration

 One of the principal approaches for image


registration is to use tie points (also called
control points).
 The corresponding points are known precisely in
the input and output (reference) images.
 There are numerous ways to select tie points,
ranging from interactively selecting them to
applying algorithms that attempt to detect these
points automatically.
 In some applications, imaging systems have
physical artifacts (small metallic objects)
embedded in imaging sensors.
 These produce a set of known points (reseau marks) directly on all
images, which serve as guides for establishing tie points.
Image Registration

► A simple model based on bilinear approximation:

  x = c1 v + c2 w + c3 v w + c4
  y = c5 v + c6 w + c7 v w + c8

where (v, w) and (x, y) are the coordinates of tie points in the input and
reference images, respectively.

 Four pairs of tie points give eight equations, enough to solve for the
eight coefficients. Applying a single model to an entire image can be
inefficient, so the image may be subdivided and the model applied to each
quadrilateral subimage defined by its tie points.

 More complex models, such as polynomials fitted by least-squares
algorithms, are used depending on the severity of the geometric distortion.

 Intensity interpolation also needs to be performed.
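Given four tie-point pairs (made up for illustration), the eight coefficients can be solved by least squares, which for exactly four points is an exact solve:

```python
import numpy as np

# Hypothetical tie points: here the true mapping is x = 2v + 1, y = 2w + 2.
tie_in = np.array([(0, 0), (0, 10), (10, 0), (10, 10)], dtype=float)   # (v, w)
tie_ref = np.array([(1, 2), (1, 22), (21, 2), (21, 22)], dtype=float)  # (x, y)

# Each row of the design matrix is [v, w, v*w, 1].
M = np.column_stack([tie_in[:, 0], tie_in[:, 1],
                     tie_in[:, 0] * tie_in[:, 1], np.ones(4)])

cx, *_ = np.linalg.lstsq(M, tie_ref[:, 0], rcond=None)  # c1..c4
cy, *_ = np.linalg.lstsq(M, tie_ref[:, 1], rcond=None)  # c5..c8
```

With more than four tie points the same call gives the least-squares fit mentioned above.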



Image Transform

►A particularly important class of 2-D linear transforms,


denoted T(u, v)
M 1 N 1

T (u, v)    f (x, y)r(x, y, u, v)


x0 y 0

where f (x, y) is the input image,


r(x, y, u, v) is the forward transformationker nel,
variables u and v are the transform variables,
u = 0, 1, 2, ..., M-1 and v = 0, 1, ..., N-1.



Image Transform

► Given T(u, v), the original image f(x, y) can be recovered


using the inverse transformation of T(u, v).

M 1 N 1

f (x, y)   T (u, v)s(x, y, u, v)


u 0 v0

where s(x, y, u, v) is the inverse transformation


ker nel,
x = 0, 1, 2, ..., M-1 and y = 0, 1, ..., N-1.





Example: Image Denoising by Using DCT Transform



Forward Transform Kernel
M 1 N 1

T (u, v)    f (x, y)r(x, y, u, v)


x0 y 0

The kernel r(x, y, u, v) is said to be SEPERABLE if


r(x, y, u, v)  r1 (x, u)r2 ( y, v)

In addition, the kernel is said to be SYMMETRIC if


r1 (x, u) is functionally equal to r2 ( y, v), so that
r(x, y, u, v)  r1 (x, u)r1 ( y, u)
The Kernels for 2-D Fourier Transform

The forward kernel:

  r(x, y, u, v) = e^(−j2π(ux/M + vy/N))

where j = √(−1).

The inverse kernel:

  s(x, y, u, v) = (1/MN) e^(j2π(ux/M + vy/N))



2-D Fourier Transform

M 1 N 1
T (u, v)    f (x, y)e j 2 (ux/ M vy / N )
x0 y 0

M 1 N 1
1
f (x, y)  T (u, j 2 (ux/ M vy /
N)
 MN
v)e
u 0 v0

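The forward kernel can be checked against NumPy's FFT on a tiny image: building T(u, v) directly from the double sum should match np.fft.fft2, which uses the same kernel and normalization convention:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 4
f = rng.random((M, N))

# Direct evaluation of T(u, v) = sum_x sum_y f(x, y) e^{-j2*pi(ux/M + vy/N)}
T = np.zeros((M, N), dtype=complex)
for u in range(M):
    for v in range(N):
        for x in range(M):
            for y in range(N):
                T[u, v] += f[x, y] * np.exp(
                    -2j * np.pi * (u * x / M + v * y / N))
```

The quadruple loop is O(M²N²); the FFT computes the same result in O(MN log MN).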


Probabilistic Methods

Treating intensity values as random variables:

Let zi, i = 0, 1, 2, ..., L−1, denote the values of all possible intensities
in an M × N digital image. The probability, p(zk), of intensity level zk
occurring in a given image is estimated as

  p(zk) = nk / MN

where nk is the number of times that intensity zk occurs in the image, and

  Σ(k=0..L−1) p(zk) = 1

The mean (average) intensity is given by

  m = Σ(k=0..L−1) zk p(zk)



Probabilistic Methods

The variance of the intensities is given by

  σ² = Σ(k=0..L−1) (zk − m)² p(zk)

The variance is a measure of the spread of the values of z about the mean,
so it is a useful measure of image contrast.
In general, the nth moment of random variable z about the mean is defined as

  μn(z) = Σ(k=0..L−1) (zk − m)ⁿ p(zk)
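These estimates are easy to verify numerically: compute p(zk) from a histogram and check the mean and variance against NumPy's direct computations (the random test image is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
L = 256
img = rng.integers(0, L, size=(32, 32))

n_k = np.bincount(img.ravel(), minlength=L)   # occurrence counts n_k
p = n_k / img.size                            # p(z_k) = n_k / MN
z = np.arange(L)

m = (z * p).sum()                             # mean intensity
var = ((z - m) ** 2 * p).sum()                # variance
```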


Probabilistic Methods

Whereas the mean and variance have an immediately


obvious relationship to visual properties of an image, higher
order moments are more subtle.

A positive third moment indicates that intensities are skewed toward values
higher than the mean; a negative third moment indicates the opposite; a
zero third moment indicates that intensities are distributed equally on
both sides of the mean.

Useful for computational purposes, but do not tell much


about the appearance of an image in general
Example: Comparison of Standard Deviation Values

Three images with standard deviations σ = 14.3, σ = 31.6, and σ = 49.2,
respectively.
