
Unit 1. Digital Image Fundamentals

The document provides an overview of Digital Image Processing (DIP), covering its fundamentals, applications, and key concepts such as image acquisition, sampling, and quantization. It discusses various types of images, including monochrome, grayscale, color, and multispectral images, as well as the fundamental steps involved in image processing like enhancement, restoration, and segmentation. Additionally, it highlights the historical development of DIP and its applications across various fields such as medicine, law enforcement, and remote sensing.


DIGITAL IMAGE PROCESSING

21CSE251T

Unit 1: Digital Image Fundamentals
Topics to be covered

Origin of Digital Image Processing,
Fundamental Steps of Image Processing,
Components of Digital Image Processing System,
Applications of Digital Image Processing,
Elements of Visual Perception,
Image Acquisition Systems,
Image Sampling and Quantization,
Types of Images,
Some basic relationships: Neighbors, Connectivity, Regions and Boundaries, Distance Measures.
Digital Image Processing

"Digital image processing (DIP) refers to processing digital images by means of digital computers."

DIP encompasses a wide and varied field of applications where input and output are images.

Various image processing methods can be employed on these images to extract various attributes of the images.
Digital Image Processing

Image: "An image may be defined as a two-dimensional function g(x,y), where 'x' and 'y' are spatial (plane) coordinates."

Gray level or Intensity: "In an image represented by the two-dimensional function g(x,y), the amplitude of 'g' at any pair of coordinates (x,y) is known as the intensity or gray level of the image at that point."

Digital Image: "When x, y, and the amplitude values of 'g' are all finite and discrete quantities, the image is said to be a digital image."

Pixels: "A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, or pixels."
Origin of Digital Image Processing

Newspaper industry: The very first application of digital images was in the newspaper industry, where pictures were sent by submarine cable between two cities. Time and space complexity were high in this system.

Bartlane cable picture transmission system: In the 1920s, the invention of the Bartlane cable picture transmission system reduced the time required to send a picture.
Origin of Digital Image Processing

In the 1960s, the introduction of powerful digital computers reduced the storage complexity and met the computational requirements of digital imaging.

In the 1970s, digital image processing techniques entered fields such as medical imaging, earth resource observation, and astronomy.

Since the 1970s, the expansion of networking and communication bandwidth via the internet has created unprecedented opportunities for the continued growth of DIP.
Application of DIP

Office automation
Industrial automation
Bio-medical
Remote sensing
Scientific applications
Criminology
Astronomy and space applications
Meteorology
Information technology
Printing and graphic arts
Military applications
Application of DIP: Automatic Text Analysis

Possible steps of DIP:

Acquiring an image of the area containing the text
Preprocessing the image
Segmenting the individual characters
Describing the characters in a form suitable for computer processing
Finally, recognizing the individual characters
Application of DIP: Image Enhancement

One of the most common uses of DIP techniques: improving quality, removing noise, etc.
Application of DIP: Artistic Effects

Artistic effects are used to make images more visually appealing, to add special effects, and to make composite images.
Application of DIP: Medicine

Take a slice from an MRI scan of the heart and find boundaries between types of tissue.
◦ Image with gray levels representing tissue density
◦ Use a suitable filter to highlight edges

[Figures: Original MRI image of a dog heart; edge-detection result]
Application of DIP: GIS

Geographic Information Systems
◦ Digital image processing techniques are used extensively to manipulate satellite imagery
◦ Terrain classification
Application of DIP: PCB Inspection

Printed Circuit Board (PCB) inspection
◦ Machine inspection is used to determine that all components are present and that all solder joints are acceptable
◦ Both conventional imaging and X-ray imaging are used
Application of DIP: Law Enforcement

Image processing techniques are used extensively by law enforcement
◦ Number-plate recognition for speed cameras/automated toll systems
◦ Fingerprint recognition
◦ Enhancement of CCTV images
Application of DIP: Human Computer Interface (HCI)

Trying to make human-computer interfaces more natural
◦ Face recognition
◦ Gesture recognition
Fundamental steps in digital image processing

[Block diagram showing the stages, starting from the Problem Domain: Image Acquisition, Image Enhancement, Image Restoration, Morphological Processing, Segmentation, Representation & Description, Object Recognition, with Colour Image Processing and Image Compression supporting the pipeline]
Fundamental steps in digital image processing

Image Acquisition
◦ Process of acquiring an image
◦ Converting a simple image into a digitized image
◦ Also involves some preprocessing, such as scaling
Fundamental steps in digital image processing

Image Enhancement
◦ Highlighting certain features of interest in an image, suitable for a specific application
◦ It is a subjective process, because what constitutes a good enhancement result is based on human subjective preferences
Fundamental steps in digital image processing

Image Restoration
◦ Deals with improving the appearance of an image
◦ It is an objective process, because restoration techniques are based on mathematical or probabilistic models of image degradation
Fundamental steps in digital image processing

Morphological Processing
◦ Deals with tools for extracting image components that are useful in the representation and description of shape
Fundamental steps in digital image processing

Segmentation
◦ Partitioning an image into its constituent parts or objects
◦ The more accurate the segmentation, the more successful the subsequent recognition
Fundamental steps in digital image processing

Representation and Description
◦ Conversion of data into a form suitable for computer processing
◦ Boundary representation: when the focus is on external shape characteristics, such as corners
◦ Region representation: when the focus is on internal properties, such as texture

Object Recognition
◦ Assigning a label to an object based on its descriptors
Fundamental steps in digital image processing

Image Compression
◦ Reducing the storage required
◦ Reducing the bandwidth required for transmission

Color Image Processing
◦ Extracting features of interest in an image
Elements of an Image Processing System

[General-purpose image processing system: a Sensor feeds a Digitizer, which feeds a Computer equipped with digital image processing hardware & software; the computer connects to Mass Storage and a Display]
Type of Images

1. Monochrome image: Each pixel is stored as a single bit (0 or 1). Here '0' represents black while '1' represents white. These images are also known as binary images / bit-mapped images.
Type of Images

2. Gray-scale image: Each pixel is usually stored as a byte (8 bits). Pixel values range from 0 (black) to 255, i.e. 2^8 − 1 (white).
Type of Images

3. Color image: Color images are based on the fact that a variety of colors can be generated by mixing the 3 primary colors, i.e. Red, Green, and Blue. Each color channel takes 1 byte per pixel, so a color pixel is 24 bits. A color image can be represented as:

g(x,y) = [ gR(x,y), gG(x,y), gB(x,y) ]

A color image can be converted to a gray image: X = (R + G + B) / 3
29
Type of Images

4. Half-tone images: A halftone, or halftone image, is an image composed of discrete dots rather than continuous tones. When viewed from a distance, the dots blur together, creating the illusion of continuous lines and shapes.

"The technique of achieving an illusion of gray levels from only black and white levels is called halftoning."
Type of Images

5. Multispectral image: A multispectral image is one that captures image data within specific wavelength ranges across the electromagnetic spectrum. A multispectral image is a vector-valued function with a number of components equal to the number of spectral bands, represented by

[ g1(x,y), g2(x,y), …, gn(x,y) ] at each (x,y)

Multispectral imaging extracts additional information that the human eye fails to capture with its visible receptors for red, green, and blue.
Type of Images

6. Hyper-spectral image: Hyperspectral imaging, like other spectral imaging, collects and processes information from across the electromagnetic spectrum. The goal of hyperspectral imaging is to obtain the spectrum for each pixel in the image of a scene, with the purpose of finding objects, identifying materials, or detecting processes.
Elements of Visual Perception
Structure of the human eye
• Average diameter: 20 mm
• 3 membranes enclose the eye:
– Cornea & sclera
– Choroid
– Retina

Image formation in the human eye
◦ The retinal image is focused primarily on the region of the fovea

Brightness adaptation and discrimination

Image sensing and acquisition

A simple image formation model
Image Formation Model

2-dimensional image representation: An image can be represented by a 2-dimensional function of the form f(x,y).

When an image is generated by a physical process, its values are proportional to the energy radiated by the physical source. As a consequence, f(x,y) must be finite and nonzero. Mathematically,

0 < f(x,y) < ∞
Image Formation Model

Characteristics of the 2-dimensional function f(x,y):

1. The amount of source illumination incident on the scene being viewed is called the illumination component, denoted by i(x,y).
2. The amount of illumination reflected by the objects in the scene is called the reflectance component, denoted by r(x,y).

These two components combine as a product to yield f(x,y):

f(x,y) = i(x,y) · r(x,y)

where 0 < i(x,y) < ∞ and 0 < r(x,y) < 1
(0 = total absorption, 1 = total reflectance)
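The product model can be checked numerically (a minimal sketch; the illumination and reflectance fields below are made-up values within the stated ranges):

```python
import numpy as np

# f(x,y) = i(x,y) * r(x,y): illumination times reflectance, element-wise.
i = np.full((2, 2), 100.0)          # uniform illumination, 0 < i < inf
r = np.array([[0.1, 0.5],
              [0.9, 0.2]])          # per-pixel reflectance, 0 < r < 1

f = i * r
print(f.tolist())
```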
Image Formation Model

At pixel coordinates (x,y), the value L = f(x,y) is the gray level (or, for a color image, a vector of channel values), and it lies in the range

Lmin ≤ L ≤ Lmax

e.g. for an 8-bit grayscale image: 0 ≤ L ≤ 2^8 − 1 = 255
Image Sampling & Quantization

The output of most sensors is a continuous voltage waveform whose amplitude and spatial behavior are related to the physical phenomenon (e.g. brightness) being sensed.

For an image to be processed by a computer, it must be represented by an appropriate discrete data structure (e.g. a matrix).

So, to create a digital image, continuous sensed data must be converted into digital form.

Two processes convert a continuous analog image into a digital image: Sampling & Quantization.
Image Sampling & Quantization Continued…

Discretization: The process in which signals or data samples are considered at regular intervals. It transfers continuous functions, models, variables, and equations into discrete counterparts, usually as a first step toward making them suitable for numerical evaluation and implementation on digital computers.

Sampling: The discretization of image data in spatial coordinates.

Quantization: The discretization of image intensity (gray-level) values.
Image Sampling & Quantization Continued…

Figure (b) is a plot of amplitude values of the continuous image along the line segment AB in Figure (a).

The random variations are due to image noise.

Samples are shown as small white squares superimposed on the function. The set of these discrete locations gives the sampled function.

The intensity values are then converted (quantized) into discrete quantities: the continuous intensity levels are quantized by assigning one of eight values to each sample.

Repeating this process line by line from the top of the image generates a 2D digital image.
Image Sampling & Quantization Continued…

Sampling: digitizing the 2-dimensional spatial coordinate values.
Quantization: digitizing the amplitude values (brightness levels).

Accuracy in quantization is highly dependent on the noise content of the sampled signal.
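The two steps can be sketched in Python (a minimal sketch; the continuous function, grid size N, and bit depth k are assumptions chosen for illustration):

```python
import numpy as np

# Minimal sketch of sampling then quantization of a "continuous" image.

def f(x, y):
    # a smooth continuous image with values in [0, 1]
    return 0.25 * (np.sin(x) + 1.0) * (np.cos(y) + 1.0)

# Sampling: evaluate f only at an N x N grid of spatial coordinates.
N = 4
xs = np.linspace(0.0, np.pi, N)
ys = np.linspace(0.0, np.pi, N)
sampled = f(xs[:, None], ys[None, :])      # shape (N, N)

# Quantization: map each continuous amplitude to one of L = 2**k levels.
k = 3
L = 2 ** k
quantized = np.floor(sampled * (L - 1) + 0.5).astype(np.uint8)

print(quantized.shape, int(quantized.max()))
```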
Image Sampled w.r.t. 'x' and 'y' Coordinates

N – number of samples along the x-axis (i.e. the number of columns in the matrix).
M – number of samples along the y-axis (i.e. the number of rows in the matrix).

Usually, M & N are powers of 2:

M = 2^m and N = 2^n

The number of gray-level values, L, is also usually taken as a power of 2:

L = 2^k

Question: What is the number of bits required to store a digitized image?
Image Sampled w.r.t. 'x' and 'y' Coordinates Continued…

Question: What is the number of bits required to store a digitized image?
Answer:

b = M × N × k

If M = N, then:

b = N² × k
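A quick worked example of b = M × N × k (the image size and bit depth below are chosen for illustration):

```python
# Storage for a digitized image: b = M * N * k bits.
# Example values: a 1024 x 1024 image with 256 gray levels (k = 8 bits/pixel).
M, N, k = 1024, 1024, 8

bits = M * N * k
bytes_needed = bits // 8

print(bits, bytes_needed)   # 8388608 bits, 1048576 bytes (1 MiB)
```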
k-bit image

An image having 2^k gray levels is called a k-bit image.

An image having 256 gray levels is called an 8-bit image.

b = N² × k
Representing Digital Images

Image Coordinate System
Resolution of an Image

Spatial Resolution:
– The smallest discernible detail in an image.
– Can be expressed as dots (pixels) per unit distance.

Gray-level Resolution:
– The smallest discernible change in the gray level of an image (the number of bits needed to quantize intensity).

e.g. in an L-level digital image of size M × N:

Spatial resolution – M × N pixels
Gray-level resolution – L levels
Fixed Gray-level; Varying Number of Samples

(a) & (b): virtually impossible to tell apart; the level of detail lost is too fine.

(c): a very slight fine checkerboard pattern in the borders & some graininess throughout the image.

(d), (e), (f): these effects are much more visible.
Checkerboard Effect and False Contouring

Checkerboard Effect:
When the number of pixels in an image is reduced while keeping the number of gray levels constant, fine checkerboard patterns appear at the edges of the image. This effect is called the checkerboard effect.

False Contouring:
When the number of gray levels in the image is low, the foreground details of the image merge with the background details, causing ridge-like structures. This degradation phenomenon is known as false contouring.

[Figures: checkerboard effect; false contouring]
Varying Gray-level; Fixed Number of Samples
Basic Relationships between Pixels

Neighbors of a pixel
Adjacency between pixels
Connectivity between pixels
Regions
Boundaries
Neighbors of a Pixel
• There are three kinds of neighbors of a pixel p at (x, y):

1. N4(p), the 4-neighbors: the set of horizontal and vertical neighbors.

(x−1, y), (x+1, y), (x, y−1), (x, y+1)

2. ND(p), the diagonal neighbors: the set of 4 diagonal neighbors.

(x−1, y−1), (x+1, y−1), (x−1, y+1), (x+1, y+1)

3. N8(p), the 8-neighbors: the union of the 4-neighbors and the diagonal neighbors.

[Figures: the 4-neighborhood, diagonal neighborhood, and 8-neighborhood of the pixel marked X]
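The three neighborhoods can be written directly from the coordinate lists above (a minimal sketch; clipping to the image boundary is left out):

```python
# N4, ND, and N8 neighborhoods of a pixel p = (x, y).

def n4(x, y):
    """Horizontal and vertical neighbors."""
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(x, y):
    """The four diagonal neighbors."""
    return {(x - 1, y - 1), (x + 1, y - 1), (x - 1, y + 1), (x + 1, y + 1)}

def n8(x, y):
    """Union of the 4-neighbors and the diagonal neighbors."""
    return n4(x, y) | nd(x, y)

print(len(n8(1, 1)))   # 8
```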
Neighbors of a Pixel Continued..

N4(p) ∪ ND(p) = N8(p)

• If pixel p at (x, y) lies on the boundary of the image, then some of the neighbors of p will lie outside the digital image.
Adjacency

• Two pixels that are neighbors and have the same gray level (or satisfy some other specified similarity criterion) are adjacent.

• V = the set of gray-level values used to define adjacency.

• In a gray-scale image, V typically contains more than one element; e.g. in an image with 256 gray levels, V could be any subset of these 256 values.

• In a binary image, V = {1} for finding the adjacency of pixels with value 1.
Type of Adjacency

1. 4-adjacency: Two pixels p & q with values from V are 4-adjacent if q is in the set N4(p).

2. 8-adjacency: Two pixels p & q with values from V are 8-adjacent if q is in the set N8(p).

3. m-adjacency (mixed adjacency): Two pixels p & q with values from V are m-adjacent if

a) q is in the set N4(p), or
b) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is used to eliminate ambiguities that may arise when 8-adjacency is used.
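The three adjacency tests can be sketched for the binary case V = {1} (a minimal sketch; the helper names, the image boundary handling, and the sample image are assumptions):

```python
# Adjacency tests for pixels p and q in a small binary image, V = {1}.
# `img` is a list of rows indexed as img[x][y].

def n4(p):
    x, y = p
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(p):
    x, y = p
    return {(x - 1, y - 1), (x + 1, y - 1), (x - 1, y + 1), (x + 1, y + 1)}

def in_v(img, p, V={1}):
    # inside the image and with a value from V
    x, y = p
    return 0 <= x < len(img) and 0 <= y < len(img[0]) and img[x][y] in V

def adjacent_4(img, p, q):
    return in_v(img, p) and in_v(img, q) and q in n4(p)

def adjacent_8(img, p, q):
    return in_v(img, p) and in_v(img, q) and q in (n4(p) | nd(p))

def adjacent_m(img, p, q):
    if not (in_v(img, p) and in_v(img, q)):
        return False
    if q in n4(p):
        return True
    # diagonal case: allowed only if p and q share no common 4-neighbor in V
    common = n4(p) & n4(q)
    return q in nd(p) and not any(in_v(img, r) for r in common)

img = [[0, 1, 1],
       [0, 1, 0],
       [0, 0, 1]]

# (1,1) and (2,2) are diagonal neighbors whose common 4-neighbors are both 0,
# so they are m-adjacent; (1,1) and (0,2) are not, because (0,1) is in V.
print(adjacent_m(img, (1, 1), (2, 2)), adjacent_m(img, (1, 1), (0, 2)))
```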
Problem of Adjacency
• Question: Find the 8-adjacency & m-adjacency of the pixel in the centre.
Note: V = {1}

Problem of Adjacency Continued..

• Solution:

Fig (b) shows the ambiguity in 8-adjacency, where V = {1}.
Path (Digital Path or Curve)

• A path from pixel p with coordinates (x0, y0) to pixel q with coordinates (xn, yn) is a sequence of pixels

(x0, y0), (x1, y1), …, (xn, yn)

where pixels (xi, yi) & (xi−1, yi−1) are adjacent for 1 ≤ i ≤ n.

n is the length of the path.

If (x0, y0) = (xn, yn), then the path is closed.
62
Problem of Adjacency Continued..

• Solution:

Path can be 4-, 8- or m-path, depending upon the type of adjacency specified.

Path between NE & SE pixels 8-path shown in (b) & m-path shown in
(c)
Note: No ambiguity in m-path
63
Region
R – a subset of pixels in an image.

R is a region of an image if R is a connected set.

Ri & Rj are adjacent if their union forms a connected set; otherwise the regions are said to be disjoint.

1 1 0 0 1 1
1 1 0 1 0 1
1 1 1 0 1 1

• Two image subsets S1 & S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2 (adjacent means 4-, 8-, or m-adjacent).
Boundary
The boundary (aka border or contour) of a region R is the set of pixels in R that have one or more neighbors that are not in R.

0 0 0 0 0
0 1 1 0 0
0 1 1 0 0        Is it a boundary pixel?
0 1 1 1 0
0 1 1 1 0
0 0 0 0 0
Another Definition of Boundary
The boundary of a region R is the set of pixels that are adjacent to pixels in the complement of R.

These definitions refer to the inner border of the region, as opposed to its outer border, which is the corresponding border in the background.

0 0 0 0 0
0 0 1 0 0
0 0 1 0 0
0 0 1 0 0
0 0 1 0 0
0 0 0 0 0
Edge vs. Boundary

The boundary of a region forms a closed path; it is a global concept.

Edge – a path of one or more pixels that separates two regions of significantly different gray levels.

Edges are formed from pixels with derivative values that exceed a preset threshold. An edge is thus a local concept, based on a measure of intensity-level discontinuity at a point.
Distance Measures between Pixels

For pixels p, q, z with coordinates (x,y), (s,t), (v,w) respectively, D is a distance function (or metric) if

1. D(p,q) ≥ 0, with D(p,q) = 0 iff p = q,
2. D(p,q) = D(q,p), and
3. D(p,z) ≤ D(p,q) + D(q,z)
Distance Measures between Pixels Continued…

Let pixel p have coordinates (x, y) and pixel q have coordinates (s, t).

1. Euclidean distance between p & q:

De(p, q) = [ (x − s)² + (y − t)² ]^(1/2)

2. City-block (Manhattan) distance between p & q:

D4(p, q) = |x − s| + |y − t|

3. Chessboard distance between p & q:

D8(p, q) = max(|x − s|, |y − t|)
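The three distance measures translate directly into code (a minimal sketch; the sample points are chosen for illustration):

```python
import math

# The three pixel distance measures, for p = (x, y) and q = (s, t).

def d_e(p, q):
    """Euclidean distance."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_4(p, q):
    """City-block (Manhattan) distance."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):
    """Chessboard distance."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_e(p, q), d_4(p, q), d_8(p, q))   # 5.0 7 4
```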
Distance Measure Points

The D4 and D8 distances between two points are independent of any path that might exist between them.

The Dm distance is defined as the length of the shortest m-path between the points.
Numerical Problems on Adjacency

Question 1: Consider the two image subsets, S1 and S2, shown in the following figure. For V = {1}, determine whether these two subsets are
(a) 4-adjacent,
(b) 8-adjacent, or
(c) m-adjacent.
Numerical Problems on Adjacency

Solution 1:
(a) S1 and S2 are not 4-connected, because q is not in the set N4(p);

(b) S1 and S2 are 8-connected, because q is in the set N8(p);

(c) S1 and S2 are m-connected, because
(i) q is in ND(p), and
(ii) the set N4(p) ∩ N4(q) is empty.
Numerical Problems on Adjacency

Question 2: Consider the image segment shown.

Let V = {0, 1} and compute the lengths of the shortest 4-, 8-, and m-paths between p and q. If a particular path does not exist between these two points, explain why.
Numerical Problems on Adjacency

Solution 2:
When V = {0, 1}, a 4-path does not exist between p and q, because it is impossible to get from p to q by traveling along points that are both 4-adjacent and have values from V. Fig. (a) shows this condition; it is not possible to reach q.
The shortest 8-path is shown in Fig. (b); its length is 4.
The length of the shortest m-path (shown dashed) is 5.
Both of these shortest paths are unique in this case.
Numerical Problems on Adjacency

Question 3: Consider the image segment shown.

Let V = {1, 2} and compute the lengths of the shortest 4-, 8-, and m-paths between p and q. If a particular path does not exist between these two points, explain why.
Mathematical Tools used in Digital Image Processing

Array vs. Matrix Operations

When we multiply two images, we usually carry out array (element-wise) multiplication, not matrix multiplication.
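The difference between the two products is easy to see on small arrays (a minimal sketch; the sample values are made up):

```python
import numpy as np

# Array (element-wise) product vs. matrix product of two small "images".
a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])

array_product = a * b        # element-wise: what image multiplication means
matrix_product = a @ b       # linear-algebra product: a different operation

print(array_product.tolist())   # [[5, 12], [21, 32]]
print(matrix_product.tolist())  # [[19, 22], [43, 50]]
```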
Linear vs Nonlinear Operations

An operator H is said to be linear if

H[a1 f1(x,y) + a2 f2(x,y)] = a1 H[f1(x,y)] + a2 H[f2(x,y)]
Is the sum operator, Σ, linear?

Σ [a1 f1(x,y) + a2 f2(x,y)]
= Σ a1 f1(x,y) + Σ a2 f2(x,y)    {distributive property}
= a1 Σ f1(x,y) + a2 Σ f2(x,y)

So the sum operator satisfies the linearity condition: it is linear.
Is the max operator, whose function is to find the maximum value of the pixels in an image, linear?
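A quick numeric check (a hedged sketch; the 2×2 sample arrays and coefficients are made up) shows that max fails the linearity test:

```python
import numpy as np

# Test H = max against the definition H[a1 f1 + a2 f2] == a1 H[f1] + a2 H[f2].
f1 = np.array([[0, 2],
               [2, 3]])
f2 = np.array([[6, 5],
               [4, 7]])
a1, a2 = 1, -1

lhs = np.max(a1 * f1 + a2 * f2)            # max of the combination
rhs = a1 * np.max(f1) + a2 * np.max(f2)    # combination of the maxes

print(lhs, rhs)   # -2 and -4: not equal, so max is not linear
```

One counterexample is enough: since lhs ≠ rhs for these inputs, the max operator is nonlinear.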
Arithmetic Operations

Arithmetic operations are performed on the pixels of two or more images.

Let p and q be the pixel values at location (x,y) in the first and second images respectively:

– Addition: p + q          { s(x,y) = f(x,y) + g(x,y) }
– Subtraction: p − q       { s(x,y) = f(x,y) − g(x,y) }
– Multiplication: p · q    { s(x,y) = f(x,y) · g(x,y) }
– Division: p / q          { s(x,y) = f(x,y) / g(x,y) }
Image Subtraction

g(x, y) = f(x, y) − h(x, y)

Obtained by computing the difference between all pairs of corresponding pixels from images f & h.

Subtraction enhances the differences between the two images.
Mask Mode Radiography
One of the most successful commercial applications of image subtraction.

Example: imaging blood vessels and arteries in a body. The bloodstream is injected with an iodine medium, and X-ray images are taken before and after the injection.

– f(x, y): image after injecting the medium
– h(x, y): image before injecting the medium (the mask)

• The difference of the 2 images yields a clear display of the blood-flow paths.
(a) Mask image
(b) Live image taken after injection of iodine into the bloodstream
(c) Difference between image (a) and image (b)
(d) Image after enhancing the contrast of image (c), clearly showing the propagation of the medium through the blood vessels
Image Averaging

• A noisy image can be modeled as g(x, y) = f(x, y) + η(x, y), where f(x, y) is the original image and η(x, y) is additive noise.

• Averaging M different noisy images gi(x, y):

ḡ(x, y) = (1/M) Σ gi(x, y),  i = 1, …, M

• As M increases, the variability of the pixel values at each location decreases.

• This means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
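A small simulation illustrates the effect (a sketch; the "true" image and the Gaussian noise parameters are made-up assumptions):

```python
import numpy as np

# Average M noisy observations g_i = f + noise and watch the error shrink.
rng = np.random.default_rng(0)
f = np.full((8, 8), 100.0)                 # made-up "true" image

def noisy():
    return f + rng.normal(0.0, 10.0, size=f.shape)   # zero-mean Gaussian noise

err_single = np.abs(noisy() - f).mean()                    # one noisy frame
g_bar = np.mean([noisy() for _ in range(100)], axis=0)     # average of M = 100
err_averaged = np.abs(g_bar - f).mean()

print(err_averaged < err_single)   # averaging moves us closer to f
```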
An important application of image averaging is in the field of astronomy.
Image Multiplication or Division

Shading Correction
An important application of image division.
An imaging sensor produces an image g(x, y), where

g(x, y) = f(x, y) · h(x, y)

with f(x, y) the perfect image and h(x, y) the shading function.

By dividing g(x, y) by h(x, y), we can recover the perfect image.
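The correction can be verified numerically (a minimal sketch; the perfect image and shading function below are made up for illustration):

```python
import numpy as np

# Shading correction by division: g = f * h, so f = g / h.
f = np.array([[10.0, 20.0],
              [30.0, 40.0]])       # "perfect" image
h = np.array([[1.0, 0.5],
              [0.5, 0.25]])        # shading function, nonzero everywhere

g = f * h                          # what the sensor delivers
recovered = g / h                  # undo the shading

print(bool(np.allclose(recovered, f)))   # True
```

In practice h(x, y) is estimated, e.g. by imaging a uniformly gray target with the same sensor.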
Region of Interest
An important application of image multiplication: multiplying an image by a binary mask image keeps the region of interest and sets everything else to zero.

Set Operations

Logical Operations

Note: here
– Black represents binary 0s
– White represents binary 1s
Geometric Spatial Transformations
Geometric transformation consists of two basic operations:

• Spatial transformation of coordinates

• Assigning intensity values to the spatially transformed pixels; this method is intensity interpolation

Affine Transformations
One of the most commonly used spatial coordinate transformations.

This transformation can scale, rotate, translate, or shear a set of coordinates, depending on the values chosen for the elements of the matrix T.

Translation
• Translate(a, b): (x, y) → (x + a, y + b)

(x, y) → (x', y'), where (x', y') = (x + h, y + k)
The relationship between (x, y) and (x', y') can be put into matrix form.
If a point (x, y) is rotated by an angle a about the coordinate origin to become a new point (x', y'), the relationship can be described as follows:

x' = x·cos a − y·sin a
y' = x·sin a + y·cos a
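The rotation (and a subsequent translation) can be written as matrices acting on homogeneous coordinates [x, y, 1] (a sketch; the angle and offsets are made-up values):

```python
import numpy as np

# Rotate the point (1, 0) by 90 degrees about the origin, then translate by (h, k).
a = np.pi / 2
h, k = 2.0, 3.0

R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
T = np.array([[1.0, 0.0, h],
              [0.0, 1.0, k],
              [0.0, 0.0, 1.0]])

p = np.array([1.0, 0.0, 1.0])   # homogeneous coordinates of (1, 0)
p_new = T @ R @ p               # rotate first, then translate

print(np.round(p_new[:2], 6).tolist())   # [2.0, 4.0]
```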
Translation & Rotation can be combined

This rotates the point (x, y) by an angle a about the coordinate origin and translates the rotated result in the direction of (h, k).
Scaling

Scaling can be applied to all axes, each with a different scaling factor. For example, if the x-, y- and z-axes are scaled with scaling factors p, q and r, respectively, the transformation matrix (in homogeneous coordinates) is the diagonal matrix diag(p, q, r, 1).
How far a direction is pushed is determined by a shearing factor. On the xy-plane, one can push in the x-direction, positive or negative, and keep the y-direction unchanged; or, one can push in the y-direction and keep the x-direction fixed. A shear transformation in the x-direction with shearing factor a maps (x, y) to (x + a·y, y).
Transformations relocate pixels on an image to new locations.

Intensity values need to be assigned to those locations: intensity interpolation is required.
Thank You!
