
DIGITAL IMAGE PROCESSING

What Is Digital Image Processing?

 The field of digital image processing refers to processing digital images by means of a digital
computer.

Example application areas:

- Filtering
- Image Enhancement
- Image Deblurring
- Medical Imaging
- Remote Sensing
- Weather Forecasting
- Atmospheric Study
- Astronomy
- Machine Vision Applications: Automated Inspection, Texture Processing
- Video Sequence Processing: Detection and Tracking of Moving Targets
The Origins of Digital Image Processing

• One of the first applications of digital images was in the newspaper industry, when pictures

were first sent by submarine cable between London and New York.

• Specialized printing equipment coded pictures for cable transmission and then reconstructed

them at the receiving end.

• From 1921, a printing technique based on photographic reproduction, made from tapes perforated at the telegraph receiving terminal, was used.

• The figure shows an image obtained using this method.

• The improvements were in tonal quality and in resolution.
• The early Bartlane systems were capable of coding images in five distinct levels of gray.

• This capability was increased to 15 levels in 1929.

• The figure is typical of the type of images that could be obtained using the 15-tone equipment.
 Improvement of processing techniques continued for the next 35 years.

 In 1964, computer processing techniques were used at the Jet Propulsion Laboratory to improve pictures of the moon transmitted by Ranger 7.

 This was the basis of modern image processing techniques.

• The figure shows the first image of the moon taken by Ranger 7.
What is an image?
 An image can be defined as a 2D signal that varies over the spatial coordinates x and
y, and can be written mathematically as f (x, y).

Fig. 1. Digital Image Representation. a) Small digital image. b) Equivalent image content in matrix form.
What is a Digital Image?
 An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.

 When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.
Image Representation
Picture elements, Image elements, pels, and pixels

• A digital image is composed of a finite number of elements, each of which has a particular location and value.

• These elements are referred to as picture elements, image elements, pels, and pixels.

• Pixel is the term most widely used to denote the elements of a digital image.
Fundamentals of Image Processing
UNIT-1
 Introduction
 Fundamental steps in digital image processing
 Components of an Image Processing System
 Image sampling and quantization
 Basic relationship between pixels
 Introduction to Fourier Transform and DFT
 Properties of 2D FT
 FFT
ANALOG VS DIGITAL IMAGE

Types of images
 Binary images: As its name suggests, a binary image contains only two pixel values, 0 and 1, where 0 refers to black and 1 refers to white. A binary image is referred to as a 1-bit image because it takes only 1 binary digit to represent each pixel.

 Very useful for industry applications.

 Obtained from gray-level images.

Fig. A binary image


Types of images (contd.)
 Gray-scale images: Gray-scale images are referred to as monochrome (one-color) images.
 They contain gray-level information and no color information.
 Grayscale (8-bit) images are composed of 256 unique shades of gray.

Fig. Example of Gray scale image
Types of images (contd.)
 Color image (24-bit): Formed by mixing the three primary colors, red, green, and blue, in proper proportions.
 24-bit color images consist of three separate 8-bit channels.
 Channel values lie in the range 0 to 255.

Fig. Example of Color image
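The three image types above can be sketched with NumPy arrays; a minimal illustration (the pixel values and array sizes are made up for the example):

```python
import numpy as np

# A hypothetical 4x4 8-bit grayscale image (values 0-255).
gray = np.array([[ 10,  80, 200, 255],
                 [ 30, 120, 180, 240],
                 [  5,  60, 140, 220],
                 [  0,  90, 160, 250]], dtype=np.uint8)

# Binary image: threshold the gray levels at 128 (0 = black, 1 = white).
binary = (gray >= 128).astype(np.uint8)

# 24-bit color image: three separate 8-bit channels (here R = G = B,
# so the result is a gray image stored in color form).
color = np.stack([gray, gray, gray], axis=-1)

print(binary)
print(color.shape)   # (4, 4, 3)
print(color.dtype)   # uint8
```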
Applications of Image Processing

1. Astronomy
2. Medical
3. Biometrics
4. Vehicle Number Plate Recognition
5. Geographic Information Systems
Fundamental steps in Digital Image Processing

Fig. Fundamental steps in digital image processing: starting from the problem domain — image acquisition, image enhancement, image restoration, colour image processing, image compression, morphological processing, segmentation, representation & description, and object recognition.
Image Acquisition:

• This step aims to obtain the digital image of the object; it involves retrieving the image from a source, usually hardware-based.

• Preprocessing techniques, such as scaling, can be used to improve the image.
Image Enhancement:

 It is the process of bringing out and highlighting certain features of interest in an image, such as sharpening, brightness and contrast adjustment, and removal of noise.

• Achieved by applying various filters.
Image Restoration:

 It is the process of improving the appearance of an image; image restoration can be done using mathematical or probabilistic models.
Morphological Processing:

 It is the set of processing operations for analyzing images based on their shapes, using basic morphological operations such as erosion and dilation.
Image Segmentation:

 It is one of the most difficult steps of image processing; it involves partitioning an image into its constituent parts, and is generally used to locate objects and boundaries.
Representation & Description:

• After segmentation, each region is represented and described in a form suitable for further computer processing.

• Representation deals with image characteristics and regional properties.

• Description deals with extracting quantitative information that helps differentiate one class of objects from another.
Object Recognition:

 Recognition assigns a label to an object based on its description.
Image Compression:

 Compression is used to reduce the storage required to save an image, or the bandwidth required to transmit it.
Colour Image Processing:

 Colour image processing includes a number of colour modeling techniques in the digital domain.
Components of Image Processing System

Fig. Components of a general-purpose image processing system: image sensors, specialized image processing hardware, computer, image processing software, mass storage, image displays, hard copy devices, and network.
Components of Image Processing System(contd.)
Image sensors:
 It consists of two major elements.
1) Physical device: produces an electrical output proportional to light energy.
2) Digitizer: converts the electrical output into digital form.

Image processing hardware:
 It usually consists of the digitizer plus hardware that performs other primitive operations, such as arithmetic and logical operations on entire images.

Computer:
 It is a general-purpose computer and can range from a PC to a supercomputer, depending on the required level of performance.
Components of Image Processing System(contd.)
Hard copy:
 The devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet devices, and digital units.

Image displays:
 Color, flat-screen monitors are used as image displays. Monitors are driven by the outputs of image and graphics display cards that are an integral part of the computer system.

Image processing software:
 It consists of specialized modules that perform specific tasks. It includes the capability for the user to write code.

Network:
 It is almost a default function in any computer system in use today, because of the large amount of data inherent in image processing applications.
Components of Image Processing System(contd.)

Mass storage:
 It is a must in image processing applications. An image of size 1024 × 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed.

 Digital storage for image processing applications falls into three principal categories:

1. Short-term storage: for use during processing.
2. On-line storage: for relatively fast retrieval/recall.
3. Archival storage: characterized by infrequent access, such as magnetic tapes and disks.
Image Sampling and Quantization

 An image may be continuous with respect to the x- and y-coordinates, and also in amplitude. To convert it to digital form, we have to sample the function in both coordinates and in amplitude.

 Digitizing the coordinate values is called sampling.

 Digitizing the amplitude values is called quantization.

 The location of each sample is given by a vertical tick mark in the bottom part of the figure.
Image Sampling and Quantization(contd.)

The samples are shown as small blocks superimposed on the function; the set of these discrete locations gives the sampled function.

In order to form a digital image, the gray-level values must also be converted (quantized) into discrete quantities.

So we divide the gray-level scale into eight discrete levels, ranging from black to white.
Image Sampling and Quantization(contd.)

The vertical tick marks indicate the specific value assigned to each of the eight gray levels.

The continuous gray levels are quantized simply by assigning one of the eight discrete gray levels to each sample.
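The two digitization steps can be sketched in a few lines of Python; a minimal illustration (the sampled function, grid size, and 8-level scale are assumptions for the example):

```python
import numpy as np

def sample_and_quantize(f, width, height, levels=8):
    """Sample a continuous function f(x, y) on a discrete grid (sampling),
    then map each sample onto `levels` discrete gray levels (quantization).
    Assumes f is not constant over the grid."""
    ys, xs = np.mgrid[0:height, 0:width]
    samples = f(xs, ys).astype(float)        # sampling: discrete coordinates
    lo, hi = samples.min(), samples.max()
    # quantization: divide the gray-level scale into `levels` discrete values
    q = np.floor((samples - lo) / (hi - lo) * (levels - 1e-9)).astype(int)
    return q

# Example: a smooth intensity ramp quantized to 8 levels (0..7).
img = sample_and_quantize(lambda x, y: x + y, width=8, height=8)
print(img.min(), img.max())   # 0 7
```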
Basic Relationships between pixels
1. Neighbors of a Pixel:
A pixel p at coordinates (x,y) has four horizontal and vertical neighbors whose coordinates are given by:
(x+1, y), (x-1, y), (x, y+1), (x, y-1)

            x, y-1
x-1, y      P(x,y)      x+1, y
            x, y+1

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

Each pixel is a unit distance from (x,y), and some of the neighbors of p lie outside the digital image if (x,y) is on the border of the image.
Basic Relationships between pixels(contd.)
The four diagonal neighbors of p have coordinates:
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1) and are denoted by ND(p).

x-1, y-1               x+1, y-1
            P(x,y)
x-1, y+1               x+1, y+1

 The points in ND(p), together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8(p). As before, some of the points in ND(p) and N8(p) fall outside the image if (x,y) is on the border of the image.

x-1, y-1    x, y-1     x+1, y-1
x-1, y      P(x,y)     x+1, y
x-1, y+1    x, y+1     x+1, y+1
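The neighbor sets above translate directly into code; a small sketch (the helper names and the bounds filter are our own additions):

```python
def n4(x, y):
    """4-neighbors of pixel p at (x, y)."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    """Diagonal neighbors of p."""
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    """8-neighbors: the union of N4(p) and ND(p)."""
    return n4(x, y) + nd(x, y)

def inside(coords, width, height):
    """Discard neighbors that fall outside a width x height image."""
    return [(x, y) for x, y in coords if 0 <= x < width and 0 <= y < height]

print(len(n8(5, 5)))            # 8
print(inside(n4(0, 0), 4, 4))   # only (1, 0) and (0, 1) survive on the border
```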


Basic Relationships between pixels(contd.)
2. Adjacency and Connectivity
 Let V be a set of intensity values used to define adjacency and connectivity.

 In a binary image, V = {1} if we are referring to adjacency of pixels with value 1.

 In a gray-scale image, the idea is the same, but V typically contains more elements, for example, V = {180, 181, 182, …, 200}.

 If the possible intensity values are 0 – 255, V can be any subset of these 256 values.
Basic Relationships between pixels(contd.)
Types of Adjacency

 4-adjacency: Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
 8-adjacency: Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
 m-adjacency (mixed): Two pixels p and q with values from V are m-adjacent if:
  q is in N4(p), or
  q is in ND(p) and the set N4(p) ∩ N4(q) has no pixel whose values are from V.

Department of Computer Science & Engineering


Basic Relationships between pixels(contd.)
m – Adjacency

 Mixed adjacency is a modification of 8-adjacency.

 It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used.

For example, the figure compares an arrangement of pixels, the pixels that are 8-adjacent, and the pixels that are m-adjacent.
Basic Relationships between pixels(contd.)

 In this example, note that when connecting two pixels (finding a path between them):
1. With 8-adjacency, you can find multiple paths between the two pixels.
2. With m-adjacency, you can find only one path between the two pixels.

 So, m-adjacency eliminates the multiple-path connections that arise with 8-adjacency.

 Two subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2. Here, adjacent means 4-, 8-, or m-adjacency.
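The three adjacency tests can be written directly from their definitions; a sketch (the sample image, the (x, y) coordinate convention with row = y, and the function name are assumptions for the illustration):

```python
def adjacent(img, p, q, V, mode="4"):
    """Test 4-, 8-, or m-adjacency of pixels p = (x, y) and q, values in set V."""
    (px, py), (qx, qy) = p, q
    if img[py][px] not in V or img[qy][qx] not in V:
        return False
    in_n4 = abs(px - qx) + abs(py - qy) == 1          # horizontal/vertical neighbor
    in_nd = abs(px - qx) == 1 and abs(py - qy) == 1   # diagonal neighbor
    if mode == "4":
        return in_n4
    if mode == "8":
        return in_n4 or in_nd
    # m-adjacency: q in N4(p), or q in ND(p) while N4(p) ∩ N4(q)
    # has no pixel whose value is in V
    if in_n4:
        return True
    if in_nd:
        shared = [(px, qy), (qx, py)]   # the two pixels in N4(p) ∩ N4(q)
        return all(img[y][x] not in V for x, y in shared)
    return False

# Hypothetical 3x3 arrangement (row = y, column = x), with V = {1}.
img = [[0, 1, 1],
       [0, 1, 0],
       [0, 0, 1]]
print(adjacent(img, (1, 1), (2, 0), {1}, "8"))   # True  (diagonal 8-neighbors)
print(adjacent(img, (1, 1), (2, 0), {1}, "m"))   # False (shared 4-neighbor (1, 0) is in V)
```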
Basic Relationships between pixels(contd.)
A Digital Path
Return to the previous example:

(a) (b) (c)

 In figure (b), the paths between the top-right and bottom-right pixels are 8-paths, while the path between the same two pixels in figure (c) is an m-path.
Basic Relationships between pixels(contd.)
Connectivity
 Let S represent a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting entirely of pixels in S.

 For any pixel p in S, the set of pixels that are connected to p in S is called a connected component of S.

 If S has only one connected component, then S is called a connected set.
Basic Relationships between pixels(contd.)

3. Region and Boundary

Region
 Let R be a subset of pixels in an image. We call R a region of the image if R is a connected set.

Boundary
 The boundary (also called border or contour) of a region R is the set of pixels in the region that have one or more neighbors that are not in R.
Basic Relationships between pixels(contd.)

4. Distance Measures
 For pixels p, q, and z, with coordinates (x,y), (s,t), and (v,w), respectively, D is a distance function if:
(a) D(p,q) ≥ 0 (D(p,q) = 0 iff p = q),
(b) D(p,q) = D(q,p), and
(c) D(p,z) ≤ D(p,q) + D(q,z).

 The Euclidean distance between p and q is defined as:

De(p,q) = [(x − s)^2 + (y − t)^2]^(1/2)

 Pixels having a distance less than or equal to some value r from (x,y) form a disk of radius r centered at (x,y).
Basic Relationships between pixels(contd.)
 The D4 distance (also called city-block distance) between p and q is defined as:

D4(p,q) = | x − s | + | y − t |

 Pixels having a D4 distance from (x,y) less than or equal to some value d form a diamond centered at (x,y).

 The pixels with distance D4 ≤ 2 from (x,y) form contours of constant distance.

 The pixels with D4 = 1 are the 4-neighbors of (x,y).
Basic Relationships between pixels(contd.)

 The D8 distance (also called chessboard distance) between p and q is defined as:

D8(p,q) = max(| x − s |, | y − t |)

 Pixels having a D8 distance from (x,y) less than or equal to some value r form a square centered at (x,y).

 The pixels with D8 = 1 are the 8-neighbors of (x,y).
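The three distance measures follow directly from their formulas; a small sketch (the sample points are arbitrary):

```python
import math

def d_e(p, q):
    """Euclidean distance De(p, q)."""
    (x, y), (s, t) = p, q
    return math.hypot(x - s, y - t)

def d_4(p, q):
    """City-block distance D4(p, q)."""
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d_8(p, q):
    """Chessboard distance D8(p, q)."""
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

p, q = (0, 0), (3, 4)
print(d_e(p, q))   # 5.0
print(d_4(p, q))   # 7
print(d_8(p, q))   # 4
```

Note that D4 ≥ D8 always holds, since moving diagonally counts as two city-block steps but one chessboard step.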
Basic Relationships between pixels(contd.)
Dm distance: defined as the length of the shortest m-path between two points.
 In this case, the distance between two pixels depends on the values of the pixels along the path, as well as the values of their neighbors.

Example:
Consider the following arrangement of pixels, and assume that p, p2, and p4 have value 1, and that p1 and p3 can have a value of 0 or 1.
Suppose that we consider the adjacency of pixels with value 1 (i.e., V = {1}).
Basic Relationships between pixels(contd.)

Now, to compute Dm between points p and p4, we have 4 cases:

Case 1:
If p1 = 0 and p3 = 0,
the length of the shortest m-path (the Dm distance) is 2: (p, p2, p4).

Case 2:
If p1 = 1 and p3 = 0,
then p and p2 will no longer be m-adjacent (see the m-adjacency definition),
and the length of the shortest m-path becomes 3: (p, p1, p2, p4).


Basic Relationships between pixels(contd.)

Case 3: If p1 = 0 and p3 = 1,
then p2 and p4 will no longer be m-adjacent (see the m-adjacency definition),
and the shortest m-path becomes 3: (p, p2, p3, p4).

Case 4: If p1 = 1 and p3 = 1,
then p and p2 will no longer be m-adjacent (see the m-adjacency definition),
and the length of the shortest m-path becomes 4: (p, p1, p2, p3, p4).
Introduction to Fourier Transform and DFT

• A Fourier transform (FT) is a mathematical transform that decomposes functions into frequency components.

• The Fourier transform breaks down an image into sine and cosine components.

• It has applications in image reconstruction, image compression, and image filtering.

Each sinusoid comprises three things:
• Magnitude – related to contrast
• Spatial frequency – related to brightness
• Phase – related to colour information

Fig. Image in frequency domain
DFT (Discrete Fourier Transform)

 The Discrete Fourier Transform (DFT) is the equivalent of the continuous Fourier transform for signals known only at N instants separated by sample times T (i.e., a finite sequence of data).

Fig. 1. After applying the DFT
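As a sketch, the 2-D DFT of a small image can be computed with NumPy's FFT routines (the sample image values are made up for the example):

```python
import numpy as np

# A hypothetical 4x4 grayscale image with a horizontal intensity ramp.
img = np.array([[0, 64, 128, 255],
                [0, 64, 128, 255],
                [0, 64, 128, 255],
                [0, 64, 128, 255]], dtype=float)

F = np.fft.fft2(img)             # frequency-domain representation (complex)
F_shifted = np.fft.fftshift(F)   # move the zero-frequency term to the centre

magnitude = np.abs(F_shifted)    # magnitude spectrum
phase = np.angle(F_shifted)      # phase spectrum

# The DC (zero-frequency) coefficient equals the sum of all pixel values.
print(F[0, 0].real)              # 1788.0
```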

Fig. Continuous signal
Properties of Fourier Transform
Scaling:

Scaling changes the range of the independent variables: compressing a function in the spatial domain expands its Fourier transform in the frequency domain, and vice versa.

Linearity:

The Fourier transform is linear: the transform of the sum of two functions equals the sum of their frequency spectra, and if we multiply a function by a constant, the Fourier transform of the resulting function is multiplied by the same constant.
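The linearity property can be checked numerically; a minimal sketch with random images (the array sizes and constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((8, 8))          # two arbitrary "images"
g = rng.random((8, 8))
a, b = 2.0, -3.0                # arbitrary constants

# Linearity: FT(a*f + b*g) equals a*FT(f) + b*FT(g)
lhs = np.fft.fft2(a * f + b * g)
rhs = a * np.fft.fft2(f) + b * np.fft.fft2(g)
print(np.allclose(lhs, rhs))    # True
```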
Fast Fourier transform (FFT)
A fast Fourier transform (FFT) is a highly optimized implementation of the discrete Fourier transform (DFT), which converts discrete signals from the time domain to the frequency domain.

Fig. Audio signal decomposed into its frequency components using FFT
