Unit 1 DIP

The document outlines various aspects of digital image processing, including techniques for image acquisition, enhancement, restoration, segmentation, and recognition. It discusses the components of an image processing system, such as image sensors, specialized hardware, and software, as well as color models and image transforms. Additionally, it includes self-test questions to reinforce understanding of the material covered.


Medical field

• Gamma-ray imaging
• PET scan
• X-ray imaging
• Medical CT
• UV imaging
Fundamental Steps in Digital Image Processing

Image Acquisition:
• The first step is to acquire a digital image using a digital camera, scanner, or any other image capture device.

• The image can be obtained from various sources such as photographs, videos, or medical imaging devices.
Image Enhancement:

• Image enhancement techniques aim to improve the visual quality of the image or highlight specific features of interest.
• Common enhancement techniques include filtering operations like sharpening, blurring, and noise reduction.
• Other methods like contrast stretching, histogram modification, and spatial domain operations can also be applied.

Image Restoration:

• Image restoration is used to recover images that have been degraded by noise, motion blur, or other distortions.
• It involves techniques such as image deblurring, denoising, and image inpainting to reconstruct the original image as accurately as possible.
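As a concrete illustration of one of the enhancement techniques named above, here is a minimal contrast-stretching sketch in Python/NumPy. The image array and the percentile choices are assumptions for the example, not part of the slides.

```python
import numpy as np

def contrast_stretch(img: np.ndarray, low_pct: float = 2.0, high_pct: float = 98.0) -> np.ndarray:
    """Linearly stretch intensities between two percentiles to the full 0-255 range."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = (img.astype(np.float64) - lo) / max(hi - lo, 1e-12)
    return (np.clip(stretched, 0.0, 1.0) * 255).astype(np.uint8)

# Example on a synthetic low-contrast image (values squeezed into [100, 150))
img = np.random.randint(100, 150, size=(64, 64), dtype=np.uint8)
enhanced = contrast_stretch(img)
```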
Morphological Processing:

• Morphological processing is a fundamental technique in digital image processing that deals with the shape and structure of objects within an image.
• It is based on mathematical morphology, which analyzes the geometrical properties of objects by using set theory and algebraic operations.
• Morphological processing is particularly useful in tasks such as image segmentation, noise removal, object detection, and image analysis.
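A minimal sketch of the basic morphological operations mentioned above, using SciPy's binary erosion and dilation; the test image is a hypothetical binary array, not taken from the slides.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

# A small binary image: a square of foreground pixels plus one noisy pixel
img = np.zeros((10, 10), dtype=bool)
img[3:7, 3:7] = True
img[0, 0] = True                                  # isolated noise pixel

structure = np.ones((3, 3), dtype=bool)           # 3x3 structuring element
eroded = binary_erosion(img, structure=structure)         # shrinks objects, removes the noise pixel
opened = binary_dilation(eroded, structure=structure)     # dilating afterwards gives an "opening"
```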
Image Segmentation:

• Image segmentation involves partitioning an image into meaningful regions or objects.
• It helps in object recognition, image understanding, and computer vision tasks.
• Techniques like thresholding, region growing, edge detection, and clustering algorithms are used for segmentation (see the thresholding sketch below).
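A minimal sketch of the simplest of these techniques, global thresholding; the image and the threshold value are illustrative assumptions.

```python
import numpy as np

def global_threshold(img: np.ndarray, t: int = 128) -> np.ndarray:
    """Partition a grayscale image into object (1) and background (0) by a global threshold."""
    return (img >= t).astype(np.uint8)

img = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
mask = global_threshold(img, t=100)
```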
Object Recognition:

• Object recognition is a computer vision task that involves identifying and classifying objects or specific patterns within digital images or video frames.
• The goal is to enable machines to understand and interpret visual data, similar to how humans perceive and recognize objects in their surroundings.
• Object recognition has numerous practical applications, including autonomous vehicles, surveillance systems, augmented reality, robotics, and image retrieval.
Representation and Description:

• Representation and description are crucial steps in the process of analyzing and interpreting digital images.
• These steps involve transforming the raw pixel data into meaningful and informative representations that capture the essential characteristics of objects or regions of interest within an image.
Image Compression:

• Image compression reduces the storage space required for images while minimizing the loss of visual quality.
• Compression techniques such as JPEG (Joint Photographic Experts Group) and PNG (Portable Network Graphics) are widely used to achieve efficient storage and transmission of images.
Color Image Processing:

• Color image processing involves the analysis, manipulation, and enhancement of images that contain color information.
• It deals with images represented in color spaces such as RGB (Red, Green, Blue), CMYK (Cyan, Magenta, Yellow, Black), HSV (Hue, Saturation, Value), or other color models.
• Color image processing techniques are used in various applications, including computer vision, medical imaging, remote sensing, and digital photography.
Components of an Image Processing System
Components of an Image Processing System
Image Sensors
• Starting with image sensing, there are two elements required to acquire digital images.
   The first is a physical device that is sensitive to the energy radiated by the object we wish to image.
   The second, called a digitizer, is a device for converting the output of the physical sensing device into digital form.
• For instance, in a digital video camera, the sensors produce an electrical output proportional to light intensity.
• The digitizer converts these outputs to digital data.
Components of an Image Processing System (Contd.)
Specialized Image Processing Hardware
• Specialized image processing hardware usually consists of the digitizer just mentioned, plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU) that performs arithmetic and logical operations in parallel on entire images.
• One example of how an ALU is used is in averaging images as quickly as they are digitized, for the purpose of noise reduction.
• This type of hardware is sometimes called a front-end subsystem, and its most distinguishing characteristic is speed.
• In other words, this unit performs functions that require fast data throughputs (e.g., digitizing and averaging video images at 30 frames/s) that the typical main computer cannot handle.
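A minimal sketch of the frame-averaging operation described above (noise reduction by averaging repeated captures); the "scene" and the noise level here are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))                      # synthetic scene
frames = [clean + rng.normal(0, 20, clean.shape) for _ in range(30)]   # 30 noisy captures

average = np.mean(frames, axis=0)   # noise standard deviation drops roughly by sqrt(30)
```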
Components of an Image Processing System (Contd.)
Computer
• The computer in an image processing system is a general-purpose computer and can range from a PC to a supercomputer.
• In dedicated applications, sometimes custom computers are used to achieve a required level of performance, but our interest here is on general-purpose image processing systems.
• In these systems, almost any well-equipped PC-type machine is suitable for off-line image processing tasks.
Components of an Image Processing System (Contd.)
Image Processing Software
• Software for image processing consists of specialized modules that perform specific tasks.
• A well-designed package also includes the capability for the user to write code that, as a minimum, utilizes the specialized modules.
• More sophisticated software packages allow the integration of those modules and general-purpose software commands from at least one computer language.
Components of an Image Processing System (Contd.)
Mass Storage
• Mass storage capability is a must in image processing applications. An image of size 1024 × 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed.
• Digital storage for image processing applications falls into three principal categories:
  (1) short-term storage for use during processing (frame buffer)
  (2) on-line storage for relatively fast recall
  (3) archival storage, characterized by infrequent access.
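The one-megabyte figure follows directly from the image dimensions; a quick arithmetic check (the helper function below is illustrative, not from the slides):

```python
def image_bytes(rows: int, cols: int, bits_per_pixel: int = 8) -> int:
    """Uncompressed storage requirement in bytes."""
    return rows * cols * bits_per_pixel // 8

print(image_bytes(1024, 1024))   # 1048576 bytes = 2**20 bytes = 1 megabyte
```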
Components of an Image Processing System (Contd.)
Image Displays
• Image displays in use today are mainly color (preferably flat screen) TV monitors.
• Monitors are driven by the outputs of image and graphics display cards that are an integral part of the computer system.
• Seldom are there requirements for image display applications that cannot be met by display cards available commercially as part of the computer system.
• In some cases, it is necessary to have stereo displays, and these are implemented in the form of headgear containing two small displays embedded in goggles worn by the user.
Components of an Image Processing System (Contd.)
Hardcopy
• Hardcopy devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units, such as optical and CD-ROM disks.
• Film provides the highest possible resolution, but paper is the obvious medium of choice for written material.
• For presentations, images are displayed on film transparencies or in a digital medium if image projection equipment is used.
• The latter approach is gaining acceptance as the standard for image presentations.
Components of an Image Processing System (Contd.)
Networking
• Networking is almost a default function in any computer system in use today.
• Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth.
• In dedicated networks, this typically is not a problem, but communications with remote sites via the Internet are not always as efficient.
• Fortunately, this situation is improving quickly as a result of optical fiber and other broadband technologies.
Self Test Questions

1. Which step involves extracting relevant information or features from an image?
   a) Image Enhancement  b) Image Restoration  c) Image Segmentation  d) Feature Extraction
   Answer: d) Feature Extraction

2. What is the process of assigning labels or class identities to objects in an image called?
   a) Image Recognition  b) Image Analysis  c) Image Compression  d) Image Segmentation
   Answer: a) Image Recognition

3. Which step involves presenting processed images or analysis results in a visually meaningful form?
   a) Image Visualization  b) Image Analysis  c) Image Restoration  d) Image Enhancement
   Answer: a) Image Visualization

4. What is the final step in digital image processing?
   a) Image Acquisition  b) Image Enhancement  c) Image Analysis  d) Image Visualization
   Answer: d) Image Visualization

5. Which component of an image processing system captures images from various sources?
   a) Image Analysis Algorithms  b) Preprocessing Components  c) Image Acquisition Devices  d) User Interface
   Answer: c) Image Acquisition Devices

6. Which step involves converting an image from one color space to another?
   a) Image Acquisition  b) Image Compression  c) Color Image Processing  d) Image Analysis
   Answer: c) Color Image Processing

7. Which step involves partitioning an image into meaningful regions or objects?
   a) Image Acquisition  b) Image Segmentation  c) Image Restoration  d) Image Analysis
   Answer: b) Image Segmentation

8. What is the process of recovering images that have been degraded by noise or distortions called?
   a) Image Compression  b) Image Segmentation  c) Image Restoration  d) Image Recognition
   Answer: c) Image Restoration

9. What is the first step in digital image processing?
   a) Image Enhancement  b) Image Acquisition  c) Image Compression  d) Image Analysis
   Answer: b) Image Acquisition

10. Which step in digital image processing involves reducing noise and enhancing image quality?
    a) Image Segmentation  b) Image Enhancement  c) Image Compression  d) Image Recognition
    Answer: b) Image Enhancement

11. What are the names of the various color image processing categories?
    a) Pseudo-color and multi-color processing  b) Half-color and pseudo-color processing  c) Full-color and pseudo-color processing  d) Half-color and full-color processing
    Answer: c) Full-color and pseudo-color processing

12. Which of the following processes helps in image enhancement?
    a) Digital image processing  b) Analog image processing  c) Both a and b  d) None of the above
    Answer: c) Both a and b
1. An image segment is shown below. Let V be the set of gray-level values used to define the connectivity in the image. Compute the D4 and D8 distances between pixels p and q for

   I.  V = {2, 3}
   II. V = {2, 5}

       3  1  2  1
       0  2  0  2
       1  2  1  1
       1  0  1  2
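For reference, the D4 (city-block) and D8 (chessboard) distances depend only on the pixel coordinates; the set V comes into play when a path-based distance such as Dm (m-adjacency) is computed. Below is a minimal sketch with hypothetical coordinates for p and q, since the extracted slide does not mark their positions.

```python
def d4(p, q):
    """City-block distance: |x1 - x2| + |y1 - y2|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """Chessboard distance: max(|x1 - x2|, |y1 - y2|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (3, 0), (0, 3)      # hypothetical (row, col) positions in the 4x4 segment
print(d4(p, q), d8(p, q))  # 6 3
```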
Color fundamentals and models

Color fundamentals
• Physical phenomenon
  • The physical nature of color is well understood.
• Psycho-physiological phenomenon
  • How does the human brain perceive and interpret color?
Physical quantities used to describe a chromatic light source
• Radiance: total amount of energy that flows from the light source, measured in watts (W).
• Luminance: amount of energy an observer perceives from a light source, measured in lumens (lm).
  • Example: far-infrared light can have high radiance but essentially zero luminance.
• Brightness: a subjective descriptor that is hard to measure; it embodies the achromatic notion of intensity.
Color models
• Color model, color space, color system
  • Specify colors in a standard way
  • A coordinate system in which each color is represented by a single point
• RGB, CMY, and CMYK models: suitable for hardware or applications
• HSI model: matches the human description of color
RGB color model

Safe RGB colors
• A subset of colors is enough for some applications.
• Safe RGB colors (safe Web colors, safe browser colors): each of R, G, and B is restricted to six values (0, 51, 102, 153, 204, 255), giving 6^3 = 216 colors.
CMY model (+ Black = CMYK)
• CMY: secondary colors of light, or primary colors of pigments
• Used to generate hardcopy output

\[
\begin{bmatrix} C \\ M \\ Y \end{bmatrix}
=
\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
-
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
\]
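A minimal sketch of the conversion above, assuming RGB values normalized to [0, 1]:

```python
import numpy as np

def rgb_to_cmy(rgb: np.ndarray) -> np.ndarray:
    """CMY = 1 - RGB, for RGB normalized to [0, 1]."""
    return 1.0 - rgb

pixel = np.array([1.0, 0.0, 0.0])   # pure red
print(rgb_to_cmy(pixel))            # [0. 1. 1.] -> no cyan, full magenta and yellow
```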
HSI color model
• Would you describe a color using its R, G, B components?
• Humans describe a color by its hue, saturation, and brightness:
  • Hue: color attribute
  • Saturation: purity of the color (white → 0, pure primary color → 1)
  • Brightness: achromatic notion of intensity
HSI color model (cont.)
• RGB → HSI conversion
[Figure: the HSI triangle. Colors on the triangle have the same hue; the vertical axis is the intensity line; distance from that axis corresponds to saturation.]
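The slides reference the RGB → HSI conversion without giving the formulas; the sketch below uses the standard geometric definitions (hue from the arccos formula, saturation from the minimum component, intensity as the mean), assuming RGB values in [0, 1].

```python
import numpy as np

def rgb_to_hsi(r: float, g: float, b: float):
    """Convert one RGB pixel (each component in [0, 1]) to HSI; hue returned in degrees."""
    eps = 1e-12
    i = (r + g + b) / 3.0
    s = 1.0 - 3.0 * min(r, g, b) / (r + g + b + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = theta if b <= g else 360.0 - theta
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))   # pure red -> hue 0 degrees, saturation 1, intensity 1/3
```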
Image Transforms
• Many times, image processing tasks are best performed in a domain other than the spatial domain.
• Key steps:
  (1) Transform the image.
  (2) Carry out the task(s) in the transformed domain.
  (3) Apply the inverse transform to return to the spatial domain.
Representative Forward/Inverse Equations
• Forward transformation (discrete case), with forward transformation kernel r(x, y, u, v):

\[
T(u,v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y)\, r(x,y,u,v), \qquad u = 0,1,\dots,M-1,\quad v = 0,1,\dots,N-1
\]

• Inverse transformation (discrete case), with inverse transformation kernel s(x, y, u, v):

\[
f(x,y) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} T(u,v)\, s(x,y,u,v), \qquad x = 0,1,\dots,M-1,\quad y = 0,1,\dots,N-1
\]
Discrete Fourier Transform (DFT) (cont'd)

• Forward DFT (u is the index of the u-th frequency sample):
\[
F(u) = \sum_{x=0}^{N-1} f(x)\, e^{-j 2\pi u x / N}, \qquad u = 0,1,2,\dots,N-1
\]

• Inverse DFT (x is the index of the x-th spatial sample):
\[
f(x) = \frac{1}{N} \sum_{u=0}^{N-1} F(u)\, e^{\,j 2\pi u x / N}, \qquad x = 0,1,2,\dots,N-1
\]
Discrete Fourier Transform (DFT) (cont'd)

• We will be using the equivalent definition shown below, where the 1/N constant appears in the forward DFT instead of the inverse DFT:

• Forward DFT:
\[
F(u) = \frac{1}{N} \sum_{x=0}^{N-1} f(x)\, e^{-j 2\pi u x / N}, \qquad u = 0,1,2,\dots,N-1
\]

• Inverse DFT:
\[
f(x) = \sum_{u=0}^{N-1} F(u)\, e^{\,j 2\pi u x / N}, \qquad x = 0,1,2,\dots,N-1
\]
Example

Let f(0) = 2, f(1) = 3, f(2) = 4, f(3) = 4 (so N = 4). Using
\[
F(u) = \frac{1}{N} \sum_{x=0}^{N-1} f(x)\, e^{-j 2\pi u x / N},
\]
we obtain

F(0) = 13/4
F(1) = (1/4)(-2 + j)
F(2) = -1/4
F(3) = -(1/4)(2 + j)
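This example can be checked numerically. NumPy's FFT uses the convention without the 1/N factor in the forward transform, so dividing by N reproduces the slide's values (a verification sketch, not part of the slides):

```python
import numpy as np

f = np.array([2, 3, 4, 4], dtype=float)
F = np.fft.fft(f) / len(f)    # slide convention: 1/N in the forward DFT
print(F)
# [ 3.25+0.j  -0.5+0.25j  -0.25+0.j  -0.5-0.25j ]
# i.e., 13/4, (1/4)(-2 + j), -1/4, -(1/4)(2 + j)
```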
DFT Properties: (1) Separability

• The 2D DFT can be computed using 1D DFTs.
• For the forward DFT, this works because the exponential kernel is separable:
\[
e^{-j 2\pi (ux + vy)/N} = e^{-j 2\pi ux/N}\; e^{-j 2\pi vy/N}
\]
DFT Properties: (1) Separability (cont'd)

• Using the separability of the kernel, the 2D DFT can be written as two nested 1D transforms:
\[
F(u,v) = \frac{1}{N} \sum_{x=0}^{N-1} \left[ \sum_{y=0}^{N-1} f(x,y)\, e^{-j 2\pi vy/N} \right] e^{-j 2\pi ux/N}
\]
• 2D DFT steps:
  1. Compute F1(x, v) (the inner bracket).
  2. Compute F(u, v) from F1(x, v).
DFT Properties: (1) Separability (cont'd)
• How to compute F1(x, v)? For x, v = 0, 1, ..., N-1:
\[
F_1(x,v) = N \left[ \frac{1}{N} \sum_{y=0}^{N-1} f(x,y)\, e^{-j 2\pi vy/N} \right]
\]
i.e., N times the 1D DFT of the rows of f(x, y).

• How to compute F(u, v)? For u, v = 0, 1, ..., N-1:
\[
F(u,v) = \frac{1}{N} \sum_{x=0}^{N-1} F_1(x,v)\, e^{-j 2\pi ux/N}
\]
i.e., the 1D DFT of the columns of F1(x, v).
DFT Properties: (1) Separability (cont'd)
• Computing the 2D DFT using 1D DFTs: f(x,y) → F1(x,v) → F(u,v).
• Exactly the same steps (rows first, columns second) can be used to compute the 2D IDFT using 1D IDFTs.
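A quick numerical check of the row/column decomposition; NumPy's FFT omits the 1/N forward scaling, which cancels out in this comparison (the test array is an arbitrary example).

```python
import numpy as np

f = np.random.rand(8, 8)
F1 = np.fft.fft(f, axis=1)   # step 1: 1D DFT of each row    -> F1(x, v)
F = np.fft.fft(F1, axis=0)   # step 2: 1D DFT of each column -> F(u, v)

assert np.allclose(F, np.fft.fft2(f))   # identical to the direct 2D DFT
```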
DFT Properties: (2) Periodicity
• The DFT and its inverse are periodic with period N:
\[
F(u,v) = F(u+N,\,v) = F(u,\,v+N) = F(u+N,\,v+N)
\]
DFT Properties: (3) Symmetry
• Write the transform pair f(x,y) ⟷ F(u,v) = R(u,v) + jI(u,v).
• For real-valued f(x,y), the DFT is conjugate symmetric: F*(u,v) = F(-u,-v), so the magnitude spectrum satisfies |F(u,v)| = |F(-u,-v)|.
DFT Properties: (4) Translation
• Transform pair: f(x,y) ⟷ F(u,v)

• Translation in the spatial domain:
\[
f(x - x_0,\; y - y_0) \;\longleftrightarrow\; F(u,v)\, e^{-j 2\pi (u x_0 + v y_0)/N}
\]

• Translation in the frequency domain:
\[
f(x,y)\, e^{\,j 2\pi (u_0 x + v_0 y)/N} \;\longleftrightarrow\; F(u - u_0,\; v - v_0)
\]
DFT Properties: (4) Translation (cont'd)
• To "see" a full period of the DFT, we need to translate the origin of F(u) to N/2 in 1D, or of F(u,v) to (N/2, N/2) in 2D.
[Figure: |F(u)| as computed (origin at u = 0, "this is what we see") versus |F(u - N/2)| after translation, which centers the spectrum.]
DFT Properties: (4) Translation (cont'd)
• Use the frequency-domain translation property to obtain F(u - N/2, v - N/2): set u0 = v0 = N/2, so that
\[
e^{\,j 2\pi (u_0 x + v_0 y)/N} = e^{\,j\pi(x+y)} = (-1)^{x+y}
\]
• This yields
\[
f(x,y)\,(-1)^{x+y} \;\longleftrightarrow\; F\!\left(u - \tfrac{N}{2},\; v - \tfrac{N}{2}\right)
\]
DFT Properties: (4) Translation (cont'd)
• Pre-multiply f(x,y) by (-1)^(x+y) to see a full (centered) period of the spectrum.
• You need to "undo" the multiplication by (-1)^(x+y) when you transform your results back to the spatial domain:

f(x,y) · (-1)^(x+y) → 2D DFT → Filtering → 2D IDFT → · (-1)^(x+y) → g(x,y)
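A small check that pre-multiplying by (-1)^(x+y) centers the spectrum; for even N this is equivalent to np.fft.fftshift. The test array is an arbitrary example, not from the slides.

```python
import numpy as np

N = 8
f = np.random.rand(N, N)
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

F_centered = np.fft.fft2(f * (-1.0) ** (x + y))                    # pre-multiply by (-1)^(x+y)
assert np.allclose(F_centered, np.fft.fftshift(np.fft.fft2(f)))    # same as shifting the spectrum
```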
DFT Properties: (4) Translation (cont'd)
[Figure: magnitude spectrum of a rectangle function (a sinc pattern), shown before and after translation to the center.]


DFT Properties: (5) Rotation
• Rotating f(x,y) by θ rotates F(u,v) by θ

DFT Properties: (6) Addition/Multiplication
• The DFT is linear: f1(x,y) + f2(x,y) ⟷ F1(u,v) + F2(u,v).
• Multiplication in one domain corresponds to convolution in the other (up to the normalization constant of the chosen DFT convention); in particular, the DFT of a product is not the product of the DFTs.

DFT Properties: (7) Scale
• Scaling the amplitude: a·f(x,y) ⟷ a·F(u,v).
• Scaling the coordinates: f(ax, by) ⟷ (1/|ab|) F(u/a, v/b).
DFT Properties: (8) Average value

• Average of f(x,y):
\[
\bar{f} = \frac{1}{N^2} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y)
\]

• F(u,v) at u = 0, v = 0 (with the 1/N-in-forward convention):
\[
F(0,0) = \frac{1}{N} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y)
\]

• So:
\[
\bar{f} = \frac{1}{N}\, F(0,0)
\]
DFT Properties (Summary)

Separability
  Description: The 2D DFT of a 2D signal can be computed by applying 1D DFTs along the rows and then along the columns.
  Formula: F(u, v) = DFT_cols( DFT_rows( f(x, y) ) )
  Example: If f(x, y) = f1(x) f2(y), then F(u, v) = F1(u) F2(v).

Periodicity
  Description: The 2D DFT is periodic in both dimensions.
  Formula: F(u + M, v + N) = F(u, v), where M × N is the image size.
  Example: A periodic 2D signal has sharp peaks at discrete frequencies in its DFT.

Symmetry
  Description: The 2D DFT of a real-valued signal has conjugate symmetry.
  Formula: F(u, v) = conj(F(-u, -v)) for real-valued f(x, y).
  Example: Real input gives a symmetric DFT magnitude spectrum.

Translation
  Description: A spatial translation of the input signal results in a linear phase shift in the DFT.
  Formula: DFT( f(x - x0, y - y0) ) = F(u, v) · exp(-j 2π (u x0 + v y0) / N)
  Example: Translating f(x, y) shifts the phase of F(u, v) but not its magnitude.

Rotation
  Description: Rotating the input by an angle θ rotates its DFT by the same angle.
  Formula: In polar coordinates, f(r, φ + θ0) ⟷ F(ω, ϕ + θ0).
  Example: Rotating f(x, y) rotates the pattern of F(u, v) correspondingly.
Discrete Cosine Transform
 The basis functions of DCT are real. (DFT has complex basis functions.)
 DCT has very good energy compaction properties.
 DCT can be expressed in terms of DFT, therefore, Fast Fourier Transform
implementation can be used.
 In the case of block-based image compression, (e.g., JPEG), DCT
produces less artifacts along the boundaries than DFT does.

Discrete Cosine Transform
 1D Discrete Cosine Transform (DCT):
\[
F[k] = \sum_{n=0}^{N-1} f[n]\, \alpha(k) \cos\!\left( \frac{\pi (2n+1)k}{2N} \right), \qquad k = 0,1,\dots,N-1
\]
where
\[
\alpha(k) =
\begin{cases}
\sqrt{\dfrac{1}{N}} & k = 0 \\[2mm]
\sqrt{\dfrac{2}{N}} & k = 1,2,\dots,N-1
\end{cases}
\]

• Inverse DCT:
\[
f[n] = \sum_{k=0}^{N-1} F[k]\, \alpha(k) \cos\!\left( \frac{\pi (2n+1)k}{2N} \right), \qquad n = 0,1,\dots,N-1
\]
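The definition above is the orthonormal DCT-II. A direct evaluation of the sum can be checked against scipy.fft.dct with norm="ortho" (a verification sketch, assuming SciPy is available; the test vector is arbitrary):

```python
import numpy as np
from scipy.fft import dct

N = 8
f = np.random.rand(N)
n = np.arange(N)

F = np.empty(N)
for k in range(N):
    alpha = np.sqrt(1.0 / N) if k == 0 else np.sqrt(2.0 / N)
    F[k] = alpha * np.sum(f * np.cos(np.pi * (2 * n + 1) * k / (2 * N)))

assert np.allclose(F, dct(f, norm="ortho"))   # matches the orthonormal DCT-II
```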
Discrete Cosine Transform
 2D Discrete Cosine Transform (DCT):
\[
F[k,l] = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} f[m,n]\, \alpha(k)\,\alpha(l)
\cos\!\left( \frac{\pi (2m+1)k}{2M} \right) \cos\!\left( \frac{\pi (2n+1)l}{2N} \right)
\]
where k = 0, 1, ..., M-1, l = 0, 1, ..., N-1, and α(·) is as defined in the 1D case.

• Inverse DCT:
\[
f[m,n] = \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} F[k,l]\, \alpha(k)\,\alpha(l)
\cos\!\left( \frac{\pi (2m+1)k}{2M} \right) \cos\!\left( \frac{\pi (2n+1)l}{2N} \right)
\]
DCT and DFT
 The N-point DCT of x[n] can be obtained from the 2N-point DFT of a symmetrically extended x[n].

• Symmetric extension:
\[
\tilde{x}[n] =
\begin{cases}
x[n] & n = 0,1,\dots,N-1 \\
x[2N - n - 1] & n = N, N+1,\dots,2N-1
\end{cases}
\]

• DFT of the extended signal:
\[
X_F[k] = \sum_{n=0}^{2N-1} \tilde{x}[n]\, e^{-j 2\pi k n / (2N)}
\]

• DCT of x[n]:
\[
X_C[k] = \alpha(k) \sum_{n=0}^{N-1} x[n] \cos\!\left( \frac{\pi (2n+1)k}{2N} \right)
\]

• Relation between the two:
\[
X_C[k] = \frac{\alpha(k)}{2}\, e^{-j \pi k / (2N)}\, X_F[k]
\]
Discrete Cosine Transform
[Figure: side-by-side comparison of DCT and DFT.]
Discrete Cosine Transform
 Matrix representation of the DCT. Writing
\[
F[k] = \sum_{n=0}^{N-1} f[n]\, \alpha(k) \cos\!\left( \frac{\pi (2n+1)k}{2N} \right)
\]
as a matrix-vector product gives \( \mathbf{F} = D\,\mathbf{f} \), where the (k, n) entry of D is
\[
D_{k,n} = \alpha(k) \cos\!\left( \frac{\pi (2n+1)k}{2N} \right), \qquad k, n = 0,1,\dots,N-1
\]
so that
\[
\begin{bmatrix} F[0] \\ F[1] \\ \vdots \\ F[N-1] \end{bmatrix}
= D
\begin{bmatrix} f[0] \\ f[1] \\ \vdots \\ f[N-1] \end{bmatrix}
\]
Discrete Cosine Transform
 Matrix representation of the inverse DCT. Writing
\[
f[n] = \sum_{k=0}^{N-1} F[k]\, \alpha(k) \cos\!\left( \frac{\pi (2n+1)k}{2N} \right)
\]
as a matrix-vector product gives \( \mathbf{f} = D^{-1}\mathbf{F} \), where the (n, k) entry of the inverse matrix is
\[
\left(D^{-1}\right)_{n,k} = \alpha(k) \cos\!\left( \frac{\pi (2n+1)k}{2N} \right), \qquad n, k = 0,1,\dots,N-1
\]
Discrete Cosine Transform
 The inverse DCT matrix is equal to the transpose of the DCT matrix (D is orthogonal):
\[
D^{-1} = D^{T}, \qquad D D^{T} = D^{T} D = I
\]
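A numerical sketch that builds the DCT matrix D and verifies the orthogonality claim, and that F = D f matches SciPy's orthonormal DCT-II (the size N = 8 is an arbitrary choice):

```python
import numpy as np
from scipy.fft import dct

N = 8
n = np.arange(N)
k = n.reshape(-1, 1)
alpha = np.where(k == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N))
D = alpha * np.cos(np.pi * (2 * n + 1) * k / (2 * N))   # D[k, n] = alpha(k) cos(pi (2n+1) k / 2N)

f = np.random.rand(N)
assert np.allclose(D @ D.T, np.eye(N))            # D is orthogonal: D D^T = I
assert np.allclose(D @ f, dct(f, norm="ortho"))   # F = D f is the orthonormal DCT-II
assert np.allclose(D.T @ (D @ f), f)              # D^T inverts the transform
```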
Discrete Cosine Transform
 2D Discrete Cosine Transform (DCT), as defined earlier:
\[
F[k,l] = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} f[m,n]\, \alpha(k)\,\alpha(l)
\cos\!\left( \frac{\pi (2m+1)k}{2M} \right) \cos\!\left( \frac{\pi (2n+1)l}{2N} \right)
\]

• Inverse DCT:
\[
f[m,n] = \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} F[k,l]\, \alpha(k)\,\alpha(l)
\cos\!\left( \frac{\pi (2m+1)k}{2M} \right) \cos\!\left( \frac{\pi (2n+1)l}{2N} \right)
\]
Discrete Cosine Transform
 For two-dimensional signals, the separable 2D DCT can be written as a matrix product:
\[
B = D\, A\, D^{T}
\]
where A is the image block, D is the 1D DCT matrix, and B holds the 2D DCT coefficients.
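A numerical check of B = D A D^T against SciPy's 2D DCT (dctn with norm="ortho"); the 8×8 block is a hypothetical example, chosen because JPEG uses blocks of that size.

```python
import numpy as np
from scipy.fft import dctn

N = 8
n = np.arange(N)
k = n.reshape(-1, 1)
alpha = np.where(k == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N))
D = alpha * np.cos(np.pi * (2 * n + 1) * k / (2 * N))   # 1D DCT matrix

A = np.random.rand(N, N)        # an 8x8 image block
B = D @ A @ D.T                 # separable 2D DCT as a matrix product
assert np.allclose(B, dctn(A, norm="ortho"))
```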
Summary

DFT (Discrete Fourier Transform)
  Formula: X[k] = Σ_n x[n] · e^(-j · 2π · k · n / N)
  Variables: X[k] is the complex DFT coefficient at frequency index k; x[n] is the input signal sample at index n; N is the total number of samples in the signal.
  Application in images: spectral analysis, filtering.
  Application in audio: audio spectrum analysis, modulation analysis.

DCT (Discrete Cosine Transform)
  Formula: X[k] = Σ_n x[n] · cos(π · k · (2n + 1) / 2N)  (up to the normalization α(k) defined earlier)
  Variables: X[k] is the DCT coefficient at frequency index k; x[n] is the input signal sample at index n; N is the total number of samples in the signal.
  Application in images: image compression (e.g., JPEG), feature extraction.
  Application in audio: audio compression (e.g., MP3), feature extraction.
THANK YOU
