21EC722 Module 4 DIP
digital image processing

DIP 15EC72 MODULE-2

Q1. Explain the steps involved in frequency domain Filtering. (10 marks)

Filtering in frequency domain

Steps to be followed for filtering in frequency domain

1. Multiply the input image by (−1)^(x+y) to center the transform at u = M/2 and v = N/2 (if M and N are even numbers, the shifted coordinates are integers).
2. Compute F(u,v), the DFT of the image from (1).
3. Multiply F(u,v) by a filter function H(u,v).
4. Compute the inverse DFT of the result in (3).
5. Obtain the real part of the result in (4).
6. Multiply the result in (5) by (−1)^(x+y) to cancel the centering applied to the input image.
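The six steps above can be sketched in NumPy as follows (the function name and the test image are illustrative, not from the notes):

```python
import numpy as np

def filter_frequency_domain(img, H):
    """Filter a grayscale image with a frequency-domain transfer function H.

    Follows the six textbook steps: center the spectrum with (-1)^(x+y),
    take the DFT, multiply by H(u, v), invert, keep the real part, and
    undo the centering. img and H must have the same shape.
    """
    M, N = img.shape
    x, y = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    centered = img * (-1.0) ** (x + y)       # step 1: shift spectrum to (M/2, N/2)
    F = np.fft.fft2(centered)                # step 2: forward DFT
    G = F * H                                # step 3: apply the filter
    g = np.fft.ifft2(G)                      # step 4: inverse DFT
    g = np.real(g)                           # step 5: keep the real part
    return g * (-1.0) ** (x + y)             # step 6: undo the centering

# With H = 1 everywhere the image should pass through unchanged.
img = np.random.rand(8, 8)
out = filter_frequency_domain(img, np.ones((8, 8)))
```

An all-pass H is a convenient sanity check: the output must equal the input up to floating-point error.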

Q2. Explain the smoothing of images in frequency domain using:

(i) Ideal low pass filter


(ii) Butterworth lowpass filter
(iii) Gaussian lowpass filter (10 marks)

Smoothing filters in the Frequency Domain

1. Ideal low pass filter (ILPF)

A lowpass filter smooths or blurs an image. In the frequency domain this is achieved by attenuating the high-frequency components, which correspond to sharp intensity transitions. The output of a lowpass filter therefore contains only small, gradual intensity transitions within any group of pixels.

S.SHOBHA, SCE, DEPT. OF ECE, BENGALURU 1



The ILPF is the idealized response for image filtering. It passes without attenuation all frequencies within a circle of radius D0, the cutoff frequency of the ILPF, and completely attenuates all frequencies lying outside this circle. It is specified by the function:

H(u,v) = { 1  if D(u,v) ≤ D0
           0  if D(u,v) > D0

D(u,v) = [(u − M/2)² + (v − N/2)²]^(1/2)

where D(u,v) is the distance from point (u,v) to the center of the (shifted) frequency rectangle, and D0 is the cutoff frequency.

• A lowpass filter is also called a "blurring" or smoothing filter (a blurring mask).
• The simplest spatial lowpass filter just calculates the average of a pixel and all of its eight immediate neighbours.
• A 2-D lowpass filter that passes without attenuation all frequencies within a circle of radius D0 from the origin and "cuts off" all frequencies outside this circle is called an ideal lowpass filter (ILPF).
2. Butterworth low pass filter (BLPF)

The Butterworth filter is very popular because its performance varies with its order: as the order increases, the BLPF approaches the ideal filter. The filter function of the BLPF is given by:


H(u,v) = 1 / (1 + [D(u,v)/D0]^(2n))

where n is the order of the filter, D(u,v) is the distance from point (u,v) to the center of the frequency rectangle, and D0 is the cutoff frequency.

• The Butterworth filter has a parameter called the filter order.
• For high order values, the Butterworth filter approaches the ideal filter; for low order values, it behaves more like a Gaussian filter.
• Thus, the Butterworth filter may be viewed as providing a transition between these two "extremes".
3. Gaussian low pass filter (GLPF)

The Gaussian filter is a widely used smoothing filter for digital images. The filter function of the GLPF is given by:

H(u,v) = e^(−D²(u,v) / 2σ²)

By letting σ = D0 we get

H(u,v) = e^(−D²(u,v) / 2D0²)
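As a sketch, the three lowpass transfer functions can be generated on a grid of center distances D(u, v); at the cutoff D = D0 the ILPF still passes fully, the BLPF has dropped to 0.5, and the GLPF to e^(−0.5) ≈ 0.607. The grid size and cutoff below are arbitrary choices:

```python
import numpy as np

def distance_grid(M, N):
    """D(u, v): distance from each frequency sample to the center (M/2, N/2)."""
    u, v = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    return np.sqrt((u - M / 2) ** 2 + (v - N / 2) ** 2)

def ilpf(D, D0):
    return (D <= D0).astype(float)            # 1 inside the cutoff circle, 0 outside

def blpf(D, D0, n):
    return 1.0 / (1.0 + (D / D0) ** (2 * n))  # smooth roll-off controlled by order n

def glpf(D, D0):
    return np.exp(-(D ** 2) / (2 * D0 ** 2))  # Gaussian roll-off, no ringing

D = distance_grid(64, 64)
D0 = 16.0
# Sample (48, 32) lies exactly at distance D0 = 16 from the center (32, 32).
```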


Q3. Explain the sharpening filters in frequency domain (10 marks)

Sharpening in the Frequency Domain

• Edges and fine detail in images are associated with high-frequency components.
• Highpass filters pass only the high frequencies and drop the low ones.
• Highpass filters are precisely the reverse of lowpass filters, so:
  H_hp(u,v) = 1 − H_lp(u,v)

1. Ideal High Pass Filters

The ideal high pass filter is given as:

H(u,v) = { 0  if D(u,v) ≤ D0
           1  if D(u,v) > D0

where D(u,v) is the distance from point (u,v) to the center of the frequency rectangle, and D0 is the cutoff frequency.

2. Butterworth High Pass Filters


The Butterworth high pass filter is given as:

H(u,v) = 1 / (1 + [D0/D(u,v)]^(2n))

where n is the order and D0 is the cutoff distance.

3. Gaussian High Pass Filters

The Gaussian high pass filter is given as:

H(u,v) = 1 − e^(−D²(u,v) / 2D0²)

where D0 is the cutoff distance.
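Using H_hp = 1 − H_lp, the highpass counterparts follow directly. A minimal sketch (the BHPF is written in an algebraically equivalent form that avoids dividing by zero at the DC sample, D = 0):

```python
import numpy as np

def distance_grid(M, N):
    """D(u, v): distance from each frequency sample to the center (M/2, N/2)."""
    u, v = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    return np.sqrt((u - M / 2) ** 2 + (v - N / 2) ** 2)

def ihpf(D, D0):
    return (D > D0).astype(float)                   # 1 - ILPF

def bhpf(D, D0, n):
    # Equal to 1 / (1 + (D0 / D)^(2n)) but well defined at D = 0.
    return D ** (2 * n) / (D ** (2 * n) + D0 ** (2 * n))

def ghpf(D, D0):
    return 1.0 - np.exp(-(D ** 2) / (2 * D0 ** 2))  # 1 - GLPF

D = distance_grid(64, 64)
D0 = 16.0
```

All three functions are zero at the center of the frequency rectangle, so the DC (average intensity) component is blocked.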

Q4. With a block diagram and equations explain homomorphic filtering (10 marks)

Homomorphic filtering

Images are sometimes acquired under poor illumination. Under this condition, the same uniform region will appear brighter in some areas and darker in others. This undesired situation leads to several severe problems in computer-vision-based systems: pixels may be misclassified, leading to wrong segmentation results and therefore to inaccurate evaluation or analysis. It is therefore crucial to process this type of image before it is fed into the system.

One popular method for enhancing or restoring images degraded by uneven illumination is homomorphic filtering.

An image is characterized by two primary components:

1) The first component is the amount of source illumination incident on the scene being viewed, i(x, y).
2) The second component is the reflectance of the objects in the scene, r(x, y).

The image f(x, y) is then defined as

f(x, y) = i(x, y) r(x, y)

In this model, i(x, y) varies more slowly than r(x, y); therefore i(x, y) is considered to contribute more low-frequency components than r(x, y). Using this fact, the homomorphic filtering technique aims to reduce the significance of i(x, y) by reducing the low-frequency components of the image. This can be achieved by executing the filtering process in the frequency domain.

In general, homomorphic filtering can be implemented using five stages, as stated as follows:

STAGE 1: Take the natural logarithm to decouple the i(x, y) and r(x, y) components:

z(x, y) = ln i(x, y) + ln r(x, y)

STAGE 2: Use the Fourier transform to move the image into the frequency domain:

𝔍{z(x, y)} = 𝔍{ln i(x, y)} + 𝔍{ln r(x, y)}

or

Z(u, v) = Fi(u, v) + Fr(u, v)


where 𝐹𝑖 (𝑢, 𝑣) and 𝐹𝑟 (𝑢, 𝑣) are the Fourier transforms of 𝑙𝑛 𝑖 (𝑥, 𝑦) and 𝑙𝑛 𝑟(𝑥, 𝑦) respectively.

STAGE 3: Highpass-filter Z(u, v) by means of a filter function H(u, v) in the frequency domain to get a filtered version S(u, v):

𝑆(𝑢, 𝑣) = 𝑍(𝑢, 𝑣)𝐻 (𝑢, 𝑣) = 𝐹𝑖 (𝑢, 𝑣)𝐻(𝑢, 𝑣) + 𝐹𝑟 (𝑢, 𝑣)𝐻(𝑢, 𝑣)

STAGE 4: Take an inverse Fourier transform to get the filtered image in the spatial domain:

𝑠(𝑥, 𝑦) = 𝔍−1 {𝑆 (𝑢, 𝑣)} = 𝔍−1 {𝑍(𝑢, 𝑣)𝐻(𝑢, 𝑣)} = 𝔍−1 {𝐹𝑖 (𝑢, 𝑣)𝐻 (𝑢, 𝑣) + 𝐹𝑟 (𝑢, 𝑣)𝐻 (𝑢, 𝑣)}

STAGE 5: The filtered, enhanced image g(x, y) is obtained by exponentiation, which inverts the logarithm of stage 1:

g(x, y) = exp(s(x, y))

In the homomorphic filter, removing the low frequencies (highpass filtering) removes the effects of illumination, since illumination varies slowly and occupies the low frequencies.

For example, a commonly used homomorphic filter function is

H(u, v) = (γH − γL)[1 − e^(−c · D²(u,v) / D0²)] + γL

• c → controls slope of curve between γH and γL.


• γH > 1 and γL < 1 ⇒ attenuate low frequencies (illumination), amplify high frequencies (reflectance).
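Putting the five stages together, a minimal NumPy sketch of homomorphic filtering (the parameter values D0, γL, γH, and c are illustrative defaults, not prescribed by the notes):

```python
import numpy as np

def homomorphic_filter(img, D0=30.0, gamma_L=0.5, gamma_H=2.0, c=1.0):
    """Five-stage homomorphic filter: ln -> DFT -> H(u,v) -> IDFT -> exp."""
    z = np.log(img + 1e-6)                           # stage 1 (offset avoids ln 0)
    Z = np.fft.fftshift(np.fft.fft2(z))              # stage 2: centered DFT
    M, N = img.shape
    u, v = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    D2 = (u - M / 2) ** 2 + (v - N / 2) ** 2
    H = (gamma_H - gamma_L) * (1 - np.exp(-c * D2 / D0 ** 2)) + gamma_L
    S = Z * H                                        # stage 3: apply the filter
    s = np.real(np.fft.ifft2(np.fft.ifftshift(S)))   # stage 4: inverse DFT
    return np.exp(s)                                 # stage 5: exponentiate
```

For a constant image, only the DC term survives the DFT, and it is scaled by γL; the result is the input raised to the power γL.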

Q5. Explain the following color models. (10 marks)

(i) RGB color model (ii) HSI color model (iii) CMY color model

Primary colors and secondary colors

• According to the CIE (Commission Internationale de l'Eclairage) standard, the primary colors of light are:
  – Red: 700 nm (wavelength)
  – Green: 546.1 nm
  – Blue: 435.8 nm
• Primary colors can be added to produce secondary colors, but primary colors alone cannot produce all colors.
• For pigments, a primary is defined as one that absorbs a primary color of light and reflects the other two.
• Colors are seen as a combination of the primary colors (red, green, and blue).
• Primary colors of light can be added to produce the secondary colors of light:
  R + B = MAGENTA (M)
  G + B = CYAN (C)
  R + G = YELLOW (Y)
Ink color | Absorbs        | Reflects       | Appears
C         | Red            | Green and blue | Cyan
M         | Green          | Red and blue   | Magenta
Y         | Blue           | Red and green  | Yellow
M + Y     | Blue and green | Red            | Red
C + Y     | Blue and red   | Green          | Green
C + M     | Red and green  | Blue           | Blue
Applications: used in monitors, projectors, digital cameras, scanners etc.

1. RGB Color model


• The RGB model is an additive system.
• (R, G, B): all values of R, G, and B are between 0 and 1.
• In digital representation, a fixed number of bits is used for each color component. The total number of bits is called the color depth, or pixel depth. For example, 24-bit RGB color (r, g, b) uses 8 bits for each component.
• The RGB model is also used for recording colors in digital cameras, including still-image and video cameras, and in scanners.

Displaying Colors in the RGB model

• Safe RGB colors: a 24-bit color image can contain more than 16 million colors, so only a subset of colors that can be reproduced faithfully is used. This subset is called the safe colors.
• Only 216 colors are used as safe colors.
• For the 216 safe colors, each RGB component can only be 0, 51, 102, 153, 204, or 255.
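The full safe palette can be enumerated directly, which also confirms the count (a quick sketch):

```python
# Each channel of a web-safe color takes one of these six values,
# giving 6 * 6 * 6 = 216 combinations in total.
safe_values = [0, 51, 102, 153, 204, 255]
safe_colors = [(r, g, b) for r in safe_values
                         for g in safe_values
                         for b in safe_values]
```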


           R   G   B
Black      00  00  00
White      FF  FF  FF
Purest red FF  00  00
2. CMY and CMYK model

• Used in electrostatic/ink-jet plotters that deposit pigment on paper.
• Cyan, magenta, and yellow are the complements of red, green, and blue.
• Subtractive primaries: colors are specified by what is subtracted from white light, rather than by what is added to blackness.
• The model uses a Cartesian coordinate system whose subset is the unit cube, with white at the origin and black at (1, 1, 1).

RGB to CMY

C = 1 − R

M = 1 − G

Y = 1 − B

CMY to RGB

[R]   [1]   [C]
[G] = [1] − [M]
[B]   [1]   [Y]
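Both directions are the same complement operation, so the conversion is its own inverse. A minimal sketch with channels normalized to [0, 1]:

```python
def rgb_to_cmy(r, g, b):
    """CMY = (1, 1, 1) - RGB, with channels in [0, 1]."""
    return 1 - r, 1 - g, 1 - b

def cmy_to_rgb(c, m, y):
    """RGB = (1, 1, 1) - CMY; applying the complement twice is the identity."""
    return 1 - c, 1 - m, 1 - y
```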

3. HSI Color model


• The HSI (hue, saturation, intensity) color model decouples the intensity component from the color-carrying information (hue and saturation) in a color image.
• The HSI model is an ideal tool for developing image-processing algorithms based on color descriptions that are natural and intuitive to humans.

The hue can be extracted from the RGB color cube:

• Consider a plane defined by the three points cyan, black, and white.
• All points contained in this plane must have the same hue (cyan), as black and white cannot contribute hue information to a color.
• The HSI color space is composed of a vertical intensity axis and the locus of color points that lie on planes perpendicular to that axis.

Viewing one of these planes as a hexagon containing an arbitrary color point:

• The hue is determined by an angle from a reference point, usually red.
• The saturation is the distance from the origin (the intensity axis) to the point.
• The intensity is determined by how far up the vertical intensity axis the hexagonal plane sits.


-------------------------------------------------------------------------------------------------------------------------------

Q6. Explain the procedure for converting colors from RGB to HSI and vice versa. (10 marks)

Converting from RGB to HSI

Given a color as R, G, and B, its H, S, and I values are calculated as follows:

H = { θ          if B ≤ G
      360° − θ   if B > G

θ = cos⁻¹ { (1/2)[(R − G) + (R − B)] / [(R − G)² + (R − B)(G − B)]^(1/2) }

S = 1 − [3 / (R + G + B)] min(R, G, B)

I = (1/3)(R + G + B)
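The formulas above translate directly to code. A sketch using a small epsilon to guard the divisions (the epsilon value is an implementation choice, not part of the formulas):

```python
import math

def rgb_to_hsi(r, g, b):
    """RGB (each in [0, 1]) to HSI; H in degrees, S and I in [0, 1]."""
    eps = 1e-12                                  # avoids division by zero
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    # Clamp guards tiny floating-point excursions outside [-1, 1].
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    h = theta if b <= g else 360.0 - theta
    s = 1 - 3 * min(r, g, b) / (r + g + b + eps)
    i = (r + g + b) / 3
    return h, s, i
```

Pure red maps to H ≈ 0°, S = 1, I = 1/3; pure green maps to H = 120°.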

Converting from HSI to RGB

Given a color as H, S, and I, its R, G, and B values are calculated as follows:

RG sector (0° ≤ H < 120°)

B = I(1 − S)

R = I[1 + S cos H / cos(60° − H)]

G = 3I − (R + B)

GB sector (120° ≤ H < 240°)

H = H − 120°

R = I(1 − S)

G = I[1 + S cos H / cos(60° − H)]

B = 3I − (R + G)

BR sector (240° ≤ H < 360°)


H = H − 240°

G = I(1 − S)

B = I[1 + S cos H / cos(60° − H)]

R = 3I − (B + G)
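A sketch of the sector-based inverse (H in degrees throughout); feeding it the HSI of pure red, (0°, 1, 1/3), should recover (1, 0, 0):

```python
import math

def hsi_to_rgb(h, s, i):
    """HSI (H in degrees, S and I in [0, 1]) back to RGB via the three sectors."""
    if h < 120:                                  # RG sector
        b = i * (1 - s)
        r = i * (1 + s * math.cos(math.radians(h)) / math.cos(math.radians(60 - h)))
        g = 3 * i - (r + b)
    elif h < 240:                                # GB sector
        h -= 120
        r = i * (1 - s)
        g = i * (1 + s * math.cos(math.radians(h)) / math.cos(math.radians(60 - h)))
        b = 3 * i - (r + g)
    else:                                        # BR sector
        h -= 240
        g = i * (1 - s)
        b = i * (1 + s * math.cos(math.radians(h)) / math.cos(math.radians(60 - h)))
        r = 3 * i - (b + g)
    return r, g, b
```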

Q7. Explain gray scale to color transformation with diagrams. (10 marks)

Explain pseudo color image processing. (10 marks)

Pseudo color image processing

• Humans can distinguish only a few shades of gray but thousands of colors.
• Pseudocolor image processing assigns false colors to a grayscale image.
• Pseudocolors are used to improve human visualization of grayscale images.

There are three basic methods of pseudo color image processing

1. Intensity level slicing


2. Intensity to color transformation
3. Frequency slicing

Intensity level slicing

• Intensity Slicing One of the simplest methods for pseudocolor image processing
• Grayscale image can be viewed as 3D function (x,y, and intensity)
• Suppose we define P planes perpendicular to intensity axis.
• Each plane i is associated with a color 𝐶𝑖
• Pixels with intensities lying along a particular plane i is assigned the color 𝐶𝑖 corresponding to the plane


Case 1

s = T(r)

where s is the output image with pseudocolors, r is the input grayscale image, and T is the transformation.

Case 2

Color assignment is done at multiple intervals

s = { C1  if r ≤ I1
      C2  if I1 < r ≤ I2
      C3  if I2 < r ≤ I3
      C4  if r > I3
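Case 2 can be sketched with np.digitize; the thresholds I1, I2, I3 and the four colors below are arbitrary illustrative choices:

```python
import numpy as np

thresholds = [64, 128, 192]                      # I1, I2, I3 for 8-bit input
colors = np.array([[0, 0, 255],                  # C1: r <= I1
                   [0, 255, 0],                  # C2: I1 < r <= I2
                   [255, 255, 0],                # C3: I2 < r <= I3
                   [255, 0, 0]])                 # C4: r > I3

def intensity_slice(gray):
    """Map each gray pixel to the color of the slice it falls in."""
    # right=True matches the "r <= I_k" boundaries of the piecewise rule.
    idx = np.digitize(gray, thresholds, right=True)
    return colors[idx]
```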


Gray level to color transformation

• Intensity levels are converted into different colors artificially by performing three independent transformations on the gray level of each input pixel.
• The transformations are smooth nonlinear (typically sinusoidal) functions, which give more flexibility than the intensity-slicing method.
• The three results then serve as the red, green, and blue components of a color image.
• The three results are fed separately into the red, green, and blue channels of a color monitor.
• This produces an image whose color content is modulated by the nature of the transformations.
• If all three transformations have the same phase and frequency, the result is a monochrome image.
• If a small phase change is introduced between the three transformations, each gray-level value generates a unique R, G, B combination.
• Little change is seen in pixels whose gray levels correspond to the peaks of the sinusoids, because the amplitudes of the three transformations differ little there.
• For pixels with gray levels corresponding to steep sections of the sinusoids, a much stronger color is assigned because of the significant difference in the amplitudes of the three transformations.
• Thus, by using different frequencies and phase changes, a portion of the image can be highlighted in the best possible way.
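The behaviour described above (equal phases give a monochrome result, small phase offsets give distinct colors) can be sketched with three sinusoidal transformations; the frequency and phase values are illustrative:

```python
import numpy as np

def gray_to_color(gray, freq=2 * np.pi / 255, phases=(0.0, 2.0, 4.0)):
    """Apply three independent sinusoidal transformations to a gray image.

    Each transformation feeds one channel of the output RGB image.
    """
    g = np.asarray(gray, dtype=float)
    channels = [0.5 * (1 + np.sin(freq * g + p)) for p in phases]
    return np.stack(channels, axis=-1)           # shape (..., 3), values in [0, 1]

gray = np.arange(256)
mono = gray_to_color(gray, phases=(0.0, 0.0, 0.0))  # identical transformations
colored = gray_to_color(gray)                       # phase-shifted transformations
```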
