21EC722 Module 4 DIP
Q1. Explain the steps involved in frequency domain Filtering. (10 marks)
1. Multiply the input image by (−1)^(x+y) to center the transform at u = M/2 and v = N/2 (if M and N are even numbers, the shifted coordinates are integers)
2. Compute F(u,v), the DFT of the image from (1)
3. Multiply F(u,v) by a filter function H(u,v)
4. Compute the inverse DFT of the result in (3)
5. Obtain the real part of the result in (4)
6. Multiply the result in (5) by (−1)^(x+y) to undo the centering applied to the input image in step (1).
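The six steps above can be sketched in NumPy; the function name and the assumption that f and H are equal-sized 2-D float arrays are illustrative choices, not from the notes.

```python
import numpy as np

def frequency_domain_filter(f, H):
    # f: 2-D float image; H: filter function of the same shape,
    # defined for a centered spectrum.
    M, N = f.shape
    x = np.arange(M).reshape(-1, 1)
    y = np.arange(N).reshape(1, -1)
    # Step 1: multiply by (-1)^(x+y) to center the transform
    fc = f * ((-1.0) ** (x + y))
    # Step 2: DFT of the centered image
    F = np.fft.fft2(fc)
    # Step 3: multiply by the filter function H(u, v)
    G = F * H
    # Steps 4-5: inverse DFT, keep the real part
    g = np.real(np.fft.ifft2(G))
    # Step 6: undo the centering
    return g * ((-1.0) ** (x + y))
```

With H ≡ 1 the pipeline is the identity, which is a convenient sanity check.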
A low-pass filter smooths or blurs an image. In the frequency domain this is achieved by attenuating the high-frequency components, which correspond to sharp intensity transitions. The output of a low-pass filter therefore contains only small intensity transitions within any group of neighbouring pixels.
The ideal low-pass filter (ILPF) gives an ideal response for image filtering. It passes all frequencies within a circle of radius D0, the cutoff frequency of the ILPF, and attenuates all frequencies lying outside this circle. It is specified by

H(u, v) = { 1, D(u, v) ≤ D0
            0, D(u, v) > D0 }

where D(u, v) = √(u² + v²) is the distance from the origin of the centered frequency rectangle.
Butterworth low-pass filter (BLPF): the Butterworth filter is very popular because its behaviour depends on its order n; as the order increases, its response approaches that of the ideal filter. The BLPF transfer function is

H(u, v) = 1 / (1 + [D(u, v) / D0]^(2n))
The Gaussian low-pass filter (GLPF) is a general-purpose filter for digital images. Its transfer function is

H(u, v) = e^(−D²(u, v) / 2σ²)

and, with the cutoff set by σ = D0,

H(u, v) = e^(−D²(u, v) / 2D0²)
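The three low-pass transfer functions can be generated as frequency-domain masks; a minimal NumPy sketch, in which the helper names, mask sizes, and cutoff values are illustrative:

```python
import numpy as np

def distance_grid(M, N):
    # D(u, v): distance from the center of the centered frequency rectangle
    u = np.arange(M).reshape(-1, 1) - M / 2
    v = np.arange(N).reshape(1, -1) - N / 2
    return np.sqrt(u**2 + v**2)

def ilpf(M, N, D0):
    # Ideal LPF: 1 inside the circle of radius D0, 0 outside
    return (distance_grid(M, N) <= D0).astype(float)

def blpf(M, N, D0, n):
    # Butterworth LPF of order n
    D = distance_grid(M, N)
    return 1.0 / (1.0 + (D / D0) ** (2 * n))

def glpf(M, N, D0):
    # Gaussian LPF with sigma = D0
    D = distance_grid(M, N)
    return np.exp(-(D**2) / (2 * D0**2))
```

All three masks equal 1 at the center of the frequency rectangle and fall off with distance, as the equations require.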
• Edges and fine detail in images are associated with high-frequency components
• High-pass filters pass only the high frequencies and attenuate the low ones
• High-pass filters are precisely the reverse of low-pass filters, so:

H_hp(u, v) = 1 − H_lp(u, v)
Ideal high-pass filter (IHPF):

H(u, v) = { 0, D(u, v) ≤ D0
            1, D(u, v) > D0 }

Butterworth high-pass filter (BHPF):

H(u, v) = 1 / (1 + [D0 / D(u, v)]^(2n))

Gaussian high-pass filter (GHPF):

H(u, v) = 1 − e^(−D²(u, v) / 2D0²)
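As a quick illustration of the H_hp = 1 − H_lp relation, a Gaussian high-pass mask can be built directly from the corresponding low-pass expression; a small sketch whose helper names and sizes are illustrative:

```python
import numpy as np

def distance_grid(M, N):
    # Distance from the center of the centered frequency rectangle
    u = np.arange(M).reshape(-1, 1) - M / 2
    v = np.arange(N).reshape(1, -1) - N / 2
    return np.sqrt(u**2 + v**2)

def ghpf(M, N, D0):
    # Gaussian high-pass as 1 minus the Gaussian low-pass
    D = distance_grid(M, N)
    return 1.0 - np.exp(-(D**2) / (2 * D0**2))
```

The mask is 0 at the center (DC term suppressed) and approaches 1 far from it, i.e. exactly the reverse of the low-pass behaviour.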
Q4. With a block diagram and equations explain homomorphic filtering (10 marks)
Homomorphic filtering
Images are sometimes acquired under poor illumination. Under this condition the same uniform region appears brighter in some areas and darker in others. This undesired situation leads to severe problems in computer-vision-based systems: pixels may be misclassified, producing wrong segmentation results and therefore inaccurate evaluation or analysis. It is therefore crucial to process such images before they are fed into the system.
One popular method for enhancing or restoring images degraded by uneven illumination is homomorphic filtering.
In the illumination-reflectance model, an image f(x, y) is the product of two components: f(x, y) = i(x, y) r(x, y).
1) The first component, i(x, y), is the amount of source illumination incident on the scene being viewed.
2) The second component, r(x, y), is the reflectance of the objects in the scene.
In this model, i(x, y) changes more slowly than r(x, y), so i(x, y) contributes mainly low-frequency components while r(x, y) contributes the high frequencies. Using this fact, homomorphic filtering aims to reduce the significance of i(x, y) by reducing the low-frequency components of the image. This can be achieved by executing the filtering process in the frequency domain.
In general, homomorphic filtering can be implemented using five stages, as stated as follows:
STAGE 1: Take the natural logarithm to decouple the i(x, y) and r(x, y) components:

z(x, y) = ln f(x, y) = ln i(x, y) + ln r(x, y)

STAGE 2: Use the Fourier transform to transform the image into the frequency domain:

Z(u, v) = 𝔍{ln i(x, y)} + 𝔍{ln r(x, y)}

or

Z(u, v) = F_i(u, v) + F_r(u, v)

where F_i(u, v) and F_r(u, v) are the Fourier transforms of ln i(x, y) and ln r(x, y) respectively.
STAGE 3: High-pass filter Z(u, v) by means of a filter function H(u, v) in the frequency domain:

S(u, v) = H(u, v) Z(u, v)

STAGE 4: Take the inverse Fourier transform to get the filtered image in the spatial domain:

s(x, y) = 𝔍⁻¹{S(u, v)} = 𝔍⁻¹{Z(u, v) H(u, v)} = 𝔍⁻¹{F_i(u, v) H(u, v) + F_r(u, v) H(u, v)}

STAGE 5: Exponentiate to obtain the filtered, enhanced image:

g(x, y) = e^(s(x, y))
In the homomorphic filter, removing the low frequencies (high-pass filtering) removes the effects of illumination, which is concentrated at low frequencies. For example, the filter used in the block diagram above is

H(u, v) = (γH − γL)[1 − e^(−c D²(u, v) / D0²)] + γL
• γH > 1 and γL < 1 ⇒ attenuate low frequencies (luminance), amplify high frequencies
(reflectance).
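The five stages and the filter above can be combined into one routine; a minimal NumPy sketch, where the default D0, γL, γH, and c values are illustrative choices, not values from the notes (log1p/expm1 are used instead of plain log/exp to tolerate zero-valued pixels):

```python
import numpy as np

def homomorphic_filter(f, D0=30.0, gamma_L=0.5, gamma_H=2.0, c=1.0):
    # Stage 1: log to decouple illumination and reflectance
    z = np.log1p(f.astype(float))          # log(1 + f) avoids log(0)
    # Stage 2: Fourier transform, spectrum centered with fftshift
    Z = np.fft.fftshift(np.fft.fft2(z))
    # H(u,v) = (gH - gL)[1 - exp(-c D^2 / D0^2)] + gL
    M, N = f.shape
    u = np.arange(M).reshape(-1, 1) - M / 2
    v = np.arange(N).reshape(1, -1) - N / 2
    D2 = u**2 + v**2
    H = (gamma_H - gamma_L) * (1 - np.exp(-c * D2 / D0**2)) + gamma_L
    # Stages 3-4: filter, then inverse transform back to spatial domain
    s = np.real(np.fft.ifft2(np.fft.ifftshift(Z * H)))
    # Stage 5: exponentiate to undo the logarithm
    return np.expm1(s)
```

Setting γL = γH = 1 makes H ≡ 1, so the routine reduces to the identity, which is an easy correctness check.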
Q5. Explain: (i) RGB color model (ii) HSI color model (iii) CMY color model
• Safe RGB colors: a 24-bit color image can contain more than 16 million colors, so only a subset that can be reproduced faithfully is used. This subset is called the safe colors.
• Only 216 colors are used as safe colors.
• For the 216 safe colors, each RGB component can take only the six values 0, 51, 102, 153, 204, or 255 (6³ = 216).

             R    G    B
Black        00   00   00
White        FF   FF   FF
Purest red   FF   00   00
2. CMY and CMYK model
• Used in electrostatic/ink-jet plotters that deposit pigment on paper
• Cyan, magenta, and yellow are complements of red, green, and blue
• Subtractive primaries: colors are specified by what is subtracted from white light, rather than by what
is added to blackness
• Cartesian coordinate system
• Subset is unit cube
• white is at origin, black at (1, 1, 1):
RGB to CMY
C = 1 − R
M = 1 − G
Y = 1 − B

or in matrix form:

[C]   [1]   [R]
[M] = [1] − [G]
[Y]   [1]   [B]
CMY to RGB
[R]   [1]   [C]
[G] = [1] − [M]
[B]   [1]   [Y]
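Since the two conversions are simple complements, they can be sketched in a few lines (the function names are illustrative; values are assumed to be normalised to [0, 1]):

```python
import numpy as np

def rgb_to_cmy(rgb):
    # [C, M, Y] = [1, 1, 1] - [R, G, B]
    return 1.0 - np.asarray(rgb, dtype=float)

def cmy_to_rgb(cmy):
    # [R, G, B] = [1, 1, 1] - [C, M, Y]
    return 1.0 - np.asarray(cmy, dtype=float)
```

Pure red (1, 0, 0) maps to (0, 1, 1), i.e. no cyan with full magenta and yellow, and the conversion round-trips exactly.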
• The HSI (hue, saturation, intensity) color model decouples the intensity component from the color-carrying information (hue and saturation) in a color image.
• The HSI model is an ideal tool for developing image processing algorithms based on color descriptions
that are natural and intuitive to humans.
The intensity is determined by how far up the vertical intensity axis this hexagonal plane sits (not apparent from the diagram).
-------------------------------------------------------------------------------------------------------------------------------
Q6. Explain the procedure for converting colors from RGB to HSI and vice versa. (10 marks)
With R, G, B normalised to [0, 1], the HSI components are obtained as

H = { θ,          if B ≤ G
      360° − θ,   if B > G }

where

θ = cos⁻¹ { (1/2)[(R − G) + (R − B)] / [(R − G)² + (R − B)(G − B)]^(1/2) }

S = 1 − [3 / (R + G + B)] · min(R, G, B)

I = (1/3)(R + G + B)
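A direct per-pixel transcription of these formulas; the epsilon guard and the clamping of the acos argument are additions to avoid division by zero and rounding errors:

```python
import math

def rgb_to_hsi(R, G, B):
    # R, G, B in [0, 1]; returns H in degrees, S and I in [0, 1]
    eps = 1e-12
    num = 0.5 * ((R - G) + (R - B))
    den = math.sqrt((R - G) ** 2 + (R - B) * (G - B)) + eps
    # Clamp to acos's domain [-1, 1] against floating-point error
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    H = theta if B <= G else 360.0 - theta
    I = (R + G + B) / 3.0
    # S = 1 - 3 min(R,G,B)/(R+G+B), written via I to reuse the sum
    S = 0.0 if I == 0 else 1.0 - min(R, G, B) / (3.0 * I)
    return H, S, I
```

For pure red (1, 0, 0) this gives H ≈ 0°, S = 1, I = 1/3, as expected.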
The HSI-to-RGB conversion depends on which sector of the hue circle H lies in.

RG sector (0° ≤ H < 120°):

B = I(1 − S)
R = I[1 + S cos H / cos(60° − H)]
G = 3I − (R + B)

GB sector (120° ≤ H < 240°): first set H = H − 120°, then

R = I(1 − S)
G = I[1 + S cos H / cos(60° − H)]
B = 3I − (R + G)

BR sector (240° ≤ H ≤ 360°): first set H = H − 240°, then

G = I(1 − S)
B = I[1 + S cos H / cos(60° − H)]
R = 3I − (G + B)
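The three sector cases can be folded into one routine, since each sector applies the same three formulas to a permuted channel order; the sector() helper and its name are illustrative:

```python
import math

def hsi_to_rgb(H, S, I):
    # H in degrees, S and I in [0, 1]
    H = H % 360.0

    def sector(h_deg):
        # Returns the (I(1-S), I[1+...], 3I-rest) triple for one sector
        h = math.radians(h_deg)
        x = I * (1.0 - S)
        y = I * (1.0 + S * math.cos(h) / math.cos(math.radians(60.0) - h))
        z = 3.0 * I - (x + y)
        return x, y, z

    if H < 120.0:                 # RG sector
        B, R, G = sector(H)
    elif H < 240.0:               # GB sector
        R, G, B = sector(H - 120.0)
    else:                         # BR sector
        G, B, R = sector(H - 240.0)
    return R, G, B
```

Feeding back H = 0°, S = 1, I = 1/3 (pure red in HSI) recovers (1, 0, 0), so the two conversions round-trip.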
Q7. Explain gray scale to color transformation with diagrams. (10 marks)
• Intensity slicing is one of the simplest methods of pseudocolor image processing.
• Grayscale image can be viewed as 3D function (x,y, and intensity)
• Suppose we define P planes perpendicular to intensity axis.
• Each plane i is associated with a color 𝐶𝑖
• Pixels with intensities lying on a particular plane i are assigned the color Ci corresponding to that plane
Case 1:

s = T(r), where T is the transformation

Case 2:

s = { C1, if r ≤ I1
      C2, if I1 < r ≤ I2
      C3, if I2 < r ≤ I3
      C4, if r > I3 }
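Case 2 (intensity slicing) maps each gray level to the color of its slice; a sketch using np.digitize, in which the thresholds I1..I3 and the four-color palette are illustrative choices:

```python
import numpy as np

def intensity_slice(img, thresholds, colours):
    # img: 2-D uint8 grayscale; thresholds: ascending levels [I1, I2, I3];
    # colours: len(thresholds)+1 RGB triples, one per slice C1..C4.
    # right=True reproduces the "r <= I1" / "I1 < r <= I2" boundaries.
    idx = np.digitize(img, thresholds, right=True)
    palette = np.asarray(colours, dtype=np.uint8)
    return palette[idx]            # (M, N, 3) pseudocolour image
```

Each pixel's slice index selects a row of the palette, so the output is a three-channel image with one flat color per intensity band.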
• Intensity levels are converted into different colors artificially by performing three independent transformations on the gray level of each input pixel, as shown in the figure.
• The transformations are smooth nonlinear functions which give more flexibility than intensity slicing
method.
• The three results can then serve as the red, green, and blue components of a color image.
• The three results are then fed separately into the red, green, and blue channels of a color TV monitor.
• This produces an image whose color content is modulated by the nature of the transformations.
• If all three transformations have the same phase and frequency, the result is a monochrome image.
• If a small phase shift is introduced between the three transformations, however, the R, G, and B outputs form a unique color combination for each gray-level value.
• Little change is expected for pixels whose gray levels correspond to the peaks of the sinusoids.
• For pixels with gray levels on the steep sections of the sinusoids, a much stronger color is assigned because of the significant difference in the amplitudes of the three transformations.
• Thus, by using different frequencies and phase changes, we can highlight a chosen portion of the image in the best possible way.
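The three phase-shifted sinusoidal transformations can be sketched as follows; the frequency and phase values are illustrative, not from the notes:

```python
import numpy as np

def pseudocolour(img, freq=0.02, phases=(0.0, 2 * np.pi / 3, 4 * np.pi / 3)):
    # Three sinusoidal transformations of the gray level r, one per channel;
    # 0.5*(1 + sin(...)) rescales each sinusoid into [0, 1].
    r = img.astype(float)
    channels = [0.5 * (1.0 + np.sin(freq * r + p)) for p in phases]
    return np.stack(channels, axis=-1)   # (M, N, 3) colour image in [0, 1]
```

With all three phases equal the channels coincide and the output is monochrome, matching the observation above; distinct phases give each gray level a unique color.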