
Image processing 3

Discuss briefly low pass filtering and high pass filtering in the context of image processing with
mathematical prelims

In image processing, low-pass filtering and high-pass filtering are fundamental operations used to
manipulate and analyze the frequency components of images. These filters are based on the Fourier
transform, which decomposes an image into its frequency components.

1. Mathematical Preliminaries

Let an image be represented as a 2D function:

f (x, y)

The 2D Discrete Fourier Transform (DFT) of the image is:

$$F(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, e^{-j 2\pi \left( \frac{ux}{M} + \frac{vy}{N} \right)}$$

Where:

- $(x, y)$: spatial-domain coordinates
- $(u, v)$: frequency-domain coordinates
- $M, N$: dimensions of the image

The inverse DFT reconstructs the image:

$$f(x, y) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u, v)\, e^{j 2\pi \left( \frac{ux}{M} + \frac{vy}{N} \right)}$$
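As a quick sanity check, the DFT/IDFT pair can be exercised with NumPy (a minimal sketch; `numpy.fft` follows the same convention as the formulas above, with the $1/MN$ factor on the inverse):

```python
import numpy as np

# A tiny 2x2 test image f(x, y)
f = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Forward 2D DFT: F(u, v) = sum_x sum_y f(x, y) exp(-j*2*pi*(ux/M + vy/N))
F = np.fft.fft2(f)

# The DC term F(0, 0) is the plain sum of all pixels
print(F[0, 0].real)  # 10.0

# Inverse DFT (with the 1/(MN) normalization) reconstructs the image
f_rec = np.fft.ifft2(F).real
print(np.allclose(f_rec, f))  # True
```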

2. Low-Pass Filtering

- Purpose: preserve low-frequency components (smooth areas) and suppress high-frequency components (edges, noise).
- Effect: blurs the image and reduces detail/noise.

Mathematical Formulation

Apply a low-pass filter $H_{LP}(u, v)$ in the frequency domain:

$$G(u, v) = H_{LP}(u, v) \cdot F(u, v)$$

Then apply inverse DFT to get the filtered image.

Example: Ideal Low-Pass Filter

$$H_{LP}(u, v) = \begin{cases} 1 & \text{if } D(u, v) \le D_0 \\ 0 & \text{if } D(u, v) > D_0 \end{cases}$$

Where $D(u, v) = \sqrt{(u - M/2)^2 + (v - N/2)^2}$ is the distance from the center of the (shifted) frequency plane and $D_0$ is the cutoff frequency.
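The ideal low-pass filter can be sketched in a few lines of NumPy (an illustration under the centered-spectrum convention; `D0` is measured in frequency samples):

```python
import numpy as np

def ideal_lowpass(img, D0):
    """Ideal low-pass filter: keep frequencies with D(u, v) <= D0, zero the rest."""
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))           # move DC to the center
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance from the center
    H = (D <= D0).astype(float)                     # the ideal H_LP(u, v)
    return np.fft.ifft2(np.fft.ifftshift(H * F)).real

# A sharp vertical step edge: low-pass filtering blurs it
img = np.zeros((32, 32))
img[:, 16:] = 1.0
out = ideal_lowpass(img, D0=4)
print(out.shape)  # (32, 32)
```

Since the DC term lies inside the cutoff, the image mean is preserved while the sharp edge is smoothed (with some ringing, which is characteristic of the ideal filter).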

3. High-Pass Filtering

- Purpose: preserve high-frequency components (edges, sharp transitions) and suppress low-frequency components (smooth regions).
- Effect: enhances edges and fine details.

Mathematical Formulation

Apply a high-pass filter $H_{HP}(u, v)$ in the frequency domain:

$$G(u, v) = H_{HP}(u, v) \cdot F(u, v)$$

Then apply inverse DFT to get the filtered image.

Example: Ideal High-Pass Filter

$$H_{HP}(u, v) = \begin{cases} 0 & \text{if } D(u, v) \le D_0 \\ 1 & \text{if } D(u, v) > D_0 \end{cases}$$
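Since the ideal high-pass response is simply the complement $1 - H_{LP}$, a sketch mirrors the low-pass case (illustrative NumPy, centered-spectrum convention):

```python
import numpy as np

def ideal_highpass(img, D0):
    """Ideal high-pass filter: zero out frequencies with D(u, v) <= D0."""
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    H = (D > D0).astype(float)      # complement of the ideal low-pass
    return np.fft.ifft2(np.fft.ifftshift(H * F)).real

# A perfectly flat image has only a DC component...
flat = np.ones((16, 16))
out = ideal_highpass(flat, D0=1)
print(np.allclose(out, 0.0))  # True: ...which the high-pass filter removes
```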

4. Spatial Domain Interpretation

In the spatial domain, these filters are applied using convolution:

- Low-pass filters: Gaussian blur, averaging filter

  $$f_{LP}(x, y) = f(x, y) * h_{LP}(x, y)$$

- High-pass filters: Laplacian, Sobel, or other edge-detection kernels

  $$f_{HP}(x, y) = f(x, y) * h_{HP}(x, y)$$
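The spatial-domain convolution can be demonstrated directly. Below is a small hand-rolled 2D convolution (illustrative only; practical code would use `scipy.ndimage` or `cv2.filter2D`) applied with a 3×3 averaging (low-pass) kernel:

```python
import numpy as np

def convolve2d_same(f, h):
    """Direct zero-padded 2D convolution returning an output the size of f."""
    m, n = h.shape
    pad = np.pad(f, ((m // 2, m // 2), (n // 2, n // 2)), mode="constant")
    hf = h[::-1, ::-1]              # true convolution flips the kernel
    out = np.zeros(f.shape, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            out[x, y] = np.sum(pad[x:x + m, y:y + n] * hf)
    return out

box = np.ones((3, 3)) / 9.0         # h_LP: averaging kernel
f = np.zeros((5, 5))
f[2, 2] = 9.0                       # a single bright pixel (impulse)
g = convolve2d_same(f, box)
print(g[2, 2])                      # 1.0: the impulse is spread over a 3x3 area
```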

Summary

| Filter Type | Frequency Domain | Effect on Image | Example Kernel |
|---|---|---|---|
| Low-pass | Retains low frequencies | Blurs image, smooths noise | Gaussian |
| High-pass | Retains high frequencies | Enhances edges/details | Laplacian |

These filters are critical in tasks like noise reduction, edge detection, and image enhancement.

Discuss briefly about different types of filters with their mathematical prelims and interpretation

In image processing, filters are essential tools used to manipulate images for various purposes such as
smoothing, sharpening, edge detection, noise removal, and feature extraction. These filters are applied
either in the spatial domain (using convolution) or in the frequency domain (using Fourier transforms).

🔍 Types of Filters (with Mathematical Preliminaries and Interpretation)

1. Linear Filters

Definition:

Filters where the output is a linear combination of the input pixels and their neighbors.

Mathematical Preliminaries:

In the spatial domain, filtering is performed using convolution:


$$g(x, y) = f(x, y) * h(x, y) = \sum_{i=-m}^{m} \sum_{j=-n}^{n} f(x - i, y - j) \cdot h(i, j)$$

Where:

- $f(x, y)$: input image
- $h(i, j)$: filter kernel (mask)
- $g(x, y)$: filtered image

Common Linear Filters:

| Filter | Purpose | Kernel Example (3×3) | Interpretation |
|---|---|---|---|
| Mean (Box) | Smoothing | $\frac{1}{9}\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}$ | Averages pixel values to reduce noise |
| Gaussian | Smoothing | Varies, e.g. $h(x, y) \propto \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$ | Smooths with more weight to the center pixel |
| Laplacian | Edge detection | $\begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}$ | Detects areas with rapid intensity change |
| Sobel | Edge detection | $G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}$ | Detects vertical edges |
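To see the Sobel row of the table in action, here is a sketch computing the $G_x$ response at a single pixel of an image containing a vertical edge (correlation at one location, for clarity):

```python
import numpy as np

Gx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)   # Sobel kernel from the table

# Left half dark, right half bright: a vertical edge
img = np.zeros((5, 5))
img[:, 3:] = 100.0

# Response at the interior pixel (2, 2), next to the edge
patch = img[1:4, 1:4]
response = np.sum(patch * Gx)
print(response)  # 400.0: Gx responds strongly to vertical edges
```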

2. Non-Linear Filters

Definition:

Filters where the output is a non-linear function of input pixels, often used to remove noise without
blurring edges.

Mathematical Preliminaries:

No convolution. The output is derived from a function applied over the neighborhood.

$$g(x, y) = \text{nonlinear\_function}\big(\{f(x + i, y + j)\}_{(i, j) \in \text{window}}\big)$$

Common Non-Linear Filters:

| Filter | Purpose | Operation | Interpretation |
|---|---|---|---|
| Median | Noise removal | Replace pixel with median of neighborhood | Removes salt-and-pepper noise effectively |
| Max/Min | Feature enhancement | Max/Min of neighborhood | Enhances bright/dark spots |
| Bilateral | Edge-preserving smoothing | Considers spatial + intensity distance | Smooths without losing edges |

3. Frequency Domain Filters

Definition:

Operate on the Fourier transform of the image by modifying its frequency components.

Mathematical Preliminaries:

Apply DFT:

F (u, v) = DFT{f (x, y)}

Multiply with frequency filter:

G(u, v) = H(u, v) ⋅ F (u, v)

Inverse DFT:

g(x, y) = IDFT{G(u, v)}

Common Frequency Filters:

| Filter Type | Description | Mathematical Form (Ideal) |
|---|---|---|
| Low-pass | Removes high frequencies (details) | $H(u, v) = 1$ if $D(u, v) \le D_0$, else $0$ |
| High-pass | Removes low frequencies (background) | $H(u, v) = 1$ if $D(u, v) > D_0$, else $0$ |
| Band-pass | Preserves a range of frequencies | $H(u, v) = 1$ if $D_1 \le D(u, v) \le D_2$, else $0$ |

4. Directional Filters

Definition:

Specialized edge detectors sensitive to edges in specific directions (e.g., vertical, horizontal, diagonal).

Example:

Prewitt operator $G_x$ (horizontal gradient, so it responds to vertical edges):

$$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}$$
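A quick numeric check of the directionality (the kernel above responds to vertical edges and ignores horizontal ones):

```python
import numpy as np

prewitt_gx = np.array([[-1, 0, 1],
                       [-1, 0, 1],
                       [-1, 0, 1]], dtype=float)

vertical_edge = np.array([[0, 0, 10],
                          [0, 0, 10],
                          [0, 0, 10]], dtype=float)
horizontal_edge = vertical_edge.T   # the same edge rotated 90 degrees

print(np.sum(prewitt_gx * vertical_edge))    # 30.0: strong response
print(np.sum(prewitt_gx * horizontal_edge))  # 0.0: no response
```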

📌 Summary Table
| Filter Type | Domain | Purpose | Example Filter |
|---|---|---|---|
| Linear | Spatial | Smoothing, edge detection | Gaussian, Sobel |
| Non-linear | Spatial | Noise removal, detail preservation | Median, Bilateral |
| Frequency-based | Frequency | Frequency component analysis | Low/High/Band-pass |
| Directional | Spatial | Direction-specific edge detection | Prewitt, Sobel |

These filters serve various applications in image enhancement, computer vision, feature extraction, and
pre-processing for machine learning. Each type has its own strengths and is chosen based on the goal of
the image processing task.

What is derivative filtering and homomorphic filtering? Explain briefly with mathematical prelims

✅ Derivative Filtering and Homomorphic Filtering in Image Processing


Both derivative filtering and homomorphic filtering are advanced techniques in image processing
used for feature enhancement, edge detection, and illumination correction. Below is a brief
explanation of each with mathematical preliminaries.

🔹 1. Derivative Filtering
🧠 Idea:
Derivative filtering is based on approximating the derivatives of an image to detect edges, intensity
changes, and features. It uses the first and second-order derivatives of image intensity.

📘 Mathematical Preliminaries:
Let f (x, y) be a grayscale image. The first-order derivatives (gradient) are:

$$\frac{\partial f}{\partial x} \approx f(x + 1, y) - f(x, y)$$

$$\frac{\partial f}{\partial y} \approx f(x, y + 1) - f(x, y)$$

The gradient magnitude is:

$$|\nabla f(x, y)| = \sqrt{\left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2}$$

The second-order derivative (Laplacian):

$$\nabla^2 f(x, y) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$$
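The finite-difference formulas above are easy to verify numerically; for a linear ramp $f(x, y) = 3x$ the forward differences recover the slope exactly:

```python
import numpy as np

# f(x, y) = 3x on a 5x5 grid
f = 3.0 * np.arange(5)[:, None] * np.ones((1, 5))

dfdx = f[1:, :] - f[:-1, :]    # f(x + 1, y) - f(x, y)
dfdy = f[:, 1:] - f[:, :-1]    # f(x, y + 1) - f(x, y)

# Gradient magnitude on the common interior grid
grad = np.sqrt(dfdx[:, :-1] ** 2 + dfdy[:-1, :] ** 2)
print(dfdx[0, 0], dfdy[0, 0])  # 3.0 0.0
print(grad[0, 0])              # 3.0
```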

🛠️ Examples of Derivative Filters:


| Filter Name | Type | Kernel |
|---|---|---|
| Sobel | First-order | $G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}$ |
| Prewitt | First-order | Similar to Sobel, used for edge detection |
| Laplacian | Second-order | $\begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}$ |

🎯 Interpretation:
Highlights edges and transitions in intensity.
Useful in detecting object boundaries, textures, etc.

🔹 2. Homomorphic Filtering
🧠 Idea:
Homomorphic filtering is used to separate illumination and reflectance components of an image. It
enhances image contrast and corrects non-uniform lighting.

📘 Mathematical Preliminaries:
An image can be modeled as:

f (x, y) = i(x, y) ⋅ r(x, y)

Where:

- $i(x, y)$: illumination (slowly varying, low frequency)
- $r(x, y)$: reflectance (detail, high frequency)

Taking the logarithm:

ln f (x, y) = ln i(x, y) + ln r(x, y)

This converts multiplication into addition, allowing separation in the frequency domain.

🧮 Steps:
1. Take log of image:

z(x, y) = ln f (x, y)

2. Apply Fourier Transform:

Z(u, v) = F{z(x, y)}

3. Apply a high-pass filter H(u, v) to suppress illumination:

S(u, v) = H(u, v) ⋅ Z(u, v)

4. Inverse Fourier Transform:

$$s(x, y) = \mathcal{F}^{-1}\{S(u, v)\}$$

5. Exponentiate the result to get enhanced image:

g(x, y) = exp(s(x, y))

🛠️ Common Filter Used:


A Butterworth high-pass filter:

$$H(u, v) = \frac{1}{1 + \left( \dfrac{D_0}{D(u, v)} \right)^{2n}}$$

Where:

- $D(u, v)$: distance from the center of the frequency plane
- $D_0$: cutoff frequency
- $n$: order of the filter

Note that the high-pass form uses the ratio $D_0 / D(u, v)$; the low-pass Butterworth filter inverts this ratio.

🎯 Interpretation:
Enhances contrast
Normalizes uneven illumination
Useful in medical imaging, remote sensing, face enhancement, etc.
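The five steps, with the Butterworth high-pass, can be sketched as follows (an illustrative pipeline; the cutoff `D0`, order `n`, and the small `eps` guards are assumed values, not taken from the text):

```python
import numpy as np

def homomorphic(img, D0=2.0, n=2, eps=1e-6):
    """Homomorphic filtering sketch: log -> DFT -> Butterworth HP -> IDFT -> exp."""
    z = np.log(img + eps)                           # step 1: i*r becomes ln i + ln r
    Z = np.fft.fftshift(np.fft.fft2(z))             # step 2: frequency domain
    M, N = img.shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    H = 1.0 / (1.0 + (D0 / (D + eps)) ** (2 * n))   # step 3: Butterworth high-pass
    s = np.fft.ifft2(np.fft.ifftshift(H * Z)).real  # step 4: back to spatial domain
    return np.exp(s)                                # step 5: undo the logarithm

# A flat scene under a strong left-to-right illumination gradient
img = np.outer(np.ones(16), np.linspace(1.0, 10.0, 16))
out = homomorphic(img)
print(out.shape)  # (16, 16)
```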

✅ Summary Table
| Feature | Derivative Filtering | Homomorphic Filtering |
|---|---|---|
| Purpose | Edge detection, feature enhancement | Illumination correction, contrast enhancement |
| Based on | First and second derivatives | Logarithmic and frequency filtering |
| Domain | Spatial (mostly), can be frequency too | Frequency domain |
| Key math tool | Gradient, Laplacian | Logarithm, Fourier transform |
| Example application | Detect object edges | Remove shadows or uneven lighting |

Let me know if you'd like a visual example or code snippet for either!

Explain all above filters with numerical example of each

Sure! Let's walk through derivative filtering and homomorphic filtering with simple numerical
examples using small grayscale images (3×3 or 5×5). We’ll demonstrate the effect of the filters step by
step.

🔹 1. Derivative Filtering
🔸 A. Sobel Filter (First-order Derivative)
Used for edge detection – detects horizontal or vertical changes in intensity.

➤ Input Image (3×3):

$$I = \begin{bmatrix} 100 & 100 & 100 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

➤ Sobel Kernel (horizontal gradient $G_x$):

$$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}$$

➤ Convolution at center (only):

We place the kernel centered at pixel $I_{2,2}$ (value = 0):

$$G_x * I = (-1)(100) + (0)(100) + (1)(100) + (-2)(0) + (0)(0) + (2)(0) + (-1)(0) + (0)(0) + (1)(0) = 0$$

➤ Interpretation:

- $G_x$ measures horizontal intensity change, so it responds to vertical edges; the edge in this image is horizontal (between the top and middle rows), so the response is 0.
- The transposed kernel $G_y$, which detects horizontal edges, would respond strongly here.
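Both gradient directions of this example can be checked with NumPy; note that $G_y = G_x^T$ picks up the horizontal edge that $G_x$ misses:

```python
import numpy as np

I = np.array([[100, 100, 100],
              [  0,   0,   0],
              [  0,   0,   0]], dtype=float)

Gx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
Gy = Gx.T                      # vertical gradient: responds to horizontal edges

print(np.sum(I * Gx))          # 0.0: no vertical edge at the center
print(np.sum(I * Gy))          # -400.0: strong response to the horizontal edge
```

The sign of the $G_y$ response just encodes the direction of the intensity change (bright above, dark below); its magnitude is what marks the edge.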

🔸 B. Laplacian Filter (Second-order Derivative)


➤ Laplacian Kernel:

$$L = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}$$

➤ Same input image:

$$I = \begin{bmatrix} 100 & 100 & 100 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

➤ Convolution at center pixel (value 0):

$$(0)(100) + (1)(100) + (0)(100) + (1)(0) + (-4)(0) + (1)(0) + (0)(0) + (1)(0) + (0)(0) = 100$$

➤ Interpretation:

- The Laplacian detects regions with sharp intensity transitions.
- Output 100 → an edge is detected at the center due to the change from 100 to 0.
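The same hand computation reproduced in NumPy (the kernel is symmetric, so flipping it for convolution changes nothing):

```python
import numpy as np

I = np.array([[100, 100, 100],
              [  0,   0,   0],
              [  0,   0,   0]], dtype=float)

L = np.array([[0,  1, 0],
              [1, -4, 1],
              [0,  1, 0]], dtype=float)

# Response at the center pixel
print(np.sum(I * L))  # 100.0, matching the hand computation above
```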

🔹 2. Homomorphic Filtering
🧠 Recall the model:

$$f(x, y) = i(x, y) \cdot r(x, y)$$

Let's use a 2×2 image with poor lighting:

$$f(x, y) = \begin{bmatrix} 30 & 50 \\ 200 & 220 \end{bmatrix}$$

We assume:

- Low values (30, 50) are due to low illumination.
- High values (200, 220) reflect high reflectance.

✅ Step 1: Take the logarithm

$$\ln f(x, y) = \begin{bmatrix} \ln(30) & \ln(50) \\ \ln(200) & \ln(220) \end{bmatrix} = \begin{bmatrix} 3.40 & 3.91 \\ 5.30 & 5.39 \end{bmatrix}$$

✅ Step 2: Apply a High-Pass Filter in the Frequency Domain

Let's assume a simple high-pass filtering operation that subtracts the mean from each pixel:

$$\text{Mean} = \frac{3.40 + 3.91 + 5.30 + 5.39}{4} = 4.50$$

$$\text{Filtered} = \begin{bmatrix} -1.10 & -0.59 \\ 0.80 & 0.89 \end{bmatrix}$$

✅ Step 3: Exponentiate to return to the image domain

$$\exp(\text{Filtered}) = \begin{bmatrix} \exp(-1.10) & \exp(-0.59) \\ \exp(0.80) & \exp(0.89) \end{bmatrix} \approx \begin{bmatrix} 0.33 & 0.55 \\ 2.23 & 2.44 \end{bmatrix}$$

🎯 Interpretation:
The dynamic range is compressed, and contrast is enhanced.
Low illumination areas are brightened, and overly bright areas are normalized.
This makes the image visually more balanced.
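The three steps can be reproduced in NumPy (here the "high-pass" is the same mean subtraction used above, i.e. removing only the DC component of the log-image):

```python
import numpy as np

f = np.array([[ 30.0,  50.0],
              [200.0, 220.0]])

z = np.log(f)          # step 1: logarithm
hp = z - z.mean()      # step 2: subtract the mean (a crude high-pass: drop DC)
g = np.exp(hp)         # step 3: exponentiate back to the image domain

print(np.round(g, 2))  # dynamic range 30..220 compressed to roughly 0.33..2.44
```

Because the mean of the filtered log-image is zero, the pixel values of the result multiply to 1, confirming that only the overall (DC) brightness was removed.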

✅ Final Summary Table with Examples


| Filter Type | Example Matrix | Output/Effect |
|---|---|---|
| Sobel (edge) | Input with a horizontal edge | $G_x$ response is 0; the vertical-gradient kernel $G_y$ responds strongly |
| Laplacian | Intensity drop | Highlights the abrupt change |
| Homomorphic | Unevenly lit image | Enhances contrast and removes uneven brightness |

Let me know if you want a code-based simulation (Python/Matlab/OpenCV) of these filters as well!

