Module 5 CGI Revision
The file you uploaded, titled "Module 5 Computer Graphics & Image Processing PPT Notes -
Kerala Notes", appears to be a PowerPoint-to-PDF conversion for Module 5 of the CST 304 course
under the KTU 2019 Scheme. The file is partially corrupted and heavily garbled in many places, but
from what’s recoverable, the document likely covers topics related to:
Image Segmentation
However, due to extensive corruption and encoding issues, much of the text is unreadable or
distorted.
Do you want a summary, help preparing for an exam, or clarification on a certain topic from
Module 5?
Certainly! Let’s break down Basic Gray Level Transformations in Image Processing in a simple,
clear, and complete way.
In a grayscale image:
0 = black
255 = white
Gray level transformations map each input intensity r to an output intensity s. They are used to improve contrast and to prepare images for further processing (like edge detection, segmentation, etc.).
1. Negative Transformation
Formula:
s = L − 1 − r
where:
r is the input pixel value, s is the output pixel value, and L is the number of gray levels (256 for an 8-bit image)
What it does: Inverts the image colors (black becomes white and vice versa).
Use: Useful for medical images (X-rays, etc.) to highlight dark regions.
2. Logarithmic Transformation
Formula:
s = c ⋅ log(1 + r)
where:
c is a constant (to scale the result)
What it does: Expands dark pixel values and compresses bright ones, making dark details visible.
Use: Useful for images with large dynamic range (very bright and very dark areas).
3. Power-Law (Gamma) Transformation
Formula:
s = c ⋅ r^γ
where:
c is a scaling constant
γ (gamma) controls the curve: γ < 1 brightens the image, γ > 1 darkens it
What it does: Maps intensities along a power curve, expanding or compressing ranges depending on γ.
Use: Used in display systems, TVs, camera sensors (called gamma correction).
📊 Visualization Example
Imagine you have a pixel with intensity r = 100.
In negative (L = 256): s = 255 − 100 = 155
In gamma (c = 1, γ = 0.5): s = 100^0.5 = 10
Each method gives you a different transformation result depending on your purpose.
🧪 Summary Table
Transformation | Formula | Effect | Use Case
Negative | s = L − 1 − r | Inverts intensities | Medical images (X-rays)
Logarithmic | s = c ⋅ log(1 + r) | Expands dark values | Large dynamic range
Power-Law (Gamma) | s = c ⋅ r^γ | Brightens (γ < 1) or darkens (γ > 1) | Gamma correction in displays
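These three transformations are easy to try directly. The following sketch (NumPy; the sample intensities and constant choices are mine, not from the notes) applies each formula to a small array:

```python
import numpy as np

r = np.array([0, 50, 100, 200, 255], dtype=np.float64)
L = 256

# Negative: s = L - 1 - r
negative = L - 1 - r

# Log: s = c * log(1 + r), with c chosen so the maximum maps to 255
c_log = 255 / np.log(1 + r.max())
log_t = c_log * np.log(1 + r)

# Power-law (gamma): s = c * r^gamma, applied on a normalized [0, 1] scale
gamma = 0.5
gamma_t = 255 * (r / 255) ** gamma
```

With γ = 0.5 the mid-tones are pushed up (the image brightens), which is the γ < 1 behavior described above.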
Certainly! Let’s break down Contrast Stretching in image processing in a simple and complete
way.
🧠 Why?
Some images look dull or faded because their pixel values fall within a narrow range (e.g.,
100–150 instead of 0–255).
Contrast stretching spreads these values across the full range (0–255), making dark areas
darker and bright areas brighter.
s = (r − rmin) / (rmax − rmin) × 255
Where:
r is the input pixel value, rmin and rmax are the minimum and maximum values in the image, and s is the output value
📉 Example:
Suppose an image has pixel values between 50 and 180, and we want to stretch it to 0 to 255.
For an input pixel r = 100:
s = (100 − 50) / (180 − 50) × 255 = 50 / 130 × 255 ≈ 98
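The stretching formula can be sketched in a few lines of NumPy (the function and variable names here are illustrative):

```python
import numpy as np

def contrast_stretch(img, out_min=0, out_max=255):
    """Linearly map [img.min(), img.max()] onto [out_min, out_max]."""
    r_min, r_max = img.min(), img.max()
    return (img - r_min) / (r_max - r_min) * (out_max - out_min) + out_min

# Sample image with the narrow range 50-180 from the example above
img = np.array([[50, 100], [180, 120]], dtype=np.float64)
stretched = contrast_stretch(img)
```

The pixel that was 100 maps to approximately 98, matching the worked example.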
📊 Types of Contrast Stretching
There are different methods based on the shape of the transformation function:
Best when the image has a known min and max intensity
Example:
Low-light photographs
🔁 Summary
Feature Contrast Stretching
Formula: s = (r − rmin) / (rmax − rmin) × 255
Would you like me to show you a diagram or Python code that demonstrates contrast stretching
visually?
explain more about histogram equalisation
Sure! Let’s go step by step and explain Histogram Equalization in a simple and complete way.
Histogram equalization spreads out the pixel values over the full range (0–255), making the
image look clearer and better contrasted.
📊 What is a Histogram?
A histogram shows how many pixels have each gray level.
Example:
If many pixels have value 50–100, the histogram will be high in that region and flat elsewhere.
⚙️ Steps:
1. Count how many times each gray level (0–255) occurs in the image.
2. Compute the probability of each gray level:
p(rk) = (number of pixels with gray level rk) / (total number of pixels)
3. Compute the cumulative distribution function (CDF):
CDF(rk) = Σ p(rj) for j = 0 to k
4. Map each old level to a new one:
sk = (L − 1) × CDF(rk)
🖼️ Result
Original image: pixel values may be clustered → low contrast
Equalized image: values spread over the full range → improved contrast
🧪 Example (simplified)
Let’s say you have a small image with 4 gray levels: 0, 1, 2, 3
Original histogram:
Gray 0: 4 pixels
Gray 1: 2 pixels
Gray 2: 2 pixels
Gray 3: 0 pixels
Step 1: Probability
Gray Level Pixels Probability
0 4 0.5
1 2 0.25
2 2 0.25
3 0 0.0
Step 2: CDF
Gray Level CDF
0 0.5
1 0.75
2 1.0
3 1.0
Step 3: Transform
sk = (L − 1) × CDF(rk), with L = 4:
Gray 0: 3 × 0.5 = 1.5 ≈ 2
Gray 1: 3 × 0.75 = 2.25 ≈ 2
Gray 2: 3 × 1.0 = 3
Gray 3: 3 × 1.0 = 3
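The steps above can be sketched in NumPy, reusing the same 4-level example (the function name is my own):

```python
import numpy as np

def equalize(img, L=256):
    """Map each gray level through the scaled CDF: s_k = (L-1) * CDF(r_k)."""
    hist = np.bincount(img.ravel(), minlength=L)  # count of each gray level
    p = hist / img.size                           # probability of each level
    cdf = np.cumsum(p)                            # cumulative distribution
    s = np.round((L - 1) * cdf).astype(img.dtype) # new value for each level
    return s[img]                                 # look up new value per pixel

# The 4-level example from above: four 0s, two 1s, two 2s, zero 3s
img = np.array([[0, 0, 1, 1], [0, 0, 2, 2]], dtype=np.uint8)
out = equalize(img, L=4)
```

Running this maps level 0 → 2, level 1 → 2, and level 2 → 3, exactly as in the table.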
✅ Advantages
Simple and fast
⚠️ Disadvantages
May over-enhance noise
Can produce unnatural results for images with bimodal histograms (two peaks)
Use CLAHE (Contrast Limited Adaptive Histogram Equalization) for better results
🧾 Summary
Feature Histogram Equalization
Would you like to see a Python code demo or graphical example of how histogram equalization
works?
Certainly! Let's break down the Fundamentals of Spatial Filtering in Image Processing in a simple
and complete way.
🧠 What is Spatial Filtering?
Spatial filtering is a technique used to process an image by modifying pixel values using a filter
mask (also called a kernel or window) based on the values of neighboring pixels.
It's called "spatial" because it works directly in the image space (x, y), not in the frequency
domain.
📦 Basic Idea
You move a small matrix (filter mask) across the image.
🧱 Key Components
1. Image:
A 2D array of intensity values (gray level pixels).
2. Filter Mask (Kernel):
A small matrix of weights, e.g. 3×3:
[ a b c
  d e f
  g h i ]
3. Operation:
Typically convolution or correlation is used:
Multiply corresponding values from the filter and the image region.
Add the results.
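A minimal sketch of this multiply-and-add step on a single 3×3 neighborhood (NumPy; the sample values are illustrative):

```python
import numpy as np

def correlate_pixel(region, mask):
    """Multiply corresponding entries of region and mask, then add the results."""
    return float(np.sum(region * mask))

# A 3x3 neighborhood with one bright center pixel, filtered with a mean mask
region = np.array([[10, 10, 10],
                   [10, 50, 10],
                   [10, 10, 10]], dtype=np.float64)
mean_mask = np.full((3, 3), 1 / 9)
new_value = correlate_pixel(region, mean_mask)  # average of the 9 pixels
```

The bright spike (50) is pulled toward its neighbors, which is exactly the smoothing effect described next.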
Common types:
Smoothing (low-pass) filters: reduce noise by averaging neighboring pixels.
Sharpening (high-pass) filters: enhance edges, e.g. the Laplacian mask:
[ 0 -1 0
 -1 4 -1
  0 -1 0 ]
🔶 2. Non-Linear Filters
Operations are not based on addition/multiplication.
Example:
Median Filter:
Replaces the center pixel with the median of the surrounding pixels.
🔁 Convolution vs Correlation
Operation | Difference
Correlation | Slide the mask over the image and sum the products directly
Convolution | Same, but the mask is first rotated by 180°
📌 Summary
Feature Description
Would you like visual examples or sample Python code for any of these filters?
explain in detail linear and non-linear spatial filters, with all included filters
Certainly! Let's explore Linear and Non-Linear Spatial Filters in Image Processing in full detail,
including the most commonly used filters, their working principles, formulas, and use cases.
1. Linear Filters
2. Non-Linear Filters
g(x, y) = Σ (s = −a to a) Σ (t = −b to b) w(s, t) ⋅ f(x + s, y + t)
where:
w(s, t): filter mask/kernel
🟦 A. Smoothing Filters (Low-Pass)
1. Mean (Averaging) Filter
Replaces each pixel with the average of its neighborhood. Example mask:
[1 1 1
 1 1 1
 1 1 1] / 9
2. Weighted Average Filter
Same as the mean filter, but gives more weight to the central pixel.
Example:
[1 2 1
2 4 2
1 2 1] / 16
🧨 B. Sharpening Filters (High-Pass Filters)
🔴 Purpose:
Enhance edges, fine details, and boundaries
1. Laplacian Filter
Uses the second derivative to detect edges in all directions.
Sensitive to noise
Example:
[ 0 -1 0
-1 4 -1
0 -1 0 ]
Another version:
[-1 -1 -1
-1 8 -1
-1 -1 -1]
2. Unsharp Masking
Sharpens by subtracting a blurred version of the image from the original.
3. Gradient Filters (First Derivative)
➤ Sobel Operator
Horizontal:
[-1 0 1
 -2 0 2
 -1 0 1]
Vertical:
[ 1 2 1
0 0 0
-1 -2 -1]
➤ Prewitt Operator
Horizontal:
[-1 0 1
-1 0 1
-1 0 1]
Vertical:
[ 1 1 1
0 0 0
-1 -1 -1]
➤ Roberts Operator
Uses 2×2 masks:
Gx:
[ 1 0
  0 -1 ]
Gy:
[ 0 1
  -1 0 ]
🔷 2. Non-Linear Filters
Non-linear filters are based on order statistics rather than weighted sums. They are especially good for noise removal without blurring edges.
💎 A. Median Filter
Replaces the central pixel with the median of the surrounding values.
🔸 B. Min and Max Filters
Min filter: replaces the pixel with the minimum value in the neighborhood (darkens; removes salt noise).
Max filter: replaces the pixel with the maximum value (brightens; removes pepper noise).
🎯 C. Midpoint Filter
Averages the max and min values in the neighborhood.
g(x, y) = (max + min) / 2
Useful for reducing uniform noise
🌀 D. Mode Filter
Replaces the pixel with the most frequent value in the neighborhood.
Good for reducing impulse noise when many pixels are repeated.
🧾 Summary Table
Filter Type | Filter Name | Purpose | Notes
Linear | Mean Filter | Blur, reduce random noise | Simple, but blurs edges
Non-Linear | Median Filter | Remove salt & pepper noise | Best for edge preservation
📸 Use Cases
Medical images: Median filter to preserve edges while removing noise.
Would you like a visual diagram or example images to compare these filters?
Let’s break down Gradient and Laplacian operators in image processing in a detailed and
beginner-friendly way.
📌 1. GRADIENT OPERATOR
🔷 What is the Gradient?
The gradient measures the rate and direction of change in intensity at each pixel. It points in the
direction of the most rapid increase in intensity and its magnitude shows how strong the change
(edge) is.
🧠 Used For:
Edge detection
📐 Mathematical Form:
Let f(x, y) be the image intensity at point (x, y).
∇f = [ ∂f/∂x , ∂f/∂y ]
∂f/∂x: horizontal change (edge in vertical direction)
∂f/∂y: vertical change (edge in horizontal direction)
📏 Gradient Magnitude:
|∇f| = √( (∂f/∂x)² + (∂f/∂y)² )
Approximation (faster):
|∇f| ≈ |∂f/∂x| + |∂f/∂y|
🔧 Common Gradient Filters:
➤ Sobel Operator (best for noisy images)
Horizontal (Gx):
[-1 0 1
-2 0 2
-1 0 1]
Vertical (Gy):
[ 1 2 1
0 0 0
-1 -2 -1]
➤ Prewitt Operator
Gx:
[-1 0 1
-1 0 1
-1 0 1]
Gy:
[ 1 1 1
0 0 0
-1 -1 -1]
➤ Roberts Operator
Gx:
[ 1 0
  0 -1 ]
Gy:
[0 1
-1 0]
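As a sketch of how a gradient mask is applied at one pixel (NumPy; the test image is a synthetic vertical edge of my own choosing), the Sobel masks above can be evaluated directly:

```python
import numpy as np

Gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
Gy = np.array([[ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1]], dtype=np.float64)

def sobel_at(img, y, x):
    """Gradient magnitude at one interior pixel via the Sobel masks."""
    region = img[y - 1:y + 2, x - 1:x + 2]
    gx = np.sum(region * Gx)
    gy = np.sum(region * Gy)
    return np.hypot(gx, gy)          # sqrt(gx^2 + gy^2)

# A vertical edge: dark left half, bright right half
img = np.array([[0, 0, 255, 255]] * 4, dtype=np.float64)
strength = sobel_at(img, 1, 1)       # pixel sitting next to the edge
```

The response is large on the edge and exactly zero in flat regions, which is the defining property of a gradient operator.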
📌 2. LAPLACIAN OPERATOR
🔷 What is Laplacian?
The Laplacian measures the second derivative of the image. It tells how intensity is changing in all
directions—used to detect edges and fine details.
🧠 Used For:
Edge enhancement
Image sharpening
📐 Mathematical Form:
∇²f = ∂²f/∂x² + ∂²f/∂y²
➤ 4-neighbor Laplacian:
[ 0 -1 0
-1 4 -1
0 -1 0 ]
➤ 8-neighbor Laplacian:
[-1 -1 -1
-1 8 -1
-1 -1 -1]
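A quick sketch of applying the 4-neighbor mask at one pixel (NumPy; the sample values are mine): the response is zero in a flat region and large at an isolated bright point.

```python
import numpy as np

lap4 = np.array([[ 0, -1,  0],
                 [-1,  4, -1],
                 [ 0, -1,  0]], dtype=np.float64)

def laplacian_at(img, y, x):
    """Second-derivative response at one interior pixel."""
    region = img[y - 1:y + 2, x - 1:x + 2]
    return float(np.sum(region * lap4))

flat = np.full((3, 3), 9.0)   # constant region: response should be 0
spike = flat.copy()
spike[1, 1] = 19.0            # an isolated bright point
```

This zero-in-flat-regions behavior is why the Laplacian highlights fine detail but also amplifies noise.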
Feature | Gradient | Laplacian
Use Case | Detecting edges and boundaries | Enhancing edges and corners
📸 Visual Effect:
Gradient: Produces thin, strong edge lines, good for outlining objects.
Would you like a visual diagram comparing the effect of these filters on a sample image?
Certainly! Let's go over the three main edge detection operators—Roberts, Sobel, and Prewitt—
in complete detail, with clear explanations of how they work, their kernels, and when to use each.
🧠 What is Edge Detection?
Edge detection identifies the boundaries of objects in an image. These boundaries are typically
areas where there is a sudden change in intensity (brightness). Edge detectors help highlight
such transitions.
🟥 1. Roberts Operator
📐 Masks (Kernels):
Gx Gy
[1 0] [0 1]
[0 -1] [-1 0]
💡 Formula:
Let I(x, y) be the intensity at pixel (x, y):
Gx = I(x, y) − I(x + 1, y + 1)
Gy = I(x + 1, y) − I(x, y + 1)
Gradient magnitude:
|G| = √(Gx² + Gy²)
✅ Pros:
Fast and simple
❌ Cons:
Very sensitive to noise
🟩 2. Sobel Operator
📌 Overview:
Most commonly used edge detector
📐 Masks (Kernels):
Horizontal (Gx):
[-1 0 1
-2 0 2
-1 0 1]
Vertical (Gy):
[ 1 2 1
0 0 0
-1 -2 -1]
💡 Formula:
|G| = √(Gx² + Gy²)
or approximate:
|G| ≈ |Gx| + |Gy|
✅ Pros:
Good for detecting both horizontal and vertical edges
❌ Cons:
Slightly more computationally expensive than Roberts
🟨 3. Prewitt Operator
📌 Overview:
Similar to Sobel, but simpler
📐 Masks (Kernels):
Horizontal (Gx):
[-1 0 1
-1 0 1
-1 0 1]
Vertical (Gy):
[ 1 1 1
0 0 0
-1 -1 -1]
✅ Pros:
Easy to implement
❌ Cons:
Less accurate than Sobel
📊 Comparison Table
Feature | Roberts | Sobel | Prewitt
Mask size | 2×2 | 3×3 | 3×3
Noise sensitivity | High | Low | Moderate
Computation | Fastest | Slower | Fast
Accuracy | Low | High | Moderate
Prewitt → Good balance of speed and performance, suitable for teaching and lightweight
systems.
Would you like a visual side-by-side comparison on an image to see how each operator performs?
Sure! Let's go through Image Segmentation and all its related techniques like Thresholding,
Region-based methods, Region Growing, and Region Splitting & Merging in very detailed yet
simple language.
📌 What is Image Segmentation?
Image Segmentation is the process of dividing an image into meaningful regions—such as
separating an object from its background.
Imagine drawing boundaries around objects in an image. The goal is to group pixels that have
similar properties (like color, intensity, or texture).
🧠 Why is it important?
Helps in object detection and recognition
🎯 Formula:
For a given threshold value T :
g(x, y) = 1 if f(x, y) ≥ T
g(x, y) = 0 if f(x, y) < T
1 = white (object), 0 = black (background)
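This rule is a one-liner in NumPy (a sketch; the threshold and sample values are illustrative):

```python
import numpy as np

def global_threshold(img, T):
    """g(x, y) = 1 where f(x, y) >= T, else 0."""
    return (img >= T).astype(np.uint8)

img = np.array([[30, 200], [90, 250]])
binary = global_threshold(img, T=128)
```

Pixels at or above T become 1 (object), all others become 0 (background).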
🟦 1. Global Thresholding
A single threshold value (T) is applied to the entire image.
Works well if the image has high contrast between object and background.
🔧 How to choose T?
If the histogram is bimodal (two peaks), you can choose T between the peaks
✅ Pros:
Simple and fast
❌ Cons:
Doesn’t work well with uneven lighting or low contrast images
🟨 2. Local (Adaptive) Thresholding
Each region or pixel gets its own threshold, based on local properties (mean or median)
✅ Pros:
Works well in non-uniform illumination
❌ Cons:
More complex to compute
🔶 Region-Based Segmentation
Instead of setting a threshold, these methods group connected pixels with similar properties (like
intensity, texture).
🟧 1. Region Growing
📌 What is it?
Start with a seed point (a pixel known to be part of the object).
🎯 Similarity condition:
Difference in intensity less than a threshold
✅ Pros:
Produces connected regions
❌ Cons:
Sensitive to noise
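A minimal region-growing sketch (plain NumPy plus a queue; the 4-connectivity, similarity test, and tolerance value are my own simplifications):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, adding 4-connected neighbours whose
    intensity differs from the seed value by less than `tol`."""
    h, w = img.shape
    seed_val = img[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(int(img[ny, nx]) - int(seed_val)) < tol:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# Bright object (values around 200) on a dark background (value 10)
img = np.array([[10, 10, 10],
                [10, 200, 210],
                [10, 205, 10]])
mask = region_grow(img, seed=(1, 1), tol=20)
```

Only the three bright, connected pixels are collected; the dark background is excluded, producing a connected region as described above.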
🟧 2. Region Splitting & Merging
📌 What is it?
Split the image into regions until each is homogeneous, then merge adjacent regions that satisfy the same criteria.
🔧 Homogeneity Criteria:
All pixel values in a region have similar intensity
✅ Pros:
Works well for complex images
No need to define seed points
❌ Cons:
More computationally expensive
🧾 Summary Table
Technique | Type | Key Idea | Pros | Cons
Global Thresholding | Intensity | One value separates object & background | Fast, simple | Fails under poor lighting
Local Thresholding | Intensity | Threshold varies locally | Better with uneven lighting | Slower, more complex
Region Growing | Region-based | Start from seed, grow similar pixels | Accurate with good seeds | Seed selection, sensitive to noise
Region Splitting & Merging | Region-based | Divide & combine based on similarity | No seed needed, handles complexity | Slow, blocky results sometimes
📸 Real-Life Examples
Medical imaging: Segmenting tumors from MRI scans
Would you like Python code examples or diagrams to help you visualize these techniques better?
spatial filter
linear ,non linear
general classification
smoothing spatial filter
linear
mean filter
average filter and weighted avg filtering
order statistics
minimum, maximum, median
sharpening spatial filters
Absolutely! Let’s go step-by-step and explain Spatial Filters and their types (Linear, Non-linear,
Smoothing, Sharpening, Mean, Median, Min, Max, etc.) in very simple and clear language.
The kernel moves over the image, and for each pixel, it computes a new value using the
surrounding pixels.
1. Linear Filters
2. Non-Linear Filters
🔹 Mean (Averaging) Filter
Purpose: to remove noise. Example mask:
[1 1 1
1 1 1
1 1 1] / 9
Each pixel is replaced with the average of its 8 neighbors and itself.
🔹 Weighted Average Filter
Example:
[1 2 1
2 4 2
1 2 1] / 16
Better than the simple average because it preserves the image structure slightly better.
They are good at removing specific types of noise like salt-and-pepper noise without blurring the
edges.
🔹 Median Filter
Replaces the center pixel with the median value of the neighborhood.
Very effective for removing salt-and-pepper noise (black and white dots).
Example: Values = [12, 14, 255, 13, 10, 11, 13, 14, 15] → Sorted = [10, 11, 12, 13, 13, 14, 14, 15, 255]
→ Median = 13
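The same example can be verified with a small sketch (NumPy; the noisy 3×3 neighborhood is taken from the example above):

```python
import numpy as np

def median_at(img, y, x):
    """Replace one pixel with the median of its 3x3 neighbourhood."""
    region = img[y - 1:y + 2, x - 1:x + 2]
    return int(np.median(region))

# The neighbourhood from the example, containing a 255 noise spike
img = np.array([[12, 14, 255],
                [13, 10, 11],
                [13, 14, 15]])
cleaned = median_at(img, 1, 1)   # the spike is discarded by the median
```

Unlike the mean, the median ignores the outlier entirely, which is why it handles salt-and-pepper noise so well.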
🔸 Minimum Filter
Replaces the pixel with the minimum value in the neighborhood.
🔸 Maximum Filter
Replaces the pixel with the maximum value in the neighborhood.
🔷 Sharpening Spatial Filters
Opposite of smoothing
🔹 High-Pass Filtering
Passes high-frequency content (edges) and blocks low-frequency (smooth regions)
[ 0 -1 0
-1 4 -1
0 -1 0 ]
🔹 Unsharp Masking
A blurred version of the image is subtracted from the original to sharpen it.
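A minimal 1-D sketch of this idea (NumPy; the signal and its "blurred" version are hand-made for illustration), using sharpened = original + k · (original − blurred):

```python
import numpy as np

def unsharp(img, blurred, k=1.0):
    """Unsharp masking: add back the detail lost to blurring."""
    return img + k * (img - blurred)

img = np.array([0.0, 0.0, 100.0, 100.0])       # a 1-D step edge
blurred = np.array([0.0, 25.0, 75.0, 100.0])   # its blurred version
sharp = unsharp(img, blurred)
```

The difference (original − blurred) is large only near the edge, so adding it back steepens the transition, sharpening the image.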
🧾 Summary Table
Type | Filter Name | Purpose | Effect
Linear | Mean Filter | General smoothing | Blurs edges slightly
Non-Linear | Median Filter | Remove salt & pepper noise | Preserves edges
🧠 Real-life Uses
Mean filter: General smoothing
Would you like visual diagrams or Python code examples to see how each filter works practically?