Module 4 – CGIP
Based on the provided PPT note titled "CGIP_Mod4 ppt1", here's an analysis and summary to
help answer typical questions related to Module 4 - Part 1: Fundamentals of Digital Image
Processing in Computer Graphics & Image Processing (CST 304).
📌 2. Image Representation
Grayscale, Binary, Color images.
Types:
Binary (0, 1)
Grayscale (e.g., 0–255)
Color (RGB, 3 samples per pixel)
📌 3. Fundamental Steps in Image Processing
1. Image Acquisition
2. Image Enhancement
3. Image Restoration
4. Color Image Processing
5. Wavelets and Multiresolution Processing
6. Compression
7. Morphological Processing
8. Segmentation
Resolution: spatial resolution (from sampling) and gray-level resolution (from quantization).
📌 6. Image as Matrix
Size = M × N, Gray Levels = L = 2^k
Required bits: b = M × N × k
📌 7. Pixel Relationships
Neighbourhoods:
4-neighbors N4(p)
8-neighbors N8(p)
Adjacency:
📌 8. Distance Measures
Euclidean Distance D(p,q) = sqrt((x-s)^2 + (y-t)^2)
📌 9. Convolution Operation
Kernel/Mask/Filter:
🧠 If you need help answering specific questions from this module, feel
free to ask them directly now.
Would you like to proceed with sample questions based on this content, or do you have
specific ones you want answered?
Here's a detailed explanation of the topics you've asked about from Digital Image
Processing:
📌 1. Digital Image as a Function f(x, y)
A digital image can be viewed as a two-dimensional function f(x, y), where x and y are spatial coordinates and the value of f at any point gives the intensity at that point.
Key Points:
Images are stored as a matrix of pixels (short for “picture elements”).
The value of each pixel indicates the brightness (gray level) or color.
All values are finite and discrete, suitable for computer storage and processing.
📌 2. Coordinate Conventions
To process images effectively, we follow specific conventions:
🔷 a) Mathematical Notation:
(x, y) – where x is the row index and y is the column index.
🔷 b) Toolbox Notation:
(r, c) – row r and column c, with indexing typically starting at (1, 1) in MATLAB rather than (0, 0).
This distinction is essential when working with MATLAB or other image processing libraries.
🔷 b) Binary Image
1 bit per pixel: 0 = black, 1 = white
🔷 c) Grayscale Image
8-bit format, allows 256 levels of gray:
0 = black, 255 = white, 127 = mid-gray
🔷 d) Color Image
3 samples per pixel → RGB channels
📌 4. Image as a Matrix
In digital form, an image is a matrix of pixels:
Size: M × N, with k bits per pixel (L = 2^k gray levels)
Total Bits: b = M × N × k
Example:
128 × 128 image with 8 bits/pixel: b = 128 × 128 × 8 = 131,072 bits = 16,384 bytes = 16 KB
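The storage arithmetic above can be checked with a few lines of Python (a minimal sketch; the function name `image_storage_bits` is illustrative, not from the notes):

```python
# Storage for an M x N image with k bits per pixel: b = M * N * k
def image_storage_bits(M, N, k):
    return M * N * k

bits = image_storage_bits(128, 128, 8)   # the 128 x 128, 8-bit example
print(bits)        # 131072 bits
print(bits // 8)   # 16384 bytes = 16 KB
```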
🔷 Dynamic Range:
Ratio between the brightest and darkest values.
🔷 Contrast:
Difference between highest and lowest intensity.
✅ Summary Table

| Concept | Description |
| --- | --- |
| Storage | Bits = M × N × k |
| Dynamic Range | Ratio between the brightest and darkest values |
| Contrast | Difference between highest and lowest intensity |
Would you like this summarized as notes or flashcards for exam prep?
Here’s a detailed explanation of Image Sampling and Quantization, which are fundamental
processes in Digital Image Processing.
📌 What is Digitization?
Digitization is the process of converting an analog image (real-world scene) into a digital
image so that it can be processed by a computer.
This involves two main steps:
📌 1. Image Sampling
🔷 Definition:
Sampling is the process of converting continuous spatial coordinates (x, y) into discrete
spatial coordinates.
In simple words: It selects where the image will be measured (i.e., how many pixels to
use).
🔷 How It Works:
Imagine laying a grid over a photograph.
At each grid point, you take a sample of the image’s brightness (intensity).
🔷 Sampling Rate:
Defined by the number of samples per unit distance (dots per inch – dpi).
Too low a sampling rate causes aliasing.
🔷 Aliasing:
Happens when sampling rate is too low.
📌 2. Image Quantization
🔷 Definition:
Quantization is the process of converting the continuous range of intensity values into
finite discrete levels.
In simple words: It decides how many shades of gray (or color) will be used.
🔷 Gray-Level Resolution:
If an image uses k bits per pixel, it can represent:
L = 2^k gray levels
🔷 Quantization Error:
Happens because real-world brightness values are approximated to the nearest discrete
level.
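The approximation to the nearest discrete level can be sketched in a few lines of NumPy (a minimal illustration, assuming 8-bit input in [0, 255]; the function name `quantize` is illustrative):

```python
import numpy as np

def quantize(img, k):
    """Uniformly quantize intensities in [0, 255] to L = 2**k gray levels.

    The range is divided into L equal bins; each pixel is snapped to its
    bin's base value, which is exactly the quantization error described above.
    """
    L = 2 ** k
    step = 256 // L               # width of each quantization bin
    return (img // step) * step   # snap each pixel to its bin's base level

img = np.array([[0, 100, 200, 255]], dtype=np.uint8)
print(quantize(img, 2))   # with k = 2 there are only 4 gray levels
```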
| Aspect | Sampling | Quantization |
| --- | --- | --- |
| Digitizes | Spatial coordinates (x, y) | Intensity values |
📌 Example
Let’s say we digitize a real-world image:
Sampling: choose an M × N grid of sample points
Quantization: Each pixel can take one of 256 gray levels ⇒ 8 bits per pixel
Storage Requirement: b = M × N × 8 bits
📌 Why Digitization Matters
Digitization enables:
Image compression
Enhancement
Recognition
Transmission
✅ Summary
| Term | Key Idea |
| --- | --- |
| Sampling | Digitizes spatial coordinates (where the image is measured) |
| Quantization | Digitizes intensity values (how many gray levels are used) |
Would you like diagrams or animations to help visualize sampling vs. quantization?
Here's a detailed explanation of Spatial Resolution and Gray Level Resolution — two crucial
concepts in Digital Image Processing.
✅ 1. Spatial Resolution
🔷 Definition:
Spatial resolution refers to the smallest discernible detail in an image.
It describes how finely an image is sampled in space (x, y).
🔷 Factors:
Sampling density – how many pixels per unit area.
Measured in: pixels per inch (ppi) or dots per inch (dpi).
| High Spatial Resolution | Low Spatial Resolution |
| --- | --- |
| More pixels per unit area | Fewer pixels per unit area |
🔷 Example:
A 1024×1024 image has better spatial resolution than a 256×256 image of the same
physical size.
🔷 Subsampling:
Reducing spatial resolution by removing rows and columns (e.g., 4×4 → 2×2).
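Subsampling by removing alternate rows and columns is a one-line slicing operation in NumPy (a minimal sketch on a toy array standing in for an image):

```python
import numpy as np

# Subsampling: keep every other row and column, halving spatial resolution.
img = np.arange(16).reshape(4, 4)   # a toy 4x4 "image"
sub = img[::2, ::2]                 # 4x4 -> 2x2, as in the note above
print(sub.shape)
print(sub)
```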
✅ 2. Gray Level Resolution
🔷 Definition:
Gray level resolution refers to the smallest discernible change in intensity, i.e., the number of gray levels an image can represent.
🔷 Factors:
Determined by the number of bits per pixel (k): L = 2^k levels.
🔷 High vs Low Gray Level Resolution:
| High Gray Level Resolution | Low Gray Level Resolution |
| --- | --- |
| More bits per pixel; smooth tonal transitions | Fewer bits per pixel; visible false contouring |
🔷 Quantization:
Gray level resolution is directly affected by quantization.
| Aspect | Spatial Resolution | Gray Level Resolution |
| --- | --- | --- |
| Concerned With | Pixel spacing (image detail) | Intensity detail (brightness steps) |
📌 Example
A 256×256 image with 256 gray levels (8 bits/pixel): b = 256 × 256 × 8 = 524,288 bits = 64 KB
Increasing to 1024 gray levels (10-bit): b = 256 × 256 × 10 = 655,360 bits = 80 KB
📌 Summary
| Term | Definition | Controlled By | Improves |
| --- | --- | --- | --- |
| Spatial Resolution | Detail per unit area | Sampling rate | Image sharpness |
| Gray Level Resolution | Number of distinguishable intensity levels | Bits per pixel (quantization) | Tonal smoothness |
Would you like a visual diagram comparing these two resolutions for your notes or
presentation?
✅ 1. Neighborhoods of a Pixel
Let a pixel p be located at coordinates (x, y) in an image.
🔷 a) 4-Neighborhood N4(p)
Consists of the pixels immediately horizontal and vertical to p .
Coordinates:
(x + 1, y)
(x - 1, y)
(x, y + 1)
(x, y - 1)
🔷 b) Diagonal Neighborhood ND(p)
Consists of the four diagonal neighbors of p .
Coordinates:
(x + 1, y + 1)
(x + 1, y - 1)
(x - 1, y + 1)
(x - 1, y - 1)
🔷 c) 8-Neighborhood N8(p)
All eight surrounding pixels: N8(p) = N4(p) ∪ ND(p).
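The three neighborhoods can be written out directly from the coordinate lists above (a minimal sketch; function names `n4`, `nd`, `n8` are illustrative, and image boundaries are ignored for simplicity):

```python
def n4(x, y):
    """4-neighborhood: the horizontal and vertical neighbors of (x, y)."""
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    """Diagonal neighbors of (x, y)."""
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    """8-neighborhood: union of N4 and ND."""
    return n4(x, y) | nd(x, y)

print(sorted(n8(5, 5)))   # all eight surrounding coordinates
```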
✅ 2. Adjacency
Adjacency defines whether two pixels are considered neighbors based on their intensity
values.
🔷 a) 4-Adjacency
Pixels p and q are 4-adjacent if q is in N4(p) and both have values in V .
🔷 b) 8-Adjacency
Pixels p and q are 8-adjacent if q is in N8(p) and their values belong to V .
Two pixels p and q (with values in V ) are m-adjacent if:
q ∈ N4(p) , OR
q ∈ ND(p) and the set N4(p) ∩ N4(q) contains no pixels whose values are in V .
This ensures only one valid path exists between adjacent pixels, removing the ambiguity of 8-adjacency.
✅ 3. Digital Path
A digital path (or curve) from a pixel p to q is a sequence of adjacent pixels that connects
p to q .
Types:
4-path
8-path
m-path
✅ 4. Connectivity
Connectivity defines whether two pixels belong to the same object or region.
Pixels p and q are connected in set S if there’s a path between them that lies entirely
in S .
🔷 Adjacency of Regions:
Two regions R₁ and R₂ are adjacent if their union forms a connected set.
🔷 Boundary:
The boundary (or contour) of a region is the set of pixels in the region that have at least
one neighbor outside the region (i.e., in the background).
✅ 6. Distance Measures
Distance functions measure how far apart two pixels are:
🔷 a) Euclidean Distance:
DE(p, q) = √((x − s)² + (y − t)²)
Straight-line distance.
🔷 b) City-Block Distance (D4):
D4(p, q) = |x − s| + |y − t|
🔷 c) Chessboard Distance (D8):
D8(p, q) = max(|x − s|, |y − t|)
🔷 d) m-Distance:
Shortest m-path between p and q ; its length depends on the values of the pixels along the path.
If p1 = 0 , p3 = 0 : path = p → p2 → p4 (length = 2)
If p1 = 1 , p3 = 0 : path = p → p1 → p2 → p4 (length = 3)
If p1 = 1 , p3 = 1 : path = p → p1 → p2 → p3 → p4 (length = 4)
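The first three distance measures translate directly into code (a minimal sketch; function names `d_e`, `d4`, `d8` are illustrative):

```python
import math

def d_e(p, q):
    """Euclidean distance: straight-line distance between p and q."""
    (x, y), (s, t) = p, q
    return math.sqrt((x - s) ** 2 + (y - t) ** 2)

def d4(p, q):
    """City-block (D4) distance: |x - s| + |y - t|."""
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d8(p, q):
    """Chessboard (D8) distance: max(|x - s|, |y - t|)."""
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

p, q = (0, 0), (3, 4)
print(d_e(p, q), d4(p, q), d8(p, q))   # 5.0 7 4
```

Note that the m-distance has no closed-form expression like these: it must be found by searching for the shortest m-path, since it depends on pixel values along the way.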
📌 Summary Table
Concept Description
Distance How far apart two pixels are (Euclidean, D4, D8, Dm)
Would you like a labeled diagram to visualize 4-, 8-, and m-neighborhoods or the distance
contours?
📌 What is Spatial Domain?
In image processing, spatial domain refers to operations performed directly on the pixels of
an image.
📌 What is Convolution?
Convolution is a fundamental spatial domain operation where an image is modified by a
small matrix called a kernel (also known as a mask or filter).
✅ Convolution – Definition:
Let:
Image = I(x, y)
Kernel = H(m, n) of size (2a + 1) × (2b + 1)

O(x, y) = Σ_{m = −a}^{a} Σ_{n = −b}^{b} H(m, n) · I(x − m, y − n)

Where:
O(x, y) is the filtered output image, I is the input image, and H is the kernel.
📌 How Convolution Works (Step-by-Step):
1. Flip the kernel by 180° (for true convolution) and place it at a pixel location in the image.
2. Multiply each kernel coefficient by the pixel value beneath it.
3. Sum all the products; the sum becomes the output value at that location.
4. Slide the kernel to the next pixel and repeat over the whole image.
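The definition above can be implemented directly with NumPy (a minimal sketch with no padding, so the output is smaller than the input; `convolve2d` is an illustrative name, not a library function):

```python
import numpy as np

def convolve2d(img, kernel):
    """Direct 2D convolution ('valid' output, no padding).

    Implements O(x, y) = sum_m sum_n H(m, n) * I(x - m, y - n),
    which is equivalent to flipping the kernel and sliding it over the image.
    """
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]   # 180-degree flip for true convolution
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for x in range(oh):
        for y in range(ow):
            # weighted sum of the neighborhood under the kernel
            out[x, y] = np.sum(img[x:x + kh, y:y + kw] * flipped)
    return out

img = np.ones((4, 4))
avg = np.full((3, 3), 1 / 9)        # 3x3 averaging kernel
print(convolve2d(img, avg))         # a constant image stays constant
```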
📌 Terminology
| Term | Description |
| --- | --- |
| Kernel / Mask | Small matrix (e.g., 3×3, 5×5) used for filtering |
1. Averaging (Smoothing) Filter
Replaces each pixel with the average of its neighborhood.
Example (3×3 box filter):
(1/9) ×
1 1 1
1 1 1
1 1 1
2. Sharpening Filter
Enhances edges and transitions.
Example:
0 −1 0
−1 5 −1
0 −1 0
3. Edge Detection Filters (Sobel)
Detect horizontal and vertical intensity gradients.
Sobel X:
−1 0 1
−2 0 2
−1 0 1
Sobel Y:
−1 −2 −1
0 0 0
1 2 1
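To see why these two masks respond to edges, the following sketch applies them at the centre of a small test image containing a vertical edge (a minimal illustration at one pixel; real code slides the masks over every pixel, and here the masks are applied by correlation, the usual convention for gradient operators):

```python
import numpy as np

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
sobel_y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]])

# A vertical edge: dark left columns, bright right columns.
img = np.zeros((5, 5))
img[:, 3:] = 255

patch = img[1:4, 1:4]                # 3x3 neighborhood around the centre
gx = np.sum(patch * sobel_x)         # horizontal gradient response
gy = np.sum(patch * sobel_y)         # vertical gradient response
magnitude = np.hypot(gx, gy)
print(gx, gy, magnitude)             # strong gx, zero gy for a vertical edge
```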
🔷 Normalization
After convolution, values may exceed the typical pixel range (e.g., 0–255), so the result is usually clipped or rescaled back into that range.
📌 Applications of Convolution
| Task | Example Kernel |
| --- | --- |
| Smoothing / noise reduction | 3×3 averaging kernel |
| Sharpening | Centre-weighted sharpening kernel |
| Edge detection | Sobel X / Sobel Y |
📌 Summary Table
| Concept | Description |
| --- | --- |
| Convolution | Operation where each output pixel is a weighted sum of neighborhood pixels |
Would you like a visual example of convolution using a 3×3 filter on a sample image patch?