
module 4 cgi

CGIP_Mod4 ppt1 (1).pptx



analyse the note completely and answer the questions asked

Based on the provided PPT note titled "CGIP_Mod4 ppt1", here's an analysis and summary to
help answer typical questions related to Module 4 - Part 1: Fundamentals of Digital Image
Processing in Computer Graphics & Image Processing (CST 304).

🔍 Key Topics Covered

📌 1. Introduction to Digital Image Processing


Purpose:

Enhance pictorial information for human interpretation.

Process image data for machine understanding.

Applications: CAD (Computer-Aided Diagnosis), medicine, GIS, HCI, etc.

📌 2. Image Representation
Grayscale, Binary, Color images.

Image as a 2D matrix: f(x, y) — x, y: spatial coordinates; f: intensity.

Types:

Binary (0, 1)

Black and White

Grayscale (8-bit, 256 levels)

High Color (16-bit, 65,536 colors – RGB)

📌 3. Fundamental Steps in Image Processing
1. Image Acquisition

2. Image Enhancement

3. Image Restoration

4. Color Image Processing

5. Wavelets (for compression & multiresolution)

6. Compression

7. Morphological Processing

8. Segmentation

9. Representation & Description

10. Object Recognition

11. Knowledge Base

📌 4. Components of Image Processing System


Image Sensors

Specialized Hardware (Digitizer, ALU)

Computer (general-purpose or dedicated)

Image Processing Software

Mass Storage (short-term, online, archival)

Displays (usually color monitors)

Hardcopy Devices (laser printers, CD-ROMs)

Networking (bandwidth important)

📌 5. Sampling & Quantization


Sampling: Digitizes coordinates (x-axis)

Quantization: Digitizes amplitude/gray level (y-axis)

Digitization = Sampling + Quantization

Resolution:

Spatial: Smallest discernible detail (related to sampling)

Gray Level: Smallest intensity change (related to quantization)

📌 6. Image as Matrix
Size = M × N, Gray Levels = 2^k

Required bits: b = M × N × k

Example: For 128×128 with 256 gray levels (8 bits):

128 × 128 × 8 = 131,072 bits = 16,384 bytes

📌 7. Pixel Relationships
Neighbourhoods:

4-neighbors N4(p)

Diagonal neighbors ND(p)

8-neighbors N8(p)

Adjacency:

4-, 8-, and m-adjacency (used to remove ambiguity)

Path, Connectivity, Region & Boundary:

A region is a connected set.

Boundary: Pixels in the region adjacent to background pixels.

📌 8. Distance Measures
Euclidean Distance D(p,q) = sqrt((x-s)^2 + (y-t)^2)

City-block (D4): |x-s| + |y-t|

Chessboard (D8): max(|x-s|, |y-t|)

m-distance: shortest path using m-adjacency, depends on neighbors.

📌 9. Convolution Operation
Kernel/Mask/Filter:

Matrix used for filtering (blur, edge detection, etc.)

Applied over image via convolution.

🧠 If you need help answering specific questions from this module, feel
free to ask them directly now.
Would you like to proceed with sample questions based on this content, or do you have
specific ones you want answered?

digital image representation, coordinate conventions, types of images, image as matrix: explain all in detail

Here's a detailed explanation of the topics you've asked about from Digital Image
Processing:

📌 1. Digital Image Representation


A digital image is an array of discrete values that represent visual information. It's
essentially a 2D function defined as:

f(x, y)

x, y are spatial coordinates (horizontal and vertical positions).

f(x, y) gives the intensity (or gray level) at that point.

Key Points:
Images are stored as a matrix of pixels (short for “picture elements”).

The value of each pixel indicates the brightness (gray level) or color.

All values are finite and discrete, suitable for computer storage and processing.

📌 2. Coordinate Conventions
To process images effectively, we follow specific conventions:

🔷 a) Mathematical Notation:
(x, y) – where x is the row index and y is the column index.

Origin usually at (0,0) – top-left corner.

Since x indexes rows and y indexes columns, x increases downward and y increases rightward.

🔷 b) Matrix or Array Notation (Toolbox Convention):


Uses (r, c) → row and column.

Origin is (1,1) (indexing starts at 1).

r ranges from 1 to M (rows), and c from 1 to N (columns).

This distinction is essential when working with MATLAB or other image processing libraries.
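
As a quick illustration of the two conventions, here is a minimal NumPy sketch (the array and its values are made up); note that NumPy itself is 0-indexed like the mathematical convention, while MATLAB-style toolbox indexing starts at 1:

```python
import numpy as np

# A hypothetical 3x4 grayscale image: M = 3 rows, N = 4 columns
img = np.array([[ 10,  20,  30,  40],
                [ 50,  60,  70,  80],
                [ 90, 100, 110, 120]])

# Mathematical convention f(x, y): x = row, y = column, origin (0, 0) at top-left
x, y = 1, 2
print(img[x, y])          # 70 -> intensity at row 1, column 2

# Toolbox convention (r, c): 1-indexed, so (r, c) = (2, 3) is the same pixel
r, c = 2, 3
print(img[r - 1, c - 1])  # 70 again, after converting to 0-based indexing
```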

📌 3. Types of Digital Images


Images can be categorized based on pixel values and color depth:

🔷 a) Binary Image (Monochrome)


Only two intensity levels: 0 and 1

0 = black, 1 = white

Used in documents, barcodes, etc.

🔷 b) Black & White Image


Essentially the same as a binary image: just two colors, black and white.

🔷 c) Grayscale Image
8-bit format, allows 256 levels of gray:

0 = black, 255 = white, 127 = mid-gray

Common in medical imaging, photography.

🔷 d) Color Image (16-bit, 24-bit, etc.)


Composed of three channels: Red, Green, Blue (RGB)

Common bit depths:

16-bit = High Color (65,536 colors)

24-bit = True Color (16.7 million colors: 8 bits per R, G, and B)

Some formats include a 4th channel: Alpha (transparency)

🔷 e) Image Formats by Sampling per Pixel


1 sample → B&W or Grayscale

3 samples → RGB

4 samples → RGBA (with Alpha channel)

📌 4. Image as a Matrix
In digital form, an image is a matrix of pixels:

Size: M × N

M = number of rows (height)

N = number of columns (width)

Each matrix element = pixel

Pixel value = intensity (for grayscale) or vector (for color)

🔷 Bit Depth and Gray Levels


Gray Levels (G) = 2^k, where k = bits per pixel

8-bit image ⇒ 2^8 = 256 gray levels


Storage Requirement:

Total Bits = M × N × k

Example:

128 × 128 image with 8 bits/pixel:

128 × 128 × 8 = 131,072 bits = 16,384 bytes = 16 KB
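
The same arithmetic is easy to check in code; a minimal sketch (the helper name storage_bits is ours, not from the note):

```python
def storage_bits(M, N, k):
    """Bits needed to store an M x N image with k bits per pixel: b = M * N * k."""
    return M * N * k

b = storage_bits(128, 128, 8)
print(b)       # 131072 bits
print(b // 8)  # 16384 bytes = 16 KB
```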

🔷 Dynamic Range:
Ratio between the brightest and darkest values.

Higher dynamic range → better detail in both shadows and highlights.

🔷 Contrast:
Difference between highest and lowest intensity.

Enhanced using techniques like histogram equalization.

✅ Summary Table

| Concept | Description |
| --- | --- |
| Digital Image | 2D function with intensity values |
| Pixel | Smallest image unit, holds intensity/color |
| Coordinate Convention | (x, y) = (0, 0) top-left or (r, c) = (1, 1) |
| Binary Image | 0 (black), 1 (white) |
| Grayscale Image | 256 shades of gray (8-bit) |
| Color Image | RGB (16-bit, 24-bit) |
| Matrix Representation | M × N array; each cell = pixel |
| Storage | Bits = M × N × k |
| Dynamic Range | Max/min intensity ratio |
| Contrast | Difference between light/dark levels |
Would you like this summarized as notes or flashcards for exam prep?

image sampling and quantization: explain in detail

Here’s a detailed explanation of Image Sampling and Quantization, which are fundamental
processes in Digital Image Processing.

📌 What is Digitization?
Digitization is the process of converting an analog image (real-world scene) into a digital
image so that it can be processed by a computer.
This involves two main steps:

✅ Digitization = Sampling + Quantization

📌 1. Image Sampling
🔷 Definition:
Sampling is the process of converting continuous spatial coordinates (x, y) into discrete
spatial coordinates.

In simple words: It selects where the image will be measured (i.e., how many pixels to
use).

🔷 How It Works:
Imagine laying a grid over a photograph.

At each grid point, you take a sample of the image’s brightness (intensity).

More samples = better spatial resolution.

🔷 Sampling Rate:
Defined by the number of samples per unit distance (dots per inch – dpi).

Must obey the Nyquist Theorem: the sampling rate must be at least twice the highest frequency in the image to avoid aliasing.

🔷 Aliasing:
Happens when sampling rate is too low.

Results in distorted or misleading image patterns (like jagged edges).

📌 2. Image Quantization
🔷 Definition:
Quantization is the process of converting the continuous range of intensity values into
finite discrete levels.

In simple words: It decides how many shades of gray (or color) will be used.

🔷 Gray-Level Resolution:
If an image uses k bits per pixel, it can represent:

L = 2^k gray levels

Example: k = 8 ⇒ 256 gray levels (0–255)

🔷 Quantization Error:
Happens because real-world brightness values are approximated to the nearest discrete
level.

This can cause loss of detail or banding artifacts.
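
To make quantization error concrete, here is a small sketch that re-quantizes an 8-bit ramp down to k bits (the helper name quantize is ours); with small k, the banding described above shows up as large flat steps:

```python
import numpy as np

def quantize(img, k):
    """Map 8-bit values (0-255) onto 2^k discrete levels."""
    step = 256 // (2 ** k)
    return (img // step) * step  # rounding down to a level causes quantization error

ramp = np.arange(256, dtype=np.uint8)  # a smooth gradient from 0 to 255
print(np.unique(quantize(ramp, 2)))    # [  0  64 128 192] -> only 4 levels: banding
```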

📊 Sampling vs Quantization – Comparison Table

| Aspect | Sampling | Quantization |
| --- | --- | --- |
| Operates on | Spatial coordinates (x, y) | Intensity (gray level or color) |
| Axis | X-axis (horizontal, vertical positions) | Y-axis (brightness, amplitude) |
| Output | Discrete pixels | Discrete gray/color values |
| Affects | Spatial resolution | Gray level resolution |
| Problem | Aliasing if undersampled | Banding/artifacts if under-quantized |

📌 Example
Let’s say we digitize a real-world image:

Sampling: Image is sampled to 512 × 512 grid ⇒ 262,144 pixels

Quantization: Each pixel can take one of 256 gray levels ⇒ 8 bits per pixel

Storage Requirement:

512 × 512 × 8 = 2,097,152 bits = 262,144 bytes = 256 KB

📌 Visualization Aid

| Concept | Example |
| --- | --- |
| Sampling | Choosing 512 × 512 grid points over a scene |
| Quantization | Mapping brightness to 256 levels (0 = black, 255 = white) |

📌 Importance in Image Processing


High Sampling Rate ⇒ Better detail and fewer distortions.

High Quantization Levels ⇒ Smoother gradients and better contrast.

Optimizing Both is critical for:

Image compression

Enhancement

Recognition

Transmission

✅ Summary

| Term | Key Idea |
| --- | --- |
| Sampling | Converts spatial domain into a grid of pixels |
| Quantization | Converts amplitude (intensity) into fixed levels |
| Combined Effect | Transforms analog images into a matrix of digital numbers |

Would you like diagrams or animations to help visualize sampling vs. quantization?

Spatial and Gray Level Resolution

Here's a detailed explanation of Spatial Resolution and Gray Level Resolution — two crucial
concepts in Digital Image Processing.

📌 Spatial and Gray Level Resolution


Both types of resolution impact how much detail an image can convey — spatially and in
terms of brightness.

✅ 1. Spatial Resolution
🔷 Definition:
Spatial resolution refers to the smallest discernible detail in an image.
It describes how finely an image is sampled in space (x, y).

🔷 Factors:
Sampling density – how many pixels per unit area.

Measured in: pixels per inch (ppi) or dots per inch (dpi).

🔷 High vs Low Spatial Resolution:

| High Spatial Resolution | Low Spatial Resolution |
| --- | --- |
| More pixels per unit area | Fewer pixels per unit area |
| Finer image details | Blurry or blocky appearance |
| Sharp edges | Loss of detail |

🔷 Example:
A 1024×1024 image has better spatial resolution than a 256×256 image of the same
physical size.

🔷 Subsampling:
Reducing spatial resolution by removing rows and columns (e.g., 4x4 → 2x2).

Leads to loss of detail.
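
In NumPy, subsampling is just a stride slice; a minimal sketch with a made-up 4×4 array:

```python
import numpy as np

img = np.arange(16).reshape(4, 4)  # hypothetical 4x4 image
sub = img[::2, ::2]                # keep every 2nd row and column -> 2x2
print(sub)                         # [[ 0  2], [ 8 10]]: three quarters of the samples gone
```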

✅ 2. Gray Level Resolution


🔷 Definition:
Gray level resolution refers to the number of distinct intensity levels (brightness values)
that each pixel can represent.

🔷 Factors:
Determined by the number of bits per pixel (k).

If k = 8, then 2^8 = 256 gray levels (0 to 255).

🔷 High vs Low Gray Level Resolution:

| High Gray Level Resolution | Low Gray Level Resolution |
| --- | --- |
| More intensity levels | Fewer intensity levels |
| Smooth gradation | Posterization (banding) |
| Better contrast & realism | Loss of subtle brightness differences |

🔷 Quantization:
Gray level resolution is directly affected by quantization.

Finer quantization = better gray level resolution.

🎯 Difference Between Spatial and Gray Level Resolution

| Feature | Spatial Resolution | Gray Level Resolution |
| --- | --- | --- |
| Concerned With | Pixel spacing (image detail) | Intensity detail (brightness steps) |
| Controlled By | Sampling | Quantization |
| Affects | Image sharpness/detail | Smoothness of tones |
| Improved By | Increasing image dimensions | Using more bits per pixel |

📌 Example
A 256×256 image with 256 gray levels:

Spatial resolution: 256×256 pixels (medium detail)

Gray level resolution: 8 bits per pixel = 256 intensity levels

Increasing to 512×512 pixels (same scene):

Improves spatial resolution (finer details)

Increasing to 1024 gray levels (10-bit):

Improves gray level resolution (smoother tones)

📌 Summary

| Term | Definition | Controlled By | Improves |
| --- | --- | --- | --- |
| Spatial Resolution | Detail per unit area | Sampling rate | Image sharpness |
| Gray Level Resolution | Number of brightness levels | Bits per pixel (quantization) | Tonal smoothness |

Would you like a visual diagram comparing these two resolutions for your notes or
presentation?

explain in detail the relationship between pixels

Here is a detailed explanation of the relationship between pixels in digital image processing, which helps in understanding how pixels are connected, how regions are defined, and how image segmentation and analysis work.

📌 Relationship Between Pixels


Pixels in a digital image interact with their neighbors. Understanding their relationships
helps in region growing, segmentation, edge detection, boundary extraction, and more.

✅ 1. Neighborhoods of a Pixel
Let a pixel p be located at coordinates (x, y) in an image.

🔷 a) 4-Neighborhood N4(p)

Consists of the four pixels immediately to the left, right, above, and below p.

Coordinates:

(x + 1, y)

(x - 1, y)

(x, y + 1)

(x, y - 1)

🔷 b) Diagonal (D) Neighbors ND(p)

Consists of the four diagonal pixels.

Coordinates:

(x + 1, y + 1)

(x + 1, y - 1)

(x - 1, y + 1)

(x - 1, y - 1)

🔷 c) 8-Neighborhood N8(p)

Union of N4(p) and ND(p)

Total 8 neighbors around the pixel.
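
These definitions translate directly into coordinate offsets; a minimal sketch (the helper name neighbors is ours) that also clips neighbors falling outside an M × N image:

```python
def neighbors(p, M, N):
    """Return (N4, ND, N8) of pixel p = (x, y) inside an M x N image."""
    x, y = p
    n4 = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    nd = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    inside = lambda q: 0 <= q[0] < M and 0 <= q[1] < N  # drop out-of-image pixels
    n4 = [q for q in n4 if inside(q)]
    nd = [q for q in nd if inside(q)]
    return n4, nd, n4 + nd  # N8(p) = N4(p) union ND(p)

n4, nd, n8 = neighbors((0, 0), 4, 4)
print(n4, nd)  # corner pixel: only 2 of its 4-neighbors and 1 diagonal survive
```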

✅ 2. Adjacency
Adjacency defines whether two pixels are considered neighbors based on their intensity
values.

Let V be a set of intensity values of interest (e.g., {1} in a binary image).

🔷 a) 4-Adjacency
Pixels p and q are 4-adjacent if q is in N4(p) and both have values in V .

🔷 b) 8-Adjacency
Pixels p and q are 8-adjacent if q is in N8(p) and their values belong to V .

🔷 c) m-Adjacency (Mixed Adjacency)


Removes ambiguities in 8-adjacency (e.g., multiple paths).

Two pixels p and q are m-adjacent if:

q ∈ N4(p) , OR

q ∈ ND(p) and N4(p) ∩ N4(q) contains no pixels from V .

This ensures only one valid path exists between adjacent pixels.
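
The m-adjacency rule can be coded up directly; a sketch for a binary image with V = {1} (all names here are ours, not from the note):

```python
def m_adjacent(img, p, q, V=frozenset({1})):
    """True if pixels p and q are m-adjacent in img with respect to value set V."""
    n4 = lambda t: {(t[0] + 1, t[1]), (t[0] - 1, t[1]),
                    (t[0], t[1] + 1), (t[0], t[1] - 1)}
    nd = lambda t: {(t[0] + 1, t[1] + 1), (t[0] + 1, t[1] - 1),
                    (t[0] - 1, t[1] + 1), (t[0] - 1, t[1] - 1)}

    def in_v(t):  # pixel lies inside the image and its value belongs to V
        x, y = t
        return 0 <= x < len(img) and 0 <= y < len(img[0]) and img[x][y] in V

    if not (in_v(p) and in_v(q)):
        return False
    if q in n4(p):                    # case 1: q is a 4-neighbor of p
        return True
    if q in nd(p):                    # case 2: diagonal neighbor, allowed only if
        return not any(in_v(t) for t in n4(p) & n4(q))  # no shared 4-neighbor is in V
    return False

print(m_adjacent([[1, 0], [0, 1]], (0, 0), (1, 1)))  # True: no competing 4-path
print(m_adjacent([[1, 1], [1, 1]], (0, 0), (1, 1)))  # False: a 4-path via (0,1) or (1,0) exists
```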

✅ 3. Digital Path
A digital path (or curve) from a pixel p to q is a sequence of adjacent pixels that connects
p to q .

Path length = number of steps in the sequence.

Types:

4-path (uses 4-adjacency)

8-path

m-path

✅ 4. Connectivity
Connectivity defines whether two pixels belong to the same object or region.

Pixels p and q are connected in set S if there’s a path between them that lies entirely
in S .

A connected component is a set of pixels in S that are all connected.

If S has only one connected component, it’s a connected set.

✅ 5. Regions and Boundaries


🔷 Region:
A region R in an image is a connected set of pixels with similar properties (like
intensity).

🔷 Adjacency of Regions:
Two regions R₁ and R₂ are adjacent if their union forms a connected set.

🔷 Boundary:
The boundary (or contour) of a region is the set of pixels in the region that have at least
one neighbor outside the region (i.e., in the background).

✅ 6. Distance Measures
Distance functions measure how far apart two pixels are:

Let p = (x, y) and q = (s, t)

🔷 a) Euclidean Distance:
D_E(p, q) = sqrt((x − s)² + (y − t)²)

Straight-line distance.

🔷 b) City Block Distance (D4):


D4(p, q) = |x − s| + |y − t|

Forms diamond-shaped contours.

🔷 c) Chessboard Distance (D8):


D8(p, q) = max(|x − s|, |y − t|)

Forms square-shaped contours.

🔷 d) m-Distance:
Shortest m-path between p and q .

Depends on adjacency and values of intermediate pixels.
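
The three closed-form measures are one-liners; a minimal sketch (the helper name distances is ours):

```python
import math

def distances(p, q):
    """Euclidean (DE), city-block (D4) and chessboard (D8) distances between pixels."""
    (x, y), (s, t) = p, q
    de = math.sqrt((x - s) ** 2 + (y - t) ** 2)
    d4 = abs(x - s) + abs(y - t)       # city block
    d8 = max(abs(x - s), abs(y - t))   # chessboard
    return de, d4, d8

print(distances((0, 0), (3, 4)))  # (5.0, 7, 4)
```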

✅ 7. Example: m-Adjacency Path Length


Suppose pixels p, p2, and p4 have value 1, and p1, p3 can be 0 or 1:

If p1 = 0, p3 = 0: path = p → p2 → p4 (length = 2)

If p1 = 1, p3 = 0: path = p → p1 → p2 → p4 (length = 3)

If p1 = 1, p3 = 1: path = p → p1 → p2 → p3 → p4 (length = 4)

This shows how m-adjacency restricts ambiguous multiple connections.

📌 Summary Table

| Concept | Description |
| --- | --- |
| N4(p) | Horizontal and vertical neighbors |
| ND(p) | Diagonal neighbors |
| N8(p) | All 8 surrounding pixels |
| Adjacency | Rules for whether neighboring pixels are related (4, 8, m) |
| Path | Sequence of adjacent pixels |
| Connectivity | Pixels form a region if connected via a path |
| Region | Connected set of pixels |
| Boundary | Pixels in a region with at least one background neighbor |
| Distance | How far apart two pixels are (Euclidean, D4, D8, Dm) |

Would you like a labeled diagram to visualize 4-, 8-, and m-neighborhoods or the distance
contours?

explain in detail the fundamentals of spatial domain convolution operation

Here is a detailed explanation of the Fundamentals of Spatial Domain – Convolution Operation in Digital Image Processing, especially useful for edge detection, smoothing, sharpening, and feature extraction.
📌 What is Spatial Domain?
In image processing, spatial domain refers to operations performed directly on the pixels of
an image.

It contrasts with the frequency domain (where processing is done using Fourier transforms).

A spatial domain technique modifies pixel values based on neighboring pixels.

📌 What is Convolution?
Convolution is a fundamental spatial domain operation where an image is modified by a
small matrix called a kernel (also known as a mask or filter).

✅ Convolution – Definition:
Let:

Image = I(x, y)

Kernel (mask) = H(m, n) of size M × N

The output image = O(x, y)

Then the convolution operation is defined as:

O(x, y) = Σ_{m=−a}^{a} Σ_{n=−b}^{b} H(m, n) · I(x − m, y − n)

Where:

(x, y) is the current pixel

a = floor(M/2) , b = floor(N/2) to center the kernel

📌 How Convolution Works (Step-by-Step):
1. Place the kernel at a pixel location in the image.

2. Multiply the overlapping kernel values with corresponding image pixels.

3. Sum the result of these multiplications.

4. Assign the result to the output image at that pixel.

5. Repeat for every pixel in the image.
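
The five steps map directly onto two nested loops plus zero padding; a minimal NumPy sketch (convolve2d here is our own helper, not SciPy's):

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2D convolution with zero padding; output is the same size as image."""
    M, N = kernel.shape
    a, b = M // 2, N // 2                      # half-sizes used to center the kernel
    flipped = kernel[::-1, ::-1]               # true convolution flips the kernel
    padded = np.pad(image.astype(float), ((a, a), (b, b)))  # zero padding at borders
    out = np.zeros(image.shape, dtype=float)
    for x in range(image.shape[0]):            # steps 1-5: slide, multiply, sum, assign
        for y in range(image.shape[1]):
            region = padded[x:x + M, y:y + N]  # neighborhood under the kernel
            out[x, y] = np.sum(flipped * region)
    return out

img = np.array([[0, 0, 0, 0],
                [0, 9, 9, 0],
                [0, 9, 9, 0],
                [0, 0, 0, 0]], dtype=float)
mean3 = np.ones((3, 3)) / 9.0
print(convolve2d(img, mean3))  # each pixel becomes the mean of its 3x3 neighborhood
```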

📌 Terminology

| Term | Description |
| --- | --- |
| Kernel / Mask | Small matrix (e.g., 3×3, 5×5) used for filtering |
| Centering | Kernel is centered on the current pixel |
| Sliding Window | Kernel slides across the entire image |

📌 Types of Convolution Kernels


1. Smoothing (Blurring) Filter
Removes noise, reduces detail.

Example 3×3 Mean Filter:

(1/9) × [ 1  1  1 ]
        [ 1  1  1 ]
        [ 1  1  1 ]

2. Sharpening Filter
Enhances edges and transitions.

Example:

[  0  −1   0 ]
[ −1   5  −1 ]
[  0  −1   0 ]

3. Edge Detection Filters

Sobel (Horizontal and Vertical edges):

Sobel X:

[ −1   0   1 ]
[ −2   0   2 ]
[ −1   0   1 ]

Sobel Y:

[ −1  −2  −1 ]
[  0   0   0 ]
[  1   2   1 ]
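
As a usage example, the sketch below reuses the convolve2d helper defined earlier to estimate edge strength on a made-up image with one vertical edge:

```python
import numpy as np

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

img = np.zeros((6, 6))
img[:, 3:] = 255.0             # bright right half -> vertical edge at column 3

gx = convolve2d(img, sobel_x)  # strong response across the vertical edge
gy = convolve2d(img, sobel_y)  # near zero here: no horizontal edges
magnitude = np.hypot(gx, gy)   # per-pixel gradient magnitude
print(magnitude.max())         # largest values sit along the edge columns
```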

📌 Important Concepts in Convolution


🔷 Padding
Adding extra rows/columns around the image (usually zeros).

Needed to ensure the kernel fits at the borders.

🔷 Normalization
After convolution, values may exceed the typical pixel range (e.g., 0–255).

You may need to scale or clip the results.
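
A small sketch of both options (the helper names are ours); clipping keeps absolute intensities, while rescaling preserves relative structure:

```python
import numpy as np

def clip_uint8(out):
    """Clip a convolution result back into the displayable 0-255 range."""
    return np.clip(out, 0, 255).astype(np.uint8)

def rescale_uint8(out):
    """Alternatively, linearly stretch the result so min -> 0 and max -> 255."""
    lo, hi = float(out.min()), float(out.max())
    return ((out - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)
```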

📌 Applications of Convolution

| Task | Example Kernel |
| --- | --- |
| Smoothing | Mean, Gaussian blur |
| Sharpening | Laplacian, high-pass filter |
| Edge Detection | Sobel, Prewitt, Roberts |
| Embossing | Emboss mask |
| Feature Detection | Custom designed kernels |

📌 Summary Table

| Concept | Description |
| --- | --- |
| Spatial Domain | Direct pixel-wise image manipulation |
| Convolution | Each output pixel is a weighted sum of neighborhood pixels |
| Kernel | Small matrix (e.g., 3×3) used in convolution |
| Output Image | Result of applying the kernel across all pixels |
| Applications | Blur, sharpen, edge detection, noise reduction |

Would you like a visual example of convolution using a 3×3 filter on a sample image patch?
