Imp Img

Histogram equalization is an image processing technique that enhances contrast by redistributing pixel intensity values. It involves computing the histogram, calculating the cumulative distribution function, normalizing, and mapping pixels to improve visual quality. Additionally, image filtering in the spatial domain manipulates pixel values to achieve effects like smoothing, with various filters available for noise reduction and detail preservation.

Image processing

Write a note on histogram equalization

Histogram Equalization

Histogram equalization is a technique used in image processing to improve the contrast of an image. It enhances the visual quality by spreading out the most frequent intensity values over a wider range, which makes details more visible, especially in images with poor contrast.

How It Works:

1. Compute Histogram:

o Calculate the frequency of each intensity level in the image.

2. Calculate Cumulative Distribution Function (CDF):

o The cumulative sum of the histogram values is used to map old intensity values to new ones.

3. Normalize:

o The CDF is scaled to match the intensity range (0-255 for an 8-bit image).

4. Map Pixels:

o Replace each pixel value in the original image with its corresponding value in the new intensity mapping.
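
The four steps above can be sketched in plain Python on a toy 3x3 image (the pixel values below are made up for illustration; a real implementation would typically use NumPy or OpenCV):

```python
# Hypothetical 3x3 8-bit image with values crowded in a narrow range.
image = [
    [52, 55, 61],
    [59, 79, 61],
    [76, 62, 59],
]

flat = [p for row in image for p in row]
n = len(flat)
L = 256  # intensity levels for an 8-bit image

# 1. Compute histogram.
hist = [0] * L
for p in flat:
    hist[p] += 1

# 2. Cumulative distribution function.
cdf = []
total = 0
for count in hist:
    total += count
    cdf.append(total)

# 3. Normalize: scale the CDF to the [0, L-1] range.
cdf_min = min(c for c in cdf if c > 0)
mapping = [round((c - cdf_min) / (n - cdf_min) * (L - 1)) if c > 0 else 0
           for c in cdf]

# 4. Map pixels through the new intensity table.
equalized = [[mapping[p] for p in row] for row in image]
```

After equalization the values that were crowded between 52 and 79 are spread across the full 0-255 range, which is exactly the contrast stretch described above.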

Example Application:

 Original Image: An underexposed image with most pixel values concentrated in a narrow intensity range.

 Histogram Equalized Image: The pixel intensities are redistributed, resulting in an image with better contrast.

Use Cases:

 Medical imaging (X-rays, MRI scans).

 Satellite imagery.

 Document enhancement (scanned texts).

 General photography for better visual quality.

Advantages:

 Improves global contrast.

 Simple and fast to implement.

Limitations:

 May introduce artifacts or excessive contrast in areas with already high contrast.

 Not always suitable for images with subtle variations.


Explain image filtering in the spatial domain; also, explain the spatial filters used in the process of image smoothing

Image Filtering in the Spatial Domain

Image filtering in the spatial domain involves manipulating pixel values directly in an
image to achieve desired effects, such as smoothing, sharpening, or edge detection. The
process generally involves applying a filter (also called a kernel or mask) to each pixel
in the image based on its neighboring pixels.

Types of Spatial Filters for Smoothing:

Smoothing filters reduce noise and details, resulting in a blurred or softened image.
Common smoothing filters include:

1. Averaging Filter (Mean Filter):

 Purpose: Replaces each pixel value with the average of its neighboring pixels.

 Effect: Reduces noise but may blur edges.

2. Gaussian Filter:

 Purpose: Uses a Gaussian function to assign weights to neighboring pixels based on their distance.

 Effect: Smooths the image while preserving edges better than the mean filter.

3. Median Filter:

 Purpose: Replaces each pixel with the median value of its neighboring pixels.

 Effect: Effective for removing salt-and-pepper noise without blurring edges.

4. Bilateral Filter:

 Purpose: Combines spatial closeness and intensity similarity to preserve edges while smoothing.

 Effect: Smooths regions while keeping edges sharp.
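
A minimal Python sketch of the first and third filters above, applied to a made-up image containing a single salt-and-pepper spike (border pixels are handled here by clamping indices, one of several common border strategies):

```python
# Toy image with one salt-and-pepper spike at the centre (hypothetical values).
image = [
    [10, 10, 10, 10],
    [10, 255, 10, 10],
    [10, 10, 10, 10],
]

def neighbourhood(img, r, c):
    """Collect the 3x3 neighbourhood of (r, c), clamping indices at the borders."""
    rows, cols = len(img), len(img[0])
    return [img[min(max(r + dr, 0), rows - 1)][min(max(c + dc, 0), cols - 1)]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)]

def mean_filter(img):
    """Averaging filter: each pixel becomes the mean of its 3x3 neighbourhood."""
    return [[sum(neighbourhood(img, r, c)) // 9
             for c in range(len(img[0]))] for r in range(len(img))]

def median_filter(img):
    """Median filter: each pixel becomes the median of its 3x3 neighbourhood."""
    return [[sorted(neighbourhood(img, r, c))[4]
             for c in range(len(img[0]))] for r in range(len(img))]

smoothed = mean_filter(image)    # the spike is spread over its neighbours (blur)
denoised = median_filter(image)  # the spike is removed entirely
```

The contrast between the two outputs illustrates the point made above: the mean filter smears the noisy spike into the surrounding pixels, while the median filter discards it without blurring.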

Applications of Smoothing Filters:

 Noise Reduction: Removes random noise from images.

 Preprocessing: Helps in tasks like edge detection and segmentation by reducing noise.

 Image Enhancement: Creates aesthetically pleasing images by softening harsh transitions.

Note on Walsh-Hadamard Transform

The Walsh-Hadamard Transform (WHT) is a non-sinusoidal, orthogonal transform used in image processing, signal processing, and data compression. Unlike the Fourier Transform, which uses sine and cosine functions, WHT employs Walsh functions—piecewise constant functions that take values of +1 or -1.

Key Characteristics:

1. Binary and Orthogonal Basis:

o The Walsh-Hadamard basis functions consist of sequences of +1 and -1, which are orthogonal to each other.

2. Fast Computation:

o WHT can be computed efficiently using the Fast Walsh-Hadamard Transform (FWHT) algorithm, similar to the Fast Fourier Transform (FFT).

3. Discrete and Power-of-Two Size:

o Typically applied to signals or images with sizes that are powers of two.

Properties of WHT:

 Orthogonality: The Walsh functions form an orthogonal basis, ensuring no redundancy.

 Energy Compaction: Less efficient than the Fourier Transform in energy compaction but useful in specific applications.

 Non-sinusoidal Basis: Uses square waveforms, making it suitable for digital systems.
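
A minimal Python sketch of the FWHT mentioned above (in-place butterfly, natural/Hadamard ordering; the input length must be a power of two, and the example signal is made up):

```python
def fwht(x):
    """Fast Walsh-Hadamard Transform (natural/Hadamard ordering).

    Uses an FFT-like butterfly: at each stage, pairs of elements are
    replaced by their sum and difference. Length must be a power of two.
    """
    a = list(x)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

coeffs = fwht([1, 0, 1, 0, 0, 1, 1, 0])   # first coefficient is the signal's sum
original = [v // 8 for v in fwht(coeffs)]  # applying FWHT twice gives n * input
```

Because the basis values are only +1 and -1, the whole transform needs just additions and subtractions, which is why it suits digital hardware; the orthogonality property shows up as the round trip above recovering the input exactly (up to the factor n).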

Applications:

 Image and Signal Compression: Reduces redundancy in binary and digital images.

 Pattern Recognition: Useful in matching binary patterns.

 Error Detection and Correction: Applied in coding theory for error correction codes (e.g., Hadamard codes).

 Speech and Audio Processing: Analyzes and compresses audio signals.

Explain lossless predictive coding

Lossless Predictive Coding

Lossless predictive coding is a data compression technique that predicts the value of a
pixel or sample based on its neighboring values, computes the difference between the
actual and predicted value, and encodes this difference (residual). Since the residuals are
often smaller and have less variability, they can be compressed efficiently using entropy
coding methods (like Huffman or arithmetic coding). As the name suggests, this method is
lossless, meaning the original data can be perfectly reconstructed.

How Lossless Predictive Coding Works:

1. Prediction:

o A predictor estimates the current pixel value based on its neighboring pixel values. Common prediction methods include:

 Linear Prediction: Combines neighboring pixel values using a linear model.

 Previous Pixel: Simply uses the previous pixel value as the prediction.
2. Error Calculation (Residual):

o The residual is computed as the difference between the actual and the predicted value: residual = actual value - predicted value.

3. Entropy Encoding:

o The residuals are encoded using an entropy coding technique such as Huffman coding or arithmetic coding to achieve compression.

4. Decoding:

o During decompression, the residuals are added back to the predicted values to reconstruct the original image.
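
The scheme above, using the simple previous-pixel predictor, can be sketched in a few lines of Python (the row of pixel values is made up; a real codec would follow the residual stage with Huffman or arithmetic coding):

```python
def encode(samples):
    """Previous-pixel predictor: residual = actual - predicted.

    The first sample is effectively transmitted as-is (predicted from 0).
    """
    residuals = []
    prev = 0
    for s in samples:
        residuals.append(s - prev)
        prev = s
    return residuals

def decode(residuals):
    """Add each residual back to the running prediction to rebuild the data."""
    samples = []
    prev = 0
    for e in residuals:
        prev = prev + e
        samples.append(prev)
    return samples

row = [100, 101, 103, 103, 102, 180, 181]
res = encode(row)  # small residuals everywhere except at the intensity jump
```

Note how the residuals are mostly small numbers clustered near zero, which is precisely the low-variability property that lets the entropy coder compress them well, while decode() reconstructs the row exactly.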

Applications:

 Image Compression: Used in formats like PNG and JPEG-LS.

 Medical Imaging: Perfect reconstruction is critical, making lossless methods essential.

 Remote Sensing and Satellite Imagery: Retains all original details for analysis.

 Archiving and Digital Preservation: Ensures no data loss for critical archives.

Advantages:

 Perfect reconstruction of the original image or data.

 Efficient for images with low entropy (little variation).

Explain point, line and edge detection in Image Segmentation

Point, Line, and Edge Detection in Image Segmentation

In image segmentation, point, line, and edge detection are fundamental techniques used to
identify distinct features within an image. These features often represent important
structural elements, helping divide the image into meaningful regions or objects.

1. Point Detection

Point detection identifies isolated points or pixels with significant intensity differences compared to their neighbors. It is useful for detecting small, localized features such as stars in astronomical images or microcalcifications in medical images.
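
Point detection is commonly implemented with a Laplacian-style mask that responds strongly where a pixel differs sharply from all eight neighbours; the sketch below is one such implementation (the mask, the 0.9 threshold factor, and the image values are illustrative choices, not prescribed by the text):

```python
# Toy image: one isolated bright point on a uniform background (made-up values).
image = [[10] * 5 for _ in range(5)]
image[2][2] = 200

# Laplacian-style point-detection mask: large response only at isolated points.
MASK = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]

def response(img, r, c):
    """Correlate the 3x3 mask with the neighbourhood centred at (r, c)."""
    return sum(MASK[i][j] * img[r - 1 + i][c - 1 + j]
               for i in range(3) for j in range(3))

responses = {(r, c): response(image, r, c)
             for r in range(1, 4) for c in range(1, 4)}

# Keep only responses close to the maximum (threshold factor is arbitrary here).
threshold = 0.9 * max(abs(v) for v in responses.values())
points = [rc for rc, v in responses.items() if abs(v) > threshold]
```

Only the isolated bright pixel survives the threshold; its neighbours produce much weaker responses because their neighbourhoods are nearly uniform.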

2. Line Detection

Line detection aims to find linear structures in an image, such as roads in satellite images
or blood vessels in medical images.
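
One classic approach uses oriented line-detection masks, each tuned to lines of a particular direction; the Python sketch below compares the horizontal and vertical masks on a made-up image containing a bright vertical line (mask choice and values are illustrative):

```python
# Toy image: a bright vertical line in column 2 on a dark background (made-up values).
image = [[10] * 5 for _ in range(5)]
for r in range(5):
    image[r][2] = 200

# Oriented line-detection masks: each responds strongly to its own orientation.
HORIZONTAL = [[-1, -1, -1], [2, 2, 2], [-1, -1, -1]]
VERTICAL = [[-1, 2, -1], [-1, 2, -1], [-1, 2, -1]]

def apply_mask(img, mask, r, c):
    """Correlate a 3x3 mask with the neighbourhood centred at (r, c)."""
    return sum(mask[i][j] * img[r - 1 + i][c - 1 + j]
               for i in range(3) for j in range(3))

# Responses at the centre pixel (2, 2), which lies on the vertical line.
v = apply_mask(image, VERTICAL, 2, 2)
h = apply_mask(image, HORIZONTAL, 2, 2)
```

The vertical mask fires strongly while the horizontal one gives no response at all, which is how the orientation of a detected line is decided.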

3. Edge Detection

Edge detection identifies the boundaries between different regions in an image, where pixel
intensities change abruptly. It is a key step in image segmentation and object detection.
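
Edge detection is often implemented with gradient operators; the sketch below uses the Sobel kernels on a made-up image with a vertical step edge (the Sobel choice, the |gx|+|gy| magnitude approximation, and the image values are illustrative):

```python
# Toy image: dark left half, bright right half, i.e. a vertical edge (made-up values).
image = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient Sobel kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient Sobel kernel

def sobel_magnitude(img):
    """Gradient magnitude |gx| + |gy| at interior pixels (borders left at 0)."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = sum(GX[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(GY[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            out[r][c] = abs(gx) + abs(gy)  # cheap approximation of sqrt(gx^2+gy^2)
    return out

edges = sobel_magnitude(image)
```

The magnitude is large only along the column where intensity jumps from 10 to 200, marking the abrupt change that defines the edge.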

Applications in Image Segmentation:

 Point Detection: Identifies key features or interest points for tasks like feature matching.

 Line Detection: Useful in detecting structural lines in images, such as roads, veins, or cracks.

 Edge Detection: Helps delineate object boundaries, leading to region-based segmentation and contour detection.

What are the three methods of estimating the degradation function? Explain in detail

Explain boundary descriptors in detail


Note on Morphological Algorithm

1. Skeletons

2. Thinning

Morphological Algorithms: Skeletons and Thinning

Morphological algorithms are techniques used in image processing to analyze and manipulate the shapes of objects within binary images. They are commonly applied to simplify structures while preserving essential geometric features.

1. Skeletonization (Skeletons)

Definition:

Skeletonization reduces an object to its medial axis (a thin line equidistant from its
boundaries), preserving its overall structure while reducing its thickness to one pixel in
width.

Purpose:

 Simplifies objects while retaining their topology (shape and connectivity).

 Useful in shape analysis, pattern recognition, and OCR (Optical Character Recognition).

Process:

 Iteratively remove pixels from the boundary while preserving the object’s connectivity.

 The algorithm stops when further erosion would break the object into separate components.

Example Application:

 Medical Imaging: Extracting the centerline of blood vessels or airways.

 Character Recognition: Extracting the structure of handwritten text.

2. Thinning

Definition:

Thinning is a morphological operation that progressively removes pixels from the boundaries of objects without breaking their connectivity, resulting in a thin version of the object (not necessarily one pixel wide like skeletons).

Purpose:

 Simplifies binary images while preserving essential shape characteristics.

 Often used as a pre-processing step for skeletonization.

Process:

 Apply a series of structuring elements to iteratively remove pixels while maintaining the connectivity of the shape.

 Stops when the shape reaches a stable state.
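
The iterative process above can be illustrated with the classic Zhang-Suen thinning algorithm, one common choice for this kind of connectivity-preserving boundary removal (the binary bar image below is made up):

```python
def neighbours(img, r, c):
    """Eight neighbours of (r, c) in clockwise order P2..P9 (N, NE, E, SE, S, SW, W, NW)."""
    return [img[r - 1][c], img[r - 1][c + 1], img[r][c + 1], img[r + 1][c + 1],
            img[r + 1][c], img[r + 1][c - 1], img[r][c - 1], img[r - 1][c - 1]]

def transitions(nb):
    """Number of 0 -> 1 transitions in the circular sequence P2..P9, P2."""
    seq = nb + nb[:1]
    return sum(1 for a, b in zip(seq, seq[1:]) if (a, b) == (0, 1))

def zhang_suen(image):
    """Thin a 0/1 binary image (with a zero border) until it stops changing."""
    img = [row[:] for row in image]
    changed = True
    while changed:
        changed = False
        for step in (0, 1):  # two sub-iterations per pass
            marked = []
            for r in range(1, len(img) - 1):
                for c in range(1, len(img[0]) - 1):
                    if img[r][c] != 1:
                        continue
                    nb = neighbours(img, r, c)
                    p2, _, p4, _, p6, _, p8, _ = nb
                    # Connectivity checks: 2..6 foreground neighbours and
                    # exactly one 0->1 transition keep the object connected.
                    if not (2 <= sum(nb) <= 6) or transitions(nb) != 1:
                        continue
                    if step == 0 and (p2 * p4 * p6 or p4 * p6 * p8):
                        continue
                    if step == 1 and (p2 * p4 * p8 or p2 * p6 * p8):
                        continue
                    marked.append((r, c))
            for r, c in marked:  # remove marked pixels simultaneously
                img[r][c] = 0
                changed = True
    return img

# A 3-pixel-wide vertical bar (made up) thins toward a one-pixel-wide line.
bar = [[1 if 1 <= r <= 5 and 2 <= c <= 4 else 0 for c in range(7)]
       for r in range(7)]
thinned = zhang_suen(bar)
```

The transition-count test is what prevents a deletion from splitting the shape, so the loop terminates with a connected, thinner version of the bar, matching the stable-state behaviour described above.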

Example Application:

 Barcode Detection: Simplifying barcodes for easier decoding.

 Road Mapping in Aerial Images: Thinning road structures for easier analysis.

Key Differences Between Skeletonization and Thinning:

Aspect       | Skeletonization                             | Thinning
-------------|---------------------------------------------|----------------------------------------------------
Output Width | One-pixel wide medial axis                  | Thin but not necessarily one-pixel wide
Purpose      | Retains the topology (shape & connectivity) | Simplifies structure while maintaining connectivity
Application  | Shape analysis, medial axis extraction      | Preprocessing, barcode detection, road mapping
