Image Processing Unit 3,4 and 5
Image restoration is the process of recovering an image that has been degraded, using some
knowledge of the degradation function H and the additive noise term η(x, y). Thus in
restoration, the degradation is modelled and its inverse process is applied to recover the original
image.
What is Noise?
Noise is any unwanted or random variation in pixel values in an image, which can degrade its
visual quality. It often occurs due to:
- Imaging sensors.
- Environmental factors.
- Errors during transmission.
Noise introduces distortions that can complicate image analysis and processing tasks.
Restoration techniques in the presence of noise aim to recover the original image from a
degraded version. Noise can degrade the image during acquisition, storage, or transmission.
These techniques help reduce noise and enhance image quality.
1. Noise-Only Spatial Filtering
Spatial filtering operates directly on the pixels of the degraded image to remove noise while
preserving important image details. Common spatial filters include:
- Mean Filter: Averages pixel values in a neighborhood to reduce noise.
- Median Filter: Replaces each pixel with the median of its neighborhood, effective against
Salt-and-Pepper noise.
- Gaussian Filter: Uses a Gaussian kernel to smooth the image, reducing Gaussian noise.
Spatial filters are simple and effective for low-level noise but may blur image details.
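The three filters above can be tried directly with SciPy's ndimage module. A minimal sketch (the tiny test image with one "salt" and one "pepper" pixel is purely illustrative):

```python
import numpy as np
from scipy import ndimage

# Small test image with one salt (255) and one pepper (0) outlier.
img = np.array([[10, 10, 10, 10],
                [10, 255, 10, 10],
                [10, 10, 0, 10],
                [10, 10, 10, 10]], dtype=float)

mean_filtered = ndimage.uniform_filter(img, size=3)      # mean filter
median_filtered = ndimage.median_filter(img, size=3)     # median filter
gauss_filtered = ndimage.gaussian_filter(img, sigma=1)   # Gaussian filter

# The median filter removes both outliers entirely, as the notes claim.
print(median_filtered[1, 1], median_filtered[2, 2])  # -> 10.0 10.0
```

Note how the mean filter only dilutes the outliers across the neighborhood, while the median filter discards them; this is why the median filter is the standard choice for salt-and-pepper noise.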
2. Wiener Filtering
Wiener filtering minimizes the mean square error between the original image and the restored
image. It is based on statistical properties of the image and noise.
Mathematical Model:
W(u, v) = ( 1 / H(u, v) ) * ( |H(u, v)|^2 / ( |H(u, v)|^2 + Sn(u, v) / Sf(u, v) ) )
Where:
- W(u, v): Wiener restoration filter applied to the degraded image's spectrum.
- H(u, v): Degradation function in the frequency domain.
- Sn(u, v): Power spectral density of noise.
- Sf(u, v): Power spectral density of the original image.
Characteristics:
- Adaptively balances noise reduction and detail preservation.
- Works well for images with known noise characteristics.
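The Wiener filter can be sketched in a few lines of NumPy, under two common simplifying assumptions: the point-spread function is known, and the ratio Sn/Sf is approximated by a constant K. The helper name wiener_deconvolve and the impulse test image are illustrative, not from the notes:

```python
import numpy as np

def wiener_deconvolve(g, h, K=0.01):
    """Wiener restoration in the frequency domain.
    g: degraded image; h: PSF padded to the image size;
    K: assumed constant noise-to-signal power ratio Sn/Sf."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    # conj(H) / (|H|^2 + K)  ==  (1/H) * |H|^2 / (|H|^2 + K)
    W = np.conj(H) / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(W * G))

# Blur an impulse with a known 2x2 averaging PSF, then restore it.
img = np.zeros((8, 8)); img[4, 4] = 1.0
psf = np.zeros((8, 8)); psf[:2, :2] = 0.25
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf, K=1e-6)
```

The restored image concentrates its energy back at the impulse location; recovery is not exact because the averaging PSF has zeros in its spectrum, which the K term regularizes.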
3. Constrained Least Squares Filtering
Constrained least squares filtering uses an optimization approach to minimize restoration error
subject to a constraint that the restored image remains smooth.
Mathematical Model:
f_hat = argmin over f of ( ||Hf - g||^2 + λ ||Cf||^2 )
Where:
- H: Degradation function.
- f: Restored image.
- g: Degraded image.
- λ: Regularization parameter controlling smoothness.
- C: High-pass operator (typically the Laplacian) whose penalty term enforces smoothness.
Characteristics:
- Effective for images degraded by blur and noise.
- Regularization ensures balance between noise reduction and detail retention.
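A minimal frequency-domain sketch of constrained least squares restoration, assuming the Laplacian as the smoothness operator C (a common choice; the helper name and test setup are illustrative):

```python
import numpy as np

def cls_restore(g, h, lam=1e-4):
    """Constrained least squares restoration in the frequency domain.
    g: degraded image; h: PSF padded to the image size;
    lam: regularization parameter λ controlling smoothness."""
    M, N = g.shape
    # Laplacian kernel padded to image size plays the role of C.
    lap = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    c = np.zeros((M, N)); c[:3, :3] = lap
    H, C, G = np.fft.fft2(h), np.fft.fft2(c), np.fft.fft2(g)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + lam * np.abs(C) ** 2)
    return np.real(np.fft.ifft2(F_hat))

# Degrade an impulse with a 2x2 averaging PSF, then restore it.
img = np.zeros((8, 8)); img[4, 4] = 1.0
psf = np.zeros((8, 8)); psf[:2, :2] = 0.25
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = cls_restore(blurred, psf, lam=1e-4)
```

Unlike the Wiener filter, this formulation needs no power-spectrum estimates: λ|C|^2 replaces Sn/Sf, suppressing frequencies where the degradation is weak.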
Definition of Color
Color is the perception created when light interacts with our eyes. It is a result of three primary
factors:
Light Source: The type and spectrum of light (e.g., sunlight, artificial lighting).
Object Surface: The material properties that reflect or absorb different wavelengths of light.
Observer: The human eye or any sensor that interprets light.
In digital form, a color image is commonly represented as a three-dimensional array, for example:
Image[256][256][3]
Where each entry in the array contains the red, green, and blue intensity values for a single
pixel.
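In NumPy this representation maps directly onto a (height, width, channel) array; a small illustrative sketch:

```python
import numpy as np

# A 256x256 RGB image as a 3-D array (height, width, channel), values 0-255.
image = np.zeros((256, 256, 3), dtype=np.uint8)
image[100, 50] = [255, 0, 0]      # set one pixel to pure red

red_channel = image[:, :, 0]      # 2-D array of red intensities
print(red_channel[100, 50])       # -> 255
```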
Color Filtering
Filtering in RGB Channels: Individual color channels (red, green, or blue) can be processed
separately to filter or enhance specific color components in an image.
Gaussian Filtering: Applied to the entire image to smooth the image and remove noise while
preserving the color information.
Color Space Transformations
RGB to HSV/HSI: This transformation is useful for tasks like segmentation and object detection
where the color hue plays an important role.
RGB to YUV: Converts the image into a luminance and chrominance representation, useful for
video compression and reducing file sizes.
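The RGB-to-YUV transformation is a fixed linear map; a sketch using the BT.601 coefficients (an assumed standard choice, since the notes do not specify one):

```python
import numpy as np

# BT.601 luma/chroma conversion matrix (assumed variant).
RGB_TO_YUV = np.array([[ 0.299,    0.587,    0.114  ],
                       [-0.14713, -0.28886,  0.436  ],
                       [ 0.615,   -0.51499, -0.10001]])

def rgb_to_yuv(rgb):
    """rgb: array of shape (..., 3) with values in [0, 1]."""
    return rgb @ RGB_TO_YUV.T

# White has full luminance (Y = 1) and near-zero chrominance (U, V ~ 0).
yuv = rgb_to_yuv(np.array([1.0, 1.0, 1.0]))
```

Because chrominance carries less perceptual detail than luminance, codecs subsample U and V after this transform, which is exactly the file-size benefit the notes mention.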
Color Segmentation
Thresholding: Based on color values, pixels are classified into different segments. For example,
pixels with red values greater than a threshold can be considered part of a red object.
K-means Clustering: This technique groups similar colors together into clusters for segmenting
an image based on dominant colors.
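Color thresholding, as described above, can be sketched with simple boolean masks (the thresholds and the tiny test image are illustrative assumptions):

```python
import numpy as np

# Tiny RGB image: one red pixel among gray pixels.
img = np.full((4, 4, 3), 120, dtype=np.uint8)
img[1, 2] = [200, 30, 30]

# Classify a pixel as "red" when R is high and clearly exceeds G and B.
r = img[..., 0].astype(int)
g = img[..., 1].astype(int)
b = img[..., 2].astype(int)
red_mask = (r > 150) & (r > g + 50) & (r > b + 50)

print(red_mask.sum())  # -> 1 (only the red pixel is detected)
```

Comparing R against G and B, rather than thresholding R alone, avoids classifying bright gray or white pixels (which also have high R) as red.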
Histogram Matching (Specification)
Color Histogram Matching: Involves adjusting the color distribution of an image to match a
reference histogram. This is useful in applications like color correction or transferring the color
characteristics of one image to another.
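Histogram matching maps each source value to the reference value with the closest cumulative probability. A minimal single-channel sketch (apply it per channel for color; the helper name is illustrative):

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source values so their distribution matches the reference's."""
    s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source value, find the reference value at the same CDF level.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[np.searchsorted(s_vals, source.ravel())].reshape(source.shape)

# Values {0, 1} are remapped onto the reference's values {10, 20}.
source = np.array([[0.0, 0.0], [1.0, 1.0]])
reference = np.array([10.0, 10.0, 20.0, 20.0])
matched = match_histogram(source, reference)
```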
Smoothing (Blurring): Techniques like Gaussian blur are used to reduce high-frequency noise or
fine details in an image. This can affect the color components (RGB channels) equally or
separately.
Sharpening: Filters such as the Laplacian or high-pass filters can enhance edges and fine
details in the image by emphasizing high-frequency components.
Edge Detection: Techniques like Sobel, Canny, or Prewitt operators can be applied on color
images to detect boundaries by analyzing the gradients of the pixel intensities across different
channels.
Color-Based Object Detection
Color Thresholding: By setting thresholds for each color channel, objects of a particular color
can be detected and isolated.
Template Matching: Involves searching for a small template of an object within a larger color
image using a color-based metric.
1. Detection of Discontinuities
Detection of discontinuities is the first step in identifying edges, corners, and other key features
in an image. It identifies abrupt changes in pixel intensity or color that signify boundaries.
Key Discontinuities:
Point Discontinuities:
Abrupt changes in intensity in a single pixel.
Detected using filters like Laplacian or Difference of Gaussian (DoG).
Line Discontinuities:
Intensity changes along a line in an image.
Detected using directional filters like Sobel or Prewitt.
Edge Discontinuities:
Significant and continuous changes in intensity.
Edges often signify boundaries of objects.
Edge Detection Techniques:
Gradient-Based Methods:
Measure intensity gradient to locate edges.
Examples: Sobel, Prewitt, Roberts, Canny.
Second-Derivative Methods:
Identify zero-crossings of the second derivative to detect edges.
Example: Laplacian of Gaussian (LoG).
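The gradient-based approach can be sketched with SciPy's Sobel operator on a synthetic step edge (the test image is illustrative):

```python
import numpy as np
from scipy import ndimage

# Image with a vertical step edge between columns 3 and 4.
img = np.zeros((8, 8))
img[:, 4:] = 100.0

gx = ndimage.sobel(img, axis=1)   # gradient across columns
gy = ndimage.sobel(img, axis=0)   # gradient across rows
magnitude = np.hypot(gx, gy)      # gradient magnitude

# The magnitude peaks on the columns adjacent to the step
# and is zero in the flat regions.
print(magnitude[4, 3], magnitude[4, 0])  # -> 400.0 0.0
```

Thresholding this magnitude image yields an edge map; the Canny detector adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of the same gradient idea.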
2. Edge Linking
Once edges are detected, edge linking ensures continuity, forming coherent edges rather than
isolated fragments.
Techniques for Edge Linking:
Local Methods:
Link edges based on proximity and direction.
Check pixel connectivity (e.g., 8-connected neighborhoods).
Similar gradient magnitude and orientation ensure linking.
Global Methods:
Use more sophisticated approaches like graph-based or Hough transform techniques.
Hough Transform:
Detects straight lines or curves by voting in parameter space.
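The voting idea behind the Hough transform can be sketched directly: each edge pixel votes for every (rho, theta) line that could pass through it, using rho = x·cos(theta) + y·sin(theta). The implementation below is a minimal illustrative version, not an optimized one:

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180):
    """Accumulate votes in (rho, theta) parameter space."""
    h, w = edge_mask.shape
    diag = int(np.ceil(np.hypot(h, w)))           # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))       # theta in [0, 180) degrees
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_mask)
    for x, y in zip(xs, ys):
        # One vote per theta: the rho of the line through (x, y) at that angle.
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag

# A horizontal line of edge pixels at row y = 3:
# it corresponds to rho = 3 at theta = 90 degrees.
mask = np.zeros((10, 10), dtype=bool)
mask[3, :] = True
acc, thetas, diag = hough_lines(mask)
print(acc[3 + diag, 90])  # -> 10 votes, the accumulator maximum
```

Peaks in the accumulator correspond to lines supported by many edge pixels, which is how the transform links collinear fragments globally.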
3. Boundary Detection
Boundary detection identifies the exact outline of objects in an image, often requiring edge
refinement and contour tracing.
Thresholding:
Set a threshold to separate strong edges from weak ones, retaining significant boundaries.
Contour Tracing Algorithms:
Traverse detected edges to form continuous boundaries.
Examples: Moore's neighbor tracing, chain codes.
Region-Based Methods:
Use segmented regions to determine boundaries.
Detect boundaries where regions meet.
Applications
Object Recognition: Identify and isolate objects in an image.
Medical Imaging: Detect organ boundaries or abnormalities.
Image Compression: Simplify images by preserving edges and boundaries.
Computer Vision: Essential for tasks like scene understanding, face recognition, and feature
extraction.
Region-based segmentation
Region-based segmentation is a technique in image processing that groups pixels into regions
based on their similarity. The method focuses on identifying connected areas of an image where
the pixels share certain properties, such as intensity, color, or texture.
Key Principles
Homogeneity:
Regions are formed by grouping pixels with similar properties, such as brightness or color.
Connectivity:
Pixels in a region are spatially connected to form continuous areas.
Types of Region-Based Segmentation
Region Growing:
Description:
Starts with a seed pixel and expands the region by adding neighboring pixels that satisfy a
similarity criterion.
Steps:
Choose a seed pixel (manually or automatically).
Compare neighboring pixels' properties with the region's average.
Add similar pixels to the region and repeat the process.
Advantages:
Simple and intuitive.
Effective for segmenting regions with uniform properties.
Limitations:
Sensitive to noise.
Seed selection impacts results.
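The region-growing steps above can be sketched as a breadth-first search from the seed, here with a simple similarity criterion (intensity within a tolerance of the seed value; the function name, 4-connectivity choice, and test image are illustrative):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed`, adding 4-connected neighbors whose
    intensity is within `tol` of the seed pixel's value."""
    h, w = img.shape
    seed_val = int(img[seed])
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(int(img[ny, nx]) - seed_val) <= tol):
                region[ny, nx] = True
                queue.append((ny, nx))
    return region

# Two flat patches: left half 50, right half 200; seed in the left half.
img = np.full((6, 6), 50)
img[:, 3:] = 200
mask = region_grow(img, (2, 1), tol=10)
print(mask.sum())  # -> 18: the entire left half, and nothing else
```

Comparing against the seed value keeps the criterion simple; comparing against the running region mean (as in the steps above) adapts to gradual intensity drift but can let regions "leak" across soft edges.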
Region Splitting and Merging:
Description:
The image is recursively divided into regions, which are then merged if they meet certain
similarity criteria.
Steps:
Start with the entire image as one region.
Split regions where pixel properties vary significantly.
Merge neighboring regions that are similar.
Advantages:
Handles complex images with varying textures or intensities.
Limitations:
Computationally intensive.
Requires careful selection of merging criteria.
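The splitting half of the procedure can be sketched as a recursive quadtree decomposition; each block is split until it is homogeneous (here, intensity range within a threshold). The merge pass is omitted for brevity, and the homogeneity criterion is an illustrative assumption:

```python
import numpy as np

def split_regions(img, y0, x0, h, w, thresh, out):
    """Recursively split the block at (y0, x0) of size h x w into quadrants
    until each block is homogeneous; write each block's mean into `out`."""
    block = img[y0:y0 + h, x0:x0 + w]
    if (block.max() - block.min() <= thresh) or h <= 1 or w <= 1:
        out[y0:y0 + h, x0:x0 + w] = block.mean()
        return
    h2, w2 = h // 2, w // 2
    split_regions(img, y0,      x0,      h2,     w2,     thresh, out)
    split_regions(img, y0,      x0 + w2, h2,     w - w2, thresh, out)
    split_regions(img, y0 + h2, x0,      h - h2, w2,     thresh, out)
    split_regions(img, y0 + h2, x0 + w2, h - h2, w - w2, thresh, out)

# Image with two flat halves: one split into quadrants suffices,
# so the output reproduces the input exactly.
img = np.zeros((8, 8))
img[:, 4:] = 100.0
out = np.zeros_like(img)
split_regions(img, 0, 0, 8, 8, thresh=5, out=out)
```

A subsequent merge pass would compare adjacent leaf blocks and fuse those whose combined range still satisfies the criterion, which is where the "careful selection of merging criteria" noted above comes in.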
Watershed Algorithm:
Description:
Treats the image as a topographic surface, where pixel intensities represent elevations. Regions
are formed by flooding low-intensity valleys (basins).
Steps:
Identify intensity gradients.
Simulate water filling the basins to form regions.
Advantages:
Effective for detecting object boundaries.
Limitations:
Prone to over-segmentation, requiring preprocessing.