CV Imp
Histogram matching is a technique used in image processing to adjust the pixel intensity distribution of an image so that it resembles the histogram of a reference image. This helps ensure intensity consistency across different images.
Steps in Histogram Matching:
1. Compute the Histogram – Calculate the histogram of both the input image and the reference image (the image whose histogram we want to match).
2. Compute the Cumulative Distribution Function (CDF) – Generate the CDF for both the input and reference histograms.
3. Find the Mapping Function – Match the CDF values of the input image to the CDF values of the reference image to determine a mapping function.
4. Transform the Input Image – Apply the mapping function to the input image to adjust its pixel intensities to match the reference histogram.
Benefits of Histogram Matching:
• Minimizes Noise Sensitivity: Makes images more robust against variations in intensity.
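A minimal NumPy sketch of these four steps for 8-bit grayscale images; the function and variable names are illustrative, not from a specific library:

import numpy as np

def match_histogram(source, reference):
    # Step 1: histograms of both images (one bin per intensity)
    src_hist, _ = np.histogram(source.ravel(), bins=256, range=(0, 256))
    ref_hist, _ = np.histogram(reference.ravel(), bins=256, range=(0, 256))

    # Step 2: normalized cumulative distribution functions
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size

    # Step 3: mapping function - for each source CDF value, find the
    # reference intensity with the closest CDF value
    mapping = np.interp(src_cdf, ref_cdf, np.arange(256))

    # Step 4: apply the mapping as a lookup table
    return mapping[source].astype(np.uint8)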
(b) Resizing
Resizing modifies an image's dimensions, which may distort its aspect ratio if not handled properly. Common interpolation methods used when resizing are:
• Bilinear Interpolation
• Bicubic Interpolation
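A short OpenCV sketch of aspect-ratio-preserving resizing with both interpolation modes; the file name and target width of 320 are arbitrary examples:

import cv2

img = cv2.imread("input.jpg")          # example file name
target_w = 320
scale = target_w / img.shape[1]        # keep the aspect ratio
new_size = (target_w, int(img.shape[0] * scale))   # (width, height)

bilinear = cv2.resize(img, new_size, interpolation=cv2.INTER_LINEAR)
bicubic = cv2.resize(img, new_size, interpolation=cv2.INTER_CUBIC)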
When would it be suitable to use histogram equalization? Briefly outline the steps to histogram-equalize a given image.
Histogram equalization is useful when an image has poor contrast, such as in medical imaging or low-light conditions.
Steps for Histogram Equalization:
1. Compute the Histogram – Count the occurrence of pixel intensity values.
2. Compute the CDF – Determine the cumulative distribution function.
3. Normalize the CDF – Adjust intensity levels to spread across the full range.
4. Map the Pixel Values – Transform each pixel in the input image using the new intensity values.
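A minimal NumPy sketch of these four steps for an 8-bit grayscale image (OpenCV's cv2.equalizeHist performs the same operation in one call):

import numpy as np

def equalize_histogram(img):
    # 1. Histogram of pixel intensities
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    # 2. Cumulative distribution function
    cdf = np.cumsum(hist)
    # 3. Normalize the CDF to span [0, 255], ignoring empty bins
    #    so the darkest occupied intensity maps to 0
    cdf_m = np.ma.masked_equal(cdf, 0)
    cdf_m = (cdf_m - cdf_m.min()) * 255 / (cdf_m.max() - cdf_m.min())
    lut = np.ma.filled(cdf_m, 0).astype(np.uint8)
    # 4. Map every pixel through the lookup table
    return lut[img]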
Describe the Canny edge detector. What are the steps involved in edge detection using this detector? Use diagrams where necessary in your explanation.
The Canny edge detector is an advanced method used to detect edges in an image. It is widely used because it produces clean, thin, and continuous edges.
Steps in Canny Edge Detection:
1. Noise Reduction: Apply a Gaussian filter to smooth the image.
2. Compute Gradient Magnitude and Direction: Use Sobel filters to find intensity changes.
3. Non-Maximum Suppression: Thin out edges by removing weak gradients.
4. Double Thresholding: Identify strong and weak edges.
5. Edge Tracking by Hysteresis: Remove weak edges that are not connected to strong edges.
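A compact OpenCV sketch of the pipeline; cv2.Canny performs steps 2-5 internally, and the thresholds 100 and 200 are illustrative:

import cv2

gray = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # example file name
blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)  # step 1: noise reduction
edges = cv2.Canny(blurred, 100, 200)           # steps 2-5: gradients, non-max
                                               # suppression, double threshold,
                                               # hysteresis edge tracking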
Which pre-processing filter is effective for removing fine black noise on targets with a white background?
Median Filter: Best for removing salt-and-pepper noise while preserving edges.
Morphological Closing (dilation followed by erosion): Removes small black specks on a bright background without affecting the main objects. (Opening, by contrast, removes small bright specks on a dark background.)
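A short OpenCV sketch of both options; the file name and kernel sizes are illustrative:

import cv2
import numpy as np

gray = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE)  # example file name

# Median filter: removes isolated specks while preserving edges
denoised = cv2.medianBlur(gray, 3)

# Morphological closing: dilation then erosion fills small dark spots
kernel = np.ones((3, 3), np.uint8)
closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)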
Define a digital image.
A digital image is a representation of a two-dimensional image using pixels stored in a grid format. Each pixel has an intensity value representing color or brightness.
What are the different machine vision system types and platforms?
Machine vision systems are used in automation, robotics, quality inspection, and surveillance. These systems can be categorized into different types based on their applications.
Types of Machine Vision Systems:
1. 2D Vision Systems:
o Used in barcode scanning, OCR (Optical Character Recognition), and pattern recognition.
2. 3D Vision Systems:
o Used in depth measurement, robotic guidance, and bin picking.
Describe the steps of the SIFT (Scale-Invariant Feature Transform) algorithm.
1. Scale-Space Extrema Detection:
o Locate candidate keypoints as extrema of Difference-of-Gaussian images across scales.
2. Keypoint Localization:
o Refine keypoint positions and discard low-contrast or edge-like responses.
3. Orientation Assignment:
o Assign each keypoint a dominant gradient orientation for rotation invariance.
4. Keypoint Descriptor:
o Create a 128-dimensional feature descriptor for each keypoint based on local gradients.
5. Keypoint Matching:
o Match descriptors between images, typically by nearest-neighbor distance.
Describe Harris corner detection.
1. Compute Image Gradients:
o Obtain Ix and Iy, e.g., with Sobel filters.
2. Form the Second-Moment Matrix:
o Build the matrix M from products of the gradients, weighted by a Gaussian window.
3. Compute the Corner Response:
o Compute the corner strength using the determinant and trace of the matrix: R = det(M) − k·(trace(M))².
4. Apply a Threshold:
o Keep points whose response exceeds a threshold (with non-maximum suppression) as corners.
Benefits:
• Useful for Object Recognition: Helps in image matching and feature extraction.
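A brief OpenCV sketch of SIFT detection and nearest-neighbor matching with Lowe's ratio test; the file names are examples, and SIFT requires a modern OpenCV build where it is included:

import cv2

img1 = cv2.imread("scene1.jpg", cv2.IMREAD_GRAYSCALE)  # example file names
img2 = cv2.imread("scene2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)  # des1: N x 128 descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
# keep matches whose best distance clearly beats the second best
good = [m for m, n in matches if m.distance < 0.75 * n.distance]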
Smoothing (Low-Pass) Filters:
• Examples:
o Mean (Averaging) Filter: Blurs the image by replacing each pixel with the average of its neighbors.
o Median Filter: Replaces each pixel with the median value of its neighboring pixels.
Sharpening (High-Pass) Filters:
• Examples:
o Laplacian Filter (second-order derivative for edge detection).
o Unsharp Masking (subtracts a blurred image from the original to enhance sharpness).
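A minimal OpenCV sketch of unsharp masking; the weights 1.5 and −0.5 and the kernel size are illustrative:

import cv2

img = cv2.imread("input.jpg")               # example file name
blurred = cv2.GaussianBlur(img, (5, 5), 0)  # low-pass version
# original + (original - blurred): boosts high-frequency detail
sharp = cv2.addWeighted(img, 1.5, blurred, -0.5, 0)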
Applications of feature detection:
• Object recognition
• Image matching
• Feature tracking
Image Types:
• Grayscale Image: Each pixel has a single intensity value.
• RGB Image: Each pixel has three values (Red, Green, Blue).
How are pixel operations like addition, subtraction, and multiplication used in image processing?
Pixel operations modify image intensity values for different applications.
Common Pixel Operations:
• Addition: Brightens an image or blends two images.
• Subtraction: Detects changes between images (e.g., background subtraction for motion detection).
• Multiplication: Scales contrast or applies binary masks.
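A short OpenCV sketch of the three operations; note that cv2.add and cv2.absdiff saturate at 0 and 255 instead of wrapping around as raw NumPy + and − do on uint8 arrays (file names are examples):

import cv2
import numpy as np

a = cv2.imread("frame1.jpg", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("frame2.jpg", cv2.IMREAD_GRAYSCALE)

inc = np.full_like(a, 50)
brighter = cv2.add(a, inc)              # addition: saturating brightening
diff = cv2.absdiff(a, b)                # subtraction: change detection
mask = (a > 128).astype(np.uint8)       # illustrative binary mask
masked = cv2.multiply(a, mask)          # multiplication: keep masked pixels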
How does image resolution affect computer vision tasks?
• Higher Resolution Improves Accuracy: More details allow better object detection and recognition.
• Impacts Feature Extraction: Fine details are easier to analyze in high-resolution images.
• Trade-Off Between Speed and Quality: Lower-resolution images process faster but may lose important details.
Example:
• Facial Recognition: High resolution captures fine details for better identification.
• Medical Imaging: Higher resolution helps in detecting small anomalies in X-rays and MRIs.
What is the RGB color model, and how is it used in computer vision?
The RGB color model represents colors using Red, Green, and Blue components.
How RGB Works:
• Each pixel has three values: (R, G, B), each ranging from 0 to 255.
• Example:
o (255, 0, 0) = Red
o (0, 255, 0) = Green
o (0, 0, 255) = Blue
o (0, 0, 0) = Black; (255, 255, 255) = White
Explain the difference between RGB, HSV, and YCbCr color spaces.
• RGB represents each pixel directly as Red, Green, and Blue intensities; brightness and color information are mixed together.
• HSV separates Hue (color), Saturation (color purity), and Value (brightness).
• YCbCr separates luminance (Y) from the blue-difference and red-difference chroma components (Cb, Cr).
Why use different color spaces?
• Better Processing Efficiency: Some color spaces separate brightness and color for easier processing.
• Noise Reduction: The YCbCr color space can reduce noise in compressed images.
• Enhances Object Detection: HSV is more robust to lighting changes than RGB.
Example Applications:
• JPEG and most video codecs compress images in YCbCr.
• Color-based segmentation and skin detection commonly use HSV or YCbCr.
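A one-line-per-space OpenCV sketch; note that OpenCV loads images in BGR order and names the YCbCr conversion YCrCb (file name is an example):

import cv2

bgr = cv2.imread("input.jpg")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)       # hue, saturation, value
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)   # luma + chroma channels
h, s, v = cv2.split(hsv)                         # access individual channels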
What are the common types of noise found in digital images, and how do they affect image quality?
Compare the performance of Gaussian smoothing and median filtering for noise reduction.
What is the difference between linear and non-linear spatial filters?
Applications of spatial filters:
• Noise reduction (smoothing with linear or non-linear filters).
• Unsharp Masking: Combines high-pass filtering with the original image for sharpening.
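For the Gaussian-vs-median comparison above: Gaussian smoothing (a linear filter) averages noise away but also blurs edges, while the median filter (non-linear) removes impulse salt-and-pepper noise while keeping edges sharp. A minimal sketch to compare both; the file name and kernel sizes are illustrative:

import cv2

gray = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE)

gauss = cv2.GaussianBlur(gray, (5, 5), 1.0)  # linear: weighted neighborhood average
median = cv2.medianBlur(gray, 5)             # non-linear: neighborhood median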
An image's histogram also reveals its exposure:
1. Under-Exposed Image:
o Most pixels are dark, so histogram values are concentrated on the left.
Example:
o A night photo with poor lighting will have most of its histogram values concentrated on the left.
2. Over-Exposed Image:
o Most pixels are bright, making the image appear washed out.
Example:
o A photo taken in bright sunlight without proper settings will have most of its histogram values concentrated on the right.
3. Well-Exposed Image:
o Intensity values are spread fairly evenly across the full range.
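A quick sketch for inspecting exposure from the histogram; the 50% skew check is an illustrative heuristic, not a standard rule:

import cv2

gray = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)  # example file name
hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()

dark_frac = hist[:64].sum() / hist.sum()     # mass in the darkest quarter
bright_frac = hist[192:].sum() / hist.sum()  # mass in the brightest quarter
if dark_frac > 0.5:
    print("likely under-exposed")
elif bright_frac > 0.5:
    print("likely over-exposed")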
What is the purpose of image sharpening, and how does it differ from image smoothing?
Purpose of Image Sharpening:
Sharpening enhances edges and fine detail by amplifying high-frequency components; smoothing does the opposite, suppressing high frequencies to reduce noise at the cost of blurring detail.
Explain the Laplacian operator and its role in edge detection and image sharpening.
The Laplacian is an isotropic second-order derivative operator, ∇²f = ∂²f/∂x² + ∂²f/∂y². Edges appear as zero-crossings of the Laplacian, and subtracting it from the original image (g = f − ∇²f) enhances edges, which is the basis of Laplacian sharpening.
How does the Sobel operator work for gradient-based edge detection?
The Sobel operator convolves the image with two 3×3 kernels to estimate the horizontal (Gx) and vertical (Gy) intensity gradients. The gradient magnitude √(Gx² + Gy²) gives edge strength, and atan2(Gy, Gx) gives edge direction.
What is the difference between first-order and second-order derivative filters in image sharpening?
First-order (gradient) filters such as Sobel respond to the slope of intensity changes and produce thicker edge responses; second-order filters such as the Laplacian respond to the rate of change of that slope, give finer detail with double responses at step edges, and are more sensitive to noise.
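A closing OpenCV sketch contrasting the two filter orders; the file name and ksize values are illustrative:

import cv2
import numpy as np

gray = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# First-order: Sobel gradients and their magnitude
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.sqrt(gx**2 + gy**2)

# Second-order: Laplacian, and Laplacian sharpening g = f - lap
lap = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)
sharpened = np.clip(gray - lap, 0, 255).astype(np.uint8)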