
Q1 part (i): Experiment on Grey-Level Reduction

Methodology

The script for this experiment employs a quantization algorithm to reduce the grey levels of the
original Morse256 image. The algorithm maps the image's 8-bit pixel values onto each reduced
grey scale by scaling and rounding down the original values to the nearest of the target grey
levels, leaving the spatial dimensions unchanged. The process is repeated for each target grey
level, generating a series of images for visual comparison with the original.
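
A minimal MATLAB sketch of this quantization step is shown below. The filename `Morse256.png` and the floor-based mapping are assumptions for illustration, not necessarily the exact script used.

```matlab
% Minimal sketch: reduce an 8-bit image to N grey levels by scaling
% and rounding down, then rescaling back to the 0-255 display range.
img = imread('Morse256.png');          % assumed filename
if size(img, 3) == 3
    img = rgb2gray(img);               % ensure a single-channel image
end

levels = [128 64 32 16 8];             % target grey-level counts
figure;
subplot(2, 3, 1); imshow(img); title('Original (256 levels)');

for k = 1:numel(levels)
    N = levels(k);
    step = 256 / N;                    % width of each quantization bin
    % floor() implements the "rounding down" described above; multiplying
    % by the bin width maps each bin back to a displayable intensity.
    q = uint8(floor(double(img) / step) * step);
    subplot(2, 3, k + 1); imshow(q); title(sprintf('%d levels', N));
end
```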

Results and Visual Analysis

The resultant images exhibit a progressive reduction in detail and contrast with decreasing grey
levels. The original image shows a rich gradation of tones, providing a smooth appearance in
areas of subtle intensity changes, such as the subject's skin and the background.

● The image with 128 grey levels maintains most of the original's detail, with slightly visible
posterization.
● With 64 grey levels, the posterization effect becomes more pronounced, particularly in
the background.
● The image with 32 grey levels further diminishes details, with broader tonal bands
replacing smooth gradients.
● At 16 grey levels, facial features become stylized, and background elements start
merging.
● The 8 grey-level image exhibits a drastic simplification of tones, resulting in a highly
abstracted representation where details are significantly compromised.

Discussion
The degradation of image quality with reduced grey levels can be attributed to the loss of
information; finer tonal variations are replaced by flat regions of identical grey values. In
scenarios where subtle intensity differences are crucial, such as medical imaging or fine art
reproduction, higher grey-level resolutions are imperative. Conversely, for applications like
stylized graphics or when bandwidth for image transmission is limited, fewer grey levels may be
acceptable.

Visual inspection of the images reveals the trade-off between file size and visual fidelity. While
fewer grey levels can reduce the amount of data required to store and transmit an image, it also
leads to a loss of detail that can be critical for both human interpretation and computer vision
tasks.

Q1 part (ii) Report Section: Image Restoration of Discs2 Using Median and Bilateral Filtering

Methodology
A sequence of median filters with kernel sizes ranging from 3x3 to 10x10 was applied to Discs2
to mitigate the noise. Subsequently, a bilateral filter, which weights neighbouring pixels by a
Gaussian in the spatial domain and by their intensity difference in the range domain, was used
to smooth the image further while preserving edges. The two filters were then applied in
sequence, aiming for the best possible noise reduction while maintaining the integrity of the
original image.
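
The sketch below illustrates this pipeline, assuming Discs2 is stored as `Discs2.png`; the bilateral-filter parameters (degree of smoothing and spatial sigma) are illustrative values rather than the experiment's settings, and `imbilatfilt` requires a recent Image Processing Toolbox release.

```matlab
% Sketch of the median-then-bilateral pipeline for Discs2.
img = imread('Discs2.png');            % assumed filename
if size(img, 3) == 3
    img = rgb2gray(img);               % medfilt2 expects a 2-D image
end

% Median filtering with increasing kernel sizes (3x3 up to 10x10).
kernels = 3:10;
filtered = cell(1, numel(kernels));
for k = 1:numel(kernels)
    n = kernels(k);
    filtered{k} = medfilt2(img, [n n]);   % n-by-n median filter
end

% Bilateral filtering of the last median-filtered result; both
% parameters below are illustrative values.
smoothed = imbilatfilt(filtered{end}, 500, 3);

figure;
subplot(1, 3, 1); imshow(img);           title('Noisy Discs2');
subplot(1, 3, 2); imshow(filtered{end}); title('10x10 median');
subplot(1, 3, 3); imshow(smoothed);      title('Median + bilateral');
```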

Results and Visual Analysis

The application of median filters of increasing kernel sizes resulted in a gradual smoothing of
the image. Smaller kernels provided moderate noise reduction, while larger kernels offered
more pronounced smoothing at the expense of image detail.
● The 3x3 median filter showed marginal noise reduction with significant speckle
presence.
● As the kernel size increased to 5x5 and 7x7, noise suppression improved, but the image
began losing fine details.
● Median filters with 8x8, 9x9, and 10x10 kernels produced smoother images, but also
introduced a blur effect, particularly in the smaller disc structures.

The subsequent application of the bilateral filter to the median-filtered images significantly
enhanced the visual quality. This filter effectively smoothed out noise while preserving edges, as
illustrated by the clearer definition between the discs and the background.

Discussion
The combined filtering approach of median followed by bilateral filtering demonstrated the
capability of sequentially applied filters in noise reduction tasks. The median filter effectively
reduced salt-and-pepper noise, and the bilateral filter's preservation of edges proved
advantageous in maintaining structural details. The experiment highlights the importance of filter
selection and parameter tuning in image processing applications.

The results indicate that a balance between noise removal and detail preservation is crucial,
with larger median kernels coupled with bilateral filtering providing the best compromise
between smoothness and detail retention.

Q1 part (ii) Report Section: Restoration of Discs3 Using Various Filtering Techniques

Methodology
The first algorithm applied a combination of classic image restoration filters. Gaussian filtering
addressed general image blur, median filtering tackled impulse noise, and Wiener filtering aimed
at reducing overall noise while preserving details. Unsharp masking enhanced edges to
counteract the blurring effects. For the second algorithm, Lucy-Richardson deconvolution was
utilized to reverse the blur imposed by a known point spread function, followed by mean,
median, and Gaussian filters to smooth the resultant image and reduce noise.
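
Both pipelines can be sketched in a few lines of MATLAB, as below; the filename, the assumed point spread function, and all parameter values are illustrative choices rather than the experiment's actual settings.

```matlab
% Hedged sketch of the two restoration pipelines for Discs3.
img = im2double(imread('Discs3.png'));   % assumed filename

% --- Algorithm 1: classic restoration filters ---
g  = imgaussfilt(img, 1);                % Gaussian filter for general blur
m  = medfilt2(g, [3 3]);                 % median filter for impulse noise
w  = wiener2(m, [5 5]);                  % adaptive Wiener noise reduction
a1 = imsharpen(w, 'Radius', 2, 'Amount', 1);  % unsharp masking for edges

% --- Algorithm 2: Lucy-Richardson deconvolution, then smoothing ---
psf = fspecial('gaussian', 7, 2);        % assumed point spread function
d   = deconvlucy(img, psf, 10);          % 10 deblurring iterations
d   = imfilter(d, fspecial('average', 3));   % mean filter
d   = medfilt2(d, [3 3]);                % median filter
a2  = imgaussfilt(d, 0.5);               % final Gaussian smoothing

figure;
subplot(1, 3, 1); imshow(img); title('Degraded Discs3');
subplot(1, 3, 2); imshow(a1);  title('Algorithm 1');
subplot(1, 3, 3); imshow(a2);  title('Algorithm 2');
```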

Results and Visual Analysis

The results from both algorithms reveal improvements over the original blurred image. In the
first algorithm, the Gaussian filter smoothed the image, while the median filter effectively
reduced noise at the expense of detail sharpness. The Wiener filter and unsharp masking
yielded a clearer image, with enhanced edges and reduced noise.

In the second algorithm, Lucy-Richardson deconvolution notably restored details that were
obscured by the blur. The subsequent application of mean and median filters offered additional
noise reduction, with the Gaussian filter providing a balance between noise suppression and
detail preservation.

The two approaches showed different strengths: the first algorithm excelled at noise reduction,
while the second algorithm was more effective at deblurring.

Discussion
The experiment with Discs3 demonstrated the efficacy of different filtering techniques in image
restoration. The choice of filter and parameters significantly affects the outcome, with each filter
providing unique benefits. Gaussian and median filters are well-suited for noise reduction, while
Wiener filtering and Lucy-Richardson deconvolution excel at deblurring. Unsharp masking
effectively enhances edges, which is important for perceptual quality.

This comparative analysis of restoration techniques highlights the importance of understanding
the nature of degradation when selecting an approach for image restoration. The combination of
filters within each algorithm illustrates the benefits of multi-step filtering strategies for tackling
various types of image distortions.

Q1 part (ii) Report Section: Restoration of Discs4 Using Notch Filtering

Methodology
The MATLAB script began by performing a two-dimensional fast Fourier transform (FFT) on
Discs4. The magnitude spectrum was visualized, and the prominent noise frequencies were
manually identified and selected using `ginput`. Notch filters were then constructed around the
selected frequencies to suppress the noise while preserving the image's underlying structure.
The notch filter masks were applied to the Fourier-transformed image, followed by an inverse
FFT to convert the image back to the spatial domain.
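
A hedged sketch of this workflow is given below; the filename, the notch radius, and the handling of the conjugate-symmetric peaks are illustrative assumptions.

```matlab
% Sketch of the notch-filtering workflow: FFT, manual frequency picking
% with ginput, notch masking, inverse FFT.
img = im2double(imread('Discs4.png'));   % assumed filename
F   = fftshift(fft2(img));               % centred Fourier spectrum

figure; imshow(log(1 + abs(F)), []);
title('Click the noise peaks, then press Enter');
[x, y] = ginput;                         % manual selection of noise spikes

% Build a notch mask: zero out a small disc around each selected peak
% and its conjugate-symmetric counterpart.
[rows, cols] = size(img);
[C, R] = meshgrid(1:cols, 1:rows);
mask = true(rows, cols);
radius = 5;                              % assumed notch radius (pixels)
for k = 1:numel(x)
    mask = mask & ((C - x(k)).^2 + (R - y(k)).^2 > radius^2);
    % approximate location of the conjugate-symmetric peak
    xs = cols + 2 - x(k);  ys = rows + 2 - y(k);
    mask = mask & ((C - xs).^2 + (R - ys).^2 > radius^2);
end

clean = real(ifft2(ifftshift(F .* mask)));   % back to the spatial domain
figure; imshowpair(img, clean, 'montage');
title('Before / after notch filtering');
```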

Results and Visual Analysis

The results show the original Discs4 image with visible periodic noise, the Fourier magnitude
spectrum highlighting the noise frequencies, and the final cleaned image after notch filtering.
The manual selection of noise frequencies allowed for a targeted approach to noise reduction,
as evident in the cleaned image, which exhibits significant noise removal without compromising
the integrity of the discs.

Discussion
Notch filtering in the frequency domain proved effective for mitigating periodic noise, which is
often challenging to address with spatial domain filters. The manual selection process, though
user-dependent, allowed for precise identification of noise frequencies, resulting in a
high-quality restoration. This method's success is particularly noticeable in the preservation of
edges and fine details, making it a powerful tool for image restoration tasks involving periodic
noise.

Q2 part (i) Report Section: Effect of Salt-and-Pepper Noise on Motion Detection Accuracy

Methodology
A MATLAB script processed the video `Video1.mp4` by applying background subtraction to
detect motion. To assess the impact of noise, the script injected salt-and-pepper noise at three
different intensities—0.05, 0.1, and 0.2—into the video frames. The background model was a
single frame taken at the beginning of the video, converted to grayscale. The script then
calculated the absolute difference between the current frame and the background, applying a
threshold to distinguish between the background and motion areas.
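
The sketch below outlines this procedure, assuming `Video1.mp4` is on the MATLAB path; the threshold value is an illustrative choice.

```matlab
% Sketch of the noise-injection experiment on background subtraction.
v  = VideoReader('Video1.mp4');
bg = rgb2gray(readFrame(v));             % first frame as background model
densities = [0.05 0.1 0.2];              % salt-and-pepper noise levels
thresh = 30;                             % assumed motion threshold (0-255)

while hasFrame(v)
    frame = rgb2gray(readFrame(v));
    for d = densities
        noisy  = imnoise(frame, 'salt & pepper', d);  % inject noise
        delta  = imabsdiff(noisy, bg);   % difference from the background
        motion = delta > thresh;         % binary motion mask
        % ... display or store `motion` for each noise level ...
    end
end
```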

Results and Visual Analysis

The frames extracted from the processed video illustrate the challenges of motion detection as
noise levels increase. At a low noise level (0.05), the algorithm managed to maintain a
reasonable distinction between motion and static areas. However, as the noise level increased
to 0.1 and then to 0.2, the motion detection became increasingly erratic, with noise pixels
frequently misidentified as motion.

Discussion
The experiment highlights the susceptibility of basic motion detection techniques to image
noise. While low levels of salt-and-pepper noise had a modest impact, higher levels significantly
compromised the accuracy of motion detection. This underscores the need for noise-resistant
motion detection methods or pre-processing steps to filter noise prior to motion analysis.
Furthermore, the results suggest that maintaining the reliability of motion detection in noisy
environments is a critical challenge for video surveillance applications.

Q2 part (ii) Report Section: Motion Detection with Dynamic Background in Video2

Methodology
The background model is dynamically updated as the video progresses. The initial frame of the
video is taken as the starting background, and with each new frame, the background model is
updated by a weighted average. The learning rate (alpha) for updating the background is set to
0.01, allowing the background to gradually incorporate new static elements over time. This
dynamic adaptation helps to distinguish between moving objects and the evolving background.
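
In equation form, the update is B_t = (1 - alpha) * B_(t-1) + alpha * F_t, where F_t is the current frame. A minimal sketch follows, assuming the input file is `Video2.mp4` and an illustrative detection threshold:

```matlab
% Running-average background model with alpha = 0.01.
v      = VideoReader('Video2.mp4');
bg     = double(rgb2gray(readFrame(v))); % initial background = frame 1
alpha  = 0.01;                           % learning rate
thresh = 25;                             % assumed detection threshold

while hasFrame(v)
    frame  = double(rgb2gray(readFrame(v)));
    motion = abs(frame - bg) > thresh;   % pixels that differ from the model
    % Weighted-average update: the background slowly absorbs static changes.
    bg = (1 - alpha) * bg + alpha * frame;
    % ... display or store `motion` here ...
end
```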

Results and Visual Analysis

The sequence of frames demonstrates the algorithm's effectiveness in isolating motion from a
dynamic background. Early frames show minimal detection artifacts, with more significant
changes in the background becoming apparent in later frames. The dynamic background
subtraction method successfully identifies moving objects without incorporating them into the
background model.

Discussion
Dynamic background subtraction proves to be a useful technique for dealing with videos where
the background changes over time. The choice of the learning rate is critical, as it governs the
speed at which the background model adapts to new conditions. If the rate is too high, the
background may change too quickly, and moving objects could be absorbed into the model.
Conversely, if the rate is too low, the model may not adapt sufficiently to the changes in the
environment, leading to false detections.

The experiment illustrates that carefully tuned parameters are vital for achieving accurate
motion detection in videos with a dynamic background. This approach is particularly relevant in
outdoor surveillance scenarios where lighting conditions and background scenes change
frequently.

Q2 part (iii) Report Section: Identifying Unattended Objects Using Frame Differencing and
Background Subtraction

Methodology
The method involves the continuous analysis of video frames to detect static objects over a
series of frames. The background is initially set as the first frame and updated throughout the
video to adapt to changes in the scene. Frame differencing identifies changes between the
current and previous frames, while background subtraction identifies changes from the overall
background. By combining these methods, the algorithm distinguishes between temporary and
persistent changes in the scene, isolating objects that have remained static for a predefined
threshold of frames.
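
A hedged MATLAB sketch of this combination follows; the filename, thresholds, learning rate, and static-frame count are all illustrative assumptions.

```matlab
% Combining frame differencing with background subtraction to flag
% objects that remain static for a predefined number of frames.
v      = VideoReader('Video3.mp4');      % assumed filename
prev   = double(rgb2gray(readFrame(v)));
bg     = prev;                           % background starts as frame 1
alpha  = 0.005;                          % slow background adaptation
tDiff  = 20;                             % frame-difference threshold
tBg    = 25;                             % background-difference threshold
staticThresh = 100;                      % frames an object must stay still
staticCount  = zeros(size(bg));          % per-pixel stillness accumulator

while hasFrame(v)
    frame   = double(rgb2gray(readFrame(v)));
    moving  = abs(frame - prev) > tDiff; % changed since the last frame
    changed = abs(frame - bg)   > tBg;   % differs from the background
    % Pixels that differ from the background but are no longer moving
    % accumulate stillness; everything else resets.
    candidate = changed & ~moving;
    staticCount(candidate)  = staticCount(candidate) + 1;
    staticCount(~candidate) = 0;
    unattended = staticCount > staticThresh;   % flagged static objects
    % ... visualize or log `unattended` regions here ...

    bg   = (1 - alpha) * bg + alpha * frame;   % update background model
    prev = frame;
end
```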

Results and Visual Analysis

The set of images showcases various stages of object identification over time. Initially, the
scene is empty, but as objects appear and remain stationary beyond the static threshold, they
are highlighted by the algorithm. The series of frames illustrates the algorithm's capability to
consistently detect unattended objects while ignoring transient changes in the environment.

Discussion
The combination of frame differencing and background subtraction is effective in identifying
objects that become static in a scene. This approach is robust to movements in the scene that
do not persist, allowing the system to focus on potential security risks posed by unattended
items. The key to this method's effectiveness is the dynamic updating of the background model
and the accumulation of static changes over time to define what constitutes an unattended
object.

The method's performance depends on several factors, including the threshold for detecting
changes, the learning rate for background updating, and the static threshold that defines how
long an object must remain in place to be considered unattended. These parameters must be
carefully chosen based on the specific requirements of the surveillance scenario and the
expected types of changes in the scene.
