Assignment 2 Img PDF
Submitted by:
Name: Utkrisht
Reg No: GCS/1730957
Batch: GCS-16 (AB Group)
In image processing, filters are mainly used to suppress either the high frequencies in the image, i.e. smoothing the image, or the low frequencies, i.e. enhancing or detecting edges in the image. The filter function is shaped so as to attenuate some frequencies and enhance others.
Applying filters to an image is another way to modify it. The difference compared to point operations is that a filter uses more than one pixel to generate each new pixel value. An example is the smoothing filter, which replaces a pixel value by the average of its neighbouring pixel values.
Average (or mean) filtering is a method of 'smoothing' images by reducing the amount of intensity variation between neighbouring pixels. The average filter works by moving through the image pixel by pixel, replacing each value with the average value of the neighbouring pixels, including the pixel itself.
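A minimal MATLAB sketch of such an average filter is given below, assuming the Image Processing Toolbox functions fspecial and imfilter and the stock demo image 'cameraman.tif' (both assumptions, used only for illustration):

% Average (mean) filtering: each pixel is replaced by the mean of its 3x3 neighbourhood
img = im2double(imread('cameraman.tif'));    % read the image and scale to [0, 1]
avg = fspecial('average', [3 3]);            % 3x3 averaging kernel (all weights 1/9)
smoothed = imfilter(img, avg, 'replicate');  % filter; 'replicate' pads the image borders
figure;
subplot(1,2,1); imshow(img);      title('Original image');
subplot(1,2,2); imshow(smoothed); title('Average filtered image');

A larger kernel (for example 5x5) smooths more strongly, but also blurs edges further.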
Q2. What is image sharpening? Write a procedure/code to sharpen an image with a Laplacian filter.
Image sharpening is an effect applied to digital images to give them a sharper
appearance. Almost all lenses can benefit from at least a small amount of sharpening.
Sharpening is applied in-camera to JPEG images at a level specified by the photographer
or at the default set by the camera manufacturer. Lightroom automatically applies some
sharpening to images unless it is instructed not to. For further sharpening, there are
Photoshop techniques and filters like Unsharp Mask and Smart Sharpen to do the job.
Sharpening works best on images whose blur did not stem from camera shake or
drastically missed focus, though minor camera shake or slightly out-of-focus shots can
also be fixed with sharpening.
Code:
clc;
close all;
a = im2double(imread('moon.png'));       % read in the image
lap = [-1 -1 -1; -1 8 -1; -1 -1 -1];     % Laplacian kernel with a positive centre
resp = imfilter(a, lap, 'conv');         % Laplacian response of the image
sharpened = a + resp;                    % add the response back to the original to sharpen it
figure;
subplot(1,3,1); imshow(a);         title('Original image');
subplot(1,3,2); imshow(resp);      title('Laplacian filtered image');
subplot(1,3,3); imshow(sharpened); title('Sharpened image');
The Fourier Transform is an important image processing tool which is used to decompose
an image into its sine and cosine components. The output of the transformation
represents the image in the Fourier or frequency domain, while the input image is the
spatial domain equivalent. In the Fourier domain image, each point represents a
particular frequency contained in the spatial domain image.
The Fourier Transform is used in a wide range of applications, such as image analysis,
image filtering, image reconstruction and image compression.
The continuous Fourier transform
The Fourier transform pair in its most general form, for a continuous and aperiodic time signal x(t), can be treated as the Laplace transform of the signal evaluated along the imaginary axis ($s = j\omega$):

$$X(j\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt, \qquad x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(j\omega)\, e^{j\omega t}\, d\omega$$

As this notation is closely related to system analysis concepts such as the Laplace transform and the transfer function, it is preferred in the field of system design and control. However, in practice, it is more convenient to represent the frequency of a signal by $f = \omega/2\pi$ (in Hz) rather than by $\omega$:

$$X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-j 2\pi f t}\, dt, \qquad x(t) = \int_{-\infty}^{\infty} X(f)\, e^{j 2\pi f t}\, df$$

Here the forward and inverse Fourier transforms are in perfect symmetry, with only a different sign for the exponent, so the duality of the Fourier transform between the time and frequency domains is better illustrated. As this notation closely relates the signal representations in the time and frequency domains, it is preferred in the field of signal processing.
The discrete Fourier transform
Working with the Fourier transform on a computer usually involves a form of the transform
known as the discrete Fourier transform (DFT). A discrete transform is a transform whose
input and output values are discrete samples, making it convenient for computer
manipulation. There are two principal reasons for using this form of the transform:
1. The input and output of the DFT are both discrete, which makes it convenient for computer manipulation.
2. There is a fast algorithm for computing the DFT, known as the fast Fourier transform (FFT).
The DFT is usually defined for a discrete function f(x, y) that is nonzero only over the finite region $0 \le x \le M-1$ and $0 \le y \le N-1$. The two-dimensional M-by-N DFT and inverse M-by-N DFT relationships are given by

$$F(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, e^{-j 2\pi (ux/M + vy/N)}, \qquad u = 0, \dots, M-1, \quad v = 0, \dots, N-1$$

$$f(x, y) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u, v)\, e^{j 2\pi (ux/M + vy/N)}, \qquad x = 0, \dots, M-1, \quad y = 0, \dots, N-1$$
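In MATLAB, the 2-D DFT of an image is computed with the fast Fourier transform. The following is a minimal sketch, assuming the stock demo image 'cameraman.tif' (an assumption, not part of the original text); the log scaling is used only to make the spectrum visible:

% Compute and display the 2-D DFT of an image using the FFT
f = im2double(imread('cameraman.tif'));
F = fft2(f);              % 2-D discrete Fourier transform
Fc = fftshift(F);         % shift the zero-frequency term to the centre of the spectrum
S = log(1 + abs(Fc));     % log-scaled magnitude spectrum for display
figure;
subplot(1,2,1); imshow(f);     title('Spatial domain image');
subplot(1,2,2); imshow(S, []); title('Fourier (frequency domain) spectrum');
% ifft2(F) recovers the original image up to floating-point error.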
To convert a continuous image f(x, y) into digital form, we have to sample the function in both coordinates and in amplitude.
Comparison of sampling and quantization:

X and Y axis
Sampling: In sampling, the values on the y-axis (usually amplitude) are continuous, but the time or x-axis is discretized.
Quantization: In quantization, the time or x-axis is continuous and the y-axis or amplitude is discretized.

When it is done
Sampling: Sampling is done prior to the quantization process.
Quantization: Quantization is done after the sampling process.

Resolution
Sampling: The sampling rate determines the spatial resolution of the digitized image.
Quantization: The quantization level determines the number of grey levels in the digitized image.

Effect on a continuous curve
Sampling: Sampling reduces a continuous curve (time-amplitude graph) to a series of "tent poles" over time.
Quantization: Quantization reduces a continuous curve to a continuous series of "stair steps" that exist at regular time intervals.

Values representing the time intervals
Sampling: In the sampling process, a single amplitude value is selected from the different values of the time interval to represent it.
Quantization: In the quantization process, the values representing the time intervals are rounded off to create a defined set of possible amplitude values.
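The two operations can be illustrated with a short MATLAB sketch, again assuming the demo image 'cameraman.tif' (an assumption for illustration only): keeping every 4th pixel coarsens the sampling, while mapping the 256 grey levels down to 8 coarsens the quantization.

% Contrast sampling (spatial resolution) with quantization (grey-level resolution)
f = imread('cameraman.tif');              % 8-bit greyscale image (256 grey levels)
sampled = f(1:4:end, 1:4:end);            % keep every 4th pixel in each direction (coarser sampling)
levels = 8;                               % quantize to 8 grey levels (3 bits/pixel)
quantized = uint8(floor(double(f) / 256 * levels) * (255 / (levels - 1)));
figure;
subplot(1,3,1); imshow(f);         title('Original');
subplot(1,3,2); imshow(sampled);   title('Coarsely sampled');
subplot(1,3,3); imshow(quantized); title('Quantized to 8 grey levels');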
Q3. Consider an image of size 10x10 pixels with 3 bits/pixel. Compute the histogram of this image. Compute histogram equalization and then show the image after histogram equalization.
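The 10x10 pixel values themselves are not reproduced here, so the following is only a minimal MATLAB sketch of the procedure, using a hypothetical random 3-bit image in place of the given data; the equalization mapping is s_k = round((L-1) * CDF(r_k)).

% Histogram equalization of a 3-bit (8 grey level) image
L = 8;                              % 3 bits/pixel -> 8 grey levels (0..7)
img = randi([0 L-1], 10, 10);       % hypothetical 10x10 image standing in for the given data
h = histcounts(img(:), 0:L);        % histogram: count of pixels at each grey level 0..7
p = h / numel(img);                 % normalized histogram (probability of each level)
cdf = cumsum(p);                    % cumulative distribution function
map = round((L - 1) * cdf);         % equalization mapping s_k = round((L-1) * CDF(r_k))
equalized = map(img + 1);           % apply the mapping (MATLAB indexing starts at 1)
figure;
subplot(1,2,1); imshow(uint8(img * (255/(L-1))));       title('Original 3-bit image');
subplot(1,2,2); imshow(uint8(equalized * (255/(L-1)))); title('After histogram equalization');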