
ASSIGNMENT OF DIGITAL IMAGE PROCESSING

Submitted by:
Name: Utkrisht
Reg No: GCS/1730957
Batch: GCS-16 (AB Group)

Q1 What is the role of a filter in image processing? How does an
averaging filter help in smoothing an image?
Filtering is a technique for modifying or enhancing an image. For example, you can filter
an image to emphasize certain features or remove other features. Image processing
operations implemented with filtering include smoothing, sharpening, and edge
enhancement.
Filtering is a neighborhood operation, in which the value of any given pixel in the output
image is determined by applying some algorithm to the values of the pixels in the
neighborhood of the corresponding input pixel. A pixel's neighborhood is some set of
pixels, defined by their locations relative to that pixel.

In image processing, filters are mainly used to suppress either the high frequencies in the
image (smoothing the image) or the low frequencies (enhancing or detecting edges in the
image). The filter function is shaped so as to attenuate some frequencies and enhance
others.
Applying a filter is another way to modify an image. The difference compared with a point
operation is that a filter uses more than one pixel to generate each new pixel value. For
example, a smoothing filter replaces a pixel value with the average of its neighbouring
pixel values.

Average (or mean) filtering is a method of smoothing images by reducing the amount
of intensity variation between neighbouring pixels. The averaging filter works by moving
through the image pixel by pixel, replacing each value with the average of the
neighbouring pixel values, including the pixel itself.
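As a minimal sketch in MATLAB (assuming an input file 'moon.png' and the Image Processing Toolbox's imfilter), a 3x3 averaging filter can be applied like this:

```matlab
a = im2double(imread('moon.png'));        % assumed input image
avg = ones(3) / 9;                        % 3x3 averaging kernel (each weight = 1/9)
smoothed = imfilter(a, avg, 'replicate'); % each output pixel = mean of its 3x3 neighbourhood
figure;
subplot(1,2,1); imshow(a); title('Original image');
subplot(1,2,2); imshow(smoothed); title('Smoothed image');
```

A larger kernel (e.g. ones(5)/25) produces stronger smoothing, at the cost of blurring edges more.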
Q2 What is image sharpening? Write a procedure/code to sharpen
an image with a Laplacian filter.
Image sharpening is an effect applied to digital images to give them a sharper
appearance. Almost all lenses can benefit from at least a small amount of sharpening.
Sharpening is applied in-camera to JPEG images at a level specified by the photographer
or at the default set by the camera manufacturer. Lightroom automatically applies some
sharpening to images unless it is instructed not to. For further sharpening, there are
Photoshop techniques and filters like Unsharp Mask and Smart Sharpen to do the job.
Sharpening works best on images whose blur did not stem from camera shake or
drastically missed focus, though minor camera shake or slightly out-of-focus shots can
also be fixed with sharpening.

Code:
clc;
close all;
a = im2double(imread('moon.png'));   % Read in the image
lap = [-1 -1 -1; -1 8 -1; -1 -1 -1]; % Laplacian kernel with a positive centre
resp = imfilter(a, lap, 'conv');     % Convolve the image with the kernel

% Normalize the filter response to [0, 1]
minR = min(resp(:));
maxR = max(resp(:));
resp = (resp - minR) / (maxR - minR);

% Add the response to the original image
sharpened = a + resp;

% Normalize the sharpened result to [0, 1]
minA = min(sharpened(:));
maxA = max(sharpened(:));
sharpened = (sharpened - minA) / (maxA - minA);

% Perform linear contrast enhancement
sharpened = imadjust(sharpened, [60/255 200/255], [0 1]);

figure;
subplot(1,3,1); imshow(a); title('Original image');
subplot(1,3,2); imshow(resp); title('Laplacian filtered image');
subplot(1,3,3); imshow(sharpened); title('Sharpened image');

Q4 What is Fourier transformation? What is the difference
between the continuous and discrete Fourier transformations?

The Fourier Transform is an important image processing tool which is used to decompose
an image into its sine and cosine components. The output of the transformation
represents the image in the Fourier or frequency domain, while the input image is the
spatial domain equivalent. In the Fourier domain image, each point represents a
particular frequency contained in the spatial domain image.

The Fourier Transform is used in a wide range of applications, such as image analysis,
image filtering, image reconstruction and image compression.
The continuous Fourier transform

The Fourier transform pair in the most general form for a continuous and aperiodic time
signal is

$$X(j\omega)=\int_{-\infty}^{\infty} x(t)\,e^{-j\omega t}\,dt,\qquad
x(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\omega)\,e^{j\omega t}\,d\omega$$

The spectrum is expressed as a function of $j\omega$ because the spectrum can be
treated as the Laplace transform of the signal evaluated along the imaginary axis
($s=j\omega$). As this notation is closely related to system-analysis concepts such as
the Laplace transform and the transfer function, it is preferred in the field of system
design and control.

However, in practice it is more convenient to represent the frequency of a signal
by $f=\omega/2\pi$ in cycles/second or Hertz (Hz, kHz, MHz, GHz, etc.), instead of by
$\omega$ in radians/second. Replacing $\omega$ by $2\pi f$, we can also express the
spectrum as $X(j2\pi f)$, or simply $X(f)$, in this alternative representation:

$$X(f)=\int_{-\infty}^{\infty} x(t)\,e^{-j2\pi f t}\,dt,\qquad
x(t)=\int_{-\infty}^{\infty} X(f)\,e^{j2\pi f t}\,df$$

Here the forward and inverse Fourier transforms are in perfect symmetry, differing only
in the sign of the exponent, so the duality of the Fourier transform between the time and
frequency domains is better illustrated. As this notation closely relates the signal
representations in both the time and frequency domains, it is preferred in the field of
signal processing.
The discrete Fourier transform

Working with the Fourier transform on a computer usually involves a form of the transform
known as the discrete Fourier transform (DFT). A discrete transform is a transform whose
input and output values are discrete samples, making it convenient for computer
manipulation. There are two principal reasons for using this form of the transform:

 The input and output of the DFT are both discrete, which makes it convenient for
computer manipulations.
 There is a fast algorithm for computing the DFT known as the fast Fourier transform
(FFT).

The DFT is usually defined for a discrete function $f(x,y)$ that is nonzero only over the
finite region $0 \le x \le M-1$ and $0 \le y \le N-1$. The two-dimensional M-by-N DFT and
inverse M-by-N DFT relationships are given by

$$F(u,v)=\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\,e^{-j2\pi(ux/M + vy/N)}$$

$$f(x,y)=\frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1} F(u,v)\,e^{j2\pi(ux/M + vy/N)}$$

The values $F(u,v)$ are the DFT coefficients of $f(x,y)$. The zero-frequency
coefficient, $F(0,0)$, is often called the "DC component"; DC is an electrical-engineering
term that stands for direct current. (Note that matrix indices in MATLAB always start at 1
rather than 0; therefore, the matrix elements f(1,1) and F(1,1) correspond to the
mathematical quantities $f(0,0)$ and $F(0,0)$, respectively.)
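As a small illustration of the DC component, fft2 computes the 2-D DFT in MATLAB, and F(1,1) equals the sum of all entries (a 4x4 magic-square test matrix stands in for an image here):

```matlab
f = magic(4);      % 4x4 test matrix; its entries sum to 136
F = fft2(f);       % 2-D discrete Fourier transform
F(1,1)             % DC component: equals sum(f(:))
```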
Q5 What is image sampling and image quantization?
To create a digital image, we need to convert the continuous sensed data into digital form.
This conversion involves two processes:
1. Sampling:- Digitizing the co-ordinate value is called sampling.
2. Quantization:- Digitizing the amplitude value is called quantization.

To convert a continuous image f (x, y) into digital form, we have to sample the function in both
co-ordinates and amplitude.
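The two steps can be sketched in MATLAB for a one-dimensional signal (the signal, sampling rate, and bit depth here are illustrative assumptions):

```matlab
t = 0:0.01:1;                  % sampling: discretize the time (x) axis
x = 0.5 + 0.5*sin(2*pi*5*t);   % continuous-amplitude signal, values in [0, 1]
L = 2^3;                       % 3 bits/sample gives 8 quantization levels
xq = round(x*(L-1)) / (L-1);   % quantization: round amplitudes to the nearest level
```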

Difference between image sampling and quantization:

1. X and Y axis: In sampling, the values on the y-axis (usually amplitude) are continuous
but the time or x-axis is discretized. In quantization, the time or x-axis is continuous
and the y-axis or amplitude is discretized.
2. When it is done: Sampling is done prior to the quantization process; quantization is
done after the sampling process.
3. Resolution: The sampling rate determines the spatial resolution of the digitized image;
the quantization level determines the number of grey levels in the digitized image.
4. Effect on a continuous curve: Sampling reduces a continuous curve (time-amplitude
graph) to a series of "tent poles" over time; quantization reduces it to a series of
"stair steps" that exist at regular time intervals.
5. Representative values: In the sampling process, a single amplitude value is selected
from each time interval to represent it; in the quantization process, the sampled
amplitude values are rounded off to create a defined set of possible amplitude values.
Q3 Consider an image of size 10x10 pixels with 3 bits/pixel. Compute histogram
of this image. Compute histogram equalization and then show the image after
histogram equalization.
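A sketch of the required computation in MATLAB, assuming a hypothetical random 10x10 image in place of the assignment's actual image:

```matlab
img = randi([0 7], 10, 10);   % hypothetical 10x10 image, 3 bits/pixel (levels 0..7)
h = histc(img(:), 0:7);       % histogram: pixel counts for the 8 grey levels
cdf = cumsum(h) / numel(img); % cumulative distribution of the grey levels
map = round(cdf * 7);         % equalization mapping for old levels 0..7
img_eq = map(img + 1);        % equalized image (MATLAB indices start at 1)
h_eq = histc(img_eq(:), 0:7); % histogram after equalization
```

After equalization the counts are spread more evenly across the 8 levels, with bins closer on average to 100/8 = 12.5.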
