DIP-Lab 05-SP23
Objective: ______________________________
Name of Student: ______________________________
Student ID: ______________________________
Date of Lab Conducted: ______________________________
Marks Obtained: ______________________________
Remarks: ______________________________
Signature: ______________________________
April 15, 2023 Lab 05 – Filters Implementation
Student Name: ___________________________Roll No: ________________ Section: ______________
Lab Objectives
Image filters implementation.
Theory
Image sharpening falls into a category of image processing called spatial filtering. Spatial
filters take advantage of how quickly or abruptly gray-scale values or colors change from one
pixel to the next. First-order operators (which use first-derivative measurements) are
particularly good at finding edges in images.
The Sobel and Roberts edge-enhancement operators in IDL are examples of these first-order
filters, sometimes called gradient filters.
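To make the idea of a first-order (gradient) filter concrete, the sketch below implements the Roberts cross operator, which differences each pixel against its diagonal neighbor. This is an illustrative Python/NumPy version, not the IDL or MATLAB built-in:

```python
import numpy as np

def roberts(image):
    """Roberts cross operator: two 2x2 first-derivative kernels.

    Gx responds to one diagonal direction, Gy to the other; the
    gradient magnitude is large wherever intensity changes abruptly.
    """
    img = image.astype(float)
    # Differences along the two diagonals (valid region only).
    gx = img[:-1, :-1] - img[1:, 1:]   # kernel [[1, 0], [0, -1]]
    gy = img[:-1, 1:] - img[1:, :-1]   # kernel [[0, 1], [-1, 0]]
    return np.sqrt(gx**2 + gy**2)

# A flat field with a vertical step edge down the middle.
step = np.zeros((5, 6))
step[:, 3:] = 10.0
mag = roberts(step)
# The response is zero in the flat regions and non-zero only at the step.
```

The flat regions produce no response at all; only the column where the intensity jumps is marked, which is exactly the behavior the text describes for first-derivative operators.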
A high-pass filter can be used to make an image appear sharper. These filters emphasize fine details in
the image, exactly the opposite of the low-pass filter. High-pass filtering works in exactly the same
way as low-pass filtering; it just uses a different convolution kernel: a positive weight at the
center and negative weights for the adjacent pixels. If there is no change in intensity, nothing
happens. But if one pixel is brighter than its immediate neighbors, it gets boosted.
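A small numeric check makes this behavior concrete. The sketch below (Python/NumPy rather than MATLAB, with a common 3x3 sharpening kernel chosen as an assumed example) shows that a flat patch passes through unchanged while a pixel brighter than its neighbors is boosted:

```python
import numpy as np

# A common 3x3 high-pass (sharpening) kernel (assumed example): the
# center weight is positive and the adjacent weights are negative, so
# on a flat patch the responses cancel and the value is unchanged.
hp = np.array([[ 0, -1,  0],
               [-1,  5, -1],
               [ 0, -1,  0]], dtype=float)

def convolve_at(image, kernel, r, c):
    """Response of the (symmetric) kernel centered on pixel (r, c)."""
    patch = image[r-1:r+2, c-1:c+2]
    return np.sum(patch * kernel)

flat = np.full((3, 3), 7.0)   # no intensity change: output stays 7
spike = flat.copy()
spike[1, 1] = 10.0            # one pixel brighter than its neighbors
```

On the flat patch the center response is 7*5 - 4*7 = 7 (unchanged); on the spike it is 10*5 - 4*7 = 22, so the bright pixel is strongly boosted.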
High-pass filtering can also cause small, faint details to be greatly exaggerated. An over-processed
image will look grainy and unnatural, and point sources will have dark donuts around them. So while
high-pass filtering can often improve an image by sharpening detail, overdoing it can actually degrade
the image quality significantly.
Edge Detection
In an image, an edge is a curve that follows a path of rapid change in image intensity. Edges are often
associated with the boundaries of objects in a scene. Edge detection is used to identify the edges in an
image.
Edges are commonly defined in one of two ways:
- Places where the first derivative of the intensity is larger in magnitude than some threshold
- Places where the second derivative of the intensity has a zero crossing
The edge function provides several derivative estimators, each of which implements one of these
definitions. For some of these estimators, you can specify whether the operation should be
sensitive to horizontal edges, vertical edges, or both. edge returns a binary image containing 1's
where edges are found and 0's elsewhere.
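The first definition (thresholding the magnitude of the first derivative) can be sketched in a few lines. This Python/NumPy version is a simplified, illustrative stand-in for MATLAB's edge function; it uses only horizontal differences, so it is sensitive to vertical edges only:

```python
import numpy as np

def edges_threshold(image, threshold):
    """Binary edge map: 1 where the horizontal first difference
    exceeds the threshold in magnitude, 0 elsewhere.

    Simplified illustration of first-derivative edge detection,
    sensitive to vertical edges only.
    """
    img = image.astype(float)
    dx = np.abs(np.diff(img, axis=1))      # first derivative along rows
    out = np.zeros_like(img, dtype=int)
    out[:, 1:] = (dx > threshold).astype(int)
    return out

# A dark-to-bright step: the binary map marks only the transition column.
ramp = np.tile(np.array([0, 0, 0, 9, 9, 9]), (4, 1))
emap = edges_threshold(ramp, threshold=5)
```

As with edge, the result is a binary image: 1's at the detected transition, 0's everywhere else.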
Laplacian
Any feature with a sharp discontinuity (like noise, unfortunately) will be enhanced by a Laplacian
operator. Thus, one application of a Laplacian operator is to restore fine detail to an image which has
been smoothed to remove noise. (The median operator is often used to remove noise in an image.)
The Laplacian operator is implemented as a convolution between an image and a kernel. The
Laplacian kernel can be constructed in various ways, but we will use the same 3-by-3 kernel used
by Gonzalez and Woods, which weights the diagonal neighbors as well as the orthogonal ones:

     1   1   1
     1  -8   1
     1   1   1
In image convolution, the kernel is centered on each pixel in turn, and the pixel value is replaced by
the sum of the kernel multiplied by the image values. In the particular kernel we are using here,
we are counting the contributions of the diagonal pixels as well as the orthogonal pixels in the
filter operation.
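This convolution can be sketched directly. The version below is an illustrative Python/NumPy sketch (not MATLAB's imfilter), assuming the Gonzalez-and-Woods-style kernel that counts the diagonal as well as the orthogonal pixels:

```python
import numpy as np

# 3x3 Laplacian kernel that weights the diagonal neighbors as well
# as the orthogonal ones (Gonzalez-and-Woods-style form).
lap = np.array([[1,  1, 1],
                [1, -8, 1],
                [1,  1, 1]], dtype=float)

def convolve2d(image, kernel):
    """Center the (symmetric) kernel on each interior pixel and replace
    the pixel with the sum of kernel-times-image products (zero border).
    """
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r, c] = np.sum(image[r-1:r+2, c-1:c+2] * kernel)
    return out

flat = np.full((4, 4), 3.0)
resp = convolve2d(flat, lap)    # zero everywhere: no discontinuity

spike = flat.copy()
spike[1, 1] = 4.0               # a sharp discontinuity gets a response
resp2 = convolve2d(spike, lap)
```

Because the kernel weights sum to zero, flat regions produce no response; only sharp discontinuities (including, unfortunately, noise) are enhanced, as the text notes.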
Sobel Filter
In simple terms, the operator calculates the gradient of the image intensity at each point,
giving the direction of the largest possible increase from light to dark and the rate of change in
that direction. The result therefore shows how "abruptly" or "smoothly" the image changes at
that point, and therefore how likely it is that that part of the image represents an edge, as well as
how that edge is likely to be oriented. In practice, the magnitude (likelihood of an edge) calculation
is more reliable and easier to interpret than the direction calculation. Mathematically, the operator
uses two 3×3 kernels which are convolved with the original image to calculate approximations of
the derivatives, one for horizontal changes and one for vertical. If we define A as the source
image, and Gx and Gy as two images which at each point contain the horizontal and vertical
derivative approximations, the computations are as follows:

    Gx = [ -1  0  +1 ]          Gy = [ -1  -2  -1 ]
         [ -2  0  +2 ] * A           [  0   0   0 ] * A
         [ -1  0  +1 ]               [ +1  +2  +1 ]

where * here denotes the 2-D convolution operation.
The x-coordinate is here defined as increasing in the "right" direction, and the y-coordinate is
defined as increasing in the "down" direction. At each point in the image, the resulting gradient
approximations can be combined to give the gradient magnitude, using:

    G = sqrt(Gx^2 + Gy^2)
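These computations can be sketched end to end. The following is an illustrative Python/NumPy version of the Sobel gradient-magnitude calculation (MATLAB's fspecial and imfilter, used in the example that follows, provide the same machinery natively):

```python
import numpy as np

# Standard Sobel kernels: sobel_x approximates the horizontal
# derivative (responds to vertical edges), sobel_y the vertical one.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
sobel_y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)

def sobel_magnitude(image):
    """Gradient magnitude G = sqrt(Gx^2 + Gy^2) at interior pixels."""
    img = image.astype(float)
    h, w = img.shape
    g = np.zeros((h, w))
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = img[r-1:r+2, c-1:c+2]
            gx = np.sum(patch * sobel_x)
            gy = np.sum(patch * sobel_y)
            g[r, c] = np.hypot(gx, gy)
    return g

# A vertical step edge: strong response along the step, none elsewhere.
A = np.zeros((5, 6))
A[:, 3:] = 1.0
G = sobel_magnitude(A)
```

For this purely vertical edge, Gy is zero everywhere and the magnitude comes entirely from Gx, matching the claim that each kernel responds to changes in one direction only.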
For Example
>> I = imread('machine.jpg');
>> imshow(I);
Original
>> h = fspecial('laplacian');
>> J = imfilter(I,h);
>> imshow(J);
Laplacian Filtered
>> myMatrix = [0 1 0; 1 -4 1; 0 1 0];
>> K = imfilter(I, myMatrix);
>> imshow(K);
myMatrix filter
Student Task