
Lab Manual

The document is a laboratory manual for the AP4011 Advanced Digital Image Processing course at Mar Ephraem College of Engineering and Technology. It outlines various experiments including image compression using DCT, edge detection with Canny edge detector, geometric transformations, interpolation techniques, and more, providing aims, required apparatus, theory, programs, procedures, and expected results for each experiment. MATLAB software is used throughout the experiments to perform image processing tasks.


MAR EPHRAEM COLLEGE OF

ENGINEERING AND TECHNOLOGY


MALANKARA HILLS, ELAVUVILAI – 629171

AP4011 ADVANCED DIGITAL IMAGE PROCESSING


LABORATORY

LAB MANUAL

Name : ………………………………………………………….

Branch : ………………………………………………………….

Reg. No. : ………………………………………………………….


TABLE OF CONTENTS

Serial No.   Date   Title                                                Page No.   Marks Obtained   Signature

1                   IMAGE COMPRESSION USING DCT
2                   EDGE DETECTION USING CANNY EDGE DETECTOR
3                   GEOMETRICAL TRANSFORMATIONS OF IMAGES
4                   INTERPOLATION OF IMAGES
5                   IMAGE FUSION USING DISCRETE WAVELET TRANSFORM (DWT)
6                   SEGMENTATION OF LUNGS FROM 3D CHEST SCAN

Experiment No. Date:

IMAGE COMPRESSION USING DISCRETE COSINE TRANSFORM (DCT)


AIM
To perform image compression using the Discrete Cosine Transform (DCT).

APPARATUS REQUIRED

PC with MATLAB software

THEORY

The discrete cosine transform (DCT) represents an image as a sum of sinusoids of varying magnitudes and frequencies. The dct2 function computes the two-dimensional discrete cosine transform (DCT) of an image. The DCT has the property that, for a typical image, most of the visually significant information about the image is concentrated in just a few coefficients of the DCT. For this reason, the DCT is often used in image compression applications. For example, the DCT is at the heart of the international standard lossy image compression algorithm known as JPEG.

The two-dimensional DCT of an M-by-N matrix A is defined as follows:

B(p,q) = a(p) a(q) SUM(m=0..M-1) SUM(n=0..N-1) A(m,n) cos(pi*(2m+1)*p/(2M)) cos(pi*(2n+1)*q/(2N)),
for 0 <= p <= M-1 and 0 <= q <= N-1,
where a(p) = 1/sqrt(M) for p = 0 and sqrt(2/M) for 1 <= p <= M-1 (a(q) is defined analogously with N).

The inverse DCT equation can be interpreted as meaning that any M-by-N matrix A can be written as a sum of MN functions of the form

a(p) a(q) cos(pi*(2m+1)*p/(2M)) cos(pi*(2n+1)*q/(2N)), 0 <= p <= M-1, 0 <= q <= N-1.

These functions are called the basis functions of the DCT. The DCT coefficients B(p,q) can then be regarded as the weights applied to each basis function.
DCT TRANSFORM MATRIX
There are two ways to compute the DCT using Image Processing Toolbox™ software. The first method is to use the dct2 function. dct2 uses an FFT-based algorithm for speedy computation with large inputs. The second method is to use the DCT transform matrix, which is returned by the function dctmtx and might be more efficient for small square inputs, such as 8-by-8 or 16-by-16.
The M-by-M transform matrix T is given by

T(p,q) = 1/sqrt(M)                          for p = 0, 0 <= q <= M-1
T(p,q) = sqrt(2/M) cos(pi*(2q+1)*p/(2M))    for 1 <= p <= M-1, 0 <= q <= M-1

For an M-by-M matrix A, T*A is an M-by-M matrix whose columns contain the one-dimensional DCT of the columns of A. The two-dimensional DCT of A can be computed as B = T*A*T'. Since T is a real orthonormal matrix, its inverse is the same as its transpose. Therefore, the inverse two-dimensional DCT of B is given by T'*B*T.
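The manual's programs are in MATLAB; as a quick language-agnostic check of the property just stated (T is orthonormal, so T*T' is the identity), here is a small pure-Python sketch that builds the same transform matrix that dctmtx returns:

```python
import math

def dct_matrix(M):
    """Build the M-by-M DCT transform matrix T (same entries as MATLAB's dctmtx)."""
    T = [[0.0] * M for _ in range(M)]
    for p in range(M):
        for q in range(M):
            if p == 0:
                T[p][q] = 1.0 / math.sqrt(M)
            else:
                T[p][q] = math.sqrt(2.0 / M) * math.cos(math.pi * (2 * q + 1) * p / (2 * M))
    return T

def matmul(X, Y):
    """Plain matrix product of two square lists-of-lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

T = dct_matrix(8)
Tt = [list(row) for row in zip(*T)]   # transpose of T
I8 = matmul(T, Tt)                    # numerically the 8-by-8 identity
```

Because T*T' comes out as the identity, the inverse two-dimensional DCT really is T'*B*T, as stated above.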

Image Compression with the Discrete Cosine Transform

DCT is used in the JPEG image compression algorithm. The input image is divided into
8-by-8 or 16-by-16 blocks, and the two-dimensional DCT is computed for each block. The
DCT coefficients are then quantized, coded, and transmitted. The JPEG receiver (or JPEG file
reader) decodes the quantized DCT coefficients, computes the inverse two-dimensional DCT
of each block, and then puts the blocks back together into a single image. For typical images,
many of the DCT coefficients have values close to zero. These coefficients can be discarded
without seriously affecting the quality of the reconstructed image.

PROGRAM
I = imread('cameraman.tif');
I = im2double(I);
T = dctmtx(8);
B = blkproc(I,[8 8],'P1*x*P2',T,T');
mask = [1 1 1 1 0 0 0 0
        1 1 1 0 0 0 0 0
        1 1 0 0 0 0 0 0
        1 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0];
B2 = blkproc(B,[8 8],'P1.*x',mask);
I2 = blkproc(B2,[8 8],'P1*x*P2',T',T);
figure;
imshow(I);
title('Original image');
figure;
imshow(I2);
title('Compressed image');
im1 = double(I);
imf = double(I2);
mse = sum((im1(:)-imf(:)).^2) / numel(im1);
psnr = 10*log10(1/mse)   % im2double images lie in [0,1], so the peak value is 1

PROCEDURE
1) Open MATLAB version 7.0 → Select File → Select New Script.
2) Type the program → Give a file name → Save the program.
3) Run the program and execute it.
4) Finally, the output will be displayed in the command window and the figures will be displayed.

INPUT IMAGE

RESULT
Thus an image is compressed using the Discrete Cosine Transform (DCT) and the Peak Signal
to Noise Ratio (PSNR) is calculated.

Experiment No. Date:

EDGE DETECTION USING CANNY EDGE DETECTOR


AIM
To perform edge detection using the Canny edge detector.

APPARATUS REQUIRED
PC with MATLAB software

THEORY
In an image, an edge is a curve that follows a path of rapid change in image
intensity. Edges are often associated with the boundaries of objects in a
scene. Edge detection is used to identify the edges in an image. To find edges, the edge
function can be used. This function looks for places in the image where the intensity
changes rapidly, using one of these two criteria:
1) Places where the first derivative of the intensity is larger in magnitude than some threshold
2) Places where the second derivative of the intensity has a zero crossing
The edge function provides several derivative estimators, each of which implements one of these
definitions. For some of these estimators, we can specify whether the operation should be
sensitive to horizontal edges, vertical edges, or both. edge returns a binary image
containing 1's where edges are found and 0's elsewhere.
The most powerful edge-detection method that edge provides is the Canny
method. The Canny method differs from the other edge-detection methods in that it uses two
different thresholds (to detect strong and weak edges), and includes the weak edges in the
output only if they are connected to strong edges. This method is therefore less likely than the
others to be affected by noise, and more likely to detect true weak edges.
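As a minimal illustration of criterion (1), the following sketch applies a first-derivative threshold to a hypothetical 1-D intensity profile (both the profile and the threshold value are invented for this example; real detectors like Sobel and Canny work on 2-D gradients):

```python
# Hypothetical 1-D intensity profile with a step edge between indices 3 and 4
signal = [10, 10, 11, 10, 80, 82, 81, 80]

# Forward-difference approximation of the first derivative
grad = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]

# Mark positions where the derivative magnitude exceeds a chosen threshold
threshold = 20
edges = [i for i, g in enumerate(grad) if abs(g) > threshold]
print(edges)  # -> [3]: the step edge is detected, small fluctuations are not
```

The Canny method refines this idea with two thresholds, keeping weak edges only when they connect to strong ones.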

PROGRAM

I = imread('coins.png');
imshow(I)
BW1 = edge(I,'sobel');
BW2 = edge(I,'canny');
figure;
imshowpair(BW1,BW2,'montage')
title('Sobel Filter Canny Filter');

PROCEDURE
1. Open MATLAB version 7.0 → Select File → Select New Script.
2. Type the program.
3. Read the image and display it.
4. Apply both the Sobel and Canny edge detectors to the image and display them for comparison.
5. Give a file name → Save the program.
6. Run the program and execute it.
7. Finally, the output will be displayed in the command window and the figures will be displayed.

OUTPUT

INPUT IMAGE

OUTPUT IMAGE

RESULT
Thus, edge detection is performed on an image using the Sobel and Canny edge detectors.

Experiment No. Date:
GEOMETRIC TRANSFORMATIONS OF IMAGES
AIM
To perform rotation, translation and shearing of images.

APPARATUS REQUIRED

PC with MATLAB software

THEORY
A set of image transformations in which the geometry of the image is changed without
altering its actual pixel values is commonly referred to as a "geometric" transformation. In
general, we can apply multiple such operations to an image, but the actual pixel values will
remain unchanged: it is the positions of the pixel values that change, not the values themselves.
Translation
Translation basically means shifting the object's location, i.e., shifting the object
horizontally or vertically by some defined offset (measured in pixels):

x' = x + A
y' = y + B

Here, (x, y) is a point in the input image and (x', y') is the corresponding point in the
output image. Let's say A = 100 and B = 80 (the translations in the x- and y-directions
respectively). Then each pixel is shifted 100 pixels in the x-direction and 80 pixels in
the y-direction.
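The mapping above is simple enough to sketch directly (the point list here is hypothetical; MATLAB's imtranslate applies the same shift to every pixel position):

```python
def translate(points, A, B):
    """Shift each (x, y) point by the offsets: x' = x + A, y' = y + B."""
    return [(x + A, y + B) for (x, y) in points]

# With A = 100 and B = 80, as in the example above:
shifted = translate([(0, 0), (10, 20)], 100, 80)
print(shifted)  # -> [(100, 80), (110, 100)]
```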

PROGRAM
I = imread('cameraman.tif');
figure
imshow(I)
title('Original Image')
size(I)
J = imtranslate(I,[15, 25]);
figure
imshow(J)
title('Translated Image')
k = imtranslate(I,[15, 25],'OutputView','full');
size(k)
figure
imshow(k)
title('Translated Image, Unclipped')

PROCEDURE

1. Display the image. The size of the image is 256-by-256 pixels. By default, imshow displays
the image with the upper left corner at (0,0).
2. Translate the image, shifting it by 15 pixels in the x-direction and 25 pixels in the
y-direction. Note that, by default, imtranslate displays the translated image within the
boundaries (or limits) of the original 256-by-256 image. This results in some of the
translated image being clipped.
3. Display the translated image. The size of the image is 256-by-256 pixels.
4. Use the 'OutputView' parameter set to 'full' to prevent clipping the translated image. The
size of the new image is 281-by-271 pixels.
OUTPUT

INPUT IMAGE

OUTPUT IMAGE 1

OUTPUT IMAGE 2

ROTATION
This technique rotates an image by a specified angle about a given axis or point. It
performs a geometric transform which maps the position of a point in the current image to a
position in the output image by rotating it through the user-defined angle about the specified
axis. Points that fall outside the boundary of the output image are ignored. Rotation is
basically used for improved visual appearance. It's a little different from the other transforms:

x' = x*cos(θ) - y*sin(θ)   (Eq. 5)
y' = x*sin(θ) + y*cos(θ)   (Eq. 6)

Here, (x, y) is the position of a pixel in the input image, (x', y') is the position of the
pixel in the output image, and θ is the angle by which the image is to be rotated.
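The standard rotation equations can be checked on a single point in pure Python (MATLAB's imrotate applies the same mapping to every pixel position and then interpolates):

```python
import math

def rotate(x, y, theta):
    """Rotate the point (x, y) about the origin by theta radians."""
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

# Rotating (1, 0) by 90 degrees lands (to numerical precision) on (0, 1)
xp, yp = rotate(1.0, 0.0, math.pi / 2)
```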


PROGRAM
I = imread('circuit.tif');
J = imrotate(I,35,'bilinear');
figure
imshowpair(I,J,'montage')

PROCEDURE

1) Read an image into the workspace.


2) Rotate the image 35 degrees counterclockwise. In this example,
specify bilinearinterpolation.
3) Display the original image and the rotated image.

OUTPUT
OUTPUT IMAGE

SHEARING

In two dimensions, a simple shear transformation that maps a pair of input coordinates [u v] to
a pair of output coordinates [x y] has the form

x = u + a*v
y = v

where a is a constant.
Any simple shear is a special case of an affine transformation. Writing the shear as the
3-by-3 affine matrix [1 0 0; a 1 0; 0 0 1] applied to the row vector [u v 1], you can easily
verify that it yields the same values for x and y as the first two equations.
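A tiny sketch of this shear, matching the affine matrix [1 0 0; a 1 0; 0 0 1] used in the program below (the sample point is hypothetical):

```python
def shear(u, v, a):
    """Simple horizontal shear: x = u + a*v, y = v."""
    return (u + a * v, v)

# With a = 0.45, as in the program below:
x, y = shear(10, 20, 0.45)
print((x, y))  # -> (19.0, 20)
```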

PROGRAM

a = 0.45;
T = maketform('affine', [1 0 0; a 1 0; 0 0 1]);
A = imread('football.jpg');
h1 = figure; imshow(A);
title('Original Image');
orange = [255 127 0]';
R = makeresampler({'cubic','nearest'},'fill');
B = imtransform(A,T,R,'FillValues',orange);
h2 = figure;
imshow(B);
title('Sheared Image');

PROCEDURE

1. Setting a = 0.45, we construct an affine transform struct using maketform.
2. We select, read, and view an image to transform.
3. We choose a shade of orange as our fill value.
4. Use T to transform A and display the output.

OUTPUT

INPUT IMAGE

OUTPUT IMAGE

RESULT
Thus, rotation, translation and shearing operations are performed on images.

Experiment No. Date:

INTERPOLATION OF IMAGES

AIM
To perform Bilinear, Bicubic and Nearest-Neighbour Interpolation of Images.

APPARATUS REQUIRED

PC with MATLAB software

THEORY
For easy transmission of images in digital computing, various processes such as
compression, decompression, image enhancement, etc. are performed so that images can be
used in applications easily and very efficiently. Almost all of these techniques use the process
of interpolation. Interpolation is the process of finding the unknown pixels of an
image. There are various methods of interpolation. Image interpolation algorithms directly
affect the quality of image magnification. Image interpolation is the process of transferring an
image from one resolution to another without losing image quality.

Needs of Interpolation:
 We want BIG images: when we see a video clip on a PC, we like to see it in full-screen
mode. Digital zooming (resolution enhancement) makes this possible.
 We want GOOD images: if some block of an image gets damaged during transmission,
we want to repair it. Image inpainting (error concealment) is done.
 We want COOL images: manipulating images digitally can render fancy artistic effects, as
we often see in movies. A geometric transformation enables us to get enhanced images.
 To extract useful minor details from the image.

Bilinear Interpolation:

Bilinear interpolation takes a weighted average of the 4 neighbourhood pixels to
calculate its final interpolated value. The result is a much smoother image than the original.
When all known pixel distances are equal, the interpolated value is simply their
sum divided by four. Here, the position of pixel P in the magnified image is converted into
the original image, and then the influence of the four pixel points A, B, C and D is calculated.
The nearer a point is to P, the greater its weight, which indicates the greater effect. The
diagram of bilinear interpolation is shown in Fig.

Figure: Bilinear interpolation
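The weighted average described above can be sketched in pure Python (the neighbour values and fractional offsets here are hypothetical):

```python
def bilinear(pA, pB, pC, pD, dx, dy):
    """Weighted average of the 4 neighbours A (top-left), B (top-right),
    C (bottom-left), D (bottom-right); dx, dy in [0, 1] are the fractional
    distances of point P from the top-left neighbour."""
    top = pA * (1 - dx) + pB * dx
    bottom = pC * (1 - dx) + pD * dx
    return top * (1 - dy) + bottom * dy

# When P is equidistant from all four neighbours, the result is the plain average:
value = bilinear(10, 20, 30, 40, 0.5, 0.5)
print(value)  # -> 25.0
```

Moving dx and dy toward a corner pulls the result toward that corner's pixel value, which is exactly the "nearer means greater weight" behaviour stated above.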

Bicubic Interpolation:

Bicubic interpolation is the best among all non-adaptive techniques, though it is quite
similar to the bilinear interpolation algorithm. Here, for the unknown pixel P in the
amplified image, its sphere of influence is expanded to its 16 adjacent pixels; the colour
value of P is then calculated from these 16 pixels according to their distance to P. The
diagram of the bicubic interpolation algorithm is shown in Fig. The interpolated surface is
smoother than corresponding surfaces obtained by methods like bilinear interpolation and
nearest-neighbour interpolation.

Figure: Bicubic interpolation

PROGRAM
clc;
clear;
close all;
I = imread('C:\Users\Anbu\Desktop\img.jpg');   % size 256x256
[m, n] = size(I);
J = zeros(2*m, 2*n);
for i = 1:m
    for j = 1:n
        % Scaling: spread the pixels onto a 2x grid (512x512 image)
        J(2*i, 2*j) = I(i, j);
    end
end
I_nearest  = imresize(J, [256 256], 'nearest');    % nearest-neighbour interpolation
I_bilinear = imresize(J, [256 256], 'bilinear');   % bilinear interpolation
I_bicubic  = imresize(J, [256 256], 'bicubic');    % bicubic interpolation
figure
imshow(uint8(I_nearest)), title('Nearest-neighbour interpolation')
figure
imshow(uint8(I_bilinear)), title('Bilinear interpolation')
figure
imshow(uint8(I_bicubic)), title('Bicubic interpolation')

PROCEDURE
1. Open MATLAB version 7.0 → Select File → Select New Script.
2. Type the program.
3. Read the image and display it.
4. Give a file name → Save the program.
5. Run the program and execute it.
6. Finally, the output will be displayed in the command window and the figures will be displayed.

OUTPUT

RESULT

Thus the Bilinear, Bicubic and Nearest-Neighbour Interpolation of Images was done and the
output is verified.

Experiment No. Date:

IMAGE FUSION USING DISCRETE WAVELET TRANSFORM (DWT)

AIM
To fuse MRI and CT images using Discrete Wavelet Transform (DWT).

APPARATUS REQUIRED

PC with MATLAB software

THEORY
The image fusion process is characterized as collecting all the significant data from
various images and consolidating it into fewer images, typically a single one. This
single image is more informative and accurate than any single source image, and it
comprises all the vital information. The motivation behind image fusion is not just to reduce
the amount of data but also to build images that are more appropriate and
understandable for human and machine perception. In computer vision, multi-sensor
image fusion is the process of combining relevant information from two or more images into a
single image. The resulting image is more informative than any of the input images.

A good image fusion strategy has the following properties. First, it should preserve
the majority of the useful information of the source images. Second, it should not produce
artifacts that can distract or mislead a human observer or any subsequent image-processing
steps. Third, it must be reliable and robust. Finally, it should not discard any
salient information contained in any of the input images. Image fusion has become a
common term within medical diagnostics and treatment. The term is used
when multiple images of a patient are registered and overlaid or merged to provide additional
information. Fused images may be created from multiple images from the same imaging
modality, or by combining information from different modalities, for example, magnetic
resonance imaging (MRI), computed tomography (CT), positron emission tomography
(PET) [3] and single-photon emission computed tomography (SPECT). In radiology and
radiation oncology, these images serve various needs. For instance, CT images are used more
often to find differences in tissue density, while MRI images are
commonly used to diagnose brain tumors. For precise diagnosis, radiologists must
integrate data from multiple image formats. Fused, anatomically consistent images are
particularly useful in diagnosing and treating malignant growths. With the advent of
these new technologies, radiation oncologists can exploit intensity-modulated radiation
therapy (IMRT). Being able to overlay diagnostic images onto radiation planning
images results in more accurate IMRT target tumor volumes.

Computational imaging performs a significant role in the medical field; however,
conventional medical image fusion strategies only consider the fusion of two
types of images, for example CT-MRI, CT-PET and MRI-PET. Multimodal medical
image fusion can make the anatomy and physiological characteristics of tissues easy to
perceive and judge, which is fundamental to medical image analysis,
clinical diagnosis, treatment planning and so on.

PROGRAM
close all;
clear all;
home;
[Image,PathName,FilterIndex] = uigetfile('*.jpg;*.png;*.tif;*.bmp', 'Select the CT image file');
TestImage=strcat(PathName,Image);
im1 = (imread(TestImage));
[Image,PathName,FilterIndex] = uigetfile('*.jpg;*.png;*.tif;*.bmp', 'Select the MRI image file');
TestImage=strcat(PathName,Image);
im2 = (imread(TestImage));
im1=imresize(im1,[256 256]);
im2=imresize(im2,[256 256]);
figure(1);imshow(im2,[]);
title('MRI image');
figure(2);imshow(im1,[]);
title('CT image');
[cA1,cH1,cV1,cD1]=dwt2(im1,'sym3');
[cA2,cH2,cV2,cD2]=dwt2(im2,'sym3');
cAf=0.5*(cA1+cA2);
D=(abs(cH1)-abs(cH2))>=0;
cHf=D.*cH1+(~D).*cH2;
D=(abs(cV1)-abs(cV2))>=0;
cVf=D.*cV1+(~D).*cV2;
D=(abs(cD1)-abs(cD2))>=0;
cDf=D.*cD1+(~D).*cD2;
imf=idwt2(cAf,cHf,cVf,cDf,'sym3');
imf=imf/140; figure(3);imshow(imf,[]);
title('Fused image');
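The coefficient-fusion rules used in the program above (average the approximation coefficients; pick the larger-magnitude detail coefficient) can be sketched on plain lists (the sample coefficient values are hypothetical):

```python
def fuse_approx(c1, c2):
    """Average rule for the approximation coefficients (cAf = 0.5*(cA1 + cA2))."""
    return [0.5 * (a + b) for a, b in zip(c1, c2)]

def fuse_detail(c1, c2):
    """Choose-max-absolute rule for the detail coefficients (cH, cV, cD)."""
    return [a if abs(a) >= abs(b) else b for a, b in zip(c1, c2)]

print(fuse_approx([2, 4], [4, 8]))           # -> [3.0, 6.0]
print(fuse_detail([3, -5, 1], [-2, 4, -6]))  # -> [3, -5, -6]
```

Averaging the low-frequency content blends the overall intensity of the two modalities, while choosing the larger-magnitude detail coefficient keeps the sharper edge or texture from whichever source image has it.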

PROCEDURE

1) Read the input images (CT and MRI).


2) Resize the input images into 256×256.
3) Using DWT fuse the input images and display the output.

OUTPUT

INPUT IMAGE 1(CT IMAGE)

INPUT IMAGE 2(MRI IMAGE)

FUSED IMAGE

RESULT
Thus, input CT and MRI images are fused using Discrete Wavelet Transform (DWT).

Experiment No. Date:

SEGMENTATION OF LUNGS FROM 3D CHEST SCAN

AIM

To perform segmentation of the lungs from a 3D chest scan.

APPARATUS REQUIRED

PC with MATLAB software

PROGRAM

% V must already contain the 3-D chest CT volume in the workspace
% (the reshape below assumes a 512-by-512-by-318 volume)
V = im2single(V);
volumeViewer(V)
XY = V(:,:,160);
XZ = squeeze(V(256,:,:));
figure
imshow(XY,[],"Border","tight")
imshow(XZ,[],"Border","tight")
imageSegmenter(XY)
% BW below is the binary mask exported from the Image Segmenter app
BW = imcomplement(BW);
BW = imclearborder(BW);
BW = imfill(BW,"holes");
radius = 3;
decomposition = 0;
se = strel("disk",radius,decomposition);
BW = imerode(BW,se);
maskedImageXY = XY;
maskedImageXY(~BW) = 0;
imshow(maskedImageXY)
BW = imbinarize(XZ);
BW = imcomplement(BW);
BW = imclearborder(BW);
BW = imfill(BW,"holes");
radius = 13;
decomposition = 0;
se = strel("disk",radius,decomposition);
BW = imerode(BW,se);
maskedImageXZ = XZ;
maskedImageXZ(~BW) = 0;
imshow(maskedImageXZ)
mask = false(size(V));
mask(:,:,160) = maskedImageXY;
mask(256,:,:) = mask(256,:,:)|reshape(maskedImageXZ,[1,512,318]);
V = histeq(V);
BW = activecontour(V,mask,100,"Chan-Vese");
segmentedImage = V.*single(BW);

OUTPUT

RESULT
Thus the segmentation of the lungs from a 3D chest scan was done.
