Image Processing
DATE:
Aim:
To perform image smoothing and sharpening of an eight-bit colour image.
Software required:
MATLAB 9.5(R2018b)
Theory:
Image smoothing is a digital image processing technique that reduces image
noise by blurring the image. Blurring is used in preprocessing steps, such as
removal of small details from an image prior to (large) object extraction, and
bridging of small gaps in lines or curves. Noise reduction can be accomplished
by blurring with a linear filter and also by non-linear filtering.
It works by first selecting a kernel (3x3, 5x5, etc.) and then convolving it with the image.
The intensity of the center pixel is determined by the pixel intensities of its
neighborhood. Depending upon the kernel values and the type of aggregation, several
smoothing effects can be obtained.
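As a quick illustration of this idea (a minimal sketch, separate from the prescribed program below; the image name is only an example), a built-in averaging kernel can be convolved with the image:
% Smoothing sketch: 5x5 averaging kernel convolved with an example image
img = rgb2gray(imread('peppers.png'));
kernel = fspecial('average', [5 5]);          % all weights equal to 1/25
smooth_img = imfilter(img, kernel, 'replicate');
figure, imshowpair(img, smooth_img, 'montage');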
Image sharpening is an effect applied to digital images to give them a sharper
appearance. Sharpening enhances the definition of edges in an image. Dull
images are those that are poor at the edges; there is not much difference between the
background and the edges. On the contrary, a sharpened image is one in which the
edges are clearly distinguishable.
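A minimal sketch of the same idea using unsharp masking (an alternative to the Laplacian kernel used in the program below; the image name and parameter values are only examples):
% Sharpening sketch using unsharp masking
img = imread('pout.tif');
sharp_img = imsharpen(img, 'Radius', 2, 'Amount', 1);
figure, imshowpair(img, sharp_img, 'montage');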
Source code:
Image Smoothing
k = imread('8bit.jpg');
k = rgb2gray(k);
w = zeros(7,7);
for i = 1:7
for j = 1:7
w(i,j) = 1/49;          % 7x7 averaging kernel; weights sum to 1
end
end
y = imfilter(k, w);
subplot(2,2,1); imshow(k);
title('original image');
subplot(2,2,2); imshow(y);
title('smoothing image');
Output: Image Smoothing
Source code:
Image Sharpening
clc;
close all;
a = rgb2gray(imread('8bit.jpg'));
lap = [1 1 1; 1 -8 1; 1 1 1];                  % Laplacian kernel
resp = imfilter(double(a), lap, 'replicate');  % Laplacian response
sharpened = uint8(double(a) - resp);           % subtract the Laplacian to sharpen
subplot(1,3,1);
imshow(a);
title('Original image');
subplot(1,3,2);
imshow(uint8(abs(resp)));
title('Laplacian response');
subplot(1,3,3);
imshow(sharpened);
title('Sharpened image');
Output: Image Sharpening
Result:
Thus, image smoothing and sharpening of an eight-bit colour image were performed
using MATLAB.
EXP NO: EDGE, LINE AND BOUNDARY DETECTION
DATE:
Aim:
To implement MATLAB functions for
(a) Edge detection
(b) Line detection
(c) Boundary extraction
Software Required:
MATLAB 9.5(R2018b)
Theory:
Edge detection is an image processing technique for finding the boundaries of
objects within images. It works by detecting discontinuities in brightness. Edge
detection is used for image segmentation and data extraction in areas such as
image processing, computer vision, and machine vision.
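A minimal sketch with the built-in edge() function (the image name is only an example); the program below implements the same gradient idea with explicit Sobel masks:
% Edge detection sketch with built-in detectors
img = imread('coins.png');
bw_sobel = edge(img, 'sobel');      % gradient-based detector
bw_canny = edge(img, 'canny');      % detector with hysteresis thresholding
figure, imshowpair(bw_sobel, bw_canny, 'montage');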
In image processing, line detection is an algorithm that takes a collection of n
edge points and finds all the lines on which these edge points lie. The most
popular line detectors are the Hough transform and convolution-based
techniques. Extracting the boundary is an important process for gaining
information about and understanding the features of an image. Boundary extraction is the
first step of preprocessing, used to present the features of the image.
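A minimal boundary-extraction sketch based on morphology (boundary = A minus the erosion of A); the image name and structuring element are only examples, and the program below uses bwtraceboundary and bwboundaries instead:
% Boundary extraction sketch: A minus (A eroded by B)
A = imbinarize(imread('coins.png'));
B = strel('disk', 1);
boundary_img = A & ~imerode(A, B);
figure, imshow(boundary_img);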
Source Code:
Edge Detection:
clc
clear all
close all
a = imread('8bit.jpg');
subplot(2,2,1);
imshow(a);
title('Original Input Image');
a = double(rgb2gray(a));          % grayscale, double precision for the gradient masks
w1 = [-1 -2 -1; 0 0 0; 1 2 1];
w2 = [-1 0 1; -2 0 2; -1 0 1];
[row, col] = size(a);
for x = 2:1:row-1
for y = 2:1:col-1
a1(x,y) = w1(1)*a(x-1,y-1) + w1(2)*a(x-1,y) + w1(3)*...
a(x-1,y+1) + w1(4)*a(x,y-1) + w1(5)*a(x,y) + w1(6)*...
a(x,y+1) + w1(7)*a(x+1,y-1) + w1(8)*a(x+1,y) + w1(9)*...
a(x+1,y+1);
a2(x,y) = w2(1)*a(x-1,y-1) + w2(2)*a(x-1,y) + w2(3)*...
a(x-1,y+1) + w2(4)*a(x,y-1) + w2(5)*a(x,y) + w2(6)*...
a(x,y+1) + w2(7)*a(x+1,y-1) + w2(8)*a(x+1,y) + w2(9)*...
a(x+1,y+1);
end
end
subplot(2,2,2);
imshow(uint8(a1));
title('Edge Detection in y direction');
subplot(2,2,3);
imshow(uint8(a2));
title('Edge Detection in x direction');
output = sqrt(a1.^2 + a2.^2);
subplot(2,2,4);
imshow(uint8(output));
title('Edge Detection');
Output: Edge Detection
Line Detection:
I = imread('circuit.jpg');
rotI = imrotate(I,33,'crop');
imshow(rotI)
BW = edge(rotI,'canny');
imshow(BW);
[H,T,R] = hough(BW);
figure
imshow(H,[],'XData' ,T, 'YData',R,...
'InitialMagnification','fit');
xlabel('\theta (degrees)');
ylabel('\rho');
axis on, axis normal ,hold on
P = houghpeaks(H,5,'threshold',ceil(0.3*max(H(:))));
x = T(P(:,2));
y = R(P(:,1));
plot(x,y,'s','color','white');
lines = houghlines(BW,T,R,P,'FillGap',5,'MinLength',7);
figure, imshow(rotI), hold on
for k = 1:length(lines)
xy = [lines(k).point1; lines(k).point2];
plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');
end
Output: Line Detection
Boundary Detection:
I = imread('moon.jpg');
imshow(I);
BW = im2bw(I);
imshow(BW);
dim = size(BW);
col = round(dim(2)/2) - 90;
row = min(find(BW(:,col)));
boundary = bwtraceboundary(BW, [row, col], 'N');
imshow(I);
hold on;
plot(boundary(:,2), boundary(:,1), 'g', 'LineWidth', 3);
BW_filled = imfill(BW, 'holes');
boundaries = bwboundaries(BW_filled);
for k = 1:10
b = boundaries{k};
plot(b(:,2), b(:,1), 'g', 'LineWidth', 3);
end
Output: Boundary Detection
Result:
Thus, edge detection, line detection and boundary extraction were implemented and
analysed successfully.
EXP NO:
DATE: ARITHMETIC AND GEOMETRIC MEAN FILTERS
Aim:
To implement MATLAB functions for the arithmetic mean filter and the geometric mean
filter.
Software Required:
MATLAB 9.5(R2018b)
Theory:
The arithmetic mean filter is commonly used for noise reduction. The filter computes
the value of each output pixel by finding the statistical mean of the neighborhood
of the corresponding input pixel.
In the geometric mean method, the color value of each pixel is replaced with the
geometric mean of color values of the pixels in a surrounding region. A larger
region (filter size) yields a stronger filter effect with the drawback of some
blurring.
The geometric mean filter is better at removing Gaussian type noise and
preserving edge features than the arithmetic mean filter. The geometric mean
filter is very susceptible to negative outliers.
Source Code:
Arithmetic Mean:
I = imread('download.jpg');
N = imnoise(I, 'salt & pepper', 0.03);
kr = 3;
kc = 3;
w = ones(kr,kc)/(kr*kc);          % 3x3 averaging (arithmetic mean) kernel
sf = imfilter(N, w, 'replicate', 'same');
figure
subplot(1,3,1);
imshow(I);
title('original image');
subplot(1,3,2);
imshow(N);
title('noisy image');
subplot(1,3,3);
imshow(sf);
title('arithmetic mean filtered image');
Output: Arithmetic Mean Filtering
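The aim also lists the geometric mean filter, which does not appear in the listing above. A minimal sketch is given here, assuming the same test image 'download.jpg'; it replaces each pixel by the geometric mean of its 3x3 neighborhood using the exp(mean(log)) form (the same form used in the video processing experiment), with eps added to avoid log(0):
% Geometric mean filter sketch (3x3 window), assuming the same test image
I = rgb2gray(imread('download.jpg'));
g = im2double(imnoise(I, 'gaussian', 0, 0.01));
kr = 3; kc = 3;
Gf = exp(imfilter(log(g + eps), ones(kr,kc), 'replicate')).^(1/(kr*kc));
figure, imshowpair(g, Gf, 'montage');
title('Noisy image and geometric mean filtered image');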
Result:
Thus, the arithmetic mean filter and the geometric mean filter were implemented and
analysed using MATLAB.
EXP NO: IMAGE RESTORATION
DATE:
Aim:
To perform image restoration.
Software Required:
MATLAB 9.5(R2018b)
Theory:
Image restoration is the process of recovering an image from a degraded
version, usually a blurred and noisy image. Image restoration is a fundamental
problem in image processing. Image restoration is the operation of taking a
corrupt/noisy image and estimating the clean, original image. Corruption may
come in many forms such as motion blur, noise and camera mis-focus.
Key issues that must be addressed are the quality of the restored image, the
computational efficiency of the algorithm, and the estimation of necessary
parameters such as the point-spread function (PSF).
A conventional technique for image restoration is deconvolution. Various
methods are available for image restoration, such as the inverse filter, Wiener filter,
Gaussian filter, mean filter and median filter.
Source Code:
Wiener Filtering:
Ioriginal = imread('dog.jpg');
figure, imshow(Ioriginal);
title('Original Image');
PSF = fspecial('motion', 21, 11);
Idouble = im2double(Ioriginal);
blurred = imfilter(Idouble, PSF, 'conv', 'circular');
figure, imshow(blurred);
title('Blurred Image');
wnr1 = deconvwnr(blurred, PSF);
figure, imshow(wnr1);
title('Restored Blurred Image');
noise_mean = 0;
noise_var = 0.0001;
blurred_noisy = imnoise(blurred, 'gaussian', noise_mean, noise_var);
figure, imshow(blurred_noisy);
title('Blurred and Noisy Image');
wnr2 = deconvwnr(blurred_noisy, PSF);
figure, imshow(wnr2);
title('Restoration of Blurred Noisy Image (NSR = 0)');
signal_var = var(Idouble(:));
NSR = noise_var / signal_var;
wnr3 = deconvwnr(blurred_noisy, PSF, NSR);
figure, imshow(wnr3);
title('Restoration of Blurred Noisy Image (Estimated NSR)');
Output: Wiener Filtering
Median Filtering:
I = imread('coins.png');
figure, imshow(I)
J = imnoise(I, 'salt & pepper', 0.02);
K = medfilt2(J);
imshowpair(J, K, 'montage');
figure
imshow(K);
title('Image after filtering')
% quality metrics comparing the filtered image K with the original I
err = immse(K, I);
fprintf('\n The mean-squared error is %0.4f\n', err);
[peaksnr, snr] = psnr(K, I);
fprintf('\n The Peak-SNR value is %0.4f', peaksnr);
fprintf('\n The SNR value is %0.4f \n', snr);
[ssimval, ssimmap] = ssim(K, I);
imshow(ssimmap, [])
title("Local SSIM Map with Global SSIM Value: " + num2str(ssimval))
Output: Median Filtering
Mean Filtering:
clc;
clear;
close all;
I1 = rgb2gray(imread('dog.jpg'));   % convert to grayscale for 2-D filtering
figure
imshow(I1)
noise = imnoise(I1, 'salt & pepper', 0.03);
figure
imshow(noise)
k = 1;
I2 = padarray(noise, [k k], 'replicate');
[m, n] = size(noise);
for i = 2:(m-1)
for j = 2:(n-1)
v = noise(i-1:i+1, j-1:j+1);        % 3x3 neighborhood
r = sum(v(:)) / 9;                  % arithmetic mean of the neighborhood
c(i-1, j-1) = uint8(ceil(r));
end
end
figure
imshow(c);
err = immse(noise, I1);
fprintf('\n The mean-squared error is %0.4f\n', err);
[peaksnr, snr] = psnr(noise, I1);
fprintf('\n The Peak-SNR value is %0.4f', peaksnr);
fprintf('\n The SNR value is %0.4f\n', snr);
[ssimval, ssimmap] = ssim(noise, I1);
imshow(ssimmap, [])
title("Local SSIM Map with Global SSIM Value: " + num2str(ssimval))
Output: Mean Filtering
Inverse Filtering:
Ioriginal = imread('cameraman.jpg');
figure, imshow(Ioriginal);
title('Original Image');
subplot(1,3,1);
imshow(Ioriginal);
title('original image');
a = im2double(Ioriginal);
F = fspecial('motion', 21, 11);
blur = imfilter(Ioriginal, F, 'conv', 'circular');
subplot(1,3,2);
imshow(blur);
DD = deconvreg(blur, F);
subplot(1,3,3);
imshow(DD);
title('restored image');
err = immse(blur, Ioriginal);
fprintf('\n The mean-squared error is %0.4f\n', err);
[peaksnr, snr] = psnr(blur, Ioriginal);
fprintf('\n The Peak-SNR value is %0.4f', peaksnr);
fprintf('\n The SNR value is %0.4f \n', snr);
[ssimval, ssimmap] = ssim(blur, Ioriginal);
figure
imshow(ssimmap, [])
title("Local SSIM Map with Global SSIM Value: " + num2str(ssimval));
Output: Inverse Filtering
Gaussian Filtering:
Ioriginal = imread('cameraman.jpg');
an = imnoise(Ioriginal, 'gaussian', 0.01);
figure, imshow(an);
title('img with gaussian noise')
sigma = 3;
cutoff = ceil(3*sigma);
h = fspecial('gaussian', 2*cutoff+1, sigma);
out = conv2(double(Ioriginal), h, 'same');
figure, imshow(out);
title('Original img after gaussian filtering');
figure, imshow(out/256);
out1 = conv2(double(an), h, 'same');
figure, imshow(out1/256);
title('img with gaussian noise after filtering');
err = immse(an, Ioriginal);
fprintf('\n The mean-squared error is %0.4f\n', err);
[peaksnr, snr] = psnr(an, Ioriginal);
fprintf('\n The Peak-SNR value is %0.4f', peaksnr);
fprintf('\n The SNR value is %0.4f\n', snr);
[ssimval, ssimmap] = ssim(an, Ioriginal);
imshow(ssimmap, [])
title("Local SSIM Map with Global SSIM Value: " + num2str(ssimval));
Output: Gaussian Filtering
Result:
Thus, image restoration was implemented using different restoration filters in
MATLAB.
EXP NO: IMAGE SEGMENTATION
DATE:
Aim:
To perform advanced image segmentation.
Software required:
MATLAB 9.5(R2018b)
Theory:
In digital image processing and computer vision, image segmentation is the
process of partitioning a digital image into multiple image segments, also known
as image regions or image objects (sets of pixels). The goal of segmentation is to
simplify and/or change the representation of an image into something that is more
meaningful and easier to analyze.
Image segmentation is typically used to locate objects and boundaries
(lines, curves, etc.) in images. More precisely, image segmentation is the process
of assigning a label to every pixel in an image such that pixels with the same label
share certain characteristics.
The result of image segmentation is a set of segments that collectively
cover the entire image, or a set of contours extracted from the image. Each of the
pixels in a region is similar with respect to some characteristic or computed
property, such as color, intensity, or texture. Adjacent regions are significantly
different with respect to the same characteristics.
Source code:
Based on watershed
clear all
close all
warning off
x = imbinarize(rgb2gray(imread('dog.jpg')));
subplot(2,2,1);
imshow(x);
title('Original Image');
a = x;
x = ~x;
ms = bwdist(x);
ms = 255 - uint8(ms);
subplot(2,2,2);
imshow(ms);
title('Image after applying Distance Transformation');
hs = watershed(ms);
ws = hs == 0;
subplot(2,2,3);
imshow(a | ws);
title('Watershed Segmentation of the image');
subplot(2,2,4);
imshow(label2rgb(hs));
title('Visualization of different segments with different color');
Output: Watershed
Based on threshold
close all
clear all
clc
% Gray level Thresholding
a = imread('dog.jpg');
level = graythresh(a);
c = im2bw(a, level);
subplot(1,2, 1), imshow(a), title('original image');
subplot(1,2,2), imshow(c), title('threshold image');
Output: Threshold
Based on graph cut
clc
clear all
close all
warning off
RGB = imread('yellowlily.jpg');
subplot(1,3,1);
imshow(RGB);
title('Original Image');
[BW, maskedImage] = segmentImage(RGB);
subplot(1,3,2);
imshow(BW);
title('Segmented Binary Image');
subplot(1,3,3);
imshow(maskedImage);
title('Segmented Color Image')
% Function
function [BW, maskedImage] = segmentImage(RGB)
X = rgb2lab(RGB);
foregroundInd = [305810 362952];
backgroundInd = [233 12 26611];
L = superpixels(X, 10088, 'IsInputLab', true);
scaledX = prepLab(X);
BW = lazysnapping(scaledX, L, foregroundInd, backgroundInd);
maskedImage = RGB;
maskedImage(repmat(~BW, [1 1 3])) = 0;
end
function out = prepLab(in)
out = in;
out(:,:,1) = in(:,:,1) / 100;
out(:,:,2:3) = (in(:,:,2:3) + 100) / 200;
end
Output: Graph cut
Based on clustering
clc
clear all
close all
warning off
rgbImage = imread('peppers.png');
subplot(1,2,1);
imshow(rgbImage);
redChannel = rgbImage(:,:,1);
greenChannel = rgbImage(:,:,2);
blueChannel = rgbImage(:,:,3);
data = double([redChannel(:), greenChannel(:), blueChannel(:)]);
numberOfClasses = 5;
[m, n] = kmeans(data, numberOfClasses);
m = reshape(m, size(rgbImage,1), size(rgbImage,2));
n = n/255;
clusteredImage = label2rgb(m, n);
subplot(1,2,2);
imshow(clusteredImage);
Output: Clustering
Result:
Thus, advanced image segmentation was performed using MATLAB.
EX.NO:
DATE: IMAGE MORPHOLOGY
Aim:
To perform image morphology in Matlab software to analyse the shape
details of image structures.
Software required:
MATLAB 9.5(R2018b)
Theory:
Morphology is a broad set of image processing operations that process
images based on shapes. Morphological operations apply a structuring element to
an input image, creating an output image of the same size. In a morphological
operation, the value of each pixel in the output image is based on a comparison
of the corresponding pixel in the input image with its neighbors.
The most basic morphological operations are dilation and erosion. Dilation
adds pixels to the boundaries of objects in an image, while erosion removes pixels
on object boundaries.
The number of pixels added or removed from the objects in an image
depends on the size and shape of the structuring element used to process the
image.
In the morphological dilation and erosion operations, the state of any given
pixel in the output image is determined by applying a rule to the corresponding
pixel and its neighbors in the input image.
Dilation:
The value of the output pixel is the maximum value of all pixels in the
neighborhood. In a binary image, a pixel is set to 1 if any of the neighboring
pixels have the value 1.
Morphological dilation makes objects more visible and fills in small holes in
objects. Lines appear thicker, and filled shapes appear larger.
Output: Dilation
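For comparison with the loop-based program below, the built-in imdilate gives the same effect; a minimal sketch, assuming the same test image 'download.jpg' and a 1x7 line structuring element matching the one used in the program:
% Built-in dilation sketch with a 1x7 line structuring element
A = im2bw(imread('download.jpg'));
se = strel('line', 7, 0);      % horizontal line structuring element of length 7
D = imdilate(A, se);
figure, imshowpair(A, D, 'montage');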
Erosion:
The value of the output pixel is the minimum value of all pixels in the
neighborhood. In a binary image, a pixel is set to 0 if any of the neighboring
pixels have the value 0.
Morphological erosion removes floating pixels and thin lines so that only
substantive objects remain. Remaining lines appear thinner and shapes appear
smaller.
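Similarly, a minimal built-in erosion sketch, assuming the same test image and a 3x3 square structuring element matching the 3x3 window used in the program below:
% Built-in erosion sketch with a 3x3 square structuring element
A = im2bw(imread('download.jpg'));
se = strel('square', 3);
E = imerode(A, se);
figure, imshowpair(A, E, 'montage');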
Source code:
Dilation:
A = imread('download.jpg');
A = im2bw(A);
figure, imshow(A)
B = [1 1 1 1 1 1 1];          % 1x7 structuring element
C = padarray(A, [0 3]);
D = false(size(A));
for i = 1:size(C,1)
for j = 1:size(C,2)-6
D(i,j) = sum(B & C(i, j:j+6));
end
end
figure, imshow(D);
Erosion:
clc
clear all
close all
warning off
a = im2bw(imread('download.jpg'));
imshow(a);
title('Original Image');
[r, c] = size(a);
w = ones(3,3);
output = [];
for x = 2:1:r-1
for y = 2:1:c-1
g = [w(1)*a(x-1,y-1) w(2)*a(x-1,y) w(3)*a(x-1,y+1)...
     w(4)*a(x,y-1)   w(5)*a(x,y)   w(6)*a(x,y+1)...
     w(7)*a(x+1,y-1) w(8)*a(x+1,y) w(9)*a(x+1,y+1)];
output(x,y) = min(g);
end
end
figure;
imshow(output);
title('Image after Erosion');
Output: Erosion
Morphological opening:
clc
clear all
a = im2bw(imread('download.jpg'));
imshow(a);
title('Original Image');
[r, c] = size(a);
w = ones(3,3);
output = [];
% Erosion first
for x = 2:1:r-1
for y = 2:1:c-1
g = [w(1)*a(x-1,y-1) w(2)*a(x-1,y) w(3)*a(x-1,y+1)...
     w(4)*a(x,y-1)   w(5)*a(x,y)   w(6)*a(x,y+1)...
     w(7)*a(x+1,y-1) w(8)*a(x+1,y) w(9)*a(x+1,y+1)];
output(x,y) = min(g);
end
end
figure;
imshow(output);
title('Image after Erosion');
% Dilation of the eroded image completes the opening
B = [1 1 1 1 1 1 1];
C = padarray(output, [0 3]);
D = false(size(output));
for i = 1:size(C,1)
for j = 1:size(C,2)-6
D(i,j) = sum(B & C(i, j:j+6));
end
end
figure, imshow(D);
title('Image after Opening');
Output: Morphological Opening
Morphological closing
A = imread('download.jpg');
A = im2bw(A);
figure, imshow(A)
% Structuring element
B = [1 1 1 1 1 1 1];
C = padarray(A, [0 3]);
D = false(size(A));
% Dilation first
for i = 1:size(C,1)
for j = 1:size(C,2)-6
D(i,j) = sum(B & C(i, j:j+6));
end
end
figure, imshow(D);
% Erosion of the dilated image completes the closing
[r, c] = size(D);
w = ones(3,3);
output = [];
for x = 2:1:r-1
for y = 2:1:c-1
g = [w(1)*D(x-1,y-1) w(2)*D(x-1,y) w(3)*D(x-1,y+1)...
     w(4)*D(x,y-1)   w(5)*D(x,y)   w(6)*D(x,y+1)...
     w(7)*D(x+1,y-1) w(8)*D(x+1,y) w(9)*D(x+1,y+1)];
output(x,y) = min(g);
end
end
figure;
imshow(output);
Output: Morphological Closing
Perimeter
a = imread('download.jpg');
figure;
imshow(a);
title('Original Image');
k = im2bw(imread('download.jpg'));
imshow(k);
title('Input Image');
sto = [];
[a, b] = size(k);
output = zeros(a, b);
for i = 2:a-1
for j = 2:b-1
sto = [k(i-1,j-1), k(i-1,j), k(i-1,j+1), k(i,j-1), k(i,j)...
       , k(i,j+1), k(i+1,j-1), k(i+1,j), k(i+1,j+1)];
es = sum(sto);
if (es <= 7 && k(i,j) == 1)
output(i,j) = 1;
end
sto = [];
end
end
figure;
imshow(output);
title('Perimeter Detected Image');
Output: Perimeter Detection
Result:
Thus, image morphological operations were performed in MATLAB and the shape
details of image structures were analysed.
EX.NO:7
DATE: SPATIAL ENHANCEMENT ON BITMAP IMAGES
Aim:
To implement a spatial enhancement function on a bitmap image using
MATLAB software.
Software required:
MATLAB 9.5(R2018b)
Theory:
A bit map (often spelled “bitmap”) defines a display space and the color
for each pixel or “bit” in the display space. A Graphics Interchange Format and a
JPEG are examples of graphic image file types that contain bit maps. A bit map
does not need to contain a bit of color-coded information for each pixel on every
row.
Histogram stretching involves modifying the brightness (intensity) values
of pixels in the image according to some mapping function that specifies an
output pixel brightness value for each input pixel brightness value. For a
grayscale digital image, this process is straightforward.
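A minimal contrast-stretching sketch of such a mapping function, using imadjust with limits from stretchlim (the image name and percentile limits are only examples):
% Contrast stretching sketch: map the 1st-99th percentile range to full scale
I = imread('pout.tif');
J = imadjust(I, stretchlim(I, [0.01 0.99]), []);
figure, imshowpair(I, J, 'montage');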
Another class of transformations is the power law transformation, which
includes the nth power and nth root transformations. These transformations can be
given by the expression s = c*r^γ. The symbol γ is called gamma, due to which
this transformation is also known as the gamma transformation. Variation in the
value of γ varies the enhancement of the image. Different display devices /
monitors have their own gamma correction, which is why they display their images
at different intensities.
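For reference, the same gamma mapping can be obtained with imadjust; a minimal sketch, with gamma = 0.5 chosen only as an example (gamma < 1 brightens, gamma > 1 darkens):
% Gamma (power law) transformation sketch via imadjust
I = imread('pout.tif');
gamma = 0.5;
S = imadjust(I, [], [], gamma);
figure, imshowpair(I, S, 'montage');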
Source code:
Histogram equalization
clc
clear all
close all
warning off;
x=imread(‘lena.bmp’);
subplot(3,2,1);
imshow(x);
title( ‘Original Image’);
subplot(3,2,2);
imhist(x);
title(‘Histogram of Original Image’);
axis tight;
h=zeros(1,256);
[r c]=size(x);
total_no_of_pixels=r*c;
n=0:255;
for i=1:r
for j=1:c
h(double(x(i,j))+1)=h(double(x(i,j))+1)+1;
end
end
for i=1:256
h(i)=h(i)/total_no_of_pixels;
end
temp=h(1);
for i=2:256
temp=temp+h(i);
h(i)=temp;
end
for i=1:r
for j=1:c
x(i,j)=round(h(double(x(i,j))+1)*255);
end
end
subplot(3,2,5);
imshow(x);
title('Histogram Equalized image using own code');
subplot(3,2,6);
imhist(x); axis tight;
title('Histogram Equalization using own code');
Output: Histogram Equalization
Power Law Transformation
I = imread('lena.bmp');
Id = im2double(I);
output1 = 2*(Id.^0.5);
output2 = 2*(Id.^1.5);
output3 = 2*(Id.^3.0);
subplot(2,2,1), imshow(I);
subplot(2,2,2), imshow(output1);
subplot(2,2,3), imshow(output2);
subplot(2,2,4), imshow(output3);
Output: Power Law Transformation
Result:
Thus, the spatial enhancement function was successfully implemented on a bitmap
image using MATLAB software.
EXP NO: VIDEO SEGMENTATION AND PROCESSING
DATE:
Aim:
To perform video segmentation and process each individual frame using various
image processing techniques.
Software required:
MATLAB 9.5(R2018b)
Theory:
Video (temporal) segmentation is the process of partitioning a video sequence
into disjoint sets of consecutive frames that are homogeneous according to some
defined criteria. In the most common types of segmentation, video is partitioned
into shots, camera takes, or scenes. A camera take is a sequence of frames
captured by a video camera from the moment it starts capturing to the moment it
stops.
During montage, camera takes are trimmed, split, and inserted one after the other
to compose an edited version of a video. The basic element of an edited video is
called a shot. A shot is a contiguous sequence of frames belonging to a single
camera take in an edited video. Content-wise, shots usually possess some degree
of visual uniformity. A scene is a group of contiguous shots that form a
semantically meaningful unit.
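The program below extracts and processes individual frames but does not detect shot boundaries itself; a minimal sketch of shot-boundary detection by thresholding the mean absolute difference between consecutive frames is given here (the file name 'file.mp4' and the threshold value are assumptions):
% Shot-boundary sketch: flag frames that differ strongly from the previous frame
v = VideoReader('file.mp4');            % assumed video file
prev = [];
thresh = 30;                            % assumed threshold; tune per video
k = 0;
while hasFrame(v)
    k = k + 1;
    curr = rgb2gray(readFrame(v));
    if ~isempty(prev) && mean(abs(double(curr) - double(prev)), 'all') > thresh
        fprintf('Possible shot boundary at frame %d\n', k);
    end
    prev = curr;
end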
Source code:
v = VideoReader('file.mp4');
totalFrames = v.NumberOfFrames;
NFP = ceil(sqrt(totalFrames));
for i = 1:totalFrames
frame = read(v, i);
subplot(NFP, NFP, i), imshow(frame)
end
frame1 = read(v, 1);
figure, imshow(frame1)
frame1 = read(v, 10);
figure, imshow(frame1)
frame1 = read(v, 20);
figure, imshow(frame1)
frame1 = read(v, 30);
figure, imshow(frame1)
frame1 = read(v, 40);
figure, imshow(frame1)
frame1 = read(v, 50);
figure, imshow(frame1)
frame1 = read(v, 60);
figure, imshow(frame1)
frame1 = read(v, 63);
figure, imshow(frame1)
% Arithmetic mean filtering of the first saved frame
I = imread('vidf1.jpg');
N = imnoise(I, 'salt & pepper', 0.03);
f = ones(3,3)/9;
noise_free = imfilter(N, f);
figure
imshowpair(N, noise_free, 'montage')
% Arithmetic mean filtering of the second saved frame
I = imread('vidf2.jpg');
N = imnoise(I, 'salt & pepper', 0.03);
f = ones(3,3)/9;
noise_free = imfilter(N, f);
figure
imshowpair(N, noise_free, 'montage')
% Geometric mean filtering of the third saved frame
In = imread('vidf3.jpg');
In = rgb2gray(In);
kr = 3;
kc = 3;
gaussian_noise = imnoise(In, 'gaussian', 0, 0.01);
figure
imshow(gaussian_noise)
title('img with noise')
g = im2double(gaussian_noise);
F = exp(imfilter(log(g), ones(kr,kc), 'replicate')).^(1/(kr*kc));
figure
subplot(1,2,1)
imshow(In)
title('original image')
subplot(1,2,2), imshow(F)
title('filtered image')
% Geometric mean filtering of the fourth saved frame
In = imread('vidf4.jpg');
In = rgb2gray(In);
kr = 3;
kc = 3;
gaussian_noise = imnoise(In, 'gaussian', 0, 0.01);
figure
imshow(gaussian_noise)
title('img with noise')
g = im2double(gaussian_noise);
F = exp(imfilter(log(g), ones(kr,kc), 'replicate')).^(1/(kr*kc));
figure
subplot(1,2,1)
imshow(In), title('original image')
subplot(1,2,2)
imshow(F)
title('filtered image')
% Power law transformation of the fifth saved frame
I = imread('vidf5.jpg');
Id = im2double(I);
output1 = 2*(Id.^0.5);
output2 = 2*(Id.^1.5);
output3 = 2*(Id.^3.0);
figure
subplot(2,2,1), imshow(I);
subplot(2,2,2), imshow(output1);
subplot(2,2,3), imshow(output2);
subplot(2,2,4), imshow(output3);
% Gray-level thresholding of a frame (fragment; the input image 'a' is as in the record)
level = graythresh(a);
c = im2bw(a, level);
figure
Result:
Thus, video segmentation was performed and each individual frame was processed
using various image processing techniques in MATLAB.