
MODULE 4

CG & FIP
Dr. T. THIMMAIAH INSTITUTE OF TECHNOLOGY
(Estd. 1986) Oorgaum, Kolar Gold Fields, Karnataka – 563120
(Affiliated to VTU, Belgaum, Approved by AICTE - New Delhi)
NAAC Accredited 'A' Grade, NBA Accredited for CSE, ECE & Mining Engg Programs

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Table of Contents

MODULE 4: Introduction to Image Processing

4.1 Introduction
4.1.1 Overview of Image Processing
4.1.2 Nature of Image Processing
4.1.3 Image Processing and Related Fields
4.1.4 Digital Image Representation
4.1.5 Types of Images
4.2 Digital Image Processing Operations
4.2.1 Basic Relationships and Distance Metrics
4.2.2 Classification of Image Processing Operations
Question Bank


21CS63 COMPUTER GRAPHICS AND FUNDAMENTALS OF IMAGE PROCESSING 23-24

MODULE 4: Introduction to Image Processing


4.1 Introduction
4.1.1 Overview of Image Processing

1. What is an Image?

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial
(plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the
intensity or grey level of the image at that point.

2. Explain the different ways of acquiring an image.

i) Reflective mode imaging is the simplest form of imaging and uses a sensor to
acquire the digital image from radiation reflected off the object. All video cameras,
digital cameras, and scanners use some type of sensor for capturing the image.
ii) Emissive mode imaging acquires the image from self-luminous objects without the
help of a radiation source. The radiation emitted by the object is captured directly
by the sensor to form the image. Thermal imaging is an example of emissive mode
imaging.
iii) Transmissive imaging is where the radiation source illuminates the object. The
absorption of radiation by the object depends on the nature of the material; some of
the radiation passes through the object, and the attenuated radiation is sensed to
form the image.

4.1.2 Nature of Image Processing


3. Explain the three types of image processing.

• Optical image processing is an area that deals with the object, the optics, and the
processes applied to an image that is available in the form of reflected or transmitted light.
• Analog image processing is an area that deals with the processing of analog electrical
signals using analog circuits. Imaging systems that use film for recording images are
also known as analog imaging systems.
• Digital image processing is an area that uses digital circuits, systems, and software
algorithms to carry out image processing operations. These operations may include
quality enhancement of an image, counting of objects, and image analysis.

Prof Sophia S, Asst. Prof., Department of CSE, Dr. TTIT, KGF 4-1

4. What are the advantages of digital image processing over analog imaging?

 It is easy to post-process the image; small corrections can be made in the captured
image using software.
 It is easy to store the image in digital memory.
 It is possible to transmit the image over networks, so sharing an image is quite easy.
 A digital image does not require any chemical process, so it is very environment
friendly, as harmful film chemicals are not required or used.
 It is easy to operate a digital camera.

4.1.3 Image Processing and Its Related Fields.


5. Explain the fields related to image processing.


 Image processing deals with raster data or bitmaps, whereas computer graphics
primarily deals with vector data.
 Digital signal processing typically deals with one-dimensional signals; in the domain
of image processing, one deals with visual information that is often in two or more
dimensions.
 The goal of machine vision is to interpret the image and to extract its physical,
geometric, or topological properties.
 Video processing is an extension of image processing. Images are strongly related to
multimedia, as the field of multimedia broadly includes the study of audio, video,
images, graphics, and animation.
 Optical image processing deals with lenses, light, lighting conditions, and associated
optical circuits. The study of lenses and lighting conditions has an important role in
the study of image processing.
 Image analysis is an area that concerns the extraction and analysis of object
information from the image. Imaging applications involve both simple statistics, such
as counting, and mensuration.

4.1.4 Digital Image Representation


6. Explain how a digital image is represented.

An image can be defined as a 2D signal that varies over the spatial coordinates x and y,
and can be written mathematically as f(x, y).

 The image f(x, y) is divided into X rows and Y columns, where x = 0, 1, ..., X−1 and
y = 0, 1, ..., Y−1. At the intersection of each row and column, a pixel is present.
 The value of the function f(x, y) at every point indexed by a row and a column is called
the grey value or intensity of the image.
 Resolution is the ability of the imaging system to produce the smallest discernible
details, i.e., to show the smallest sized object clearly and differentiate it from the
neighbouring small objects present in the image.
 Image resolution depends on two factors: the optical resolution of the lens and the
spatial resolution.


 Spatial resolution shows the object and its separation from the other spatial objects
present in the image. It depends on two factors:
• The number of pixels of the image
• The number of bits necessary for adequate intensity resolution, referred to as the
bit depth. The number of bits necessary to encode a pixel value is called the bit
depth; it is a power of two.
 So the total number of bits necessary to represent the image is
Number of bits = Number of rows × Number of columns × Bit depth
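As a quick sketch, the storage formula above can be evaluated directly; the 512 × 512, 8-bit values here are arbitrary example figures:

```python
# Total storage for an image: rows x columns x bit depth (illustrative values).
rows, cols, bit_depth = 512, 512, 8
total_bits = rows * cols * bit_depth
total_bytes = total_bits // 8   # 8 bits per byte
print(total_bits, total_bytes)  # 2097152 bits = 262144 bytes = 256 KiB
```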

4.1.5 Types of Images


7. Explain the types of images.

i. Based on Nature
a) Natural images
Produced by cameras and scanners
b) Synthetic images
Produced by computer programs

ii. Based on Attributes
a) Raster images
Pixel-based images
b) Vector images
Produced from geometrical attributes such as lines, circles, etc.
iii. Based on Color
a) Grey scale images
 Different from binary images, as they have many shades of grey
between black and white.
 Also called monochromatic, as there is no color component in the
image.
 Grey scale is the term that refers to the range of shades between white
and black or vice versa.
b) Binary images
 The pixels assume a value of 0 or 1.
 One bit is sufficient to represent each pixel value. Binary images are
also called bi-level images.
c) True color images
 Each pixel has a color that is obtained by mixing the primary colors
red, green, and blue.
 Each color component is represented, like a grey scale image, using
eight bits; true color images therefore use 24 bits to represent all the
colors.
d) Indexed images
 The full range of colors is not used.
 So it is better to reduce the number of bits by maintaining a color map,
gamut, or palette with the image.
e) Pseudo color images
 Images captured by satellites contain many bands.
 These bands are not visible to a human observer, so color is artificially
added to distinguish them.
iv. Based on Dimension
a) 2D
A 2D rectangular array of pixels
b) 3D
A 3D array (volume) of pixels

v. Based on Data Type
a) Single
b) Float, double
c) Signed
d) Logical
e) Unsigned

vi. Based on Domain
a) Range images
 Encountered in computer vision.
 The pixel value denotes the distance between the camera and the object.
b) Multispectral images
 Encountered in remote sensing applications.
 Contain the many bands captured by remote sensing instruments.
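The color-based types above differ mainly in array shape and bit depth. A minimal NumPy sketch (array sizes and values are arbitrary examples):

```python
import numpy as np

# Illustrative arrays for three of the image types described above.
binary_img = np.array([[0, 1], [1, 0]], dtype=np.uint8)  # bi-level: pixels are 0 or 1
grey_img   = np.zeros((2, 2), dtype=np.uint8)            # 8-bit grey scale: 256 shades
true_color = np.zeros((2, 2, 3), dtype=np.uint8)         # 24-bit RGB: 3 channels x 8 bits

print(grey_img.dtype, true_color.shape)  # uint8 (2, 2, 3)
```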

4.2 Digital Image Processing Operations


4.2.1 Basic Relationships and Distance Metrics.

8. Define image topology. Explain the 4-, diagonal, and 8-neighbourhoods.

Image topology is a branch of image processing that deals with fundamental properties of
the image such as image neighbourhoods. The neighbours of a given reference pixel are
those pixels that share an edge or a corner with it.

i) 4-neighbourhood
In N4(p), the reference pixel p(x, y) at the coordinate position (x, y) has two
horizontal and two vertical pixels as neighbours: (x+1, y), (x−1, y), (x, y+1), and (x, y−1).

4-Neighbourhood N4(p)
ii) Diagonal neighbourhood ND(p)
A pixel may have four diagonal neighbours: (x−1, y−1), (x+1, y+1), (x−1, y+1), and
(x+1, y−1) for the reference pixel p(x, y).

Diagonal element ND(p)

iii) 8 neighbourhood


 The 4-neighbourhood and ND(p) are collectively called the 8-neighbourhood.

 The set of pixels N8(p) = ND(p) ∪ N4(p)

8-Neighbourhood N8(p)
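The three neighbourhood definitions above can be sketched as small set-valued functions (coordinate convention and function names are illustrative):

```python
def n4(p):
    # 4-neighbourhood: the two horizontal and two vertical neighbours of p = (x, y)
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    # Diagonal neighbourhood: the four corner-sharing neighbours
    x, y = p
    return {(x - 1, y - 1), (x + 1, y + 1), (x - 1, y + 1), (x + 1, y - 1)}

def n8(p):
    # N8(p) = N4(p) union ND(p)
    return n4(p) | nd(p)

print(sorted(n8((1, 1))))  # the eight pixels surrounding (1, 1)
```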

9. Define pixel connectivity. Explain its types.

Pixel connectivity defines the relationship between two or more pixels: two pixels are
connected if they are neighbours and their grey levels satisfy a specified similarity
criterion (e.g., their values belong to a set V).

a) 4-connectivity:
• The pixels p and q have values from the set V.
• q is 4-connected to p if q is in the set N4(p).
b) 8-connectivity: p and q, with values from V, are 8-connected if q is in the set N8(p).
c) Mixed (m-)connectivity: p and q, with values from V, are m-connected if
• q is in N4(p), or
• q is in ND(p) and the intersection of N4(p) and N4(q) contains no pixels
whose values are from V.
10. Explain the Euclidean, D4, and D8 distance measures.
 The distance between two pixels p and q in an image can be given by a distance
measure such as the Euclidean distance.
 The Euclidean distance between the pixels p and q, with coordinates (x, y) and
(s, t) respectively, is defined as

De(p, q) = √((x − s)² + (y − t)²)

The D4 (city-block) distance is D4(p, q) = │x − s│ + │y − t│

The D8 (chessboard) distance is D8(p, q) = max(│x − s│, │y − t│)
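The three distance metrics can be sketched directly from their definitions; the sample pixels p = (0, 0) and q = (6, 3) are the ones used in the Question Bank distance problem:

```python
import math

def d_e(p, q):
    # Euclidean distance
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def d_4(p, q):
    # City-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):
    # Chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (6, 3)
print(d_e(p, q), d_4(p, q), d_8(p, q))  # sqrt(45) ~ 6.71, 9, 6
```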

4.2.2 Classification of Image Processing Operations.


11. Explain the classification of image operations based on neighbourhood and properties.

i) Based on Neighbourhood
a) Point operation
 The output value at a specific coordinate depends only on the input
value at that coordinate.
b) Local operation
 The output value at a specific coordinate depends on the input values
in the neighbourhood of that pixel.
c) Global operation
 The output value at a specific coordinate depends on all the values in
the input image.
ii) Based on Properties
a) Linear operation: obeys the rules of additivity and homogeneity.
 Property of additivity:
H(a1 f1(x, y) + a2 f2(x, y)) = H(a1 f1(x, y)) + H(a2 f2(x, y))
= a1 H(f1(x, y)) + a2 H(f2(x, y))
= a1 g1(x, y) + a2 g2(x, y)

 Property of homogeneity:

H(k f1(x, y)) = k H(f1(x, y)) = k g1(x, y)

b) Nonlinear operation:
 Does not obey the rules of additivity and homogeneity.
Image operations are array operations carried out on a pixel-by-pixel basis.
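The additivity rule above gives a direct numerical test for nonlinearity. The sketch below applies it to the median (a known nonlinear operation); the sequences and coefficients are arbitrary example values:

```python
import statistics

# Additivity test: is H(a1*f1 + a2*f2) equal to a1*H(f1) + a2*H(f2)?
# Here H is the median, applied to 1D sample "images".
f1, f2 = [1, 2, 3], [3, 1, 1]
a1, a2 = 1, 1

combined = statistics.median([a1 * u + a2 * v for u, v in zip(f1, f2)])
separate = a1 * statistics.median(f1) + a2 * statistics.median(f2)

print(combined, separate)  # unequal, so the median is a nonlinear operation
```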

12. Explain the arithmetic image operations in detail.

i. Image addition:

 Two images are added directly:

g(x, y) = f1(x, y) + f2(x, y)
 f1(x, y) and f2(x, y) are the input images and g(x, y) is the output image.
 The sum of the images should not cross the allowed range (e.g., 0-255 for
8-bit images); values beyond it must be clipped.

 It is also possible to add a constant value to an image:

g(x, y) = f1(x, y) + k, where k is a constant

 Applications of image addition are to create double exposures and to increase
the brightness of an image.

ii. Image subtraction:

 Subtraction of two images is done as
g(x, y) = │f1(x, y) − f2(x, y)│
 f1(x, y) and f2(x, y) are the input images and g(x, y) is the output image.
 It is also possible to subtract a constant value from an image:
g(x, y) = f1(x, y) − k, where k is a constant
 Applications of image subtraction are
* Background elimination
* Brightness reduction
* Change detection
iii. Image multiplication:
 Multiplication of two images is done as
g(x, y) = f1(x, y) × f2(x, y)
 f1(x, y) and f2(x, y) are the input images and g(x, y) is the output image.
 It is also possible to multiply an image by a constant:
g(x, y) = f1(x, y) × k, where k is a constant
For k > 1 the contrast increases, and for k < 1 the contrast decreases.
 Brightness and contrast can be manipulated together as
g(x, y) = a·f1(x, y) + k
 Applications of image multiplication are
* To increase contrast
* Designing filter masks
* Highlighting an area of interest

iv. Image division:

 Division of two images is done as
g(x, y) = f1(x, y) / f2(x, y)
 f1(x, y) and f2(x, y) are the input images and g(x, y) is the output image.
 It is also possible to divide an image by a constant:
g(x, y) = f1(x, y) / k, where k is a constant

 Applications of image division are
* Change detection
* Separation of luminance and reflectance components
* Contrast reduction
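A minimal NumPy sketch of these arithmetic operations; the pixel values are arbitrary examples, and the cast to a wider integer type before clipping avoids uint8 wrap-around:

```python
import numpy as np

# Two small uint8 "images" with arbitrary example values.
f1 = np.array([[10, 40], [90, 200]], dtype=np.uint8)
f2 = np.array([[40, 100], [90, 80]], dtype=np.uint8)

# Addition, clipped to the allowed 0-255 range.
added = np.clip(f1.astype(np.int32) + f2, 0, 255).astype(np.uint8)

# Subtraction as g = |f1 - f2|.
diff = np.abs(f1.astype(np.int32) - f2).astype(np.uint8)

# Adding a constant k = 50 to increase brightness.
brighter = np.clip(f1.astype(np.int32) + 50, 0, 255).astype(np.uint8)

print(added)  # 200 + 80 = 280 is clipped to 255
```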

13. Explain the different logical operations used in image processing.

i. AND/NAND:
 The AND and NAND operators take two images as input and
produce one output image.
 Truth table of AND and NAND:

A B AND NAND
0 0  0   1
0 1  0   1
1 0  0   1
1 1  1   0

 Applications of AND/NAND are

* Computation of the intersection of images
* Design of filters
* Slicing of grey scale images
ii. OR/NOR:
 Truth table of OR and NOR:

A B OR NOR
0 0  0   1
0 1  1   0
1 0  1   0
1 1  1   0

 Applications of OR/NOR are

* OR is used as the union operator of two images
* OR can be used as a merging operator

iii. XOR/XNOR:
 Truth table of XOR and XNOR:

A B XOR XNOR
0 0  0    1
0 1  1    0
1 0  1    0
1 1  0    1

 Applications of XOR/XNOR are

* Change detection
* XOR can be used to check whether two inputs are identical, since
identical inputs give an all-zero result

iv. Invert/Logical NOT:

For grey scale values, the inversion operation is g(x, y) = 255 − f(x, y).
 Truth table:

A NOT
0  1
1  0
 Applications of the inversion operator are
* Obtaining the negative of an image
* Making features clearer to the observer
* Morphological processing
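The truth tables above map directly onto NumPy's bitwise operators applied to 0/1 arrays; the input sequences enumerate all four input combinations:

```python
import numpy as np

# Binary "images" covering every (A, B) combination from the truth tables.
a = np.array([0, 0, 1, 1], dtype=np.uint8)
b = np.array([0, 1, 0, 1], dtype=np.uint8)

print(a & b)  # AND  -> [0 0 0 1]
print(a | b)  # OR   -> [0 1 1 1]
print(a ^ b)  # XOR  -> [0 1 1 0]; all zeros when the inputs are identical
print(1 - a)  # NOT  -> [1 1 0 0]  (for grey scale, use 255 - f)
```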

14. Explain the geometric operations used in image processing.

i. Translation:
 Translation is the movement of an image to a new position.
 Assume that the point at the coordinate position X = (x, y) of matrix F is moved to a
new position X′ whose coordinate position is (x′, y′).
 The translation is represented as
x′ = x + δx
y′ = y + δy
ii. Scaling:
 Scaling means enlarging or shrinking.
 The scaling of the point (x, y) of the image F to the new point (x′, y′) of the image F′ is
x′ = x × Sx
y′ = y × Sy

where Sx and Sy are the scaling factors.

 If the scale factors are greater than 1, the object appears larger.
 If the scale factors are fractions (between 0 and 1), the object shrinks.
 If the scale factors are equal, the scaling is uniform, also called isotropic scaling;
unequal scale factors give differential (anisotropic) scaling.
iii. Mirror or reflection:
 Reflection produces the mirror image of the object, as in a plane mirror.
 It is useful in creating an image in the desired order and for making comparisons.
 Reflection about both axes together can also be described as rotation by 180°.

 Reflection about the x-axis: x′ = x, y′ = −y

 Reflection about the y-axis: x′ = −x, y′ = y

 Reflection about the line y = x: x′ = y, y′ = x

 Reflection about the line y = −x: x′ = −y, y′ = −x

iv. Shearing:
 A transformation that produces a distortion of shape; it can be applied
in either the x-direction or the y-direction.
 In this transformation, parallel and opposite sides of the object remain parallel.
 An x-shear with shear factor shx is given as
x′ = x + shx·y, y′ = y

 A y-shear with shear factor shy is given as
x′ = x, y′ = y + shy·x
v. Rotation:
 An image can be rotated by various degrees such as 90°, 180°, or 270°, or by an
arbitrary angle θ.
 Rotation about the origin is given as
x′ = x cos θ − y sin θ
y′ = x sin θ + y cos θ

 θ is the angle of rotation with respect to the x-axis; a positive value of θ means

counter-clockwise rotation, otherwise the rotation is clockwise.

vii. Affine transform:

 Maps the pixel at the coordinates (x, y) to a new coordinate position, given
as a pair of transformation equations.
 Mathematically described as
x′ = Tx(x, y)
y′ = Ty(x, y)

where Tx and Ty are polynomials in x and y.

 The linear case, x′ = a0x + a1y + a2 and y′ = b0x + b1y + b2, gives an affine transform.

viii. Inverse transformation:

 Restores the transformed object to its original form and position.
 The inverse (backward) transform is given by the inverse of the corresponding
transformation matrix; for example, a rotation by θ is undone by a rotation by −θ,
and a translation by (δx, δy) by a translation by (−δx, −δy).

ix. 3D transforms:
 Medical images such as computerized tomography (CT) and magnetic
resonance imaging (MRI) images are three-dimensional images.
 To apply translation, rotation, and scaling to a 3D image, 3D
transformations are required.
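The 2D transformation equations above can be sketched by applying them to a single point; the point coordinates, offsets, scale factors, and angle are arbitrary example values:

```python
import math

x, y = 2.0, 1.0  # an example point

# Translation by (delta_x, delta_y) = (3, 4)
tx, ty = x + 3, y + 4

# Uniform (isotropic) scaling with Sx = Sy = 2
sx_, sy_ = x * 2, y * 2

# Counter-clockwise rotation by theta = 90 degrees about the origin
theta = math.radians(90)
rx = x * math.cos(theta) - y * math.sin(theta)
ry = x * math.sin(theta) + y * math.cos(theta)

print((tx, ty), (sx_, sy_), (round(rx, 6), round(ry, 6)))
```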


15. What is the need for interpolation techniques?

 Affine transforms produce pixels in the resultant image that cannot be fitted
directly, as some of the computed coordinates are non-integers and often go beyond
the acceptable range. This results in gaps (or holes) and issues related to the number
of pixels and the range, so interpolation techniques are required to solve these issues.
 Interpolation is the method of calculating the expected value of a function from the
known pixels.
 Some popular interpolation techniques are:

1. Nearest neighbour technique (zero order)

 This technique determines the closest pixel and assigns its value to
the pixel in the new image matrix; that is, the brightness of a pixel
is set equal to that of its closest neighbour.
 This may result in pixel blocking and can degrade the resulting
image, which may appear spatially distorted. These distortions
are called aliasing.

2. Bilinear technique (first order)

 The four neighbours of the transformed original pixel that
surround the new pixel are obtained and used to calculate the
new pixel value.
 Linear interpolation is used in both directions, with weights
assigned based on proximity.
 The process then takes the weighted average of the brightness
of the four pixels that surround the pixel of interest:
g(x, y) = (1 − a)(1 − b) f(x', y') + (1 − a) b f(x', y' + 1)
+ a (1 − b) f(x' + 1, y') + a b f(x' + 1, y' + 1)
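The weighted-average formula above can be sketched as a small function; f is a 2 × 2 example image, (x′, y′) is the integer base position, and (a, b) are the fractional offsets:

```python
def bilinear(f, x, y):
    # Bilinear interpolation of image f at non-integer position (x, y).
    xp, yp = int(x), int(y)  # x', y': the integer base coordinates
    a, b = x - xp, y - yp    # fractional offsets toward the next row/column
    return ((1 - a) * (1 - b) * f[xp][yp] + (1 - a) * b * f[xp][yp + 1]
            + a * (1 - b) * f[xp + 1][yp] + a * b * f[xp + 1][yp + 1])

# A 2x2 example image; the centre point is the average of all four neighbours.
f = [[10, 20], [30, 40]]
print(bilinear(f, 0.5, 0.5))  # 25.0
```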


3. Bicubic technique (second order)

 It uses a neighbourhood of 16 pixels, fitting cubic polynomials in
both directions to the 16 pixels of the transformed original
matrix to obtain the new image pixel.
 This technique is very effective and produces images that
are very close to the original.

16. Explain the set operations.

 An image can be visualized as a set.

 A binary image can be visualized as a set A, where A = {(0, 0), (0, 2), (2, 2)};
the listed coordinates are the pixels with value 1.
 Set operators can then be applied to such sets to get results that are
useful for image analysis.
The complement of set A is the set of pixels that do not belong to A:
Ac = {w : w ∉ A}

The reflection of a set B is defined as
B̂ = {w : w = −b, for b ∈ B}

The union of two sets A and B can be represented as
C = A ∪ B
where the pixel c belongs to A, to B, or to both.

The intersection of two sets is given as
C = A ∩ B
where the pixel c belongs to both A and B.

The difference can be expressed as A − B = {w : w ∈ A, w ∉ B}, which is equivalent to A ∩ Bc.


 Morphology is a collection of operations based on set theory, used to accomplish various tasks

such as extracting boundaries, filling small holes present in the image, and removing
noise present in the image.
 Mathematical morphology is a very powerful tool for analyzing the shapes of the objects
that are present in the images.
 Dilation is one of the two basic operators, applied to binary as well as grey scale images.
• It gradually increases the boundaries of the region, while the small
holes that are present in the images become smaller.
• Let us assume that A and B are sets of pixel coordinates. The dilation
of A by B is denoted as
A ⊕ B = {(x, y) + (u, v) : (x, y) ∈ A, (u, v) ∈ B}
where (x, y) ranges over the set A and (u, v) over the set B. The
coordinates are added and the union is carried out
to create the resultant set.

17. Consider the following binary image. Let the structuring element S be [1 1] with
coordinates {(0, 0), (0, 1)}. Show the results of the dilation and erosion operations.

Solution: The image F can be written as

F = {(0, 2), (1, 2), (2, 1), (2, 2)}
S = {(0, 0), (0, 1)}
The dilation operation is done as follows:
First add the coordinates (0, 0) of S to all the coordinate points of the image set F,
followed by the second point of the set S.
F ⊕ S = {(0, 2), (1, 2), (2, 1), (2, 2)} ∪ {(0, 3), (1, 3), (2, 2), (2, 3)}
Removing the repetitions, the union of these sets gives the dilation:
F ⊕ S = {(0, 2), (0, 3), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3)}
For the erosion, first subtract the coordinates (0, 0) of S from all the coordinate
points of the image set F, followed by the second point of the set S:

{(0, 2), (1, 2), (2, 1), (2, 2)} and {(0, 1), (1, 1), (2, 0), (2, 1)}
The erosion is the intersection of these sets. The only common element gives
F ⊖ S = {(2, 1)}
In general, if the coordinates are (x, y) and (s, t), the result is (x + s, y + t) for dilation and
(x − s, y − t) for erosion.
The result of this numerical calculation is shown in Fig. (a).
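The worked example can be reproduced with set comprehensions. Note that this sketch uses the standard translate-and-test definition of erosion (x is kept if every shift of it by S stays inside F), which coincides with the subtract-and-intersect procedure used above for this example:

```python
# Image and structuring element from the worked example above.
F = {(0, 2), (1, 2), (2, 1), (2, 2)}
S = {(0, 0), (0, 1)}

def dilate(f, s):
    # F dilated by S: add every element of S to every element of F, then union.
    return {(x + u, y + v) for (x, y) in f for (u, v) in s}

def erode(f, s):
    # F eroded by S: keep (x, y) only if all its S-translates lie in F.
    return {(x, y) for (x, y) in f if all((x + u, y + v) in f for (u, v) in s)}

print(sorted(dilate(F, S)))
print(sorted(erode(F, S)))  # [(2, 1)]
```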

18. What are the mean, median, mode, standard deviation, and variance of an image?

Statistical operations can be applied to an image to get desired results such as
manipulation of brightness and contrast.

i) Mean: The mean is the average of all the values in the sample (population) and is
denoted as
μ = (1/N) Σ xi

The overall brightness of a grey scale image is measured using the mean.
It is calculated by summing all the pixel values of the image and
dividing by the number of pixels in the image.


ii) Median: The median is the value that divides the given data X into two equal
halves, with half of the values lower than the median and the other half higher.

iii) Mode: The mode is the value that occurs most frequently in the dataset. The
procedure for finding the mode is to calculate the frequencies of all the values in
the data.
Based on the mode, the data is classified as unimodal, bimodal, or trimodal;
a dataset that has two modes is called bimodal.
iv) Percentile: The pth percentile is the value below which p per cent of the data fall.
v) Standard deviation and variance: The commonly used measures of dispersion are
the variance and the standard deviation.
vi) The standard deviation is the average distance from the mean of the dataset to each
point. The formula for the standard deviation is
σ = √((1/N) Σ (xi − μ)²)

vii) The variance is another measure of the spread of the data. It is the square of the
standard deviation (σ²).
viii) Entropy: A measure of the amount of information (disorder) present in the image.

The entropy can be calculated by assuming that the pixels are totally
uncorrelated. Entropy also indicates the average global information content. Its
unit is bits per pixel. It can be computed using the formula
E = −Σ pi log2(pi)
where pi is the probability of the ith grey level.
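These statistics can be sketched with the standard library; the pixel values below are the 3 × 3 example image from the Question Bank statistics problem:

```python
import math
import statistics
from collections import Counter

# Pixel values of the 3x3 example image, flattened row by row.
pixels = [130, 11, 67, 10, 10, 50, 80, 90, 100]

mean = statistics.mean(pixels)            # overall brightness
median = statistics.median(pixels)        # middle value of the sorted data
mode = statistics.mode(pixels)            # most frequent value
variance = statistics.pvariance(pixels)   # population variance
std_dev = math.sqrt(variance)             # standard deviation = sqrt(variance)

# Entropy: E = -sum(p_i * log2(p_i)) over the grey-level probabilities.
n = len(pixels)
entropy = -sum((c / n) * math.log2(c / n) for c in Counter(pixels).values())

print(mean, median, mode, round(std_dev, 2), round(entropy, 3))
```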


19. Explain the convolution and correlation operations on an image.

i) Convolution is the process of flipping the mask, shifting it across the image, and
summing the products of the mask coefficients and the underlying image values
to give the output at the centre position.
 Consider the convolution of a sequence F, whose dimension is 1 × 5,
with a kernel or template T, whose dimension is 1 × 3.
 Let F = {0, 0, 2, 0, 0} and the kernel be {7, 5, 1}.

Step 1: The template or mask is first rotated by 180°. The rotated version of the
original mask [7 5 1] is the convolution template of dimension 1 × 3 with
values {1, 5, 7}.

Step 2: Zero padding is carried out, i.e., zeros are appended at both ends of the
sequence (the added zeros are underlined in the original notes):

0 0 | 0 0 2 0 0 | 0 0

The rotated mask [1 5 7] then slides across the padded sequence, and at each
position the products are summed.

The output produced is [0 0 14 10 2 0 0].

ii) Correlation is very useful in recognizing the basic shapes in the
image.
 Correlation reduces to convolution if the kernel is symmetric.
 The difference between correlation and convolution is that in
correlation the mask or template is applied directly, without any prior rotation.
Step 1: Zero padding is carried out as before, and the mask [7 5 1] is used as-is:

0 0 | 0 0 2 0 0 | 0 0

Step 2: The mask slides across the padded sequence, and at each position the
products are summed.

The output produced is [0 0 2 10 14 0 0].
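Both 1D results above can be reproduced with NumPy, which handles the kernel flip and the padding internally:

```python
import numpy as np

# The sequence and kernel from the worked example above.
f = np.array([0, 0, 2, 0, 0])
t = np.array([7, 5, 1])

conv = np.convolve(f, t)               # convolution: kernel is flipped internally
corr = np.correlate(f, t, mode="full") # correlation: kernel applied without rotation

print(conv)  # [ 0  0 14 10  2  0  0]
print(corr)  # [ 0  0  2 10 14  0  0]
```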


Question Bank
1) Define connectivity.
2) What is the difference between 8-connectivity and m-connectivity?
3) Define the Euclidean, D4, and D8 distances.
4) What are the image arithmetic operations?
5) List some of the applications of image arithmetic operations.
6) What is the need for interpolation techniques?
7) What is the significance of image entropy?
8) What is the difference between image convolution and correlation?
9) What is the role of a data structure in imaging applications?
10) Consider two pixels p and q whose coordinates are (0, 0) and (6, 3).
Calculate the De, D4, and D8 distances between the pixels p and q.
11) Consider the following adjacent regions:
R1:        R2:
1100       0100
0000       1000
1111       0111
1100       0100
What is the connectivity between these two regions?
12) For the values a1 = −1 and a2 = 1, check whether the median operation is a linear
operation or not.
13) Consider the following images of uint8 type:
F1 = 10  40  30
     40 100  90
     90  80  70
F2 = 40 140  90
    140 100  90
     90  80 190
Perform the image addition, multiplication, and division operations. If the image type is
changed to uint16, will the results change?
14) Find the convolution and correlation of the following streams of data:
A = {1 7 9 6} and {1 3 5}
B = {1 2 3 4} and {1 1}
15) Consider the following image:
F = 130  11  67
     10  10  50
     80  90 100
What are the mean, median, mode, standard deviation, and variance of the image?
