Image Processing Lab Manual 2017


SEMESTER 3

a) Image Processing:
Introduction: Fundamental Steps in Image Processing, Elements of Image Processing Systems.

Digital Image Representation - Gray Scale and Color Images.

Image Sampling and Quantization – Uniform & Non-Uniform.

Relationships between Pixels – Neighbours, Connectivity, Distance Measures, Arithmetic & Logic
Operations.

Basic Transformations – Translation, Rotation, Concatenation and Perspective Transformation.

Two Dimensional Orthogonal Transforms - DFT, FFT, WHT, Haar transform, KLT, DCT.

Image Enhancement - Filters in spatial and frequency domains, histogram-based processing,


Homomorphic filtering.

Image Restoration - Degradation Model, Discrete Formulation, Circulant and Block Circulant Matrices,
Restoration using Inverse Filtering, Removal of blur caused by uniform linear motion, LMS Wiener
Filter.

Image Compression – Lossless and Lossy Coding, Transform Coding, JPEG, MPEG.

Edge detection – Detection of point, line, discontinuities. Gradient Operators, Laplacian, LoG Filters,
Global Processing via Hough Transform.

Mathematical morphology - Binary Morphology, Dilation, Erosion, Opening and Closing, Duality
Relations, Gray Scale Morphology, Hit-and-Miss Transform, Thinning and Shape Decomposition.

Computer Tomography - Radon transform, Back-Projection Operator, Fourier-slice theorem, CBP and
FBP methods.

Textbooks / References
1. Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, Pearson, 2009.
2. Anil K. Jain, Fundamentals of Digital Image Processing, Prentice Hall of India, 1989.
3. W. K. Pratt, Digital Image Processing, Prentice Hall, 1989.
4. M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis and Machine Vision, Thomson, 2001.
Experiment No. 1

Aim: Write a program to perform basic arithmetic and logical operations on images.

Software Tools: MATLAB

Theory :

Image arithmetic applies one of the standard arithmetic operations or a logical operator to two
or more images. The operators are applied in a pixel-by-pixel way, i.e. the value of a pixel in the
output image depends only on the values of the corresponding pixels in the input images.
Hence, the images must be of the same size. Although image arithmetic is the simplest form
of image processing, it has a wide range of applications. A main advantage of arithmetic
operators is that the process is very simple and therefore fast.

Logical operators are often used to combine two (mostly binary) images. In the case of integer
images, the logical operator is normally applied in a bitwise way.

Various arithmetic operations, such as subtraction and averaging, as well as logical operations,
such as NOT, AND, and OR, are performed on images.

A. Arithmetic operations on images:

1. Image Addition:
Adding two images: H(x,y) = I(x,y) + J(x,y)
Applications: Brightening an image, Image Compositing, (Additive) Dissolves

2. Image Subtraction:
Subtracting two images: H(x,y) = I(x,y) - J(x,y)
Applications
Motion Detection, Frame Differencing for Object Detection, Digital Subtraction
Angiography

3. Image Multiplication:
Multiplying two images: H(x,y) = I(x,y) × J(x,y)
Applications:
Masking image parts

4. Image Division:
Dividing two images: H(x,y) = I(x,y) / J(x,y)
Applications:
Masking image parts
B. Logical operations on images:
1. And:
H(x,y) = I(x,y) AND J(x,y)

2. Or:
H(x,y) = I(x,y) OR J(x,y)

3. Not:
H(x,y) = NOT( I(x,y))

Applications of logical operations


Masking, Antialiasing, Blue Screening.

Algorithm:

1. Read 1st input image.


2. Read 2nd input image.
3. Add two images.
4. Subtract two images.
5. Multiply two images
6. Divide two images
7. Perform ANDing on two images
8. Perform ORing on two images
9. Perform NOT operation on any one of the images
10. Display all the resultant images.

Hint: Use of commands in image processing toolbox: 'imread', 'imshow'.
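The pixel-wise operations above can be sketched as follows. This is a minimal illustration in NumPy rather than MATLAB (the lab program itself is to be written by students in MATLAB); the two small arrays are made-up stand-ins for images read with 'imread', and the explicit clipping mimics the saturating arithmetic of MATLAB's image-type functions.

```python
import numpy as np

# Two tiny 8-bit "images"; in MATLAB these would come from imread
I = np.array([[100, 200], [50, 250]], dtype=np.uint8)
J = np.array([[60, 100], [50, 10]], dtype=np.uint8)

# Arithmetic operations: compute in a wider type, then clip back to [0, 255]
add = np.clip(I.astype(np.int16) + J, 0, 255).astype(np.uint8)
sub = np.clip(I.astype(np.int16) - J, 0, 255).astype(np.uint8)
mul = np.clip(I.astype(np.int32) * J, 0, 255).astype(np.uint8)
div = np.clip(I.astype(np.float64) / np.maximum(J, 1), 0, 255).astype(np.uint8)

# Logical operations are applied bitwise on integer images
and_img = np.bitwise_and(I, J)
or_img = np.bitwise_or(I, J)
not_img = np.bitwise_not(I)   # for uint8 this equals 255 - I
```

Note how addition saturates at 255 (e.g. 200 + 100 clips to 255) rather than wrapping around; without the widening and clipping, uint8 arithmetic would overflow.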

Program:

To be performed by students.

Expected Output:

Display of Input images and output images of each operation

Conclusion:

1. Basics of image processing


2. Different types of images
3. Significance of basic arithmetic and logical operations on images
Detailed MATLAB Code with explanation:
Experiment No. 2

Aim: Write a program to perform following gray level transformation

a) Negative transformation
b) Log transformation
c) Power law transformation
d) Contrast stretching

Software Tools: MATLAB

Theory :

Enhancing an image provides better contrast and more detail compared to the non-enhanced
image. Image enhancement has many applications: it is used to enhance medical images, images
captured in remote sensing, satellite images, etc. The transformation function is given below:

s = T(r)

where r is the pixel value of the input image and s is the pixel value of the output image. T is a
transformation function that maps each value of r to a value of s. Image enhancement can
be done through gray level transformations.

There are three basic gray level transformations:


1. Linear

2. Logarithmic
3. Power – law
The overall graph of these transformations is shown below:
1. Identity transformation:
The identity transformation is represented by a straight line. Each value of the input image is
mapped directly to the same value of the output image, so the output image is identical to the
input image.

2. Negative transformation:
This is the inverse of the identity transformation. In negative transformation, each value of the
input image is subtracted from L-1 and mapped onto the output image. The result looks
like this:

(a) (b)
Figure: Negative Transformation: a) Input Image b)Output images

In this case the following transformation is applied:

s = (L – 1) – r

Since the input image of Einstein is an 8 bpp image, the number of gray levels in this image is
256. Putting L = 256 into the equation, we get

s = 255 – r

So the lighter pixels become dark and the darker pixels become light, resulting in the image
negative.
It has been shown in the graph below.

3. Logarithmic transformations:
Logarithmic transformation comprises two types of transformation: log transformation
and inverse log transformation.

Log transformation:
The log transformation is defined by the formula
s = c log(r + 1)
where s and r are the pixel values of the output and the input image and c is a constant. The
value 1 is added to each pixel value of the input image because if there is a pixel
intensity of 0 in the image, then log(0) is undefined. So 1 is added, to make the minimum
argument of the logarithm at least 1.

During log transformation, the dark pixels in an image are expanded compared to the higher
pixel values, while the higher pixel values are compressed. This results in the following image
enhancement. The value of c in the log transform adjusts the kind of enhancement obtained.

(a) (b)
Figure: Log Transformation: a) Input Image b)Output images

The inverse log transform is opposite to log transform.


4. Power – Law transformations:
Power-law transformations include the nth power and nth root transformations. These
transformations are given by the expression:
s = c × r^γ
The symbol γ is called gamma, due to which this transformation is also known as gamma
transformation. Varying the value of γ varies the enhancement of the image. Different
display devices / monitors have their own gamma correction, which is why they display the same
image at different intensities. For example, the gamma of a CRT lies between 1.8 and 2.5, which
means the image displayed on a CRT is dark.

Gamma correction applies the inverse exponent:
s = c r^(1/γ)
For a CRT with γ = 2.5, this becomes s = c r^(1/2.5).
The same image but with different gamma values has been shown here.

For example

(a) (b) (c)


Figure: Power law Transformation: a) Gamma = 10 b) Gamma = 8 c) Gamma = 6
5. Contrast stretching :

The contrast of an image is a measure of its dynamic range, or the "spread" of its histogram.
The dynamic range of an image is defined to be the entire range of intensity values contained
within an image or, put more simply, the maximum pixel value minus the minimum pixel
value. Contrast stretching (often called normalization) is a simple image enhancement
technique that attempts to improve the contrast in an image by 'stretching' the range of
intensity values it contains to span a desired range of values.
Algorithm:

1. Read input image to be enhanced.


2. Perform negative transformation using respective equation.
3. Perform log transformation using respective equation.
4. Perform power law transformation using respective equation.
5. Perform contrast stretching using respective equation.
6. Display input image
7. Display all output result images
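The four transformations can be sketched as below, again in NumPy purely for illustration (the lab program is written in MATLAB). The constants c are chosen here simply to normalise the output into [0, 255], and γ = 0.4 is an arbitrary example value.

```python
import numpy as np

L = 256
r = np.arange(L, dtype=np.float64)          # every possible 8-bit gray level

negative = (L - 1) - r                      # s = (L-1) - r
c = (L - 1) / np.log(L)                     # scale factor so s stays in [0, 255]
log_t = c * np.log(1 + r)                   # s = c log(1 + r)
gamma = 0.4                                 # example: gamma < 1 brightens the image
power = (L - 1) * (r / (L - 1)) ** gamma    # s = c r^gamma

def stretch(img, lo=0.0, hi=255.0):
    """Contrast stretching: map [min, max] of the image linearly onto [lo, hi]."""
    r_min, r_max = img.min(), img.max()
    return (img - r_min) * (hi - lo) / (r_max - r_min) + lo

sample = np.array([50.0, 100.0, 150.0])     # a narrow-range "image"
stretched = stretch(sample)                 # -> [0.0, 127.5, 255.0]
```

Applying these arrays as lookup tables (one output level per input level) is exactly how the point transformations are implemented in practice.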

Program:

To be performed by students.

Expected Output:

Display of Input images and output images of each transformation

Conclusion:

1. Image enhancement
2. Different Gray level transformations and their applications
Detailed MATLAB Code with explanation:
Experiment No. 3

Aim: Write a program to plot histogram of an image and to perform histogram Equalization.

Software Tools: MATLAB

Theory :

Histogram equalization automatically determines a transformation function that seeks to
produce an output image with a uniform histogram.
Linear stretching is good, but it does not change the shape of the histogram.
In applications where we need a flat histogram, linear stretching fails. To get a flat
histogram, we go for equalization.
The transfer function should satisfy the following conditions:
1) T(r) should be a single-valued, monotonically increasing function.
2) 0 <= T(r) <= 1 for 0 <= r <= 1

Let Pr(r) be the probability density function of the input gray level and Ps(s) be the probability
density function of the output gray level. As per probability theory,

Ps(s) = Pr(r) |dr/ds| ..............(1)

Histogram equalization uses the CDF of the image as the transformation:

s = T(r) = ∫ from 0 to r of Pr(w) dw

Differentiating both sides with respect to r gives

ds/dr = Pr(r)

Substituting in eq. (1):

Ps(s) = Pr(r) · (1 / Pr(r)) = 1, for 0 <= s <= 1

i.e. the output gray levels are uniformly distributed.

Algorithm:

1. Read input image


2. For an input intensity rk, calculate nk, the number of times that intensity occurs.
3. Calculate the probability of an input gray level rk: Pr(rk) = nk / n.
4. Plot the histogram.
5. Calculate the cumulative distribution function (CDF) of the input gray levels.
6. Calculate sk = (L-1)·CDF(rk) and round off the value.
7. Map the old gray level value to the new equalized gray level.
8. Plot the equalized histogram.
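The steps above map each gray level through the rounded, scaled CDF. A compact NumPy sketch for illustration (the lab program itself is in MATLAB; the 2×2 image is a toy example):

```python
import numpy as np

def hist_equalize(img, L=256):
    """Histogram equalization: s_k = round((L-1) * CDF(r_k))."""
    hist = np.bincount(img.ravel(), minlength=L)   # n_k: occurrences of each level
    p = hist / img.size                            # Pr(r_k)
    cdf = np.cumsum(p)                             # cumulative distribution function
    s = np.round((L - 1) * cdf).astype(np.uint8)   # equalized level for each r_k
    return s[img]                                  # map every pixel old -> new

img = np.array([[52, 55], [61, 59]], dtype=np.uint8)
eq = hist_equalize(img)   # four clustered levels spread out over [0, 255]
```

The lookup `s[img]` performs the mapping of step 7 for every pixel at once; plotting `hist` before and after shows the flattening effect.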

Program:

To be performed by students.

Expected Output:

Display of Input images, plot histogram of that image, display histogram equalized image and
plot histogram of that equalized image

Conclusion:

1. Significance of image histogram


2. Role of histogram equalization in image enhancement
Detailed MATLAB Code with explanation:
Experiment No. 4

Aim: Write a program to perform bit Plane Slicing

Software Tools: MATLAB

Theory :

Instead of highlighting gray-level ranges, highlighting the contribution made to total image
appearance by specific bits might be desired. Suppose that each pixel in an image is
represented by 8 bits. Imagine the image is composed of eight 1-bit planes, ranging from bit
plane 0 (LSB) to bit plane 7 (MSB).

In terms of 8-bits bytes, plane 0 contains all lowest order bits in the bytes comprising the pixels
in the image and plane 7 contains all high order bits.

Separating a digital image into its bit planes is useful for analyzing the relative importance
played by each bit of the image, implying, it determines the adequacy of numbers of bits used
to quantize each pixel, useful for image compression.

In terms of bit-plane extraction for an 8-bit image, the binary image for bit plane 7 is
obtained by processing the input image with a thresholding gray-level transformation function
that maps all levels between 0 and 127 to one level (e.g. 0) and maps all levels from 128 to 255
to another (e.g. 255).

Algorithm:

1. Read input image.


2. Convert the intensity value at location (x,y) into binary.
3. Separate each bit from the binary representation of the intensity value and store it in the corresponding bit plane.
4. Do steps 2 and 3 for all pixels in the image (i.e. all rows and all columns).
5. Display input and output images.
6. Combine some planes and reconstruct the image.
7. Display the reconstructed image.
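Bit extraction is just a shift and a mask per plane. A NumPy sketch of the algorithm (MATLAB's bitget serves the same purpose; the 2×2 image is a made-up example):

```python
import numpy as np

img = np.array([[131, 200], [7, 255]], dtype=np.uint8)

# Steps 2-4: plane k holds bit k of every pixel (plane 0 = LSB, plane 7 = MSB)
planes = [(img >> k) & 1 for k in range(8)]

# Steps 6-7: reconstruct from the two most significant planes only
recon = ((planes[7].astype(int) << 7) + (planes[6].astype(int) << 6)).astype(np.uint8)
```

Reconstruction from planes 7 and 6 keeps the coarse structure of the image while discarding the fine intensity detail carried by the lower planes.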

Program:

To be performed by students.

Expected Output:

Input images, Bit plane images and reconstructed images.

Figure: Results of considering plane 0,1,2,3,4,5,6,7

Figure: Images reconstructed using planes 7 and 6, and using planes 4, 5, 6 and 7

Conclusion:

1. Bit plane slicing.


2. Significance and applications of bit plane slicing.
Detailed MATLAB Code with explanation:
Experiment No. 5

Aim: Write a program to perform spatial domain filtering


a) Low pass filter
b) High pass filter

Software Tools: MATLAB

Theory :

Filtering is a technique for modifying or enhancing an image. In a spatial domain operation, or
filtering, the processed value for the current pixel depends on both the pixel itself and the
surrounding pixels. Hence filtering is a neighborhood operation, in
which the value of any given pixel in the output image is determined by applying some
algorithm to the values of the pixels in the neighborhood of the corresponding input pixel. A
pixel's neighborhood is some set of pixels, defined by their locations relative to that pixel.

Linear filtering of an image is accomplished through an operation called convolution.


Convolution is a neighborhood operation in which each output pixel is the weighted sum of
neighboring input pixels. The matrix of weights is called the convolution kernel, also known as
the filter. A convolution kernel is a correlation kernel that has been rotated 180 degrees.

Following steps are used to compute the output pixel at position (x,y):
1. Rotate the correlation kernel 180 degrees about its center element to create a convolution
kernel.
2. Slide the center element of the convolution kernel so that it lies on top of the (x,y) element
of A.
3. Multiply each weight in the rotated convolution kernel by the pixel of A underneath.
4. Sum the individual products from step 3.

For a 3×3 mask the weights are

w(-1,-1)  w(-1,0)  w(-1,1)
w(0,-1)   w(0,0)   w(0,1)
w(1,-1)   w(1,0)   w(1,1)

and the filtered image is

g(x, y) = Σ (s = -a to a) Σ (t = -b to b) w(s, t) f(x + s, y + t)

or, in compact notation, g = w ∗ f.
Types of filters:

Low pass filter:

The basic idea is replace each pixel by the average of the pixels in a square window surrounding the
pixel. This is also called as average filtering or smoothing operation. The results can be observed as
shown in figure:

Figure: From top row to bottom: a) Original Image b) Filtering with filter size 3×3 c) Filter size
5×5 d) Filter size 9×9 e) Filter size 15×15 f) Filter size 35×35

Advantages:

- The irrelevant details, i.e. the pixel regions which are smaller than the size of the mask, are filtered out.
- False contours in the image get smoothened out.

Disadvantages:

- Larger objects get smeared and become blob-like.
- If the image is padded with a black border, the smoothing operation can blend this black color within the image boundary if a large mask is used. This problem can be avoided by doing the averaging with a truncated filter mask for the pixels near the image border.
- Edges get blurred.
High Pass Filter:

These are also called sharpening filters. The main idea of sharpening is to enhance line
structures and other details in an image. Thus, the enhanced image contains the original image
with the line structures and edges in the image emphasized.

The first order derivative at an image location is approximated using intensity differences
around a pixel. Examples of operators used for computing intensity differences are the Roberts
cross-gradient, Prewitt, and Sobel operators. The Roberts cross-gradient operators use an even-
sized mask and hence lack symmetry. The Prewitt operator uses a 3×3 mask, but performs poorly
in the presence of noise. The most commonly used gradient operator is the Sobel operator: it
provides smoothing against noise while performing the differentiation.

Convolving with the Sobel or Roberts masks approximates the x and y components of the gradient
using linear operations. Computation of the gradient itself is a non-linear operation, because
computing the gradient magnitude involves taking the square root of the sum of squares, and
computing the gradient angle involves taking the arctangent.

Behavior of the 1st and 2nd order derivatives:

- Where f(x) is constant, both the 1st and the 2nd order derivative are zero.
- Where f(x) is a ramp or step, the 1st order derivative is non-zero at the onset and end of the ramp, and non-zero along the ramp. The 2nd order derivative is non-zero at the onset and end of the ramp but zero along the ramp, and its sign changes at the onset and end of the ramp. For a step transition, a line joining these two values crosses the horizontal axis (i.e. crosses zero) midway between the two extremes.

Formulation: to compute the 1st derivative at location x we subtract the value of the function at that location from the next point,

∂f/∂x = f(x+1) - f(x)

To compute the 2nd derivative at location x we use the previous and the next points in the computation,

∂²f/∂x² = f(x+1) + f(x-1) - 2 f(x)
1st order derivative (gradient):

The gradient of f is the vector ∇f = [gx gy]ᵀ = [∂f/∂x ∂f/∂y]ᵀ. The magnitude of this vector is M(x,y) = √(gx² + gy²), commonly approximated by |gx| + |gy|.

Roberts cross-gradient operator masks:

gx:  -1  0        gy:   0 -1
      0  1             1  0

Prewitt operator masks:

gx:  -1 -1 -1     gy:  -1  0  1
      0  0  0          -1  0  1
      1  1  1          -1  0  1

Sobel operator masks:

gx:  -1 -2 -1     gy:  -1  0  1
      0  0  0          -2  0  2
      1  2  1          -1  0  1

The Sobel masks approximate gx and gy using a 3×3 neighborhood. The Sobel operator has the added advantage that it can do noise smoothing (because of the factor 2 in the center term) while detecting the edges.

The formulated masks give a zero response in areas of constant intensity. The masks compute the gx and gy components; the gradient magnitude M(x,y) is then computed using their absolute values, or by squaring and summing them. These latter steps constitute non-linear operations. Isotropic properties are preserved only for rotational increments of 90°.

The gradient gives a lower response than the Laplacian to noise and fine details, and has a stronger average response in areas of significant intensity transitions. Hence the gradient is better suited to enhance prominent edges in the image.

2nd order derivative (Laplacian):

The Laplacian operator ∇²f is a 2nd order derivative. The Laplacian for two variables is formulated as

∇²f = ∂²f/∂x² + ∂²f/∂y²

In discrete form,

∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4 f(x, y)

The filter mask to implement the Laplacian:

 0  1  0
 1 -4  1
 0  1  0

This mask gives a zero response in areas of constant intensity. The Laplacian does not involve computing absolute values, squares or square roots; it is a linear operator.

The Laplacian operator is isotropic (rotation invariant): the final result is the same whether we first rotate the image and then apply the filter, or apply the filter first and then rotate the image.

The Laplacian's response is not as strong as the gradient's in areas of intensity ramps and steps, but it is much more sensitive to noise and fine details. Hence the Laplacian is better suited to enhance fine details in the image.

The results of high pass filtering can be observed in figure below:

Applications:

1. Image denoising
2. Image enhancement (i.e., “make the image more vivid”)
3. Edge detection, etc.

Algorithm:

1. Read input image.


2. For each pixel compute the filtered value according to the filter/mask applied.
3. Do this for whole image.
4. Display the input image and filtered images.
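The algorithm can be sketched directly in NumPy (the lab program is in MATLAB), with a 3×3 averaging mask as the low pass filter and the Laplacian mask as the high pass filter. The 5×5 test image with one bright pixel is a made-up example, and the border is zero-padded for simplicity:

```python
import numpy as np

def filter2d(img, mask):
    """Neighborhood filtering with a symmetric mask (zero padding at the border)."""
    m = mask.shape[0] // 2
    padded = np.pad(img.astype(float), m)
    out = np.zeros(img.shape)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            out[x, y] = np.sum(mask * padded[x:x + 2*m + 1, y:y + 2*m + 1])
    return out

low_pass = np.ones((3, 3)) / 9.0                # averaging (smoothing) mask
laplacian = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)  # Laplacian (sharpening) mask

img = np.full((5, 5), 10.0)
img[2, 2] = 100.0            # single bright pixel on a flat background

smooth = filter2d(img, low_pass)    # bright pixel is averaged down
edges = filter2d(img, laplacian)    # zero in flat areas, strong response at the spike
```

The low pass output spreads the spike over its neighborhood, while the Laplacian output is zero in the flat interior and responds strongly only at the intensity transition, matching the behavior described in the theory.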

Program:

To be performed by students.

Expected Output:

Input images, Output images of low pass and high pass filtering.

Conclusion:

1. Filtering in image processing


2. Low pass and high pass filters, their significance and applications
Detailed MATLAB Code with explanation:
Experiment No. 6

Aim: Write a program to perform Morphological operations on images

Software Tools: MATLAB

Theory :

Morphology is a broad set of image processing operations that process images based on
shapes. Morphological operations apply a structuring element to an input image, creating an
output image of the same size. In a morphological operation, the value of each pixel in the
output image is based on a comparison of the corresponding pixel in the input image with its
neighbors. By choosing the size and shape of the neighborhood, you can construct a
morphological operation that is sensitive to specific shapes in the input image.
The most basic morphological operations are dilation and erosion. Dilation adds pixels to the
boundaries of objects in an image, while erosion removes pixels on object boundaries. The
number of pixels added or removed from the objects in an image depends on the size and
shape of the structuring element used to process the image. In the morphological dilation and
erosion operations, the state of any given pixel in the output image is determined by applying a
rule to the corresponding pixel and its neighbors in the input image. The rule used to process
the pixels defines the operation as a dilation or an erosion.
Dilation:

Dilation is an operation that "grows" or "thickens" objects in a binary image. The specific
manner and extent of this thickening is controlled by a shape referred to as the structuring
element.
Mathematically, dilation is defined in terms of set operations. The dilation of A by B, denoted
A ⊕ B, is defined as:

A ⊕ B = { z | (B̂)z ∩ A ≠ ∅ }

where B̂ is the reflection of B about its origin and (B̂)z is its translation by z. In other words,
the dilation of A by B is the set of all displacements z such that B̂ and A overlap by at least
one element.

Erosion:
The process is also known as “shrinking”. The manner and extent of shrinking is controlled by
a structuring element. Simple erosion is a process of eliminating all the boundary points from
an object, leaving the object smaller in area by one pixel all around its perimeter.
The erosion of A by B, denoted A Θ B, is defined as

A Θ B = { z | (B)z ⊆ A }

In other words, the erosion of A by B is the set of all structuring element origin locations where
the translated B has no overlap with the background of A.

Opening:
Opening generally smoothens the contour of an image, eliminates thin protrusions. The
opening of set A by structuring element B, denoted A◦B, is defined as
A◦B = (A Θ B) ⊕ B -(3)
In other words, opening of A by B is simply the erosion of A by B, followed by dilation of the
result by B.
The opening operation satisfies the following properties:
1) A◦B is a subset of A.
2) If C is a subset of D, then C◦B is a subset of D◦B.
3) (A ◦ B) ◦ B = A◦B.

Department of Electronics and Telecommunication Engineering Page 27



Effect of opening using a 3×3 square structuring element


Closing:
Closing smoothens sections of contours but, as opposed to opening, it generally fuses narrow
breaks and long thin gulfs, eliminates small holes, and fills gaps in the contour. The closing of
set A by structuring element B, denoted A•B, is defined as
A•B = (A ⊕ B) Θ B -(4)
In other words, closing of A by B is simply the dilation of A by B, followed by erosion of the
result by B.
The closing operation satisfies the following properties:
1. A is a subset of A•B.
2. If C is a subset of D, then C•B is a subset of D•B.
3. (A • B) • B = A • B.

Algorithm:

1. Read input image


2. Apply dilation, erosion, opening, closing.


3. Display input image and output images
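The set definitions above can be sketched from scratch for binary images (in MATLAB, imdilate, imerode, imopen and imclose do this directly). The 3×3 square structuring element and the 5×5 test image are made-up examples, and a symmetric structuring element is assumed so that the reflection B̂ can be ignored:

```python
import numpy as np

def dilate(A, B):
    """Dilation: output pixel is 1 if B, centred there, overlaps any 1 of A."""
    m = B.shape[0] // 2
    P = np.pad(A, m)
    out = np.zeros_like(A)
    for x in range(A.shape[0]):
        for y in range(A.shape[1]):
            out[x, y] = np.any(B * P[x:x + 2*m + 1, y:y + 2*m + 1])
    return out

def erode(A, B):
    """Erosion: output pixel is 1 only if B, centred there, fits entirely inside A."""
    m = B.shape[0] // 2
    P = np.pad(A, m)
    out = np.zeros_like(A)
    for x in range(A.shape[0]):
        for y in range(A.shape[1]):
            out[x, y] = np.all(P[x:x + 2*m + 1, y:y + 2*m + 1][B == 1])
    return out

B = np.ones((3, 3), dtype=np.uint8)     # 3x3 square structuring element
A = np.zeros((5, 5), dtype=np.uint8)
A[1:4, 1:4] = 1                         # 3x3 square object

opened = dilate(erode(A, B), B)         # opening: erosion followed by dilation
closed = erode(dilate(A, B), B)         # closing: dilation followed by erosion
```

For this clean square object both opening and closing return the original set; on noisy images, opening removes foreground specks smaller than B while closing fills holes smaller than B.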

Program:

To be performed by students.

Expected Output:

Input image, Output images of dilation, erosion, opening, closing.

Conclusion:

1. Morphological operations, significance and applications.


Detailed MATLAB Code with explanation:


Experiment No. 7

Aim: Write a program to perform Image compression using Block Truncation Coding (BTC)

Software Tools: MATLAB

Theory :

Block Truncation Coding (BTC) is a type of lossy image compression technique


for grayscale images. It divides the original images into blocks and then uses a quantizer to
reduce the number of grey levels in each block whilst maintaining the same mean and standard
deviation.

Using sub-blocks of 4×4 pixels gives a compression ratio of 4:1, assuming 8-bit integer values are
used during transmission or storage. Larger blocks allow greater compression ("a" and "b"
values spread over more pixels); however, quality also reduces with the increase in block size
due to the nature of the algorithm.

The image is divided into blocks of typically 4×4 pixels. For each block
the Mean and Standard Deviation of the pixel values are calculated; these statistics generally
change from block to block. The pixel values selected for each reconstructed, or new, block are
chosen so that each block of the BTC compressed image will have (approximately) the same
mean and standard deviation as the corresponding block of the original image. A two level
quantization on the block is where we gain the compression and is performed as follows:

y(i,j) = 1 if x(i,j) > x̄; y(i,j) = 0 otherwise .......................................(1)
Here x(i,j) are pixel elements of the original block and y(i,j) are elements of the compressed
block. In words this can be explained as: If a pixel value is greater than the mean it is assigned
the value "1", otherwise "0". Values equal to the mean can have either a "1" or a "0" depending
on the preference of the person or organization implementing the algorithm.
This 16-bit block is stored or transmitted along with the values of Mean and Standard
Deviation. Reconstruction is made with two values "a" and "b" which preserve the mean and
the standard deviation. The values of "a" and "b" can be computed as follows:

a = x̄ − σ √( q / (m − q) ),  b = x̄ + σ √( (m − q) / q ) .................................................(2)


where σ is the standard deviation, m is the total number of pixels in the block, and q is the
number of pixels greater than the mean x̄.
To reconstruct the image, or create its approximation, elements assigned a 0 are replaced with
the "a" value and elements assigned a 1 are replaced with the "b" value.

x̂(i,j) = a where y(i,j) = 0, and x̂(i,j) = b where y(i,j) = 1 .................................................(3)
This demonstrates that the algorithm is asymmetric in that the encoder has much more work to
do than the decoder. This is because the decoder is simply replacing 1's and 0's with the
estimated value whereas the encoder is also required to calculate the mean, standard deviation
and the two values to use.

Algorithm:

1. Read input image.


2. Divide it into 4×4 blocks.
3. For each block, compute the mean of the intensities in the block.
4. Compare each pixel under block with mean.
5. Apply eq. 1. If pixel value > mean then assign 1 in binary/quantized image, else assign 0.
6. Compute a and b using eq 2.
7. To reconstruct image; apply eq. 3.
8. Display compressed image and reconstructed image.
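The BTC equations can be sketched per block as below (NumPy for illustration; the 2×2 block is a toy example, while the manual uses 4×4 blocks, but the arithmetic is identical). Note that for a block containing only two distinct values the reconstruction is exact:

```python
import numpy as np

def btc_block(block):
    """Encode and reconstruct one block with Block Truncation Coding."""
    m = block.size
    mean = block.mean()
    sigma = block.std()                     # population standard deviation
    bits = (block > mean).astype(np.uint8)  # eq. (1): 1 above the mean, else 0
    q = int(bits.sum())                     # number of pixels above the mean
    if q == 0 or q == m:                    # flat block: every pixel equals the mean
        return np.full(block.shape, mean)
    a = mean - sigma * np.sqrt(q / (m - q))   # eq. (2): low reconstruction level
    b = mean + sigma * np.sqrt((m - q) / q)   # eq. (2): high reconstruction level
    return np.where(bits == 1, b, a)          # eq. (3): 0 -> a, 1 -> b

block = np.array([[2.0, 2.0], [2.0, 10.0]])
recon = btc_block(block)
```

By construction, the reconstructed block has the same mean and standard deviation as the original, which is the defining property of BTC.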

Program:

To be performed by students.

Expected Output:

Input image, compressed image, reconstructed image.

Conclusion:

1. Image compression
2. Block truncation coding technique.


Detailed MATLAB Code with explanation:


Experiment No. 8

Aim: Write a program to perform Vector Quantization

Software Tools: MATLAB

Theory :

Vector quantization (VQ) is a lossy data compression method. In the early days, the design of
a vector quantizer (VQ) was considered a challenging problem due to the need for multi-
dimensional integration. In 1980, Linde, Buzo, and Gray (LBG) proposed a VQ design algorithm
based on a training sequence. The use of a training sequence bypasses the need for multi-
dimensional integration.

Following are the steps of LBG algorithm:

1. Determine the number of codevectors N


2. Select N codevectors at random to be the initial codebook
3. Using the Euclidean distance measure, cluster the training vectors around each codevector
4. Compute the new set of codevectors (codebook) as the centroids of the clusters
5. Repeat Steps 3 and 4 until the codevectors no longer change.

Algorithm:

Let the codebook size be NC and the training vectors be { x(n)| n=1,…,M}

1. Let the initial codebook C={ y(i) | i = 1,…,NC} be randomly selected from { x(n)| n =
1,…,M}
2. Cluster the training vectors into NC groups G(i), i = 1,…,NC, where
G(i) = { x(k) | d(x(k), y(i)) < d(x(k), y(j)) for all j ≠ i }, and d(p, q)
denotes the distance between p and q
3. If the distortion decreases, then go to Step 4; otherwise stop

4. New y(i) = (1/|G(i)|) Σ over x(k) ∈ G(i) of x(k), where |G(i)| = the number of vectors in G(i); go to Step 2
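The LBG loop can be sketched compactly on made-up 2-D training vectors (in real use the vectors would be image blocks; the fixed iteration count and random seed are arbitrary choices for the illustration, standing in for the distortion-based stopping test):

```python
import numpy as np

def lbg(X, Nc, iters=20, seed=0):
    """LBG codebook design: repeat nearest-neighbour clustering and
    centroid update for a fixed number of iterations."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=Nc, replace=False)].astype(float)
    for _ in range(iters):
        # Cluster: assign each training vector to its nearest codevector (Euclidean)
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update: move each codevector to the centroid of its group G(i)
        for i in range(Nc):
            if np.any(labels == i):
                C[i] = X[labels == i].mean(axis=0)
    return C

# Two well-separated groups of 2-D training vectors (made-up data)
X = np.array([[0, 0], [0, 1], [1, 0],
              [10, 10], [10, 11], [11, 10]], dtype=float)
codebook = lbg(X, Nc=2)   # converges to the two group centroids
```

Each training vector is then transmitted as the index of its nearest codevector, which is where the compression comes from.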


Program:

To be performed by students.

Expected Output:

Input image, Vector quantization output

Conclusion:

1. Image compression
2. Vector quantization, significance and applications.


Detailed MATLAB Code with explanation:


Experiment No. 9

Aim: Write a program to obtain Gray level co-occurrence matrix (GLCM) for texture representation

Software Tools: MATLAB

Theory :

A statistical method of examining texture that considers the spatial relationship of pixels is the
gray-level co-occurrence matrix (GLCM), also known as the gray-level spatial dependence
matrix. The GLCM functions characterize the texture of an image by calculating how often pairs
of pixels with specific values occur in a specified spatial relationship in an image, creating a
GLCM, and then extracting statistical measures from this matrix. A GLCM is a histogram of co-
occurring greyscale values at a given offset over an image. The gray-level co-occurrence matrix
can reveal certain properties about the spatial distribution of the gray levels in the texture
image. For example, if most of the entries in the GLCM are concentrated along the diagonal, the
texture is coarse with respect to the specified offset.

Algorithm:

1. Read input image


2. Generate a GLCM with all initial entries set to 0.
3. Consider two neighboring pixels in the horizontal direction, (i,j) and (i,j+1), and get their
intensity values.
4. Increment the entry of the GLCM at the position given by the pair of intensity values obtained in step 3.
5. Repeat steps 3 and 4 for all pixels in the image.
6. Display GLCM result.
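Following the steps above, a GLCM for the horizontal offset can be built with two nested loops (NumPy sketch for illustration; MATLAB's graycomatrix does the same job, and the 3×3 image with three gray levels is a made-up example):

```python
import numpy as np

def glcm(img, levels):
    """Co-occurrence counts for the horizontal offset (0, 1):
    G[a, b] counts how often level a has level b as its right-hand neighbour."""
    G = np.zeros((levels, levels), dtype=int)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols - 1):          # last column has no right neighbour
            G[img[i, j], img[i, j + 1]] += 1
    return G

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [0, 1, 1]])
G = glcm(img, levels=3)
```

Normalising G by its total count gives the joint probabilities from which texture statistics such as contrast, energy and homogeneity are computed.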

Program:

To be performed by students.


Expected Output:

Input image, GLCM matrix

Conclusion:

1. Texture and texture analysis


2. GLCM, its need, significance and applications

Detailed MATLAB Code with explanation:

