
DIGITAL IMAGE PROCESSING

UNIT – 1
Why do we need Image Processing?
➢ To improve the pictorial information for human interpretation
1) Noise Filtering
2) Content Enhancement
a) Contrast enhancement
b) Deblurring
3) Remote Sensing
➢ Processing of image data for storage, transmission and representation for autonomous
machine perception
What is Image?
An image is a two-dimensional function f(x,y), where x and y are spatial (plane)
coordinates and the amplitude of f at any pair of coordinates (x,y) is called the intensity or gray
level of the image at that point. When x, y and the intensity values of f are all finite, discrete
quantities, the image is called a digital image.
Analog Image- An analog image is mathematically represented as a continuous range of values
that give the position and intensity.
Digitization – it is the process of converting an analog image into a digital image (digital
data).
PIXELS:
A digital image is composed of a finite number of elements, each of which has a particular
location and value. These elements are called picture elements, image elements, pels, or
pixels.
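For illustration only (not part of the original notes), the sketch below shows how a small digital image can be held as a two-dimensional array of discrete pixel values; NumPy and hypothetical intensity values are assumed.

```python
import numpy as np

# A minimal 4 x 4 digital image: each entry is one picture element (pixel).
# Its position gives the spatial coordinates and its value gives the gray
# level, here an 8-bit intensity in the range 0..255 (hypothetical values).
f = np.array([[  0,  64, 128, 255],
              [ 32,  96, 160, 224],
              [ 64, 128, 192, 255],
              [  0,  32,  64,  96]], dtype=np.uint8)

print(f.shape)   # (4, 4): a finite number of elements
print(f[1, 2])   # 160: the intensity (gray level) of one pixel
```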
What is Digital Image Processing?
Digital image processing is a method of performing operations on an image in order to
obtain an enhanced image or to extract useful information from it. It is a type of signal
processing in which the input is an image and the output may be an image or the characteristics/
features associated with that image.
(or)
Digital image processing is defined as the process of analyzing and manipulating images
using computer
The main advantages of DIP:
• It allows a wide range of algorithms to be applied to the input data.
• It helps avoid problems such as noise build-up and signal distortion during processing.
1.1 Fundamentals of Digital Imaging:

1.1.1 Image Acquisition:


Image acquisition is the process of acquiring or capturing an image. Since all further processing
is performed on images, the images first need to be loaded into the digital computer. Eg:
digital camera, scanner, etc.
1.1.2 Image Enhancement:
Image enhancement techniques have been widely used in many applications of image
processing, where the subjective quality of image is important for human interpretation. Image
enhancement is the process of manipulating an image so that the result is more suitable than the
original for a specific application.
• It accentuates or sharpens image features such as edges, boundaries, or contrast to make a
graphic display more useful for display and analysis.
• The enhancement doesn’t increase the inherent information content of the data, but it
increases the dynamic range of the chosen features so that they can be detected easily.
• The greatest difficulty in image enhancement is quantifying the criterion for enhancement
and therefore, a large number of image enhancement techniques are empirical and require
interactive procedures to obtain satisfactory results
• Image enhancement methods can be based on either spatial-domain or frequency-domain
techniques. Some examples of image enhancement techniques are listed below, followed by a
small point-operation sketch:
▪ Point operations
▪ Spatial operations
▪ Transform operations
▪ Pseudo coloring
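As a minimal sketch of a point operation (contrast stretching), the Python snippet below is illustrative only; it assumes NumPy and a hypothetical low-contrast test image and is not taken from the original notes.

```python
import numpy as np

def contrast_stretch(img):
    """Point operation: linearly stretch the gray levels to the full 0..255 range."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # constant image: nothing to stretch
        return img.astype(np.uint8)
    out = (img - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical low-contrast image whose gray levels occupy only 100..150.
low_contrast = np.random.randint(100, 151, size=(64, 64))
enhanced = contrast_stretch(low_contrast)
print(low_contrast.min(), low_contrast.max())   # e.g. 100 150
print(enhanced.min(), enhanced.max())           # 0 255
```

Stretching the occupied gray-level range to the full dynamic range is what makes the chosen features easier to detect, as noted above.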
1.1.3 Image Restoration:
In many applications (e.g., satellite imaging, medical imaging, astronomical imaging, poor-
quality family portraits) the imaging system introduces a slight distortion. Often images are
slightly blurred, and image restoration aims at deblurring the image. Unlike image
enhancement, which is subjective, image restoration is objective, in the sense that restoration
techniques tend to be based on mathematical or probabilistic models of image degradation.

g(x, y) = H[f(x, y)] + η(x, y)
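To make the degradation model concrete, the sketch below simulates g(x, y) = H[f(x, y)] + η(x, y) with an assumed box-blur operator H and additive Gaussian noise. It is an illustration only, not a restoration algorithm from the notes; NumPy and a random test image are assumed.

```python
import numpy as np

def box_blur(f, k=3):
    """An assumed degradation operator H: k x k box (mean) blur with edge padding."""
    pad = k // 2
    fp = np.pad(f, pad, mode='edge')
    g = np.zeros_like(f, dtype=np.float64)
    for dy in range(-pad, pad + 1):          # accumulate the k*k shifted copies
        for dx in range(-pad, pad + 1):
            g += fp[pad + dy: pad + dy + f.shape[0],
                    pad + dx: pad + dx + f.shape[1]]
    return g / (k * k)

f = np.random.randint(0, 256, size=(32, 32)).astype(np.float64)  # original image f(x, y)
eta = np.random.normal(0.0, 5.0, size=f.shape)                   # additive noise eta(x, y)
g = box_blur(f) + eta                                            # degraded image g(x, y)
```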

1.1.4 Color Image processing:


Color image processing is an area that has been gaining in importance because of the significant
increase in the use of digital images over the internet. The use of color image processing is
motivated by two principal factors
1) Color is a powerful descriptor that often simplifies object identification and extraction
from a scene
2) Humans can distinguish thousands of color shades and intensities, compared with only
about two dozen shades of gray
1.1.5 Wavelets:
Wavelets are a powerful tool in image processing. A wavelet is a mathematical function used for
representing images in various degrees of resolution. Wavelets are very useful in image
compression and in the removal of noise.
1) The wavelet-compressed image can be as small as about 25% of the size of a similar-
quality image
2) Wavelets can remove the noise present in an image more efficiently than other filtering
techniques
Wavelets can be combined, using a reverse, shift, multiply and integrate technique called
convolution, with portions of a known signal to extract information from an unknown signal.
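The following sketch (not from the original notes) shows one level of a 2-D Haar wavelet decomposition, the simplest wavelet, to illustrate how an image can be represented at a coarser degree of resolution plus detail sub-bands; NumPy and an even-sized random test image are assumed.

```python
import numpy as np

def haar_level(f):
    """One level of a 2-D Haar wavelet decomposition (illustrative sketch)."""
    f = f.astype(np.float64)
    a = (f[0::2, :] + f[1::2, :]) / 2.0    # row averages
    d = (f[0::2, :] - f[1::2, :]) / 2.0    # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0   # approximation: half-resolution image
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0   # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0   # diagonal detail
    return LL, LH, HL, HH

img = np.random.randint(0, 256, size=(8, 8))   # image with even dimensions
LL, LH, HL, HH = haar_level(img)
print(LL.shape)   # (4, 4): the image represented at half the resolution
```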
1.1.6 Image Compression:
Image compression is a technique used for reducing the storage required to store/save an image
or the bandwidth required to transmit an image. Image compression algorithms are basically
classified into
1) Lossy compression – some of the information present in the image is lost during compression
2) Lossless compression – no information present in the image is lost during compression
Image compression algorithms may take advantage of visual perception and the statistical
properties of image data to provide superior results compared with generic compression methods
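As a small illustration of lossless compression (not from the original notes), the sketch below run-length encodes one image row and decodes it back with no loss of information; the example row values are hypothetical.

```python
def rle_encode(row):
    """Lossless run-length encoding of one image row as (value, run length) pairs."""
    runs = []
    current, count = row[0], 1
    for value in row[1:]:
        if value == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = value, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    """Exact inverse of rle_encode: no information is lost."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [0, 0, 0, 255, 255, 0, 0, 0, 0, 128]      # hypothetical row of gray levels
encoded = rle_encode(row)
print(encoded)                                  # [(0, 3), (255, 2), (0, 4), (128, 1)]
print(rle_decode(encoded) == row)               # True -> lossless
```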
1.1.7 Morphological Processing:
Morphological processing is a tool for extracting image components that are useful in the
representation and description of shape (extracting and describing image component regions)
1.1.8 Image Segmentation:
Segmentation is the process of partitioning a digital image into multiple segments. The goal of
segmentation is to simplify and/or change the representation of an image into something that is
more meaningful and easier to analyze.
1) Threshold based segmentation
2) Edge based segmentation
3) Region based segmentation
4) Clustering techniques
5) Matching
1.1.9 Representation and Description:
Representation – deals with the compaction of segmented data into representations that
facilitate the computation of descriptors.
Description – deals with extracting attributes that result in some quantitative information of
interest or that are basic for differentiating one class of objects from another.
1.1.10 Object Recognition:
Object recognition is the process that assigns a label to an object based on its descriptors
1.1.11 Knowledge Base:
Knowledge about a problem domain is coded into an image processing system in the form of a
knowledge data base. This knowledge may be as simple as detailing regions of an image where
the information of interest is known to be located, thus limiting the search that has to be
conducted in seeking that information.

1.2 COMPONENTS OF IMAGE PROCESSING SYSTEM:


1.2.1 Image sensors:
Image sensors are used to acquire a digital image. Two elements are required to acquire a
digital image:
1) Physical device - It’s sensitive to the energy radiated by the object we wish to image
2) Digitizer – A device for converting output of physical sensing device into digital form.
1.2.2 Image processing software:
The software for image processing has specialized modules which perform specific tasks.
Some software packages have the facility for the user to write code using the specialized modules.
Eg: MATLAB Software
1.2.3 Specialized image processing hardware:
Image processing hardware mostly performs primitive operations; for example, an arithmetic
logic unit (ALU) performs arithmetic and logical operations in parallel on entire images. The
ALU can be used to average images as quickly as they are digitized, for the purpose of noise
reduction. This type of hardware is sometimes called a front-end subsystem, and its most
distinguishing characteristic is speed.
1.2.4 Computer:
The computer in an image processing system is a general-purpose computer and can range
from a PC to a supercomputer. In dedicated applications, custom computers are sometimes used
to achieve a required level of performance, but for off-line image processing tasks almost any
well-equipped PC-type machine is suitable.
1.2.5 Software:
Software for image processing consists of specialized modules that perform specific
tasks. A well-designed package also includes the capability for the user to write code that, as a
minimum, utilizes the specialized modules. More sophisticated software packages allow the
integration of those modules and general-purpose software commands from at least one
computer language.
1.2.6 Mass Storage:
Mass storage capability is a must in image processing applications. For example, an image
size of 1024 × 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one
megabyte of storage space (a short calculation is sketched after the list below). When dealing
with thousands, or even millions, of images, providing adequate storage in an image processing
system can be a challenge. Digital storage for image
processing applications falls into three principal categories
1) Short term storage – during processing (Computer memory or buffers)
2) On-line storage – for relatively fast recall (magnetic discs or optical-media storage)
3) Archival storage – infrequent access (magnetic tapes or optical disks housed in
“jukeboxes”)
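The storage figure quoted above can be checked with a short calculation; the sketch below (illustrative only) reproduces the 1024 × 1024, 8-bit example and scales it up.

```python
# Storage estimate for the example in the text: a 1024 x 1024 image
# with 8 bits (1 byte) per pixel.
width, height, bytes_per_pixel = 1024, 1024, 1

bytes_per_image = width * height * bytes_per_pixel
print(bytes_per_image)                         # 1048576 bytes = one megabyte
print(bytes_per_image / 2**20, "MiB")          # 1.0 MiB

# Dealing with thousands of images quickly reaches gigabyte scale.
print(1000 * bytes_per_image / 2**30, "GiB")   # ~0.98 GiB
```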
1.2.7 Image Displays:
Image displays are used for displaying images (e.g., color TV monitors). Monitors are
driven by the outputs of image and graphics display cards that are an integral part of the
computer system.
1.2.8 Hardcopy:
Hardcopy devices for recording images include laser printers, film cameras, heat-
sensitive devices, inkjet units and digital units such as optical and CD-ROM disks. Film provides
the highest possible resolution, but paper is the obvious medium of choice for written material.
For presentations, images are displayed on film transparencies or in a digital medium if image
projection equipment is used.
1.2.9 Networking:
Networking is almost a default function in any image processing application. Because of the
large amount of data inherent in image processing applications, the key consideration in image
transmission is bandwidth. In dedicated networks bandwidth is not a problem, but
communication with remote sites via the Internet is not always as efficient. Fortunately, this
situation is improving thanks to optical fiber and other broadband technologies.

1.3 ELEMENTS OF VISUAL PERCEPTION:


Vision is the most advanced of the human senses, so images play the most important role in
human perception. Human visual perception is also important in image processing because the
selection of image processing techniques is often based on visual judgements.
STRUCTURE OF HUMAN EYE:
The human eye is nearly in the shape of a sphere. Its average diameter is approximately 20mm.
The eye, also called the optic globe, is enclosed by three membranes known as,
1) The Cornea and Sclera outer cover
2) The Choroid and
3) The Retina
1.3.1 The Cornea and Sclera outer cover:
• The Cornea is a tough, transparent tissue that covers the anterior (Front surface of the
eye)
• The Sclera is an opaque (not Transparent) membrane that is continuous with the cornea
and encloses the remaining portion of the eye.
1.3.2 The Choroid:
• The choroid is located directly below the sclera.
• It has a network of blood vessels that serve as the major source of nutrition to the eye. Even
slight injury to the choroid can lead to severe eye damage, as it restricts blood flow.
• The outer cover of the choroid is heavily pigmented (colored). This reduces the amount of
extraneous light entering the eye and the backscatter within the optic globe.
• At its anterior extreme, the choroid is divided into two parts,
1) The Ciliary body
2) The Iris Diaphragm

FIG: Human eye – cross section


1.3.2.1 The Iris Diaphragm
• It contracts and expands to control the amount of light that enters the eye. The central
opening of the iris is known as the pupil, whose diameter varies from 2 to 8 mm.
• The front of the iris contains the visible pigment of the eye and the back has a black
pigment.
1.3.2.2 Lens:
• The lens is made up of many layers of fibrous cells. It is suspended by fibers attached to the
ciliary body, and it contains 60 to 70% water, about 6% fat, and more protein than any other
tissue in the eye.
1.3.2.3 Cataracts:
• The lens is colored by a slightly yellow pigmentation. This coloring increases with age and
can lead to clouding of the lens. Excessive clouding of the lens, which happens in extreme
cases, is known as a cataract.
• This leads to poor color discrimination and loss of clear vision.
1.3.3 The Retina:
• The retina is the innermost layer (membrane) of the eye. It lines the inside of the wall's
entire posterior (back) portion.
• The central part of the retina is called the fovea; it is a circular indentation with a diameter of
1.5 mm.
• Light Receptors: When the eye is properly focused, light from an object outside the eye
is imaged on the retina. Light receptors provide this “pattern vision” to the eye. These
receptors are distributed over the retina and these receptors are classified into two classes,
known as
a) Cones
b) Rods
1.3.3.1 Cones:
• There are 6 to 7 million cones in each eye. They are highly
sensitive to color and are located in the fovea.
• Each cone is connected to its own nerve end. Therefore, humans can
resolve fine details with the use of cones. Cone vision is called photopic or
bright-light vision.
1.3.3.2 Rods:
• The number of rods in each eye ranges from 75 to 150 million. They are
sensitive to low levels of illumination and are not involved in color vision.
• Several rods are connected to a single common nerve, so the
amount of detail they can resolve is less. Therefore, the rods provide only a
general, overall picture of the field of view.
• Rod vision is called scotopic or dim-light vision. Because only the rods are
stimulated in dim light, objects that appear brightly colored in daylight appear
colorless in moonlight; this phenomenon is called scotopic or dim-light vision.
1.4 DIGITAL CAMERA:
A digital camera produces digital images that can be stored in a computer, displayed
on a screen and printed. The functioning of a digital camera is very simple, and it allows the
user to take a practically unlimited number of photographs.
1.4.1 Working Principle: The basic mechanism of a digital camera is the conversion of
analog information into digital information. The smallest unit of an image, called a pixel, is
represented by 1's and 0's, so a digital image is composed of a string of 1's and 0's.

FIG: Working of Digital Camera


In a digital camera there are some silicon chips containing light sensitive sensors. These
sensors gather light that comes into the camera through the aperture and then convert the data
into electrical impulses. These impulses are actually the information about the images. Thus the
light is converted into electrons by these sensors and each light sensitive spot on the sensor
determines the brightness of the image. Digital cameras use three separate color sensors: red,
green and blue. These three colors are combined in different proportions to form a full-color
image.
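As an illustration of how the three color channels combine into a full-color image, the sketch below is illustrative only (not from the original notes); NumPy and random per-channel values are assumed.

```python
import numpy as np

# Hypothetical 8-bit readings for the red, green and blue channels of the
# same tiny scene.
h, w = 4, 4
red   = np.random.randint(0, 256, size=(h, w), dtype=np.uint8)
green = np.random.randint(0, 256, size=(h, w), dtype=np.uint8)
blue  = np.random.randint(0, 256, size=(h, w), dtype=np.uint8)

# Stacking the three channels gives one full-color (RGB) image.
rgb = np.stack([red, green, blue], axis=-1)
print(rgb.shape)   # (4, 4, 3): height x width x color channels
```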
1.4.2 Exposure to Light:
Exposure is the duration for which the shutter in a digital camera remains open to
allow the light to enter through the aperture. Exposure of the aperture determines how much
light will reach the sensor. The shutter speed or exposure can be controlled manually or
automatically. The higher the shutter speed, the less light reaches the sensor, and vice
versa. To take a picture in bright light, the exposure should be shorter, as too much light will
blur the image; in the dark, the exposure should be longer to allow more light to reach the
sensors.
1.4.3 Focus:
In a digital camera, focusing gives an image better clarity. The focus depends on the
quality of the lens, because the lens of the camera controls the way light is directed
towards the sensors. By using a combination of lenses, a distant image can be magnified for a
better picture.
1.4.4 Photo Storage – Memory:
Digital camera has an internal memory chip that is used to store images that are captured.
Internal chips can be supplemented by a removable memory chip for extended storage space. The
memory chip stores the digital information about an image that has been collected within the
camera. The storage space required is directly proportional to the size of the image.
1.4.5 Resolution:
Resolution is defined as the amount of detail present in an image. In a digital camera,
the resolution determines the amount of detail the camera can capture. Each digital camera has
its own particular resolution. If the resolution of the camera is high, the depth, clarity and
minute details of the picture will be better. Eg: 256 x 256 or 4064 x 2704 pixels

1.5 IMAGE THROUGH SCANNER:


A scanner is a device used for producing an exact digital image replica of a photo,
text written on paper, or even an object. This digital image can be saved as a file on your
computer and can then be altered, enhanced, or published on the web.
1.5.1 Types of Scanners:
• Drum Scanners - This scanner is mainly used in the publishing industry. The technology
used behind the scanning is called a photomultiplier tube (PMT).
• Flatbed scanners - Flatbed scanner is the most commonly used scanning machine
nowadays. They are also called desktop scanners. They use a charge-coupled device
(CCD) to scan the object.
• Hand-Held Scanners - used to scan documents by dragging the scanner across the surface of
the document. The scan is effective only with a steady hand; otherwise
the image may appear distorted.
• Film Scanners – to scan positive and negative photographic images. The film will be
inserted into the carrier. It will be moved with a stepper motor and the scanning process
will be done with a CCD sensor
1.5.2 Working of Flatbed Scanner
A charge-coupled device (CCD) is used in a flatbed scanner. The CCD sensor
captures the light reflected within the scanner and converts it into a proportional electric charge.
The greater the intensity of light that hits the sensor, the greater the charge developed.
Any flatbed scanner will have the following devices.
• Charge-coupled device (CCD) array
• Scan head
• Stepper motor
• Lens
• Power supply
• Control circuitry
• Interface ports
• Mirrors
• Glass plate
• Lamp
• Filters
• Stabilizer bar
• Belt
• Cover
Glass plate, Cover:
A scanner consists of a flat transparent glass bed under which the CCD sensors, lamp, lenses,
filters and also mirrors are fixed. The document has to be placed on the glass bed. There will also
be a cover to close the scanner. This cover may either be white or black in color. This color helps
in providing uniformity in the background. This uniformity will help the scanner software to
determine the size of the document to be scanned.
Lamp:
The lamp brightens up the text to be scanned. Most scanners use a cold cathode fluorescent
lamp (CCFL).
Stepper Motor:
A stepper motor under the scanner moves the scanner head from one end to the other. The
movement will be slow and is controlled by a belt.
Scan Head, CCD, Lens, Stabilizer bar:
The scanner head consists of the mirrors, lens, CCD sensors and also the filter. The scan head
moves parallel to the glass bed along a constant path. As deviation may occur in its
motion, a stabilizer bar is provided to compensate for it. The scan head moves from one end of
the machine to the other. When it has reached the other end the scanning of the document has
been completed. For some scanners, a two-way scan is used in which the scan head has to reach
its original position to ensure a complete scan.
As the scan head moves under the glass bed, the light from the lamp hits the document
and is reflected back with the help of mirrors angled to one another. According to the design of
the device there may be either 2-way mirrors or 3-way mirrors. The mirrors will be angled in
such a way that the reflected image will be hitting a smaller surface. In the end, the image will
reach a lens which passes it through a filter and causes the image to be focussed on CCD sensors.
The CCD sensors convert the light to electrical signals according to its intensity.

FIG: Working of Scanner

The electrical signals are converted into image format inside a computer. This
reception may also differ according to variations in the lens and filter design. A method called
three-pass scanning is commonly used, in which each movement of the scan head from one end
to the other passes one composite color between the lens and the CCD sensors. After the
three composite colors are scanned, the scanner software assembles the three filtered images into
a single full-color image.
There is also a single-pass scanning method, in which the image captured by the lens is
split into three pieces. Each piece passes through one of the color composite filters, and the
output is then given to the CCD sensors. The scanner then combines the three parts into a
single full-color image.
1.6 IMAGE SAMPLING AND QUANTIZATION:
In order to become suitable for digital processing, an image function f(x,y) must
be digitized both spatially and in amplitude. Typically, a frame grabber or digitizer is used to
sample and quantize the analogue video signal. Hence in order to create an image which is
digital, we need to convert continuous data into digital form. There are two steps in which it is
done:
• Sampling
• Quantization
The sampling rate determines the spatial resolution of the digitized image, while
the quantization level determines the number of grey levels in the digitized image. A magnitude
of the sampled image is expressed as a digital value in image processing. The transition between
continuous values of the image function and its digital equivalent is called quantization.
The number of quantization levels should be high enough for human perception of
fine shading details in the image. The occurrence of false contours is the main problem in an
image that has been quantized with an insufficient number of brightness levels.
• Sampling: Process of digitizing the coordinate values is called sampling
• Quantization: Process of digitizing the amplitude values is called quantization
The Basic concepts of image sampling and quantization can be explained with the
example given below.
Example:
Consider a continuous image f(x,y), shown in figure (a), which needs to be
converted into digital form. Its gray level plot along line AB is given in figure (b). This image is
continuous with respect to the x and y coordinates as well as in amplitude. i.e. gray level values.
Therefore, to convert into digital form, both the coordinates and amplitude values should be
sampled.
To sample this function, equally spaced samples are taken along the line AB. The
samples are shown as small squares in figure (c). The set of these discrete locations gives the
sampled function.
Even after sampling, the gray level values of the samples have a continuous range.
Therefore, to make them discrete, the samples need to be quantized. For this purpose, a gray
level scale shown at the figure (c) right side is used. It is divided into eight discrete levels,
ranging from black to white. Now, by assigning one of the eight discrete gray levels to each
sample, the continuous gray levels are quantized
FIG: Generating a digital image. (a) Continuous image. (b) A scan line from A to B in the
continuous image, used to illustrate the concepts of sampling and quantization. (c) Sampling
and quantization. (d) Digital scan line.
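The sampling and quantization steps described above can be mimicked numerically. The sketch below is illustrative only, assuming NumPy and an invented sinusoidal intensity profile as the "continuous" scan line; it takes equally spaced samples and maps each one to one of eight discrete gray levels.

```python
import numpy as np

def profile(x):
    """Assumed 'continuous' intensity along a scan line, with values in [0, 1]."""
    return 0.5 + 0.5 * np.sin(2 * np.pi * x)

# Sampling: digitize the coordinate values by taking equally spaced samples.
num_samples = 16
x = np.linspace(0.0, 1.0, num_samples)
samples = profile(x)

# Quantization: digitize the amplitude values into 8 discrete gray levels (0..7).
levels = 8
quantized = np.round(samples * (levels - 1)).astype(int)

print(samples[:4])     # continuous-valued samples
print(quantized[:4])   # their discrete gray levels
```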
1.7 RELATIONSHIP BETWEEN PIXELS:
A relation between pixels plays an important role in digital image processing, where pixel
relations are used for finding the differences between images and also within their sub-images.
The neighborhood of a pixel p at coordinates (x, y) can be pictured as:

(x-1, y-1)   (x-1, y)     (x-1, y+1)
(x, y-1)     (x, y) ‘p’   (x, y+1)
(x+1, y-1)   (x+1, y)     (x+1, y+1)

1.7.1 Neighbors of a Pixel:
A pixel p can have three types of neighbors, known
as,
1. 4 – Neighbors, N4(p)
2. Diagonal Neighbors, ND(p)
3. 8 – Neighbors, N8(p)

a. 4 - Neighbors, N4(p)
A pixel ‘p’ at coordinates (x, y) has two horizontal and two
vertical neighbors. The coordinates of these neighbors are given by,
(x+1, y), (x-1, y), (x, y+1), (x, y-1)
Here, each pixel is at unit distance from (x,y) as shown in figure. If (x,y) is on the border of the
image, some of the neighbors of pixel ‘p’ lie outside the digital image.

b. Diagonal Neighbors, ND(p)


The coordinates of the four diagonal neighbors of ’p’ are given by
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
Here also, some of the neighbors lie outside the image if (x,y) is on the border of the image
c. 8 - Neighbors, N8(p)
The diagonal neighbors together with the 4-neighbors are called the 8-neighbors of
the pixel ‘p’. It’s denoted by N8(p).
(x+1, y), (x-1, y), (x, y+1), (x, y-1), (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
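The three neighborhoods can be listed with a few lines of code. The sketch below is illustrative only (the helper functions n4, nd and n8 are hypothetical, not from the notes) and ignores border handling.

```python
def n4(x, y):
    """4-neighbors N4(p) of pixel p at (x, y)."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    """Diagonal neighbors ND(p)."""
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    """8-neighbors N8(p): the 4-neighbors together with the diagonal neighbors."""
    return n4(x, y) + nd(x, y)

print(n4(2, 3))        # [(3, 3), (1, 3), (2, 4), (2, 2)]
print(len(n8(2, 3)))   # 8 (some may fall outside the image if p is on the border)
```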
1.7.2 Adjacency:
Let {V} be the set of intensity values used to define adjacency. In a binary image V={1}
if we are referring to adjacency of pixels with value 1. In a gray-scale image, the idea is the
same, but set {V} typically contains more elements. For example, in the adjacency of pixels with
a range of possible intensity values 0 to 255, set V could be any subset of these 256 values. The
adjacency has been classified into three types,
1. 4-Adjacency
2. 8-Adjacency
3. m-Adjacency (or) Mixed-Adjacency
Let {V} be the set of gray levels used to define adjacency
a. 4-Adjacency
Two pixels p and q with values from {V} are 4-adjacent if q is in the set N4(p).
b. 8-Adjacency
Two pixels p and q with values from {V} are 8-adjacent if q is in the set N8(p).
c. m-Adjacency
Mixed adjacency is a modification of 8-adjacency. It is used to remove the
ambiguities present in 8-adjacency.
Two pixels p and q with values from {V} are m-adjacent if either of the following
conditions is satisfied:
• q is in N4(p).
• q is in ND(p) and the set [N4(p) ∩ N4(q)] is empty (has no pixels whose
values are from V).
1.7.3 Connectivity:
Two pixels p and q are said to be connected if
• they are neighbors and
• their gray levels satisfy a specified similarity criterion (E.g: if their gray levels are
equal)
The connectivity has been classified into three types,
1. 4-Connectivity
2. 8-Connectivity
3. m-Connectivity (or) Mixed-Connectivity

a. 4-Connectivity
Two pixels p and q, both having values from a set V are 4-connected if q is from
the set N4(p).

b. 8-Connectivity
Two pixels p and q, both having values from a set V, are 8-connected if q is in
the set N8(p).

c. m-Connectivity
Mixed connectivity is a modification of 8-connectivity. It is used to remove the
ambiguities present in 8-connectivity.
Two pixels p and q with values from {V} are m-connected if either of the following
conditions is satisfied:
• q is in N4(p).
• q is in ND(p) and the set [N4(p) ∩ N4(q)] is empty (has no pixels whose
values are from V).

1.7.4 Paths and Path length:


A path is also known as a digital path or curve. A path from pixel p with coordinates (x, y)
to pixel q with coordinates (s, t) is defined as a sequence of distinct pixels with coordinates
(x0, y0), (x1, y1), …, (xn, yn)
where (x0, y0) = (x, y) and (xn, yn) = (s, t), and where (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n.
• Path Length:
The path length is the number of steps in the path; it is given by the value of
‘n’ here.

• Closed Path:
In a path, if (X0, Y0) = (Xn, Yn) i.e. the first and last pixel are the same, it’s known as
a closed path
According to the adjacency present, paths can be classified as:
1. 4 – path
2. 8 – path
3. m – path
1.7.5 Region, Boundary and Edges:
• A subset R of pixels in an image I is called a region of the image if R is a connected set.
• The boundary (also known as the border or contour) of a region R is the set of pixels in the
region that have one or more neighbors that are not in R. If R is an entire image, its boundary
is defined as the set of pixels in the first and last rows and columns of the image.
• An edge can be defined as a set of contiguous pixel positions where an abrupt change of
intensity (gray or color) values occur
1.7.6 Distance Measure:
Distance measures are used to determine the distance between two different pixels in the
same image. Several different distance metrics are used for this purpose.
Conditions: Consider three pixels p, q and z, p has coordinates (x, y), q has coordinates
(s, t) and z has coordinates (v, w). For these three pixels D is a Distance function or metric if
• D(p, q) ≥ 0, [D(p, q)=0 if p = q]
• D(p, q) = D(q, p) and
• D(p, z) ≤ D(p, q) + D(q, z)
Types:
• Euclidean Distance
• City – Block (or) D4 Distance
• Chessboard (or) D8 Distance
• Quasi-Euclidean Distance
• Dm Distance

a. Euclidean Distance: The Euclidean distance is the straight-line distance between two
pixels.

De(p, q) = √[(x − s)² + (y − t)²]

b. City-Block (D4) Distance: The city-block distance metric measures the path between the
pixels based on a 4-connected neighborhood. Pixels whose edges touch are 1 unit apart;
pixels that touch only diagonally are 2 units apart.
D4(p, q) = |x − s| + |y − t|
c. Chessboard Distance: The chessboard distance metric measures the path between
the pixels based on an 8-connected neighborhood. Pixels whose edges or corners
touch are 1 unit apart
D8(p, q) = max(|x − s|, |y − t|)

d. Quasi-Euclidean Distance: The quasi-Euclidean metric measures the total Euclidean
distance along a set of horizontal, vertical, and diagonal line segments.

Dqe(p, q) = |x − s| + (√2 − 1)·|y − t|,   if |x − s| > |y − t|
Dqe(p, q) = (√2 − 1)·|x − s| + |y − t|,   otherwise
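A small worked example (not part of the original notes) computes the three basic metrics for the hypothetical pixels p = (1, 1) and q = (4, 5):

```python
import math

def d_euclidean(p, q):
    (x, y), (s, t) = p, q
    return math.sqrt((x - s) ** 2 + (y - t) ** 2)

def d4_city_block(p, q):
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d8_chessboard(p, q):
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

p, q = (1, 1), (4, 5)
print(d_euclidean(p, q))     # 5.0
print(d4_city_block(p, q))   # 7
print(d8_chessboard(p, q))   # 4
```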


1.8 CONCEPTS OF GRAYLEVELS:

Gray level resolution refers to the predictable or deterministic change in the shades or
levels of gray in an image. In short gray level resolution is equal to the number of bits per
pixel. The number of different colors in an image depends on the depth of color, or bits per
pixel.
The mathematical relation that can be established between gray level resolution and bits
per pixel can be given as.
L = 2^k
In this equation, L refers to the number of gray levels (the shades of gray), and k refers to bpp,
or bits per pixel. So 2 raised to the power of the number of bits per pixel is equal to the gray
level resolution.
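A quick check of L = 2^k for a few common bit depths (illustrative only):

```python
# Gray level resolution L = 2**k for a few common bit depths.
for k in (1, 4, 8):
    print(k, "bits per pixel ->", 2 ** k, "gray levels")
# 1 bits per pixel -> 2 gray levels   (binary image)
# 4 bits per pixel -> 16 gray levels
# 8 bits per pixel -> 256 gray levels
```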

1.8.1 Gray level to binary conversion:

THRESHOLD METHOD

The threshold method uses a threshold value to convert a grayscale image into a binary
image. In the output image, all pixels of the input image with luminance greater than the
threshold value are replaced with the value 1 (white), and all other pixels are replaced with the
value 0 (black).
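A minimal sketch of the threshold method (not from the original notes; NumPy, a hypothetical 2 × 2 grayscale image and a threshold of 128 are assumed):

```python
import numpy as np

def to_binary(gray, threshold=128):
    """Threshold method: pixels with luminance greater than the threshold become
    1 (white); all other pixels become 0 (black)."""
    return (gray > threshold).astype(np.uint8)

gray = np.array([[ 10, 200],
                 [130, 127]], dtype=np.uint8)
print(to_binary(gray))
# [[0 1]
#  [1 0]]
```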
