Mod 4-Complete PDF

Digital image processing involves manipulating digital images through a computer, where images are represented as two-dimensional functions. Key applications include remote sensing, medical imaging, and robotics, with fundamental steps such as image acquisition, enhancement, and recognition. The document also discusses image models, sampling, quantization, and pixel relationships, emphasizing the importance of adjacency and connectivity in image processing.

Mod 4-part 1

DIGITAL IMAGE FUNDAMENTALS:

• The field of digital image processing refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value.
• These elements are called picture elements, image elements, pels and pixels. Pixel is the term used most widely to denote the elements of a digital image.

• An image is a two-dimensional function that represents a measure of some characteristic, such as brightness or color, of a viewed scene. An image is a projection of a 3-D scene onto a 2-D projection plane.
• An image may be defined as a two-dimensional function f(x,y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity of the image at that point.
• The term gray level is often used to refer to the intensity of monochrome images. Color images are formed by a combination of individual 2-D images. For example, in the RGB color system a color image consists of three individual component images (red, green and blue).
• For this reason many of the techniques developed for monochrome images can be extended to color images by processing the three component images individually (a minimal sketch appears after this list).
• An image may be continuous with respect to the x- and y-
coordinates and also in amplitude. Converting such an
image to digital form requires that the coordinates, as well
as the amplitude, be digitized.
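The idea of extending monochrome techniques to color by processing each component image individually can be sketched in a few lines of Python with NumPy (an illustrative sketch, not from the source; the helper names process_per_channel and stretch are hypothetical):

import numpy as np

def process_per_channel(rgb, func):
    # Apply a monochrome operation independently to each channel
    # of an H x W x 3 image, then restack the results.
    return np.stack([func(rgb[..., c]) for c in range(rgb.shape[-1])],
                    axis=-1)

def stretch(channel):
    # A simple monochrome operation: contrast-stretch to [0, 255].
    c = channel.astype(np.float64)
    lo, hi = c.min(), c.max()
    if hi == lo:
        return channel
    return ((c - lo) / (hi - lo) * 255).astype(np.uint8)

rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)  # toy image
enhanced = process_per_channel(rgb, stretch)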
Applications of Digital Image Processing
Digital image processing has a broad spectrum
of applications, such as
• Remote sensing via satellites and other spacecraft
• Image transmission and storage for business
applications
• Medical processing
• RADAR (Radio Detection and Ranging)
• SONAR (Sound Navigation and Ranging)
• Acoustic image processing (The study of underwater sound
is known as underwater acoustics or hydro acoustics.)
• Robotics and automated inspection of industrial parts.
Images acquired by satellites are useful in tracking of
• Earth resources
• Geographical mapping
• Prediction of agricultural crops
• Urban growth and weather monitoring
• Flood and fire control and many other environmental applications
Space image applications include:
• Recognition and analysis of objects contained in images obtained from deep space-probe missions
• Image transmission and storage applications
occur in broadcast television
• Teleconferencing
• Transmission of facsimile images (printed documents and graphics) for office automation
• Closed-circuit television-based security monitoring systems
• Military communications
Medical applications:
• Processing of chest X-rays
• Cine-angiograms
• Projection images of transaxial tomography
• Medical images that occur in radiology, such as nuclear magnetic resonance (NMR)
• Ultrasonic scanning
• IMAGE PROCESSING TOOLBOX (IPT) is a collection of functions that extend the capability of the MATLAB numeric computing environment.
• These functions, and the expressiveness of the MATLAB language, make many image-processing operations easy to write in a compact, clear manner, thus providing an ideal software prototyping environment for the solution of image processing problems.
Components of Image Processing System:

Figure: Components of Image Processing System


• Image Sensors: With reference to sensing, two elements are required to acquire digital images. The first is a physical device that is sensitive to the energy radiated by the object we wish to image, and the second is specialized image processing hardware.
• Specialized image processing hardware: It consists of the digitizer plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), which performs arithmetic (such as addition and subtraction) and logical operations in parallel on images.
• Computer: It is a general-purpose computer and can range from a PC to a supercomputer depending on the application. In dedicated applications, specially designed computers are sometimes used to achieve a required level of performance.
• Software: It consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code that, as a minimum, utilizes the specialized modules. More sophisticated software packages allow the integration of those modules.
• Mass storage: This capability is a must in image processing applications. An image of size 1024 x 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed (a short arithmetic sketch appears after this list). Image processing applications fall into three principal categories of storage:
a) Short-term storage for use during processing
b) Online storage for relatively fast retrieval
c) Archival storage, such as magnetic tapes and disks
• Image display: Image displays in use today are mainly color TV monitors. These monitors are driven by the outputs of image and graphics display cards that are an integral part of the computer system.
• Hardcopy devices: The devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units and digital units such as optical and CD-ROM disks.
• Networking: It is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth.
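As a quick check of the mass-storage figure quoted above (1024 x 1024 pixels at 8 bits per pixel, uncompressed), the arithmetic can be written as a short Python sketch (illustrative only):

# Uncompressed size of a 1024 x 1024 image with 8 bits per pixel.
width, height, bits_per_pixel = 1024, 1024, 8

size_bytes = width * height * bits_per_pixel // 8   # 1,048,576 bytes
size_megabytes = size_bytes / 2**20                 # exactly 1.0 megabyte

print(size_bytes, size_megabytes)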
Fundamental Steps in Digital Image Processing

Fig: Fundamental Steps in Digital Image Processing


There are two categories of steps involved in image processing:

– Methods whose input and output are images.
– Methods whose inputs may be images but whose outputs are attributes extracted from those images.
• Image acquisition: It could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing such as scaling.
• Image Enhancement: It is among the simplest and most appealing areas of digital image processing. The idea behind enhancement is to bring out details that are obscured, or simply to highlight certain features of interest in an image. Image enhancement is a very subjective area of image processing.
• Image Restoration: It deals with improving the appearance of an image. It is an objective approach, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation.
• Color image processing: It deals basically with color models and their implementation in image processing applications.
• Wavelets and Multi-resolution Processing: These are the foundation for representing images in various degrees of resolution.
• Compression: It deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it over a network. It has two major approaches: a) lossless compression and b) lossy compression (a minimal sketch of a lossless scheme appears after this list).
• Morphological processing: It deals with tools for extracting image components that are useful in the representation and description of the shape and boundaries of objects. It is mainly used in automated inspection applications.
• Representation and Description: It almost always follows the output of a segmentation stage, which is raw pixel data constituting either the boundary of a region or all the points in the region itself.
• Recognition: It is the process that assigns a label to an object based on its descriptors. It is the last step of image processing, and it often makes use of artificial intelligence software.
• Knowledge base: Knowledge about a problem domain is coded into an image processing system in the form of a knowledge base. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located.
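To make the lossless/lossy distinction in the Compression step concrete, here is a minimal Python sketch of run-length encoding, one of the simplest lossless schemes (an illustrative example, not a method prescribed by the source; the function names are hypothetical):

def rle_encode(pixels):
    # Encode a 1-D sequence of pixel values as [value, run_length] pairs.
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

def rle_decode(runs):
    # Expand [value, run_length] pairs back into the original sequence.
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [0, 0, 0, 255, 255, 0, 0, 0, 0]
encoded = rle_encode(row)            # [[0, 3], [255, 2], [0, 4]]
assert rle_decode(encoded) == row    # lossless: the round trip is exact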
Mod 4-part 2
A Simple Image Model:

• An image is denoted by a two-dimensional function of the form f(x, y). The value or amplitude of f at spatial coordinates (x, y) is a positive scalar quantity whose physical meaning is determined by the source of the image.
• When an image is generated by a physical process, its values are proportional to energy radiated by a physical source. As a consequence, f(x,y) must be nonzero and finite; that is, 0 < f(x,y) < ∞.
• The function f(x,y) may be characterized by two components: the amount of source illumination incident on the scene being viewed, and the amount of the source illumination reflected back by the objects in the scene. These are called the illumination and reflectance components and are denoted by i(x,y) and r(x,y) respectively.
• The two functions combine as a product to form f(x,y) = i(x,y) r(x,y). We call the intensity of a monochrome image at any coordinates (x,y) the gray level l of the image at that point: l = f(x,y).
• Lmin ≤ l ≤ Lmax, where Lmin is required to be positive and Lmax must be finite:
Lmin = imin rmin
Lmax = imax rmax
• The interval [Lmin, Lmax] is called the gray scale. Common practice is to shift this interval numerically to the interval [0, L-1], where l = 0 is considered black and l = L-1 is considered white on the gray scale. All intermediate values are shades of gray varying from black to white.
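• As an illustrative numeric example (values assumed, not from the source): if the illumination ranges from imin = 10 to imax = 1000 and the reflectance from rmin = 0.01 to rmax = 0.8, then Lmin = 10 × 0.01 = 0.1 and Lmax = 1000 × 0.8 = 800, so the gray scale spans [0.1, 800] before being shifted to [0, L-1].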
SAMPLING AND QUANTIZATION:
• To create a digital image, we need to convert the
continuous sensed data into digital form.
• To convert a continuous image f(x, y) into digital form, we have to sample the function in both coordinates and amplitude.

This conversion involves two processes:

• Sampling: Digitizing the coordinate values is called sampling.
• Quantization: Digitizing the amplitude values is called quantization.
Sampling
• Sampling is the reduction of a continuous-time
signal to a discrete-time signal.
• A common example is the conversion of a sound
wave (a continuous signal) to a sequence of
samples (a discrete-time signal). A sample is a
value or set of values at a point in time and/or
space.
Fig: Illustration of sampling
Fig: Illustration of quantization
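A minimal NumPy sketch of both operations (illustrative; a random array stands in for a continuous sensed image, and levels is assumed to be at least 2):

import numpy as np

def sample(img, step):
    # Spatial sampling: keep every step-th pixel along each axis.
    return img[::step, ::step]

def quantize(img, levels):
    # Amplitude quantization: map 8-bit values onto `levels`
    # uniformly spaced gray levels (levels >= 2).
    bins = np.floor(img / 256.0 * levels)            # bin index 0..levels-1
    return (bins * (255.0 / (levels - 1))).astype(np.uint8)

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in image
coarse = sample(img, 4)      # 64 x 64: reduced spatial resolution
banded = quantize(img, 16)   # 16 gray levels: may show false contouring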
Spatial and Gray Level Resolution:

• Spatial resolution is the smallest discernible detail in an image.
• Suppose a chart is constructed with vertical lines of width W, with the spaces between them also having width W; a line pair then consists of one such line and its adjacent space. The width of a line pair is 2W, and there are 1/(2W) line pairs per unit distance. Resolution is simply the smallest number of discernible line pairs per unit distance (for example, with W = 0.1 mm there are 5 line pairs per mm).
• Gray-level resolution refers to the smallest discernible change in gray level. Measuring discernible changes in gray level is a highly subjective process. Reducing the number of gray levels while keeping the spatial resolution constant creates the problem of false contouring.
• It is caused by the use of an insufficient number of gray levels in the smooth areas of a digital image. It is called so because the ridges resemble topographic contours in a map. It is generally quite visible in images displayed using 16 or fewer uniformly spaced gray levels.
Mod 4-part 3
RELATIONSHIP BETWEEN PIXELS:
• A pixel p at coordinates (x,y) has four
horizontal and vertical neighbors whose
coordinates are given by:
(x+1,y), (x-1, y), (x, y+1), (x,y-1)
• This set of pixels, called the 4-neighbors of p, is denoted by N4(p).
• Each pixel is one unit distance from (x,y) and
some of the neighbors of p lie outside the
digital image if (x,y) is on the border of the
image.
The four diagonal neighbors of p have coordinates
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
and are denoted by ND(p).

These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8(p).
As before, some of the points in ND(p) and N8(p) fall outside the image if (x,y) is on the border of the image.
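These neighborhood definitions translate directly into code. A minimal Python sketch (illustrative; the helper names n4, nd, n8 and inside are hypothetical):

def n4(x, y):
    # 4-neighbors of the pixel at (x, y).
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    # Diagonal neighbors of the pixel at (x, y).
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    # 8-neighbors: the union of N4(p) and ND(p).
    return n4(x, y) + nd(x, y)

def inside(coords, rows, cols):
    # Drop neighbors that fall outside a rows x cols image, as happens
    # when (x, y) lies on the border.
    return [(i, j) for i, j in coords if 0 <= i < rows and 0 <= j < cols]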
ADJACENCY AND CONNECTIVITY
• Let V be the set of gray-level values used to define adjacency. In a binary image, V = {1}. In a gray-scale image the idea is the same, but V typically contains more elements; for example, V = {180, 181, 182, …, 200}.
• If the possible intensity values are 0 to 255, V can be any subset of these 256 values.
• Three types of adjacency:
• 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
• 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
• m-adjacency: two pixels p and q with values from V are m-adjacent if (i) q is in N4(p), or (ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
• Mixed (m-) adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used.
• For example:

Fig 1.8: (a) Arrangement of pixels; (b) pixels that are 8-adjacent (shown dashed) to the center pixel; (c) m-adjacency.
Types of Adjacency:
• In this example, we can note that to connect two pixels (finding a path between two pixels):
– In the 8-adjacency sense, you can find multiple paths between two pixels.
– In the m-adjacency sense, you can find only one path between two pixels.
• So, m-adjacency eliminates the multiple-path connections that are generated by 8-adjacency.
• Two subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2. Adjacent means either 4-, 8- or m-adjacency.
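The three adjacency tests can be sketched in Python, reusing the n4, nd and n8 helpers from the neighborhood sketch above (illustrative; img is assumed to be a 2-D NumPy array indexed by (x, y) tuples, and V a set of gray-level values):

def adjacent4(p, q, img, V):
    # 4-adjacency: both values are in V and q is in N4(p).
    return img[p] in V and img[q] in V and q in n4(*p)

def adjacent8(p, q, img, V):
    # 8-adjacency: both values are in V and q is in N8(p).
    return img[p] in V and img[q] in V and q in n8(*p)

def adjacent_m(p, q, img, V):
    # m-adjacency: q is in N4(p), or q is in ND(p) and the shared
    # 4-neighbors of p and q contain no pixel with a value in V.
    if img[p] not in V or img[q] not in V:
        return False
    if q in n4(*p):
        return True
    if q in nd(*p):
        rows, cols = img.shape
        shared = set(n4(*p)) & set(n4(*q))
        return not any(0 <= i < rows and 0 <= j < cols and img[i, j] in V
                       for i, j in shared)
    return False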
Connectivity: Let S represent a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting entirely of pixels in S.
– For any pixel p in S, the set of pixels that are connected to it in S is called a connected component of S. If S has only one connected component, then S is called a connected set.
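Connectivity lends itself to a simple breadth-first traversal. A minimal Python sketch that collects the connected component of a seed pixel, using 4-adjacency for the paths and reusing the n4 helper from the neighborhood sketch (illustrative):

from collections import deque

def connected_component(img, seed, V):
    # All pixels connected to `seed` within the set S of pixels whose
    # values are in V (4-adjacency defines the paths).
    rows, cols = img.shape
    if img[seed] not in V:
        return set()
    component, frontier = {seed}, deque([seed])
    while frontier:
        x, y = frontier.popleft()
        for i, j in n4(x, y):
            if (0 <= i < rows and 0 <= j < cols
                    and (i, j) not in component and img[i, j] in V):
                component.add((i, j))
                frontier.append((i, j))
    return component

If connected_component returns every pixel of S for any seed in S, then S has a single connected component and is therefore a connected set.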
