Module 1_IP


Fundamental Steps in Digital Image Processing

Qn: What are the fundamental steps in Digital Image Processing? Discuss briefly with a neat
block diagram.

Image acquisition: This is the first stage, which deals with sensing the data and how the image is actually
acquired. Image formation at this stage depends on illumination and reflectance. The stage also covers
image sampling and quantization, and may involve preprocessing such as scaling.

Image enhancement: This is the process of manipulating an image so that the result is more suitable than the
original for a specific application. The idea behind enhancement techniques is to bring out detail
that is obscured, or simply to highlight certain features of interest in an image. Image enhancement is a very
subjective area of image processing. It deals with various enhancement techniques like smoothing and
sharpening of images to improve the visual quality of an image.
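
As a minimal sketch of one basic enhancement technique (the code and its parameter choices are
illustrative assumptions, not part of the notes), the following Python snippet performs linear contrast
stretching on an 8-bit grayscale image held in a NumPy array:

import numpy as np

def contrast_stretch(image):
    """Linearly stretch pixel intensities to the full 8-bit range [0, 255]."""
    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                                  # flat image: nothing to stretch
        return image.copy()
    stretched = (img - lo) * 255.0 / (hi - lo)    # map [lo, hi] -> [0, 255]
    return stretched.astype(np.uint8)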

Image restoration: It is an area that also deals with improving the appearance of an image. However, unlike
enhancement, which is subjective, image restoration is objective. The restoration techniques tend to be based
on mathematical or probabilistic models of image degradation.
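
As a hedged sketch of such a model-based approach (an illustration added here, not the notes' own
method), the snippet below applies a simple Wiener-style deconvolution in the frequency domain, assuming
the degradation model g = h * f + noise with a known blur kernel h and a hand-tuned constant K that
approximates the noise-to-signal power ratio:

import numpy as np

def wiener_deconvolve(degraded, kernel, K=0.01):
    """Restore an image blurred by a known kernel: F_hat = conj(H) G / (|H|^2 + K)."""
    H = np.fft.fft2(kernel, s=degraded.shape)   # kernel spectrum, zero-padded
    G = np.fft.fft2(degraded)                   # degraded-image spectrum
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(F_hat))         # restored estimate of f(x, y)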

Color image processing: This is an area that has been gaining in importance because of the significant
increase in the use of digital images over the Internet. It includes color modeling and processing in a digital
domain.

Wavelets and Multiresolution Processing: These are the foundation for representing images in various
degrees of resolution.

Compression: This stage deals with techniques for reducing the storage required to save an image, or the
bandwidth required to transmit it. Various types of compression and decompression techniques can be
applied to images.
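
As a toy illustration of lossless compression (added here as an assumption; the notes name no specific
scheme), the snippet below run-length encodes a row of pixel values, which pays off when long runs of
identical intensities occur:

def run_length_encode(row):
    """Encode a sequence of pixel values as (value, run_length) pairs."""
    encoded = []
    if not row:
        return encoded
    current, count = row[0], 1
    for value in row[1:]:
        if value == current:
            count += 1
        else:
            encoded.append((current, count))
            current, count = value, 1
    encoded.append((current, count))
    return encoded

# Example: run_length_encode([0, 0, 0, 255, 255, 0]) -> [(0, 3), (255, 2), (0, 1)]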

Morphological processing: This stage deals with tools for extracting image components that are useful in the
representation and description of shape. Operations such as erosion, dilation, opening/closing, or the
hit-or-miss transform can be performed on images.
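
A minimal sketch of a few of these operations using SciPy's ndimage module (the library choice is an
assumption; the notes do not prescribe one):

import numpy as np
from scipy import ndimage

binary = np.zeros((7, 7), dtype=bool)
binary[2:5, 2:5] = True                        # a 3x3 square of foreground pixels

structure = np.ones((3, 3), dtype=bool)        # 3x3 structuring element
eroded  = ndimage.binary_erosion(binary, structure=structure)
dilated = ndimage.binary_dilation(binary, structure=structure)
opened  = ndimage.binary_opening(binary, structure=structure)   # erosion then dilation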

Segmentation: This stage partitions an image into its constituent parts or objects. Segmentation is one of the
most difficult tasks in digital image processing. It extracts the required portion of an image. If segmentation is
accurate, recognition is more likely to succeed.
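
A minimal sketch of one common approach, the basic iterative global thresholding algorithm (the choice of
algorithm is an assumption; the notes do not name one):

import numpy as np

def global_threshold(image, eps=0.5):
    """Segment a grayscale image with the basic iterative threshold algorithm.

    Assumes the image contains both dark and bright pixels, so neither
    class below is ever empty.
    """
    t = image.mean()                            # initial estimate: global mean
    while True:
        foreground = image[image > t]
        background = image[image <= t]
        new_t = 0.5 * (foreground.mean() + background.mean())
        if abs(new_t - t) < eps:                # stop once the threshold settles
            break
        t = new_t
    return image > t                            # boolean foreground mask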

Representation and description: It follows the output of a segmentation stage, which usually is raw pixel
data, constituting either the boundary of a region or all the points in the region itself. In either case, converting
the data to a form suitable for computer processing is necessary. Choosing a representation is only part of the
solution for transforming raw data into a form suitable for subsequent computer processing. A method must
also be specified for describing the data so that features of interest are highlighted. Description, also called
feature selection, deals with extracting attributes that result in some quantitative information of interest or are
basic for differentiating one class of objects from another.
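
As a small illustration of feature extraction (a sketch added here, not from the notes), the snippet computes
a few simple regional descriptors from a binary segmentation mask, assuming the mask contains at least
one foreground pixel:

import numpy as np

def region_descriptors(mask):
    """Compute simple descriptors of the foreground region of a binary mask."""
    ys, xs = np.nonzero(mask)                   # coordinates of foreground pixels
    area = xs.size                              # number of foreground pixels
    centroid = (ys.mean(), xs.mean())           # (row, col) centre of mass
    bbox = (ys.min(), xs.min(), ys.max(), xs.max())
    return {"area": area, "centroid": centroid, "bounding_box": bbox}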

Object Recognition: Recognition is the process that assigns a label (e.g., “vehicle”) to an object based on its
descriptors.
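
A minimal sketch of one simple recognition scheme, a minimum-distance (nearest class mean) classifier
over descriptor vectors; the class prototypes below are hypothetical values chosen purely for illustration:

import numpy as np

# Hypothetical class prototypes: mean descriptor vectors, e.g. (area, elongation).
prototypes = {
    "vehicle":    np.array([5000.0, 2.5]),
    "pedestrian": np.array([1200.0, 4.0]),
}

def classify(descriptor):
    """Assign the label of the closest class prototype (Euclidean distance)."""
    return min(prototypes, key=lambda label: np.linalg.norm(descriptor - prototypes[label]))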

Knowledge Base: It is like a repository with complete details about the regions of an image, where the
information of interest is known to be located, thus limiting the search that has to be conducted in seeking that
information. When there are interrelated lists of information to be accessed about an image, the knowledge
base can also be quite complex.
Digital Image Processing: Module 1

Basic Relationship between Pixels


The basic relationships between pixels p and q of a given image f(x, y) are described by the
following concepts:

 Neighbors of a pixel (see the sketch below)
 Adjacency
 Path
 Connectivity
 Region of an image
 Boundary, border or contour of a region
 Edge
 Distance measures
Edge:
An edge is a feature that distinguishes foreground objects from the background in an image. An edge forms
where there is a contrast between neighboring pixel intensities. In binary images, an edge can coincide with
the region boundary.
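
A minimal sketch (added here, not part of the notes) of the 4-neighborhood N4(p) and 8-neighborhood
N8(p) of a pixel p at coordinates (x, y), clipped to the image bounds:

def neighbors_4(x, y, rows, cols):
    """N4(p): the horizontal and vertical neighbors of pixel (x, y)."""
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(i, j) for i, j in candidates if 0 <= i < rows and 0 <= j < cols]

def neighbors_8(x, y, rows, cols):
    """N8(p): N4(p) plus the four diagonal neighbors ND(p)."""
    candidates = [(x + dx, y + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0)]
    return [(i, j) for i, j in candidates if 0 <= i < rows and 0 <= j < cols]
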
********************************************************
Human Visual System
*** Explain how an image is formed in the Human Visual System.

The eyes are the organs of the visual system which provide vision. The eye forms an image from the light
that enters through its lens and falls on the retina, and converts this image into neural signals. Neurons then
transfer these signals via the optic nerve and other neural pathways to the visual cortex and other regions of
the brain.

The retina functions much like the film in a camera: it receives the image that the cornea and the eye's
internal lens focus onto it. The retina is covered with light receptors called cones (6-7 million) and rods
(75-150 million). Light entering the eye through the cornea falls on the retina, whose photoreceptor cells
serve different sorts of vision. In a normal eye, the cornea and lens bend the incoming rays so that they
converge on a single point of the retina; if the cornea is irregular and cone shaped, rays enter the eye at
different angles and focus on many different points of the retina, causing a blurred, distorted image.

Cones are concentrated in the central part of the retina, around the fovea, whereas rods are distributed more
toward the periphery. Cones are cone-shaped cells that are very sensitive to colour; they serve bright-light,
coloured (photopic) vision. Rods are rod-shaped cells that cannot perceive colour; they are more spread out
over the retina, are sensitive to low levels of illumination, and serve dim-light (scotopic) vision. These two
kinds of photoreceptors convert the visual stimulus into electro-chemical impulses, which are passed to the
optic nerve. The optic nerve carries these impulses to the brain for interpretation. The point where the optic
nerve leaves the retina has no rods or cones, so no image is formed there; hence it is called the blind spot.

Muscles within the eye change the shape of the lens, allowing us to focus on objects that are near or far
away. An image focused onto the retina excites the rods and cones, which ultimately send signals to the
brain.

The human visual system can adapt to an enormous range of light intensity levels, on the order of 10^10.
However, it cannot operate over such a range simultaneously: at any one time, the eye can discriminate only
a limited range of intensities around its current brightness adaptation level. Light itself is just the particular
part of the electromagnetic spectrum that can be sensed by the human eye. In addition, the perceived
intensity of a region is related to the light intensities of the regions surrounding it.
Image Sensing and Acquisition

Most images are generated by the combination of an “illumination” source and the reflection or absorption
of energy from that source by the elements of the “scene” being imaged. Depending on the nature of the
source, illumination energy is reflected from, or transmitted through, the objects. Sensor arrangements are
used to transform the illumination energy into digital images: the incoming energy is transformed into a
voltage by the combination of input electrical power and a sensor material that is responsive to the particular
type of energy being detected. The output voltage waveform is the response of the sensor, and a digital
quantity is obtained from each sensor by digitizing its response.

There are three principal sensor arrangements:


1: Single Imaging Sensors
2: Line Sensors
3: Array Sensors
1: Image Acquisition using a Single Imaging Sensor
The sensor used is a photodiode, constructed of silicon materials, whose output voltage waveform is
proportional to light. A filter placed in front of the sensor improves selectivity. In a typical arrangement, a
film negative is mounted onto a drum whose mechanical rotation provides displacement in one dimension,
while the single sensor is mounted on a lead screw that provides motion in the perpendicular direction. This
mechanical motion is an inexpensive but slow way to obtain high-resolution images.
2: Image Acquisition using Sensor Strips
This arrangement consists of an in-line set of sensors in the form of a sensor strip. The strip provides
imaging elements in one direction, and motion perpendicular to the strip provides imaging in the other
direction. This is the type of arrangement used in most flatbed scanners.
3: Image Acquisition using Sensor Arrays
Individual sensors can be arranged in the form of a 2-D array. The imaging system collects the
incoming energy and focuses it onto an image plane. If the illumination is light, the front end of
the imaging system is a lens, which projects the viewed scene onto the lens focal plane. The
sensor array, which is coincident with the focal plane, produces outputs proportional to the
integral of the light received at each sensor. Digital and analog circuitry sweeps these outputs
and converts them to a voltage signal, which is then digitized by another section of the imaging
system. This output is a digital image.
Sampling and quantization
Sampling and quantization are the two important processes used to convert a continuous analog image
into a digital image.

The process of digitizing the co-ordinate values is called Sampling.


The process of digitizing the amplitude values is called Quantization.

Consider a continuous image f(x, y) which is to be converted into digital form. To do so, we need to
sample the function in both coordinates and in amplitude.

 Consider the image in fig (a) which contains continuous values f(x, y).
 The one dimensional function shown in fig (b) is a plot of amplitude (gray level) values of the
continuous image along the line segment AB in fig (a).
 The random variation is due to the image noise. To sample this function, we take equally spaced
samples along line AB as shown in fig (c).
 In order to form a digital function, the gray-level values must also be converted (quantized) into
discrete quantities. The right side of fig (c) shows the gray-level scale divided into eight discrete
levels, ranging from black to white. The results of both sampling and quantization are shown in fig (d).
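
As a small numeric sketch (the signal and parameters are assumptions chosen for illustration), the snippet
below samples a continuous one-dimensional intensity profile, like the scan line AB above, and quantizes
the sampled amplitudes to eight discrete gray levels:

import numpy as np

def profile(s):
    """Simulated continuous intensity profile along the scan line, values in [0.1, 0.9]."""
    return 0.5 + 0.4 * np.sin(2 * np.pi * s)

s = np.linspace(0.0, 1.0, 16)                 # sampling: 16 equally spaced positions
samples = profile(s)                          # continuous amplitudes at those positions

levels = 8                                    # quantization: 8 discrete gray levels
quantized = np.round(samples * (levels - 1)).astype(int)   # integer levels in 0..7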
