DIP Unit 1

UNIT-1: DIGITAL IMAGE FUNDAMENTALS: Steps in Digital Image Processing - Components - Image Sensing and Acquisition - Image Sampling and Quantization - Relationships between Pixels - Color Image Fundamentals - RGB, HSI Models.

DIGITAL IMAGE FUNDAMENTALS:

The field of digital image processing refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are called picture elements, image elements, pels, or pixels.
Pixel is the term used most widely to denote the elements of a digital image.
An image is a two-dimensional function that represents a measure of some characteristic, such as brightness or color, of a viewed scene. An image is a projection of a 3-D scene onto a 2-D projection plane.
An image may be defined as a two-dimensional function f(x,y), where x and y are spatial
(plane) coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the
intensity of the image at that point.

The term gray level is used often to refer to the intensity of monochrome images. Color
images are formed by a combination of individual 2-D images.
For example, in the RGB color system a color image consists of three individual component images (red, green and blue). For this reason, many of the techniques developed for monochrome images can be extended to color images by processing the three component images individually.

An image may be continuous with respect to the x- and y- coordinates and also in
amplitude. Converting such an image to digital form requires that the coordinates, as well as
the amplitude, be digitized.

APPLICATIONS OF DIGITAL IMAGE PROCESSING

Since digital image processing has very wide applications and almost all of the technical fields
are impacted by DIP, we will just discuss some of the major applications of DIP.
Digital image processing has a broad spectrum of applications, such as
• Remote sensing via satellites and other spacecraft
• Image transmission and storage for business applications
• Medical processing
• RADAR (Radio Detection and Ranging)
• SONAR (Sound Navigation and Ranging)
• Acoustic image processing (the study of underwater sound is known as underwater acoustics or hydroacoustics)
• Robotics and automated inspection of industrial parts.
Images acquired by satellites are useful in:
• Tracking of Earth resources
• Geographical mapping
• Prediction of agricultural crops
• Urban growth and weather monitoring
• Flood and fire control and many other environmental applications
Space image applications include:
• Recognition and analysis of objects contained in images obtained from deep space-probe missions.
Image transmission and storage applications occur in:
• Broadcast television
• Teleconferencing
• Transmission of facsimile images (printed documents and graphics) for office automation
• Communication over computer networks
• Closed-circuit television based security monitoring systems
• Military communications

Medical applications:
• Processing of chest X-rays
• Cine angiograms
• Projection images of transaxial tomography
• Medical images that occur in radiology and nuclear magnetic resonance (NMR)
• Ultrasonic scanning

IMAGE PROCESSING TOOLBOX (IPT)


IPT is a collection of functions that extends the capability of the MATLAB numeric computing environment. These functions, and the expressiveness of the MATLAB language, make many image-processing operations easy to write in a compact, clear manner, thus providing an ideal software prototyping environment for the solution of image processing problems.

Components of Image Processing System

Image Sensors: With reference to sensing, two elements are required to acquire digital images. The first is a physical device that is sensitive to the energy radiated by the object we wish to image; the second, called a digitizer, is a device for converting the output of the physical sensing device into digital form.

Specialized image processing hardware: It consists of the digitizer just mentioned, plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), which performs arithmetic (such as addition and subtraction) and logical operations in parallel on images.

Computer: It is a general-purpose computer and can range from a PC to a supercomputer depending on the application. In dedicated applications, sometimes specially designed computers are used to achieve a required level of performance.

Software: It consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code that, as a minimum, utilizes the specialized modules. More sophisticated software packages allow the integration of those modules.

Mass storage: This capability is a must in image processing applications. An image of size 1024 × 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed (a worked check of this arithmetic follows the list below). Image processing applications fall into three principal categories of storage:
• Short-term storage for use during processing
• On-line storage for relatively fast retrieval
• Archival storage, such as magnetic tapes and disks
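
As a quick check of the storage figure above, here is a minimal sketch in plain Python (no external libraries) of the arithmetic:

rows, cols, bits_per_pixel = 1024, 1024, 8   # 8-bit intensity per pixel

total_bits = rows * cols * bits_per_pixel    # 8,388,608 bits
total_bytes = total_bits // 8                # 1,048,576 bytes
megabytes = total_bytes / (1024 * 1024)      # exactly 1.0 MB (uncompressed)

print(total_bytes, "bytes =", megabytes, "MB")   # 1048576 bytes = 1.0 MB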

Image display: Image displays in use today are mainly color TV monitors. These monitors are driven by the outputs of image and graphics display cards that are an integral part of the computer system.

Hardcopy devices: The devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks. Film provides the highest possible resolution, but paper is the obvious medium of choice for written applications.

Networking: It is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth.

Fundamental Steps in Digital Image Processing:


There are two categories of steps involved in image processing:
1. Methods whose inputs and outputs are images.
2. Methods whose outputs are attributes extracted from those images.

Image acquisition:
The image is captured by a camera and digitized (if the camera output is not already digital) using an analogue-to-digital converter for further processing in a computer. A short loading sketch follows.
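
As an illustrative sketch (not part of the original notes), this is how an already-digitized image might be read into a program for further processing, assuming the Pillow and NumPy libraries are installed; "example.jpg" is a placeholder file name:

import numpy as np
from PIL import Image

img = Image.open("example.jpg")   # the camera/scanner output, already digitized
f = np.asarray(img)               # f(x, y): an array of pixel values
print(f.shape, f.dtype)           # e.g. (480, 640, 3) uint8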

Image Enhancement:
In this step, the acquired image is manipulated to meet the requirements of the specific task for which the image will be used. Such techniques are primarily aimed at highlighting hidden or important details in an image, for example by contrast and brightness adjustment. Image enhancement is highly subjective in nature.
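
A minimal sketch of two such subjective adjustments, contrast stretching and a brightness shift, assuming NumPy is available; the pixel values here are invented:

import numpy as np

f = np.array([[50, 80], [120, 180]], dtype=np.uint8)   # toy grayscale image

def stretch_contrast(f):
    # Linearly map [f.min(), f.max()] onto the full [0, 255] range.
    fmin, fmax = int(f.min()), int(f.max())
    g = (f.astype(np.float64) - fmin) / (fmax - fmin)
    return np.round(g * 255).astype(np.uint8)

print(stretch_contrast(f))   # 50 -> 0, 180 -> 255
brighter = np.clip(f.astype(np.int32) + 40, 0, 255).astype(np.uint8)  # brightness shift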

Image Restoration:
This step deals with improving the appearance of an image and, unlike enhancement, is an objective operation, since the degradation of an image can be attributed to a mathematical or probabilistic model. Examples include removing noise or blur from images (see the sketch below).
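
A minimal sketch of one objective restoration step, median filtering of salt-and-pepper noise, assuming SciPy is available; the noisy image here is invented:

import numpy as np
from scipy.ndimage import median_filter

noisy = np.full((5, 5), 100, dtype=np.uint8)
noisy[1, 2] = 255                          # a "salt" outlier
noisy[3, 1] = 0                            # a "pepper" outlier

restored = median_filter(noisy, size=3)    # each pixel -> median of its 3x3 window
print(restored)                            # the two outliers are removed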

Color image processing:


It is an area that has been gaining importance because of the use of digital images over the internet. Color image processing deals basically with color models and their implementation in image processing applications. This step aims at handling the processing of colored images (16-bit RGB or RGBA images), for example, performing color correction or color modeling in images.

Wavelets and Multi resolution Processing:


Wavelets are the building blocks for representing images in various degrees of resolution. Images are subdivided successively into smaller regions for data compression and for pyramidal representation.

Compression:
For transferring images to other devices, or due to computational and storage constraints, images need to be compressed rather than kept at their original size. This is also important in displaying images over the internet; for example, on Google, a small thumbnail of an image is a highly compressed version of the original, and only when you click on the image is it shown at the original resolution. This process saves bandwidth on the servers.
It has two major approaches: a) lossless compression and b) lossy compression, sketched below.
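
A minimal sketch contrasting the two approaches with the Pillow library, assuming it is installed; the image content and file names are placeholders:

import numpy as np
from PIL import Image

img = Image.fromarray(np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8))

img.save("out.png")               # a) lossless: PNG decodes bit-for-bit identical
img.save("out.jpg", quality=60)   # b) lossy: JPEG is smaller but discards detail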

Morphological processing:
Image components that are useful in the representation and description of shape need to be extracted for further processing or downstream tasks. Morphological processing provides the tools (which are essentially mathematical operations) to accomplish this. For example, the erosion and dilation operations shrink and grow the boundaries of objects in an image, respectively (see the sketch below).
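
A minimal sketch of erosion and dilation on a binary image, assuming SciPy is available; the 3x3 square "object" is invented:

import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

obj = np.zeros((7, 7), dtype=bool)
obj[2:5, 2:5] = True                   # a 3x3 square object

print(binary_erosion(obj).sum())       # 1: the square shrinks to its centre pixel
print(binary_dilation(obj).sum())      # 21: the square grows outward (cross-shaped SE)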

Image Segmentation:
This step involves partitioning an image into different key parts, to simplify and/or change the representation of the image into something that is more meaningful and easier to analyze. Image segmentation allows computers to focus attention on the more important parts of the image, discarding the rest, which enables automated systems to perform better (a thresholding sketch follows).
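
A minimal sketch of the simplest segmentation method, global thresholding, using NumPy with invented pixel values:

import numpy as np

f = np.array([[ 20,  30, 200],
              [ 25, 210, 220],
              [ 15,  18, 190]], dtype=np.uint8)

mask = f > 128               # True = foreground (important part), False = background
print(mask.astype(int))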

Representation and Description:


Image segmentation procedures are generally followed by this step, where the task for representation is to decide whether the segmented region should be depicted as a boundary or as a complete region. Description deals with extracting attributes that result in some quantitative information of interest, or that are basic for differentiating one class of objects from another.

Object detection and Recognition:


After the objects are segmented from an image and the representation and description phases are complete, the automated system needs to assign a label to each object, to let the human users know what object has been detected, for example, "vehicle" or "person".

Knowledge base:
Knowledge about a problem domain is coded into an image processing system in the form of a knowledge base. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base can also be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with change-detection applications.

A Simple Image Model:

An image is denoted by a two-dimensional function of the form f(x, y). The value or amplitude of f at spatial coordinates (x, y) is a positive scalar quantity whose physical meaning is determined by the source of the image.
When an image is generated by a physical process, its values are proportional to energy radiated by a physical source. As a consequence, f(x, y) must be nonzero and finite; that is, 0 < f(x, y) < ∞. The function f(x, y) may be characterized by two components:

• The amount of source illumination incident on the scene being viewed.
• The amount of the source illumination reflected back by the objects in the scene.

These are called the illumination and reflectance components and are denoted by i(x, y) and r(x, y), respectively.

The two functions combine as a product to form f(x, y) = i(x, y) r(x, y). We call the intensity of a monochrome image at any coordinates (x, y) the gray level l of the image at that point: l = f(x, y), with
Lmin ≤ l ≤ Lmax
where Lmin is positive and Lmax is finite:
Lmin = imin rmin
Lmax = imax rmax
The interval [Lmin, Lmax] is called the gray scale. Common practice is to shift this interval numerically to the interval [0, L-1], where l = 0 is considered black and l = L-1 is considered white on the gray scale. All intermediate values are shades of gray varying from black to white (see the sketch below).
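
A minimal sketch of this model and of the shift onto [0, L-1], using invented illumination and reflectance values:

import numpy as np

i = np.array([[900.0, 950.0], [980.0, 1000.0]])   # illumination, 0 < i(x,y) < infinity
r = np.array([[0.10, 0.45], [0.80, 0.95]])        # reflectance, 0 < r(x,y) < 1
f = i * r                                          # f(x, y) = i(x, y) * r(x, y)

L = 256                                            # number of gray levels
l = np.round((f - f.min()) / (f.max() - f.min()) * (L - 1)).astype(np.uint8)
print(l)                                           # 0 = black, L-1 = 255 = white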

IMAGE SENSING AND ACQUISITION:


Image acquisition is defined as the action of retrieving an image from some source, usually a hardware-based source, for processing. It is the first step in the workflow sequence because, without an image, no processing is possible.
The image that is acquired is completely unprocessed.

The images are generated by the combination of an “illumination” source and the reflection or
absorption of energy from that source by the elements of the “scene” being imaged.
Depending on the nature of the source, illumination energy is reflected from, or transmitted
through, objects.
Incoming energy is transformed into a voltage by the combination of input electrical power
and sensor material that is responsive to the particular type of energy being detected. The
output voltage waveform is the response of the sensor(s), and a digital quantity is obtained
from each sensor by digitizing its response.

1. Image Acquisition using a Single Sensor element:


The figure shows a single sensor element. A filter selects, or gives weight to, particular frequencies of the incoming energy, which represents the radiation reflected by the object. The sensing material inside is sensitive to light, behaves like a photodiode, and produces an output voltage; a power supply and housing complete the element.

In order to capture a 2-D image with this single sensor, we need displacement in both the x and y directions of the image. For that, a mechanical motion-based setup is used, as shown in the second figure.

As the figure shows, the film is mounted onto a drum whose rotation about its own axis provides motion in one direction, while the single sensor is mounted on a rod that provides motion perpendicular to the film motion. Finally, light inside the drum passes through the film, and the sensor detects that light at different wavelengths/frequencies; the detected light is converted into voltages, which ultimately yield the 2-D image from a single sensor.

2. Image Acquisition using Sensor Strip:


In a sensor strip, a series of sensors is placed in a line along the horizontal direction, forming a row of sensors. This reduces the time required to sense the object compared with a single sensor, because of the motion the single sensor and drum require. With the strip, we only need motion in one direction, perpendicular to the sensors, to read the image and sense the object, using the same principle as the single-sensor method.

Sensor strips mounted in a ring configuration are used in medical and industrial imaging to obtain cross-sectional ("slice") images of 3-D objects.

3. Image Acquisition using Sensor Arrays:


Here the individual sensors are arranged in the form of a 2-D array. Numerous electromagnetic and some ultrasonic sensing devices are frequently arranged in an array format. This is also the predominant arrangement found in digital cameras. A typical sensor for these cameras is a CCD array, which can be manufactured with a broad range of sensing properties and can be packaged in rugged arrays of elements. CCD sensors are widely used in digital cameras and other light-sensing instruments.
The response of each sensor is proportional to the integral of the light energy projected onto the surface of the sensor, a property that is used in astronomical and other applications requiring low-noise images. Noise reduction is achieved by letting the sensor integrate the input light signal over minutes or even hours.

The sensor array consists of a 2-D arrangement of sensors in a plane, as shown in the figure below; no movement of the sensor is required.

Sensor arrays

IMAGE SAMPLING AND QUANTIZATION:

In digital image processing, signals captured from the physical world need to be translated into digital form by a "digitization" process. In order to become suitable for digital processing, an image function f(x, y) must be digitized both spatially and in amplitude. This digitization involves two main processes:

1. Sampling: digitizing the coordinate values is called sampling.

2. Quantization: digitizing the amplitude values is called quantization.

Typically, a frame grabber or digitizer is used to sample and quantize the analogue video signal.

1. Sampling

Since an analogue image is continuous not only in its coordinates (x axis) but also in its amplitude (y axis), the part of digitization that deals with the coordinates is known as sampling. In digitizing, sampling is done on the independent variable; in the case of the equation y = sin(x), it is done on the x variable.

When looking at the corresponding graph, we can see some random variations in the signal caused by noise. In sampling, we reduce this noise by taking samples: the more samples we take, the better the quality of the image and the more the noise is reduced, and vice versa. However, sampling on the x axis alone does not convert the signal to digital format; you must also sample the y axis, which is known as quantization.

Sampling has a relationship with image pixels. The total number of pixels in an image can be calculated as: pixels = total number of rows × total number of columns. For example, a total of 36 pixels means a square image of 6 × 6. As we know from sampling, more samples eventually result in more pixels, so taking 36 samples of the continuous signal on the x axis corresponds to 36 pixels of the image. The number of samples is also directly equal to the number of sensors on the CCD array.

Here is an example of image sampling and how it can be represented; a short code sketch follows.
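
A minimal sketch of sampling the continuous signal y = sin(x), matching the 36-sample example above, assuming NumPy is available:

import numpy as np

N = 36                              # number of samples (pixels along one line)
x = np.linspace(0, 2 * np.pi, N)    # digitized x coordinates (sampling)
y = np.sin(x)                       # amplitudes are still continuous (not yet quantized)
print(x.shape, y[:4])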

2. Quantization
Quantization is the counterpart of sampling: it is done on the y axis, while sampling is done on the x axis. Quantization is a process of transforming a real-valued sampled image to one taking only a finite number of distinct values. Under the quantization process, the amplitude values of the image are digitized. In simple words, when you quantize an image, you are actually dividing the signal into quanta (partitions).

Now let's see how quantization is done. Here we assign levels to the values generated by the sampling process. In the image shown in the sampling explanation, although the samples had been taken, they still spanned a continuous range of gray-level values vertically. In the image shown below, these vertically ranging values have been quantized into 5 different levels or partitions, ranging from 0 (black) to 4 (white). The number of levels can vary according to the type of image you want.
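
A minimal sketch of quantizing sampled amplitudes into k = 5 levels (0 = black ... 4 = white), continuing the sampling sketch above:

import numpy as np

y = np.sin(np.linspace(0, 2 * np.pi, 36))             # sampled, still real-valued
k = 5
levels = np.round((y + 1) / 2 * (k - 1)).astype(int)  # map [-1, 1] onto {0, ..., 4}
print(levels.min(), levels.max())                     # 0 4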

There is a relationship between quantization and gray-level resolution. The quantized image above represents 5 different levels of gray, which means the image formed from this signal would have only 5 distinct shades: more or less a black-and-white image with some shades of gray.

When we want to improve the quality of an image, we can increase the number of levels assigned to the sampled image. If we increase this to 256 levels, we have a grayscale image. Whatever level we assign is called the gray level. Most digital image processing devices use quantization into k equal intervals; if b bits per pixel are used, the number of quantization levels is k = 2^b.

The number of quantization levels should be high enough for human perception of fine shading details in the image. The occurrence of false contours is the main problem in images that have been quantized with insufficient brightness levels. Here is an example of the image quantization process.

RELATIONSHIP BETWEEN PIXELS:


An image is denoted by f(x,y) and p,q are used to represent individual pixels of the image.
NEIGHBORS OF A PIXEL
• A pixel p at coordinates (x,y) has four horizontal and vertical neighbors whose
coordinates are given by: (x+1,y), (x-1, y), (x, y+1), (x,y-1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p). Each pixel is a unit distance from (x, y), and some of the neighbors of p lie outside the digital image if (x, y) is on the border of the image. The four diagonal neighbors of p have coordinates
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
and are denoted by ND(p).

These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by
N8 (p).

As before, some of the points in ND (p) and N8 (p) fall outside the image if (x,y) is on the
border of the image.
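
A minimal sketch of these neighbor sets as plain Python functions (note that, as stated above, some returned coordinates fall outside the image when p is on the border):

def n4(p):                     # 4-neighbors of p
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):                     # diagonal neighbors of p
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):                     # 8-neighbors: union of N4(p) and ND(p)
    return n4(p) | nd(p)

print(sorted(n8((1, 1))))      # the 8 neighbors of pixel (1, 1)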

ADJACENCY BETWEEN PIXELS:

Let V be the set of intensity values used to define adjacency.

In a binary image, V = {1} if we are referring to adjacency of pixels with value 1. In a gray-scale image the idea is the same, but set V typically contains more elements.

For example, in the adjacency of pixels with a range of possible intensity values 0
to 255, set V could be any subset of these 256 values.

We consider three types of adjacency:

a) 4-adjacency: Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).

b) 8-adjacency: Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).

c) m-adjacency (mixed adjacency): Two pixels p and q with values from V are m-adjacent if

1. q is in N4(p), or

2. q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used.

For example:

Fig. 1.8: (a) Arrangement of pixels; (b) pixels that are 8-adjacent (shown dashed) to the center pixel; (c) m-adjacency.

In this example, note that when connecting two pixels (finding a path between two pixels):

– In the 8-adjacency case, you can find multiple paths between the two pixels.

– In the m-adjacency case, you can find only one path between the two pixels.

• So m-adjacency eliminates the multiple-path connections that arise with 8-adjacency (a code sketch of the adjacency tests follows below).

• Two subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2. Adjacent means either 4-, 8- or m-adjacency.
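
A minimal sketch of the three adjacency tests, reusing the n4/nd/n8 helpers from the neighbor sketch above; for brevity the image is a dict mapping (x, y) to a value, so out-of-image lookups simply return None:

def adjacent4(img, p, q, V):
    return img.get(p) in V and img.get(q) in V and q in n4(p)

def adjacent8(img, p, q, V):
    return img.get(p) in V and img.get(q) in V and q in n8(p)

def adjacent_m(img, p, q, V):
    if img.get(p) not in V or img.get(q) not in V:
        return False
    if q in n4(p):                       # rule 1: q is a 4-neighbor of p
        return True
    if q in nd(p):                       # rule 2: diagonal neighbor, with no
        shared = n4(p) & n4(q)           # shared 4-neighbor whose value is in V
        return all(img.get(s) not in V for s in shared)
    return False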

CONNECTIVITY BETWEEN PIXELS:

It is an important concept in digital image processing.

It is used for establishing boundaries of objects and components of regions in an image.

Two pixels are said to be connected:

• if they are adjacent in some sense (neighboring pixels, 4/8/m-adjacency), and

• if their gray levels satisfy a specified criterion of similarity (e.g., equal intensity levels).

There are three types of connectivity on the basis of adjacency (a labeling sketch follows this list). They are:

a) 4-connectivity: Two or more pixels are said to be 4-connected if they are 4-adjacent to each other.

b) 8-connectivity: Two or more pixels are said to be 8-connected if they are 8-adjacent to each other.

c) m-connectivity: Two or more pixels are said to be m-connected if they are m-adjacent to each other.
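
A minimal sketch of how connectivity groups pixels into connected components, assuming SciPy is available; the diagonal pattern here is invented:

import numpy as np
from scipy.ndimage import label

img = np.array([[1, 0, 0],
                [0, 1, 0],
                [0, 0, 1]])

labels4, count4 = label(img)                              # default: 4-connectivity
labels8, count8 = label(img, structure=np.ones((3, 3)))   # 8-connectivity

print(count4, count8)    # 3 components under 4-connectivity, 1 under 8-connectivity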

COLOR MODELS:

There are different ways to model color.

Two very popular models used in color image processing:

– RGB (Red Green Blue)

– HSI (Hue Saturation Intensity)

1.RGB MODEL:

In the RGB model, each color appears in its primary spectral components of red, green and blue.
The model is based on a Cartesian coordinate system.
Red, green and blue values are at three corners of a cube; cyan, magenta and yellow are at three other corners.
Black is at the origin, and white is the corner farthest from the origin.
Different colors are points on or inside the cube, represented by RGB vectors.

Images represented in the RGB color model consist of three component images, one for each primary color. When fed into a monitor, these images are combined to create a composite color image. The number of bits used to represent each pixel is referred to as the color depth. A 24-bit image is often referred to as a full-color image, as it allows 2^24 = 16,777,216 colors.

Generating RGB image:
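
A minimal sketch of generating an RGB image by stacking three component images, using invented 2x2 channels and assuming NumPy is available:

import numpy as np

red   = np.array([[255, 0], [0, 0]], dtype=np.uint8)
green = np.array([[0, 255], [0, 0]], dtype=np.uint8)
blue  = np.array([[0, 0], [255, 0]], dtype=np.uint8)

rgb = np.dstack([red, green, blue])   # shape (2, 2, 3): a 24-bit full-color image
print(rgb[0, 0])                      # [255 0 0] -> a pure red pixel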

2.HSI MODEL:

The characteristics generally used to distinguish one color from another are brightness, hue, and saturation. The RGB and CMY models are well suited for hardware implementation, but they are not well suited for describing colors in terms that are practical for human interpretation.
For example, one does not refer to the color of an object by giving the percentage of each of the primaries composing its color.

The HSI (hue, saturation, intensity) color model decouples the intensity component from the color-carrying information (hue and saturation) in a color image.
The HSI model is an ideal tool for developing image processing algorithms based on color descriptions that are natural and intuitive to humans (a conversion sketch follows the definitions below).
a. Hue: a color attribute that describes a pure color (pure yellow, orange or red).
b. Saturation: gives a measure of the degree to which a pure color is diluted with white light.
c. Intensity: brightness is nearly impossible to measure because it is so subjective; instead we use intensity. Intensity is the same achromatic notion that we have seen in gray-level images.
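
A minimal sketch of the classic RGB-to-HSI conversion for a single pixel, with R, G, B normalized to [0, 1] and hue returned in degrees; this follows the standard textbook formulas rather than anything given explicitly in these notes:

import math

def rgb_to_hsi(r, g, b):
    i = (r + g + b) / 3.0                              # intensity
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i      # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    cosv = max(-1.0, min(1.0, num / den)) if den else 1.0
    theta = math.degrees(math.acos(cosv))
    h = theta if b <= g else 360.0 - theta             # hue
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))   # pure red -> (0.0, 1.0, 0.333...)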
