Chapter 2: CV


Layout…

Basic concept of image


Digital image Representation
Digital image acquisition process
Image sampling and quantization
Representation of different image types
Mathematical Tools used in Digital Image Processing
Digital image processing
Con’t…
Out of all these signals, the field that deals with signals whose input is an image and whose output is also an image is image processing.
As its name suggests, it deals with processing images.
The basic signal operations
Adder
Constant multiplier
Delay
Time window or modulation
General steps in signal processing
Signal generation:
Signal amplification and processing
Signal transmission
Signal perception
Signal storage
Analog image processing and Digital image processing
Analog image processing is done on analog signals.
Analog signals are continuous electrical signals.
It involves processing two-dimensional analog signals.
These include such things as photographs, paintings, TV images, and all of our medical images recorded on film or displayed on various display devices, like computer monitors.
An analog image is generally continuous and not broken into many small individual pieces.
DIP
Digital image processing deals with developing a digital system that performs operations on a digital image.
Digital signals are discrete (non-continuous) electrical signals.
Digital images are recorded as many numbers.
The image is divided into a matrix or array of small picture elements, or pixels.
Each pixel is represented by a numerical value.
Con’t…
Picture elements, Image elements, pels, and
pixels
A digital image is composed of a finite
number of elements, each of which has a
particular location and value.
These elements are referred to as picture
elements, image elements, pels, and pixels.
Pixel is the term most widely used to denote
the elements of a digital image.
Con’t…
The general aim of any image acquisition is to transform a real-world image into an array of numerical data that can later be manipulated on a computer.
Image acquisition is achieved by suitable
cameras.
We use different cameras for different
applications.
If we need an X-ray image, we use a camera that is
sensitive to X-rays.
Con’t…

If we want an infrared image, we use cameras that are sensitive to infrared radiation.
For normal images (family pictures, etc.), we use cameras that are sensitive to the visual spectrum.
Visual spectrum
The visible light spectrum is the segment
of the electromagnetic spectrum that the
human eye can view.
More simply, this range of wavelengths is
called visible light.
Typically, the human eye can detect
wavelengths from 380 to 700 nanometers.
Con’t…
Con’t…

Figure: what humans see vs. what the computer sees.
Image
Example of a digital image
But actually, this image is nothing but a two-dimensional array of numbers ranging between 0 and 255, for example:

128 230 123
232 123 221
123  77  89
 80 255 255
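A minimal sketch (not from the slides) of how such a grid of numbers can be held and inspected in Python with NumPy; the values are the ones from the example above:

```python
import numpy as np

# The example image: a 4 x 3 array of intensities in the range 0-255.
img = np.array([[128, 230, 123],
                [232, 123, 221],
                [123,  77,  89],
                [ 80, 255, 255]], dtype=np.uint8)

print(img.shape)             # (4, 3) -> 4 rows, 3 columns
print(img[0, 1])             # 230 -> intensity of the pixel at row 0, column 1
print(img.min(), img.max())  # 77 255 -> darkest and brightest values
```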
Con’t…
Image-digital image
An image is a two-dimensional function f(x,y), where x and y are the spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity of the image at that point.
If x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.
A digital image is composed of a finite number of elements called pixels, each of which has a particular location and value.
Con’t…
A digital image is an image or picture represented digitally, specifically as a grid of pixels.
Digital image processing is the technology of manipulating these groups of bits (pixels) to enhance the quality of the image, create different perspectives, or extract information from the image, with the help of computer algorithms.
Relationship between a digital image and a signal
If the image is a two dimensional array then what
does it have to do with a signal?
Signal: in the physical world, a signal can be measured over time or over space.
A signal is a mathematical function, and it conveys (contains) some information.
A signal can be one-dimensional, two-dimensional, or higher-dimensional.
A one-dimensional signal is a signal that is measured over time, e.g. a voice signal.
Con’t…
Two-dimensional signals are those that are measured over other physical quantities (e.g. space); a digital image is an example.
Relationship: anything that conveys information or broadcasts a message in the physical world between two observers is a signal.
That includes speech (the human voice) or an image.
Con’t…
When we speak, our voice is converted into a sound wave (a signal) and transmitted over time to the person we are speaking to.
A digital camera works the same way: acquiring an image from a digital camera involves transferring a signal from one part of the system to another.
Basic concept of image
Dimension
Channel
Pixel
Basic concept of image

Dimension of image
Image dimensions are the length and width of a digital image.
It does not have depth.
An image is only two-dimensional; that is why an image is defined as a 2-dimensional signal.
Con’t…
Dimension of image
Any object that has length and height is a two-dimensional signal.
It has two independent variables: f(x, y) = Object.
Con’t…
Channel of image
A channel carries the color information of an image.
A digital color image is formed by the combination of 3 color channels, e.g. Red/Green/Blue.
In other words, each 2-D color image can be represented as a combination of 3 separate grayscale images, each with information about a separate color.
Images are usually represented as Height × Width × #Channels, where #Channels is 3 for RGB images and 1 for grayscale images; some tools use Width × Height × #Channels instead, but the third dimension is always the channels.
Con’t…
An RGB image has three channels
The red channel is an array of values that
specifies the intensity of red for each pixel in
the image.
The green and blue channels similarly specify
the intensity of green and blue, respectively,
for each pixel.
When the three channels are combined, you
get a full color image.
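A small illustrative sketch, assuming the Height × Width × #Channels layout described above, of splitting a tiny RGB array into its red, green, and blue channels and recombining them (the pixel values are our own):

```python
import numpy as np

# A tiny 2 x 2 RGB image in Height x Width x Channels order.
rgb = np.array([[[255,   0,   0], [  0, 255,   0]],
                [[  0,   0, 255], [255, 255, 255]]], dtype=np.uint8)

# Each channel is a 2-D grayscale array holding one color's intensities.
red   = rgb[:, :, 0]
green = rgb[:, :, 1]
blue  = rgb[:, :, 2]

# Stacking the three channels along the last axis rebuilds the color image.
recombined = np.stack([red, green, blue], axis=-1)
assert (recombined == rgb).all()
print(rgb.shape)   # (2, 2, 3) -> Height x Width x #Channels
```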
Basic concept of image - pixel
What is a Pixel?
Pixels are arranged in a 2-dimensional grid, represented
using squares.
Each pixel is a sample of an original image.
The intensity of each pixel is variable; in color systems, each pixel typically has three or four components, such as red, green, and blue, or cyan, magenta, yellow, and black.
The word pixel is a contraction of pix ("pictures") and el (for "element").
Con’t…
The number of pixels determines the resolution of a
computer monitor or TV screen
Generally the more pixels, the clearer and sharper
the image
The number of pixels is calculated by multiplying the
horizontal and vertical pixel measurements
The physical size of a pixel depends on the set
resolution for the display screen
What is resolution, and what is its relation to pixels?
How do we calculate the total number of pixels in an image? (See the sketch below.)
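To answer the second question, a short sketch of how the total pixel count follows from the horizontal and vertical measurements; the 1920 × 1080 numbers are only an assumed example:

```python
# Total pixels = horizontal pixel count x vertical pixel count.
width_px  = 1920   # assumed horizontal resolution
height_px = 1080   # assumed vertical resolution

total_pixels = width_px * height_px
print(total_pixels)        # 2073600 pixels
print(total_pixels / 1e6)  # ~2.07 megapixels
```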
Assignment 1
Discuss in detail the resolutions of the physical devices listed below:
TV
Digital camera
Printer
Phone
Relationship of pixels
Con’t…
Connectivity and Adjacency
We want to know how the pixels are related to each other.
Two pixels that are neighbours and have the same gray level are adjacent.
Adjacency means being next to each other (neighbouring pixels with the same gray level).
Three types of adjacency:
a) 4-adjacency
Two pixels p & q with values from V are 4-adjacent if q is in the set N4(p).
b) 8-adjacency
Two pixels p & q with values from V are 8-adjacent if q is in the set N8(p).
Con’t…
c) m-adjacency (mixed adjacency)
Two pixels p & q with values from V are m-adjacent if:
i. q is in N4(p), or
ii. q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V (i.e. they have no common 4-neighbour in V; the intersection is empty).
m-adjacency is a modification of 8-adjacency.
It is useful for eliminating the ambiguities (such as multiple connecting paths between pixels) that often arise when 8-adjacency is used.
If multiple paths exist (via 4-neighbours and 8-neighbours), priority is given to the 4-neighbour path to reduce ambiguity.
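A minimal sketch, using our own helper names, of a 4-adjacency test that follows the definitions above on a small array:

```python
import numpy as np

img = np.array([[0, 1, 1],
                [0, 2, 0],
                [0, 0, 1]])
V = {1, 2}  # the set of intensity values considered "foreground"

def n4(p):
    """4-neighbours of pixel p = (row, col): up, down, left, right."""
    r, c = p
    return {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)}

def adjacent4(p, q):
    """p and q are 4-adjacent if both values are in V and q is in N4(p)."""
    return img[p] in V and img[q] in V and q in n4(p)

print(adjacent4((0, 1), (0, 2)))  # True: both pixels are 1 and lie side by side
print(adjacent4((0, 2), (1, 1)))  # False: diagonal neighbours are not in N4
```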
Examples: Adjacency and Path
V = {1, 2}
The 3 × 3 image used in the example:
0 1 1
0 2 0
0 0 1
Examples: Adjacency and Path

V = {1, 2}

The 8-paths from (1,3) to (3,3):
(i) (1,3), (1,2), (2,2), (3,3)
(ii) (1,3), (2,2), (3,3)

The m-path from (1,3) to (3,3):
(1,3), (1,2), (2,2), (3,3)
Connectivity
Connectivity is used for:
Connectivity between pixels is a fundamental concept that simplifies the definition of numerous digital image concepts, such as regions and boundaries.
To establish whether two pixels are connected, it must be determined whether they are neighbours and whether their gray levels satisfy a specified criterion of similarity (say, that their gray levels are equal).
For any pixel p in S, the set of pixels in S that
are connected to p, is a connected component of
S.
Con’t…
Let S represent a subset of pixels in an image.
Two pixels p & q are said to be connected in S, if
there exists a path between them consisting entirely
of pixels in S.

Digital path: the sequence of pixels connecting them.
Region
A region in an image is a group of connected
pixels with similar properties.
Let R be a Subset of pixels in an image. We call R a
region of the image if R is a Connected Set.
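A small sketch (our own helper, not part of the slides) that grows a connected component from a starting pixel using 4-adjacency, illustrating how a region is just a connected set of pixels:

```python
import numpy as np
from collections import deque

img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1]])
V = {1}  # pixel values that belong to the foreground

def connected_component(start):
    """All pixels connected to `start` through 4-adjacent pixels with values in V."""
    component, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and img[nr, nc] in V and (nr, nc) not in component):
                component.add((nr, nc))
                queue.append((nr, nc))
    return component

print(sorted(connected_component((0, 0))))  # [(0, 0), (0, 1), (1, 1)] -> one region
print(sorted(connected_component((1, 3))))  # [(1, 3), (2, 3)] -> a separate region
```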
Boundary
Boundary tracing, also known as contour tracing, of a
binary digital region can be thought of as
a segmentation technique that identifies
the boundary pixels of the digital region.
Boundary tracing is an important first step in the analysis
of that region.
The boundary of a region R is the set of pixels in R that have one or more neighbours that are not in R.
Con’t…
Digital image representation
An image can be defined as a 2-D signal that varies over the spatial coordinates x and y, and is mathematically written as f(x,y).
Con’t…

Generally, an image is represented as f(x,y).
The value of the function f(x,y) at every point, indexed by a row and a column, is called the grey value or intensity of the image:

f(x,y) =
| f(0,0)      f(0,1)      ...   f(0,N-1)   |
| f(1,0)      f(1,1)      ...   f(1,N-1)   |
| ...         ...         ...   ...        |
| f(M-1,0)    f(M-1,1)    ...   f(M-1,N-1) |

where M is the number of rows and N is the number of columns.
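A brief sketch of this M × N matrix view in NumPy; the grey values and the 3 × 4 size are assumptions for illustration:

```python
import numpy as np

# f(x, y) stored as an M x N matrix of grey values (M rows, N columns).
f = np.array([[ 10,  20,  30,  40],
              [ 50,  60,  70,  80],
              [ 90, 100, 110, 120]], dtype=np.uint8)

M, N = f.shape   # M = 3 rows, N = 4 columns
print(M, N)      # 3 4
print(f[1, 2])   # 70 -> grey value / intensity at row 1, column 2
```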
Resolution
Resolution is an important characteristic of an imaging system.
It is the ability of an imaging system to reproduce the smallest discernible detail: the smallest object that can be seen clearly and differentiated from the neighbouring small objects present in the image.
The number of rows in a digital image is called the vertical resolution.
The number of columns in a digital image is called the horizontal resolution.
Con’t…
Image resolution depends on two factors:
optical resolution of the lens
spatial resolution; a useful way to state spatial resolution is in line pairs per unit distance
Spatial resolution also depends on two parameters:
the number of pixels of the image
the number of bits necessary for adequate intensity resolution, referred to as bit depth
Con’t…
The number of bits necessary to encode a pixel's value is called the bit depth.
So the total number of bits necessary to represent the image is: number of rows × number of columns × bit depth.
Figure: a 2-D image and a 3-D image.
For a binary image, one bit is sufficient for representing each pixel value.
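A worked sketch of the storage formula above; the 1024 × 768 size and the 8-bit depth are assumed values:

```python
def image_size_bits(rows, cols, bit_depth):
    """Total bits = number of rows x number of columns x bit depth."""
    return rows * cols * bit_depth

# Assumed 1024 x 768 grayscale image with 8 bits per pixel.
bits = image_size_bits(1024, 768, 8)
print(bits, "bits =", bits // 8, "bytes")     # 6291456 bits = 786432 bytes

# For a binary image, one bit per pixel is sufficient.
print(image_size_bits(1024, 768, 1), "bits")  # 786432 bits
```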
Exercise: 2
Digital image operation
Digital image operation

Image processing operation: Image → Operation → Image
Brightness enhancement and contrast manipulation are examples of image processing operations.
Image analysis operation: Image → Operation → Numerical data
Examples of image analysis operations are the histogram of an image and counting objects.
Con’t…
Image processing and image understanding operations take information from the image.
Knowledge of the domain is also combined with the image information.
Many intelligent applications can be designed this way.
Generally, image processing operations are divided into two categories:
Low-level operations
High-level operations
Con’t…
Con’t…
Low-level image processing is associated with traditional image processing.
High-level image processing deals with image understanding.
The process of image understanding can be understood as:
constructing a model of a real-world object or scene
constructing a model from the image
a matching process between the model created from the image and the real-world model
a feedback mechanism which invokes additional routines to update the model if required
Con’t…

This process is performed iteratively until the model converges to achieve the global goal.
These tasks are quite complex and intensive.
Digital image representation
Con’t…
• A point on the 2-D grid is called a pixel or pel.
• Digital images are represented by arrays of discrete points on a rectangular grid: a 2-D image.
Con’t…
In the picture above, there may be thousands of pixels that together make up the image.
If we zoom into the image far enough, we are able to see the individual pixel divisions.
Con’t…
How many pixels are sufficient?
Digital images consist of pixels.
On a square grid, each pixel represents a square region of the image.
Figure: the same image with a) 3 × 4, b) 12 × 16, c) 48 × 64, and d) 192 × 256 pixels.
If the image contains sufficient pixels, it appears to be continuous.
Digital Image Acquisition process
Before any video or image processing can commence, an image must be captured by a camera and converted into a manageable entity.
This is the process known as image acquisition.
The image acquisition process consists of three steps:
Energy reflected from the object of interest
An optical system which focuses the energy
A sensor which measures the amount of energy
Con’t…
Energy:-
In order to capture an image a
camera requires some sort of
measurable energy.
The energy of interest in this
context is light or more generally
electromagnetic waves(EM).
An EM wave can have different
wavelengths (or different energy
levels or different frequencies).
Con’t…

The range from approximately 400-700 nm (nm = nanometre = 10^-9 m) is denoted the visual spectrum.
The EM waves within this range are those your eye (and most cameras) can detect.
This means that the light from the sun (or a lamp) is in principle the same kind of signal as that used for transmitting TV, radio, or mobile phone signals, etc.
The Optical System
 The light reflected from the object now has to be
captured by the camera.
 If a material sensitive to the reflected light is placed
close to the object, an image of the object will be
captured.
Con’t…
 One of the main ingredients in the optical system
is the lens.
 A lens is basically a piece of glass which focuses
the incoming light onto the sensor
 A high number of light rays with slightly different
incident angles collide with each point on the
object’s surface and some of these are reflected
toward the optics.
 This means that an image of the object is formed
to the right of the lens and it is this image the
camera captures by placing a sensor at exactly
this position.
A Simple Image formation model
 We shall denote images as a 2D function
f(x,y).
 The amplitude or value of f at spatial
coordinates (x,y) is a positive scalar quantity
whose physical meaning is determined by
the source of the image.
 f(x,y) must be nonzero and finite; that is,
 0 < f(x,y) < ∞
A Simple Image formation model

The function f(x,y) may be characterized by two components:
1. The amount of source illumination incident on the scene being viewed, and
2. The amount of illumination reflected by the objects in the scene.
These are called the illumination and reflectance components, denoted by i(x,y) and r(x,y), respectively.
The two functions combine as a product to form f(x,y) = i(x,y) r(x,y).
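A small sketch of the illumination-reflectance model f(x,y) = i(x,y) r(x,y); the values of i and r below are arbitrary assumptions:

```python
import numpy as np

# Illumination i(x, y): amount of source light falling on the scene (>= 0).
i = np.array([[100.0, 200.0],
              [300.0, 400.0]])

# Reflectance r(x, y): fraction of light reflected by the objects, in (0, 1).
r = np.array([[0.10, 0.50],
              [0.90, 0.25]])

# The two components combine as a product to form the image function.
f = i * r
print(f)   # [[ 10. 100.]
           #  [270. 100.]]
```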
Need of Sampling and Quantization in DIP
 Mostly, the output of image sensors is in the form of an analog signal.
 Now the problem is that we cannot apply digital image processing and its techniques to analog signals.
 This is because we cannot store the output of image sensors in analog form; it would require infinite memory to store a signal that can take infinitely many values.
Need of Sampling and Quantization in DIP
So, we have to convert this analog signal into a digital signal.
To create a digital image, we need to convert the continuous data into digital form.
This conversion from analog to digital
involves two processes:
Sampling
Quantization.
Sampling and Quantization in DIP
Con’t…

Figure: Generating a digital image. a) Continuous image. b) A scan line from A to B in the continuous image, used to illustrate the concepts of sampling and quantization. c) ...
Con’t…

Figure: a) Continuous image projected onto a sensor array. b) Result of image sampling and quantization.
Con’t…

In any part of an image there can be infinitely many points, so we cannot store the analog image exactly, as it would need infinite memory.
Increasing the number of parts into which a particular region of the image is split increases the resolution.
Increasing the number of parts (pixels) of a digital image is called zooming.
Difference between Sampling and Quantization
Sampling is done prior to the quantization process.
Quantization reduces the number of values required to be stored.
Quantization determines how many different colors the image can have.
Quantization maps a larger set of values to a smaller set, i.e. rounding the numbers.
Con’t…
Digitization means the numbers have to be discrete, e.g. 20.
Digitizing the coordinate values is called sampling.
Digitizing the amplitude or intensity (brightness) values is called quantization.
In the sampling process, a single amplitude value is selected from each time interval to represent it, while in quantization the values representing the time intervals are rounded off to create a finite set of possible amplitude values.
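A compact sketch of both steps on a one-dimensional signal: sampling keeps a finite set of coordinate positions, and quantization rounds each amplitude to one of a finite set of levels. The sine signal, the sampling step, and the 8 levels are all assumptions for illustration:

```python
import numpy as np

# A "continuous" signal, here approximated by a dense sine wave in [0, 1].
t = np.linspace(0.0, 1.0, 1000)
signal = 0.5 + 0.5 * np.sin(2 * np.pi * t)

# Sampling: keep only a finite set of coordinate positions (every 100th point).
samples = signal[::100]   # 10 samples

# Quantization: round each amplitude to one of 8 discrete levels (3 bits).
levels = 8
quantized = np.round(samples * (levels - 1)) / (levels - 1)

print(samples.round(3))    # sampled but still finely graded amplitudes
print(quantized)           # amplitudes restricted to 8 possible values
```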
Representation of different image types
The binary image
The binary image, as its name states, contains only two pixel values, 0 and 1.
It is an image whose pixels have only two possible intensity values.
Here 0 refers to black and 1 refers to white.
It is also known as a monochrome image.
Black and white image:
The resulting image hence consists of only black and white, and thus can also be called a black-and-white image.
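A minimal sketch of producing a binary (black-and-white) image by thresholding a grayscale array; the array and the threshold of 128 are assumptions:

```python
import numpy as np

gray = np.array([[ 12, 200,  90],
                 [250,  30, 140]], dtype=np.uint8)

# Threshold: pixels at or above 128 become 1 (white), the rest 0 (black).
binary = (gray >= 128).astype(np.uint8)
print(binary)
# [[0 1 0]
#  [1 0 1]]
```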
Con’t…
8-bit color format:
It is one of the most famous image formats.
It has 256 different shades of colors in it.
It is commonly known as the grayscale image.
The range of colors in 8 bits varies from 0 to 255, where 0 stands for black, 255 stands for white, and 127 stands for gray.
This format was used initially by early models of the operating system UNIX and the early color Macintoshes.
Con’t…

No gray level
One of the interesting things about the binary image is that there is no gray level in it.
Only two colors, black and white, are found in it.
Format
Binary images are stored in the PBM (Portable Bit Map) format.
Con’t…

16-bit color format:
It is a color image format.
It has 65,536 different colors in it.
It is also known as the high color format.
A 16-bit format is actually divided into three further formats, Red, Green and Blue: the famous RGB format.
How would you distribute 16 bits into three channels?
Con’t…
16-bit color format:
If you distribute it like this: 5 bits for R, 5 bits for G, 5 bits for B, then there is one bit remaining at the end.
So the distribution of the 16 bits has been done like this: 5 bits for R, 6 bits for G, 5 bits for B.
The additional bit that was left over is added to the green channel, because green is the color that is most soothing to the eye among these three colors.
(The distribution may depend on the system.)
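A short sketch of packing one pixel into the 5-6-5 layout described above; the helper and the sample values are our own illustration:

```python
def pack_rgb565(r, g, b):
    """Pack 8-bit R, G, B values into one 16-bit 5-6-5 word."""
    r5 = r >> 3   # keep the top 5 bits of red
    g6 = g >> 2   # keep the top 6 bits of green (green gets the extra bit)
    b5 = b >> 3   # keep the top 5 bits of blue
    return (r5 << 11) | (g6 << 5) | b5

pixel = pack_rgb565(255, 128, 64)
print(hex(pixel))   # 0xfc08 -> one 16-bit word holding all three channels
print(2 ** 16)      # 65536 different colors in a 16-bit format
```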
Con’t…
24-bit color format:
It is also known as the true color format.
Like the 16-bit color format, in a 24-bit color format the 24 bits are distributed among the three channels Red, Green, and Blue.
Since 24 divides equally by 3, 8 bits are given to each of the three color channels.
Their distribution is like this: 8 bits for R, 8 bits for G, 8 bits for B.
This is the most commonly used format.
Its file format is PPM (Portable PixMap), which is supported by the Linux operating system.
The end

Thank you
