DIP 4
Introduction
Subject Coordinator
Dr. Bhupendra Singh Kirar
Department of Electronics and Communication Engineering
E-Mail ID: [email protected]
Google Scholar ID: https://scholar.google.co.in/citations?user=cBu3VZwAAAAJ&hl=en
Scopus ID: 57195508883
ORCID iD: 0000-0002-0417-8709
Digital Image Processing
By
Dr. Bhupendra Singh Kirar
Assistant Professor
Department of Electronics and Communication Engineering
Indian Institute of Information Technology, Bhopal
TOPICS (from the previous lecture)
• Introduction
Cornea
• The cornea is a tough, transparent tissue that covers
the anterior surface of the eye.
Choroid
• The choroid lies directly below the sclera.
• At its anterior extreme, the choroid is divided into the
ciliary body and the iris diaphragm.
Lens
• The lens is made up of concentric layers of fibrous
cells and is suspended by fibers that attach to the
ciliary body.
Retina
• The innermost membrane of the eye is the retina, which lines the inside of the wall's entire posterior portion.
• There are two classes of receptors: cones and rods.
• Muscles controlling the eye rotate the eyeball until
the image of an object of interest falls on the fovea.
IMAGE FORMATION IN THE EYE
• The principal difference between the lens of the eye
and an ordinary optical lens is that the former is
flexible.
• For example, suppose an observer is looking at a tree 15 m high at a distance of 100 m.
BRIGHTNESS ADAPTATION AND DISCRIMINATION
Light and the Electromagnetic Spectrum
• Sir Isaac Newton discovered that when a beam of sunlight passes through a glass prism, the emerging beam of light is not white but consists instead of a continuous spectrum of colors ranging from violet at one end to red at the other.
The electromagnetic spectrum
• The electromagnetic spectrum can be expressed in
terms of wavelength, frequency, or energy.
IMAGE SENSING AND ACQUISITION
Most of the images in which we are interested are generated by the combination of an “illumination”
source and the reflection or absorption of energy from that source by the elements of the “scene” being
imaged.
Illumination may originate from a source of electromagnetic energy, such as a radar, infrared, or X-ray system, or from ultrasound or even a computer-generated illumination pattern.
Depending on the nature of the source, illumination energy is reflected from, or transmitted through,
objects.
An example in the second category (energy transmitted through objects) is X-rays passing through a patient's body to generate a diagnostic X-ray image.
In some applications, the reflected or transmitted energy is focused onto a photoconverter (e.g., a
phosphor screen) that converts the energy into visible light.
Electron microscopy and some applications of gamma imaging use this approach.
IMAGE ACQUISITION USING A SINGLE SENSING ELEMENT
In one high-precision scanning arrangement, a film negative is mounted on a rotating drum, and a single sensor mounted on a lead screw provides motion in the perpendicular direction.
A light source is contained inside the drum. As the light passes through the film, its intensity is modified by
the film density before it is captured by the sensor. This "modulation" of the light intensity causes
corresponding variations in the sensor voltage, which are ultimately converted to image intensity levels by
digitization.
This method is an inexpensive way to obtain high-resolution images because
mechanical motion can be controlled with high precision.
The main disadvantages of this method are that it is slow and not readily portable.
IMAGE ACQUISITION USING SENSOR STRIPS
A geometry used more frequently than single sensors is the in-line sensor strip.
The strip provides imaging elements in one direction. Motion perpendicular to the strip provides imaging
in the other direction.
In-line sensors are used routinely in airborne imaging applications, in which the imaging system is
mounted on an aircraft that flies at a constant altitude and speed over the geographical area to be imaged.
One-dimensional imaging sensor strips that respond to various bands of the electromagnetic spectrum are
mounted perpendicular to the direction of flight.
An imaging strip gives one line of an image at a time, and the motion of the strip relative to the scene
completes the other dimension of a 2-D image.
Sensor strips mounted in a ring configuration are used in medical and industrial imaging to obtain cross-sectional ("slice") images of 3-D objects; this is the basis of computerized axial tomography (CAT).
Other modalities of imaging based on the CAT principle include magnetic resonance
imaging (MRI) and positron emission tomography (PET).
IMAGE ACQUISITION USING SENSOR ARRAYS
Individual sensing elements can also be arranged in the form of a 2-D array.
CCD sensors are used widely in digital cameras and other light-
sensing instruments.
The value of the image function f(x, y) at any point is determined by two factors:
1. The amount of source illumination incident on the scene being viewed, and
2. The amount of illumination reflected by the objects in the scene.
Appropriately, these are called the illumination and reflectance components, and are
denoted by i( x, y) and r(x, y) , respectively.
Reflectance
The two components combine as a product to form the image:
f(x, y) = i(x, y) r(x, y), where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1.
Reflectance is bounded by 0 (total absorption) and 1 (total reflectance).
Typical values of r(x, y) are 0.01 for black velvet, 0.65 for stainless steel, 0.80 for flat-white wall paint, 0.90 for silver-plated metal, and 0.93 for snow.
Then the intensity of a monochrome image at any point lies in the range Lmin ≤ f(x, y) ≤ Lmax, where Lmin is the minimum and Lmax the maximum value the product can take.
The interval [Lmin , Lmax ] is called the intensity (or gray) scale.
All intermediate values are shades of gray varying from black to white.
IMAGE SAMPLING AND QUANTIZATION
The output of most sensors is a continuous voltage waveform whose amplitude and
spatial behavior are related to the physical phenomenon being sensed.
To create a digital image, we need to convert the continuous sensed data into a
digital format.
Converting continuous sensed data to digital form involves two processes:
• Sampling: digitizing the coordinate values.
• Quantization: digitizing the amplitude values.
BASIC CONCEPTS IN SAMPLING AND QUANTIZATION
Figure 2.16(a) shows a continuous image f that we want to convert to digital form.
An image may be continuous with respect to the x- and y-coordinates, and also in
amplitude.
To sample this function, we take equally spaced samples along line AB, as shown in
Fig. 2.16(c).
The samples are shown as small dark squares superimposed on the function, and
their (discrete) spatial locations are indicated by corresponding tick marks in the
bottom of the figure.
The continuous intensity levels are quantized by assigning one of the eight values
to each sample, depending on the vertical proximity of a sample to a vertical tick
mark.
The digital samples resulting from both sampling and quantization are shown as
white squares in Fig. 2.16(d).
Starting at the top of the continuous image and carrying out this procedure
downward, line by line, produces a two-dimensional digital image.
It is implied in Fig. 2.16 that, in addition to the number of discrete levels used, the
accuracy achieved in quantization is highly dependent on the noise content of the
sampled signal.
In practice, limits on sampling accuracy are determined by other factors, such as the
quality of the optical components used in the system.
When a sensing strip is used for image acquisition, the number of sensors in the
strip establishes the samples in the resulting image in one direction, and mechanical
motion establishes the number of samples in the other.
However, as we will show later in this section, image content also plays a role in
the choice of these parameters.
REPRESENTING DIGITAL IMAGES
Let f (s, t) represent a continuous image function of two continuous variables, s and
t.
Suppose that we sample the continuous image into a digital image, f (x, y), containing M
rows and N columns, where (x, y) are discrete coordinates.
For notational clarity and convenience, we use integer values for these discrete coordinates: x = 0, 1, 2, …, M − 1 and y = 0, 1, 2, …, N − 1.
Thus, for example, the value of the digital image at the origin is f(0, 0), and its value at the next coordinates along the first row is f(0, 1).
Here, the notation (0, 1) is used to denote the second sample along the first row. It does not
mean that these are the values of the physical coordinates when the image was sampled.
In general, the value of a digital image at any coordinates (x, y) is denoted f ( x, y ), where x
and y are integers.
When we need to refer to specific coordinates ( i, j), we use the notation f ( i, j ), where the
arguments are integers.
The section of the real plane spanned by the coordinates of an image is called the spatial
domain, with x and y being referred to as spatial variables or spatial coordinates.
A digital image with M rows and N columns can be written in matrix form:
f(x, y) = [ f(0, 0)      f(0, 1)      …  f(0, N−1)
            f(1, 0)      f(1, 1)      …  f(1, N−1)
            …            …               …
            f(M−1, 0)    f(M−1, 1)    …  f(M−1, N−1) ]
The right side of this equation is a digital image represented as an array of real numbers.
Each element of this array is called an image element, picture element, pixel, or
pel.
We will use the terms image and pixel throughout the lectures to denote a digital
image and its elements.
Figure 2.19 shows a graphical representation of an image array, where the x- and y-axes are used to denote the rows and columns of the array.
We generally use f (i, j ), when referring to a pixel with coordinates (i, j).
Clearly, aij = f (i,j), so Eqs. (2-9) and (2-10) denote identical arrays.
As Fig. 2.19 shows, we define the origin of an image at the top left corner.
This is a convention based on the fact that many image displays (e.g., TV monitors) sweep
an image starting at the top left and moving to the right, one row at a time.
More important is the fact that the first element of a matrix is by convention at the top left of
the array.
Choosing the origin of f (x, y) at that point makes sense mathematically because digital
images in reality are matrices.
In fact, as you will see, sometimes we use x and y interchangeably in equations with the rows
(r) and columns (c) of a matrix.
REPRESENTING DIGITAL IMAGES
It is important to note that the representation in Fig. 2.19, in which the positive
x-axis extends downward and the positive y-axis extends to the right, is precisely the
right-handed Cartesian coordinate system with which you are familiar, but shown rotated by 90° so that the origin appears at the top left.
The center of an M × N digital image with origin at (0, 0) and range to (M − 1, N − 1) is obtained by dividing M and N by 2 and rounding down to the nearest integer; that is, the center is at (floor(M/2), floor(N/2)).
Some programming languages (e.g., MATLAB) start indexing at 1 instead of at 0. The center of an image in that case is found at (floor(M/2) + 1, floor(N/2) + 1).
Sometimes, the range of values spanned by the gray scale is referred to as the
dynamic range, a term used in different ways in different fields.
Here, we define the dynamic range of an imaging system to be the ratio of the maximum
measurable intensity to the minimum detectable intensity level in the system.
As a rule, the upper limit is determined by saturation and the lower limit by noise, although noise can also be present in lighter intensities.
The dynamic range establishes the lowest and highest intensity levels that a system
can represent and, consequently, that an image can have.
Closely associated with this concept is image contrast, which we define as the difference in
intensity between the highest and lowest intensity levels in an image.
When an appreciable number of pixels in an image have a high dynamic range, we can expect the image to have high contrast. Conversely, an image with low dynamic range typically has a dull, washed-out gray look.
The number of bits required to store a digitized image is b = M × N × k, where the image has M rows, N columns, and k bits per pixel (2^k intensity levels). When M = N, this becomes b = N²k.
THANK YOU VERY MUCH