19ECE454 Image Processing_SET 1

The document outlines the steps in image processing, focusing on image sensing, acquisition, and formation. It discusses various sensor types, including single sensors, sensor strips, and sensor arrays, along with their applications in generating digital images. Additionally, it covers concepts such as image sampling, quantization, dynamic range, and resolution, emphasizing the importance of these factors in producing high-quality images.


Set 2

Prepared by Dr. Devi Vijayan


Steps in Image Processing
Image Sensing and Acquisition
• Depending on the nature of the source, illumination energy is
reflected from, or transmitted through, objects.

• An example in the first category is light reflected from a planar
surface.
• An example in the second category is when X-rays pass through a
patient’s body for the purpose of generating a diagnostic X-ray
film.
Image Acquisition Using a Single Sensor
• A familiar sensor of this type is the photodiode, whose output voltage
waveform is proportional to light.
• The use of a filter in front of a sensor improves selectivity.
• For example, a green (pass) filter in front of a light sensor favors
light in the green band of the color spectrum. As a consequence, the
sensor output will be stronger for green light than for other
components in the visible spectrum.
• In order to generate a 2-D image using a single sensor, there have to
be relative displacements in both the x- and y-directions between
the sensor and the area to be imaged.
• Figure shows an arrangement used in high-precision scanning,
where a film negative is mounted onto a drum whose mechanical
rotation provides displacement in one dimension.
• The single sensor is mounted on a lead screw that provides
motion in the perpendicular direction.
• Since mechanical motion can be controlled with high precision,
this method is an inexpensive (but slow) way to obtain high-
resolution images.
Image Acquisition Using Sensor Strips
• The strip provides imaging elements in one direction. Motion perpendicular
to the strip provides imaging in the other direction.
• This is the arrangement used in flatbed scanners.
• Sensing devices with 4000 or more in-line sensors are possible.
• In-line sensors are used routinely in airborne imaging applications, in which
the imaging system is mounted on an aircraft that flies at a constant altitude
and speed over the geographical area to be imaged.
• One-dimensional imaging sensor strips that respond to various bands of the
electromagnetic spectrum are mounted perpendicular to the direction of
flight.
• The imaging strip gives one line of an image at a time, and the motion of the
strip completes the other dimension of a two-dimensional image. Lenses or
other focusing schemes are used to project the area to be scanned onto the
sensors.
• Sensor strips mounted in a ring configuration are used in medical and
industrial imaging to obtain cross-sectional (“slice”) images of 3-D
objects.
• A rotating X-ray source provides illumination, and the portion of the
sensors opposite the source collects the X-ray energy that passes through
the object (the sensors obviously have to be sensitive to X-ray energy).
• The output of the sensors must be processed by reconstruction
algorithms whose objective is to transform the sensed data into
meaningful cross-sectional images.
• A 3-D digital volume consisting of stacked images is generated as the
object is moved in a direction perpendicular to the sensor ring.
Image Acquisition Using Sensor Arrays
• This is the predominant arrangement found in digital cameras.
• A typical sensor for these cameras is a CCD array, which can be
manufactured with a broad range of sensing properties and can be packaged
in rugged arrays of 4000 × 4000 elements or more.
• CCD sensors are used widely in digital cameras and other light sensing
instruments.
• The response of each sensor is proportional to the integral of the light energy
projected onto the surface of the sensor.
• Noise reduction is achieved by letting the sensor integrate the input light
signal over minutes or even hours.
• Since the sensor array is two-dimensional, its key advantage is that a
complete image can be obtained by focusing the energy pattern onto the
surface of the array. Motion obviously is not necessary.
Image Formation
• The first function performed by the imaging system is to collect the
incoming energy and focus it onto an image plane.
• If the illumination is light, the front end of the imaging system is a lens,
which projects the viewed scene onto the lens focal plane.
• The sensor array, which is coincident with the focal plane, produces
outputs proportional to the integral of the light received at each sensor.
• Digital and analog circuitry sweep these outputs and convert them to a
video signal, which is then digitized by another section of the imaging
system.
• The output is a digital image, as shown diagrammatically.
• The colors that humans perceive in an object are determined by
the nature of the light reflected from the object.
• A body that reflects light and is relatively balanced in all visible
wavelengths appears white to the observer.
• However, a body that favors reflectance in a limited range of the
visible spectrum exhibits some shades of color.
• For example, green objects reflect light with wavelengths primarily
in the 500 to 570 nm range while absorbing most of the energy at
other wavelengths.
• Light that is void of color is called achromatic or monochromatic
light.
• The only attribute of such light is its intensity, or amount.
• The term gray level generally is used to describe monochromatic
intensity because it ranges from black, to grays, and finally to
white.
• Chromatic light spans the electromagnetic energy spectrum from
approximately 0.43 to 0.79 μm.
• Three basic quantities are used to describe the quality of a
chromatic light source: radiance, luminance, and brightness.
• Radiance is the total amount of energy that flows from the light
source, and it is usually measured in watts (W).
• Luminance, measured in lumens (lm), gives a measure of the
amount of energy an observer perceives from a light source.
• Brightness is a subjective descriptor of light perception that is
practically impossible to measure.
Image f(x, y) – Meaning
• An image is modeled as f(x, y) = i(x, y) r(x, y), the product of an
illumination component i(x, y) and a reflectance component r(x, y).
• The nature of i(x, y) is determined by the illumination source,
and r(x, y) is determined by the characteristics of the imaged
objects.
• These expressions also apply to images formed via transmission
of the illumination through a medium, such as a chest X-ray.
• In that case, we deal with a transmissivity instead of a
reflectivity function.
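The roles of the illumination component i(x, y) and the reflectance component r(x, y) can be sketched numerically; in the standard image-formation model the image is their product, f(x, y) = i(x, y) r(x, y). The values below are assumed purely for illustration:

```python
import numpy as np

# Standard image-formation model: f(x, y) = i(x, y) * r(x, y).
# The illumination and reflectance values below are assumed for illustration.
i = np.array([[100.0, 100.0],
              [ 50.0,  50.0]])   # illumination i(x, y): energy falling on the scene
r = np.array([[0.8, 0.1],
              [0.8, 0.1]])       # reflectance r(x, y): fraction reflected, in [0, 1]

f = i * r                        # elementwise product gives the image f(x, y)
print(f)
```

Note how the same reflectance produces different image values under different illumination, which is exactly why i and r are modeled separately.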
Image Sampling and Quantization
• Objective is to generate digital images from sensed data.
• The output of most sensors is a continuous voltage waveform
whose amplitude and spatial behavior are related to the physical
phenomenon being sensed.
• To create a digital image, we need to convert the continuous
sensed data into digital form.
• This involves two processes: sampling and quantization.
• Fig. shows a continuous image, f(x, y).
• An image may be continuous with respect to the x- and y-
coordinates, and also in amplitude.
• To convert it to digital form, we have to sample the function in both
coordinates and in amplitude.
• Digitizing the coordinate values is called sampling.
• Digitizing the amplitude values is called quantization.
• The one-dimensional function shown is a plot of amplitude (gray level) values of the
continuous image along the line segment AB.
• The random variations are due to image noise.
• To sample this function, we take equally spaced samples along line AB.
• However, the values of the samples still span (vertically) a continuous range of gray-
level values.
• In order to form a digital function, the gray-level values also must be converted
(quantized) into discrete quantities.
• The gray-level scale is divided into eight discrete levels, ranging from black to white.
• The vertical tick marks indicate the specific value assigned to each of the eight gray
levels.
• The continuous gray levels are quantized simply by assigning one of the eight
discrete gray levels to each sample.
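The sampling-and-quantization procedure described above can be sketched as follows. The synthetic gray-level profile along "line AB" is an assumption for illustration, not the book's figure:

```python
import numpy as np

# Sampling: take equally spaced samples along the line.
# Quantization: map each sample to one of L discrete gray levels.
L = 8                                          # number of discrete gray levels
x = np.linspace(0, 1, 16)                      # 16 equally spaced sample positions
samples = 0.5 + 0.4 * np.sin(2 * np.pi * x)    # assumed continuous profile in [0, 1]

# Divide [0, 1] into L bins and assign each sample the index of its bin.
levels = np.clip((samples * L).astype(int), 0, L - 1)
print(levels)                                  # integer gray levels in 0..7
```

Each continuous sample is replaced by the nearest of the eight discrete levels, exactly the assignment described in the bullets above.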
In practice, the method of sampling depends on the sensor
arrangement
• When an image is generated by a single sensing element -
sampling is accomplished by selecting the number of individual
mechanical increments at which we activate the sensor to collect
data.
• When a sensing strip is used for image acquisition, the number of
sensors in the strip establishes the sampling limitations in one
image direction. Mechanical motion in the other direction can be
controlled more accurately, but it makes little sense to try to
achieve a sampling density in one direction that exceeds that in the other.
• When a sensing array is used for image acquisition, there is no
motion and the number of sensors in the array establishes the
limits of sampling in both directions.
• The quality of a digital image is determined to a large degree by the
number of samples and discrete gray levels used in sampling and
quantization.
Digital Image
• The result of sampling and quantization is a matrix of real
numbers.
• Assume that an image f(x, y) is sampled so that the resulting
digital image has M rows and N columns.
• The values of the coordinates at the origin are (x, y) = (0, 0).
• The next coordinate values along the first row of the image are
represented as (x, y) = (0, 1).
• The resulting M × N array of quantized samples is, by definition, a digital image.
• Each element of this matrix array is called an image element, picture
element, pixel, or pel.
• In image displays, the origin is at the top left corner.
• The positive x-axis extends downward and the positive y-axis extends to the
right; this is a right-handed Cartesian coordinate system.
• In MATLAB, the origin is at (1, 1).
• This digitization process requires decisions about values for M, N,
and for the number, L, of discrete gray levels allowed for each
pixel.
• There are no requirements on M and N, other than that they have to
be positive integers.
• However, due to processing, storage, and sampling hardware
considerations, the number of gray levels typically is an integer
power of 2:
• L = 2^k
• The range of values spanned by the gray scale is called the dynamic range of an imaging system.
• Dynamic range: the ratio of the maximum measurable intensity to the minimum detectable intensity level. The upper limit is determined by saturation and the lower limit by noise.
• The dynamic range establishes the lowest and highest intensities an image can have.
• Image contrast is the difference in intensity between the highest and lowest intensity levels; the contrast ratio is the ratio of these two quantities.
• We refer to images whose gray levels span a significant portion of the gray scale as having a high dynamic range.
• Conversely, an image with low dynamic range tends to have a dull, washed-out gray look.
• The total number, b, of bits required to store a digital image is
• b = M × N × k (k is the number of bits per pixel)
• When an image can have 2^k gray levels, it is common practice to
refer to the image as a "k-bit image."
• For example, an image with 256 possible gray-level values is called
an 8-bit image.
• Note that the storage requirements for 8-bit images of size 1024 × 1024
and higher are not insignificant.
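The storage formula b = M × N × k can be checked directly; this small sketch evaluates it for the 1024 × 1024, 8-bit example mentioned above:

```python
# Storage required for a digital image: b = M * N * k bits,
# where k is the number of bits per pixel (L = 2**k gray levels).
def storage_bits(M, N, k):
    """Bits needed to store an M x N image with k bits per pixel."""
    return M * N * k

# Example from the text: a 1024 x 1024 8-bit image.
b = storage_bits(1024, 1024, 8)
print(b, "bits =", b // 8, "bytes")   # 8,388,608 bits = 1,048,576 bytes (1 MB)
```

At 1 MB per frame, uncompressed 8-bit images of this size add up quickly, which is the point of the "not insignificant" remark above.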
Linear vs. coordinate indexing
• In image processing, linear indexing and coordinate indexing are
two ways to access pixel values,
• but they interpret the image matrix differently.
• 1. Coordinate Indexing (Row-Column):
• Uses (row, column) to locate a pixel.
• 2. Linear Indexing (Flattened Array):
• Treats the 2D matrix as a 1D array by stacking rows (or columns)
sequentially.
• Indexing is by a single number, often denoted α.
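The two indexing schemes can be sketched for an M × N image stored row-major; the conversion formulas below assume rows are stacked sequentially (the single linear index is the α mentioned above):

```python
# Coordinate (row, col) vs. linear index alpha for an M x N image,
# assuming row-major (rows stacked sequentially) storage.
M, N = 3, 4

def to_linear(row, col, N):
    """Coordinate (row, col) -> linear index alpha."""
    return row * N + col

def to_coords(alpha, N):
    """Linear index alpha -> coordinate (row, col)."""
    return alpha // N, alpha % N

print(to_linear(1, 2, N))    # second row, third column -> a single number
print(to_coords(6, N))       # round-trips back to the same pixel
```

Column-major storage (as in MATLAB or Fortran) would swap the roles of the row and column in these formulas.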
Spatial resolution
• Spatial resolution is the smallest discernible detail in an image.
• Suppose that we construct a chart with vertical lines of width W,
with the space between the lines also having width W.
• A line pair consists of one such line and its adjacent space. Thus,
the width of a line pair is 2W, and there are 1/(2W) line pairs per unit
distance.
• A widely used definition of resolution is the largest number of discernible line pairs per unit distance.
• In the printing industry, resolution is measured in dots per inch (dpi).
• In the US, newspapers are printed at 75 dpi.
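The line-pair arithmetic above is easy to verify; this sketch assumes W is given in millimeters:

```python
# Line pairs per unit distance for lines of width W (assumed in mm):
# a line pair has width 2W, so there are 1/(2W) pairs per unit distance.
def line_pairs_per_unit(W):
    return 1.0 / (2.0 * W)

print(line_pairs_per_unit(0.1))   # lines 0.1 mm wide -> 5 line pairs per mm
```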
Effect of Reducing Spatial Resolution
• Lower-resolution images are smaller in size.
• Example: an original of size 2136 × 2136 reduced to 72 dpi becomes 165 × 166 and appears degraded.
Effects of Varying the Number of Intensity Levels – Spatial
Resolution Constant
• At 32 levels, fine ridge-like structures appear in the areas of constant intensity.
• At 16 levels, ridge-like structures are clearly visible in areas of smooth gray
levels (particularly in the skull).
• This effect, caused by the use of an insufficient number of gray levels in
smooth areas of a digital image, is called false contouring, so called
because the ridges resemble topographic contours in a map.
Intensity resolution
• Intensity (gray-level) resolution similarly refers to the smallest
discernible change in gray level.
• Measuring discernible changes in gray level is a highly subjective
process.
• Due to hardware considerations, the number of gray levels is
usually an integer power of 2.
• The most common number is 8 bits, with 16 bits being used in
some applications where enhancement of specific gray-level
ranges is necessary.
[Figure: neighborhood of a pixel at (x, y), with neighbors at x ± 1 and y ± 1]
• Two regions that are not adjacent are said to be disjoint.
• The inner border of a region is the set of its pixels that are adjacent to the
background; the outer border is the set of background pixels adjacent to the region.
[Figure: the circled pixel is not a member of the border of the 1-valued region]
• The concept of an edge is found frequently in discussions dealing with regions and boundaries.
• The key difference: the boundary of a finite region forms a closed path and is a "global" concept.
• Edges are formed from pixels with derivative values that exceed a preset threshold.
• The idea of an edge is thus a "local" concept, based on a measure of gray-level discontinuity at a point.
• It is possible to link edge points into edge segments, and these segments can be linked in a way that corresponds to boundaries, but this is not always the case.
• The one exception in which edges and boundaries correspond is in binary images: the edge extracted from a binary region will be the same as the region boundary.
• Elementwise product (Hadamard product): multiplying corresponding
pairs of pixels in two images of the same size.
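The elementwise (Hadamard) product can be sketched on two tiny arrays; note that this is not the matrix product:

```python
import numpy as np

# Elementwise (Hadamard) product: each output pixel is a[i, j] * b[i, j].
# Both arrays must have the same size. Values assumed for illustration.
a = np.array([[1, 2],
              [3, 4]])
b = np.array([[10, 20],
              [30, 40]])

hadamard = a * b          # elementwise, NOT the matrix product a @ b
print(hadamard)           # yields 10, 40, 90, 160
```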
Linear vs. Nonlinear Operators
• An operator H is linear if it satisfies both additivity and homogeneity.
• H operates on images of the same size, and its output is the same size as the input.
• The variances are arrays of the same size as the input image; there is a
scalar variance value for each pixel location.
• As K (the number of averaged images) increases, the variance of the pixel
values at each location decreases, i.e., the noise is reduced.
• To avoid blurring and other artifacts, the images to be averaged must be
registered (spatially aligned).
• 24-bit images consist of three 8-bit channels.
• For formats such as TIFF and JPEG, conversion to the 0 to 255 range is automatic.


• Values after subtraction can be negative, and values after addition can
exceed the maximum.
• Many software applications set negative values to zero and values above
the maximum to 255.
• For an 8-bit image, K = 255.
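The clipping behavior described above can be sketched with NumPy; the pixel values are assumed for illustration:

```python
import numpy as np

# Saturated 8-bit arithmetic: clip results to [0, 255] so that negative
# differences become 0 and sums above K = 255 saturate at 255.
# Intermediate math is done in int16 to avoid uint8 wraparound.
a = np.array([200, 100, 30], dtype=np.int16)
b = np.array([100, 150, 250], dtype=np.int16)

diff = np.clip(a - b, 0, 255).astype(np.uint8)   # 100, -50 -> 0, -220 -> 0
summ = np.clip(a + b, 0, 255).astype(np.uint8)   # 300 -> 255, 250, 280 -> 255
print(diff, summ)
```

Doing the arithmetic directly in uint8 would instead wrap around modulo 256, which is why the widening to int16 before clipping matters.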
