Dip Unit-1
Uploaded by Namadi Swetha

UNIT - 1

DIGITAL IMAGE FUNDAMENTALS & IMAGE TRANSFORMS
Topics
• Digital image fundamentals:
➢ Digital image fundamentals
➢ Sampling and quantization
➢ Relationships between pixels
• Image transforms:
➢ 2-D FFT
➢ 2-D FFT properties
➢ Walsh transform
➢ Hadamard transform
➢ Discrete Cosine Transform
➢ Discrete Wavelet Transform
Image Processing
• An Image is defined as a 2-dimensional function
f(x,y) where x and y are spatial coordinates and
the amplitude of f at any pair of coordinates (x,y)
is called the intensity or gray level of the image at
that point.
• Image processing is a subclass of signal
processing concerned specifically with pictures.
• Improve image quality for human perception
and/or computer interpretation.
Several fields deal with images
• Computer Graphics: the creation of images.
• Image Processing: the enhancement or other manipulation of an image, the result of which is usually another image.
• Computer Vision: the analysis of image content.
Examples of fields that use DIP
• Categorized by image source:
• Radiation from the electromagnetic (EM) spectrum
• Acoustic
• Ultrasonic
• Electronic (in the form of electron beams used in electron microscopy)
• Computer (synthetic images used for modeling and visualization)
Radiation from EM spectrum
Gamma-Ray Imaging
X-ray Imaging

FIGURE Examples of X-ray imaging. (a) Chest X-ray. (b) Head CT. (c)
Circuit boards.
Imaging in the Ultraviolet Band

FIGURE 1.8 Examples of ultraviolet imaging. (a) Normal corn. (b) Smut corn.
Imaging in the Visible and Infrared Bands
Imaging in the Microwave Band
Imaging in the Radio Band
Fundamental Steps in Digital Image Processing

Fig: Fundamental Steps in Digital Image Processing


Key Stages in Digital Image Processing
(Images taken from Gonzalez & Woods, Digital Image Processing, 2002)
• Image Acquisition
• Image Enhancement
• Image Restoration
• Morphological Processing
• Segmentation
• Object Recognition
• Representation & Description
• Image Compression
• Colour Image Processing
All stages begin from the problem domain.
• Image acquisition: It could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing such as scaling.
• Image Enhancement: It is among the simplest and most appealing areas of digital image processing. The idea is to bring out details that are obscured, or simply to highlight certain features of interest in an image. Image enhancement is a very subjective area of image processing.
• Image Restoration: It deals with improving the appearance of an
image. It is an objective approach, in the sense that restoration
techniques tend to be based on mathematical or probabilistic
models of image processing. Enhancement, on the other hand is
based on human subjective preferences regarding what constitutes
a “good” enhancement result.
• Color image processing: It is an area that has been gaining importance because of the use of digital images over the internet. Color image processing deals with color models and their implementation in image processing applications.
• Wavelets and Multiresolution Processing: These are the foundation for representing images at various degrees of resolution.
• Compression: It deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it over a network. It has two major approaches: a) lossless compression and b) lossy compression.
• Morphological processing: It deals with tools for extracting image
components that are useful in the representation and description of shape
and boundary of objects. It is majorly used in automated inspection
applications.
• Representation and Description: It always follows the output of the segmentation step, that is, raw pixel data constituting either the boundary of a region or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary.
• Recognition: It is the process that assigns a label to an object based on its descriptors. It is the last step of image processing, and it uses artificial intelligence software.
• Knowledge base: Knowledge about a problem domain is coded into an image processing system in the form of a knowledge base. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base can also be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with a change-detection application.
Components of an Image Processing System

Figure : Components of Image processing System


• Image Sensors: With reference to sensing, two elements are required to acquire a digital image. The first is a physical device that is sensitive to the energy radiated by the object we wish to image; the second is specialized image processing hardware.
• Specialized image processing hardware: It consists of the digitizer just mentioned, plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), which performs arithmetic (such as addition and subtraction) and logical operations in parallel on images.
• Computer: It is a general-purpose computer and can range from a PC to a supercomputer, depending on the application. In dedicated applications, specially designed computers are sometimes used to achieve a required level of performance.
• Software: It consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code that, at a minimum, utilizes the specialized modules. More sophisticated software packages allow the integration of those modules.
• Mass storage: This capability is a must in image processing applications. An image of size 1024 × 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed. Image processing applications fall into three principal categories of storage:
• Short-term storage for use during processing
• Online storage for relatively fast retrieval
• Archival storage, such as magnetic tapes and disks
• Image display: Image displays in use today are mainly color TV monitors. These monitors are driven by the outputs of image and graphics display cards that are an integral part of the computer system.
• Hardcopy devices: The devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks. Film provides the highest possible resolution, but paper is the obvious medium of choice for written applications.
• Networking: It is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth.
3 Types of Computerized Processes
• Low-level: inputs and outputs are images. Primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening.
• Mid-level: inputs may be images, but outputs are attributes extracted from those images.
→ Segmentation
→ Description of objects
→ Classification of individual objects
• High-level: image analysis.
Where does an image come from?

Image Acquisition and Sensing
• Charge-coupled device (CCD) chip
• Integration over time
– Exposure time
– Maximum charge
• Under-exposed, correctly exposed, and over-exposed images
• Saturation
• Blooming

Image Acquisition Using a Single Sensor
Image Acquisition Using Sensor Strips
Image Acquisition Using Sensor Arrays
• A/D conversion
• Image elements, picture elements, pels, pixels
Example of Digital Image
• A continuous image projected onto a sensor array; the result of image sampling and quantization.
• Pixel values typically represent gray levels, colours, heights, opacities, etc.
• Remember: digitization implies that a digital image is an approximation of a real scene.
Imaging system
• Image acquisition
• Illumination
– Passive: sun
– Active: ordinary lamp, X-ray, radar, IR
• Camera lens
– Focuses the light on the CCD chip
A Simple Image Formation Model

• The function f(x,y) may be characterized by two components:
→ The amount of source light incident on the scene being viewed, called illumination, and
→ The amount of light reflected by the objects in the scene, called reflectance.
Light intensity function

• As light is a form of energy, f(x,y) must be nonzero and finite, i.e.,
0 < f(x,y) < ∞
• The images people perceive in everyday visual activities normally consist of light reflected from objects.
Illumination and reflectance

• The functions i(x,y) and r(x,y) combine as a product to form f(x,y):
f(x,y) = i(x,y) r(x,y)
where 0 < i(x,y) < ∞ and 0 < r(x,y) < 1
• The nature of i(x,y) is determined by the illumination source, and r(x,y) is determined by the characteristics of the imaged objects.
Typical Ranges of i(x,y) for Visible Light

• On a clear day, the sun produces about 90,000 lm/m²
• On a cloudy day: about 10,000 lm/m²
• On a clear evening, a full moon: about 0.1 lm/m²
• Commercial office lighting: about 1,000 lm/m²

Typical Ranges of r(x,y)

• 0.01 for black velvet
• 0.65 for stainless steel
• 0.80 for flat-white wall paint
• 0.90 for silver-plated metal
• 0.93 for snow
Gray level
• We call the intensity of a monochrome image at any coordinates (x,y) the gray level (l) of the image at that point,
i.e., l = f(x,y)
• Thus l lies in the range Lmin ≤ l ≤ Lmax
• Lmin is positive and Lmax must be finite.
• The interval [Lmin, Lmax] is called the gray scale.
• Lmin = imin · rmin, Lmin ≈ 10
• Lmax = imax · rmax, Lmax ≈ 1000
• Common practice is to shift this interval to [0, L-1], where l = 0 is considered black and l = L-1 is considered white on the gray scale.
• All intermediate values are shades of gray varying from black to white.
SAMPLING AND QUANTIZATION
• There are numerous ways to acquire images, but our objective in all of them is the same: to generate digital images from sensed data.
• The output of most sensors is a continuous
voltage waveform whose amplitude and spatial
behavior are related to the physical phenomenon
being sensed.
• To create a digital image, we need to convert the
continuous sensed data into digital form. This
involves two processes: sampling and
quantization.
Basic Concepts in Sampling and Quantization

• The basic idea behind sampling and quantization is


illustrated in Fig. 2.16. Figure 2.16(a) shows a continuous
image, f(x, y), that we want to convert to digital form.
• An image may be continuous with respect to the x- and y-
coordinates, and also in amplitude.
• To convert it to digital form, we have to sample the function
in both coordinates and in amplitude.

• Digitizing the coordinate values is called sampling.

• Digitizing the amplitude values is called quantization.


Sampling

• sampling = the spacing of discrete values in


the domain of a signal.
• sampling-rate = how many samples are
taken per unit of each dimension. e.g.,
samples per second, frames per second,
etc.
[Figure: a continuous signal f(t) sampled at discrete values of t]
Quantization

• Quantization = spacing of discrete values in the


range of a signal.
• usually thought of as the number of bits per
sample of the signal. e.g., 1 bit per pixel (b/w
images), 16-bit audio, 24-bit color images, etc.

[Figure: a signal f(t) quantized to 8 levels; 8 levels = 2³, which uses 3 bits to represent the value of the function.]
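The two steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not from the text: the function name `quantize` and the sine test signal are assumptions for the example.

```python
import numpy as np

def quantize(signal, k):
    """Uniformly quantize a signal in [0, 1] to L = 2**k discrete levels."""
    L = 2 ** k                              # number of quantization levels
    # Scale to [0, L-1], round to the nearest level, then rescale to [0, 1].
    levels = np.clip(np.round(signal * (L - 1)), 0, L - 1)
    return levels / (L - 1)

# Sampling: take 16 samples of a continuous-amplitude signal over the domain.
t = np.linspace(0, 1, 16)
f = 0.5 * (1 + np.sin(2 * np.pi * t))       # amplitude lies in [0, 1]
# Quantization: 3 bits per sample, i.e. 8 discrete amplitude levels.
fq = quantize(f, k=3)
```

Sampling fixes the spacing in the domain (the 16 values of `t`); quantization fixes the spacing in the range (the 8 allowed amplitudes).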
Representing Digital Images
• The result of sampling and quantization is a matrix of
real numbers.
• Assume that an image f(x, y) is sampled so that the
resulting digital image has M rows and N columns.
• The values of the coordinates (x, y) now become
discrete quantities.
• For notational clarity and convenience, we shall use
integer values for these discrete coordinates.
• Thus, the values of the coordinates at the origin are (x,
y)=(0, 0).
• The next coordinate values along the first row of the
image are represented as (x, y)=(0, 1).
The complete M×N digital image in compact matrix form is

          | f(0,0)      f(0,1)      ...  f(0,N-1)   |
f(x,y) =  | f(1,0)      f(1,1)      ...  f(1,N-1)   |
          | ...                                     |
          | f(M-1,0)    f(M-1,1)    ...  f(M-1,N-1) |

Each element of this matrix is a digital image pixel.

Digital Image Representation

• The term image refers to a two-dimensional light intensity function f(x,y), where x and y denote spatial coordinates.
• The value or amplitude of f at any point (x,y) is proportional to the intensity of the image at that point.
• Sometimes viewing an image function in perspective, with the third axis being brightness, is useful.
Digital Image Representation

• A digital image can be considered a matrix whose


row and column indices identify a point in the
image and
• The corresponding matrix element value identifies
the gray level at that point.
• Although the size of a digital image varies with the application, there are many advantages to selecting square arrays with sizes and numbers of gray levels that are integer powers of 2.
Digital Image Representation

• Pixel values in a highlighted region: a set of numbers in a 2-D grid.
• CAMERA → DIGITIZER → digital image: the digitizer samples the analog data and digitizes it.
Expressing sampling & quantization
in more mathematical terms.
• Assume that an image f(x,y) is sampled so that the resulting digital image has M rows and N columns.
• Each element of the resulting matrix array is called an image element, picture element, pixel, or pel.
• If the gray levels are also integers, then a digital image becomes a 2-D function whose coordinates and amplitude values are integers.
• This digitization process requires decisions about the values of M, N, and the number, L, of discrete gray levels allowed for each pixel.
Number of bits

◼ The number of gray levels typically is an integer power of 2:

L = 2^k

◼ The number of bits required to store a digitized image is:

b = M × N × k
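The storage formula above can be checked directly; the helper name `image_bits` is illustrative.

```python
# Storage required for a digitized image: b = M * N * k bits,
# where the number of gray levels is L = 2**k.
def image_bits(M, N, k):
    return M * N * k

# A 1024 x 1024 image with 256 gray levels (k = 8):
b = image_bits(1024, 1024, 8)
bytes_needed = b // 8   # 8 bits per byte: one megabyte uncompressed
```

This reproduces the mass-storage example given earlier: a 1024 × 1024, 8-bit image needs one megabyte if uncompressed.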
Resolution

• Resolution (how much detail you can see in the image) depends on two parameters: the number of samples (N) and the number of gray levels (k).
• The more these parameters are increased, the closer the digitized array approximates the original image.
• An unfortunate fact is that storage and processing requirements increase rapidly as a function of N, M, and k.
Spatial and Gray-level Resolution

• Sampling is the principal factor determining the spatial resolution of an image.
• Spatial resolution is simply the smallest number of discernible line pairs per unit distance, e.g., 100 line pairs per millimeter.
• Gray-level resolution refers to the smallest discernible change in gray level.
Spatial Resolution
Gray level Resolution
Case 1: varying N, keeping k constant
• There are three cases which we can study by varying N and k.
• When varying the number of samples (N) in a digital image, e.g., a 1024×1024, 8-bit image subsampled down to a size of 32×32 pixels,
• we come across a checkerboard pattern at the borders between flower petals and the black background.
• A slightly more pronounced graininess throughout the image also begins to appear. These effects are more visible in the 128×128 image, and they become pronounced in the 64×64 and 32×32 images.
Checkerboard effect

(a) 1024x1024
(b) 512x512
(c) 256x256
(d) 128x128
(e) 64x64
(f) 32x32

If the resolution is decreased too much, the checkerboard effect can occur.
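The subsampling in Case 1 can be sketched by keeping every s-th pixel in each direction (a minimal sketch; the synthetic test image stands in for the flower image discussed above).

```python
import numpy as np

def subsample(img, factor):
    """Reduce spatial resolution by keeping every `factor`-th pixel."""
    return img[::factor, ::factor]

# A synthetic 1024x1024 8-bit image subsampled down to 32x32:
img = (np.arange(1024 * 1024) % 256).astype(np.uint8).reshape(1024, 1024)
small = subsample(img, 32)    # 1024 / 32 = 32 pixels per side
```

Displaying `small` scaled back up to the original size is what makes the checkerboard blocks visible.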
Case 2: varying k, keeping N constant

• When varying the number of gray levels (k) in a digital image while keeping the number of samples, i.e., the spatial resolution, constant.
• Varying the gray levels from 256 down to 2.
• The 32-level image, however, has an almost imperceptible set of very fine ridge-like structures in areas of smooth gray levels. This effect, caused by the use of an insufficient number of gray levels in smooth areas, is called false contouring.
False contouring

(a) Gray level = 16


(b) Gray level = 8
(c) Gray level = 4
(d) Gray level = 2

If the gray scale is not fine enough, the smooth areas are affected: false contouring can occur in smooth areas that have fine gray-scale variations.
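Case 2 can be reproduced by requantizing an 8-bit image to fewer gray levels; the gradient test image and the name `requantize` are assumptions for this sketch.

```python
import numpy as np

def requantize(img, k):
    """Requantize an 8-bit image to 2**k gray levels (k <= 8)."""
    L = 2 ** k
    step = 256 // L
    # Map each pixel to its level, then back to the 0..255 display range.
    return (img // step) * step

# A smooth 8-bit gradient: with only 4 gray levels, ridge-like false
# contours appear wherever the quantized value jumps by one step.
gradient = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
coarse = requantize(gradient, k=2)   # 4 gray levels
```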
Varying both N and k
• Case 3: varying both N and k.
• Representative isopreference curves for three types of images:
• a woman's face (having little detail),
• a picture of a cameraman (containing an intermediate amount of detail), and
• a crowd picture (containing a large amount of detail in comparison to the above two).
Size, Quantization Levels and Details

• Isopreference curves, plotted with N = size of the image and k = bits per pixel.
Zooming and Shrinking Digital Images

• Zooming may be viewed as oversampling


• Shrinking may be viewed as undersampling.
• Zooming requires two steps:
1. The creation of new pixel locations,
2. The assignment of gray levels to those new
locations.
Three methods
• Nearest Neighbor Interpolation
• Pixel Replication
• Bilinear Interpolation
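The two zooming steps (creating new pixel locations and assigning gray levels to them) can be sketched for the nearest-neighbor case; the function name `zoom_nearest` is illustrative. For integer factors this reduces to pixel replication.

```python
import numpy as np

def zoom_nearest(img, factor):
    """Nearest-neighbor zoom: each new pixel location copies the gray level
    of the closest original pixel (pixel replication for integer factors)."""
    M, N = img.shape
    # Step 1: new pixel locations, mapped back onto the original grid.
    rows = np.arange(M * factor) // factor
    cols = np.arange(N * factor) // factor
    # Step 2: assign gray levels by copying the nearest original pixel.
    return img[np.ix_(rows, cols)]

img = np.array([[10, 20],
                [30, 40]], dtype=np.uint8)
big = zoom_nearest(img, 2)   # 4x4: every source pixel becomes a 2x2 block
```

Bilinear interpolation would instead weight the four nearest original pixels, which gives smoother results at the cost of more computation.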
RELATIONSHIP BETWEEN PIXELS
• We consider several important relationships between pixels in a
digital image.
• NEIGHBORS OF A PIXEL
• A pixel p at coordinates (x,y) has four horizontal and vertical
neighbors whose coordinates are given by:

(x+1,y), (x-1, y), (x, y+1), (x,y-1)


• This set of pixels, called the 4-neighbors of p, is denoted by N4(p). Each pixel is a unit distance from (x,y), and some of the neighbors of p lie outside the digital image if (x,y) is on the border of the image.
• The four diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted by ND(p).
• These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8(p).
• As before, some of the points in ND(p) and N8(p) fall outside the image if (x,y) is on the border of the image.
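The neighbor sets above translate directly into code; the helper names (`n4`, `nd`, `n8`, `valid_neighbors`) are assumptions for this sketch.

```python
def n4(x, y):
    """4-neighbors of pixel (x, y): horizontal and vertical neighbors."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    """The four diagonal neighbors of pixel (x, y)."""
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    """8-neighbors: the 4-neighbors together with the diagonal neighbors."""
    return n4(x, y) + nd(x, y)

def valid_neighbors(neigh, M, N):
    """Discard neighbors that fall outside an M x N image (border pixels)."""
    return [(x, y) for (x, y) in neigh if 0 <= x < M and 0 <= y < N]
```

For a corner pixel such as (0,0) in a 10 × 10 image, only two of the four 4-neighbors are valid, which matches the remark about border pixels.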
ADJACENCY AND CONNECTIVITY
• Let V be the set of gray-level values used to define adjacency; in a binary image, V = {1}. In a gray-scale image, the idea is the same, but V typically contains more elements, for example, V = {180, 181, 182, …, 200}.
• If the possible intensity values are 0 – 255, V can be any subset of these 256 values.
• Three types of adjacency:
• 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
• 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
• m-adjacency: two pixels p and q with values from V are m-adjacent if (i) q is in N4(p), or (ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
• Mixed (m-) adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used.
Digital Path
• A digital path (or curve) from pixel p with coordinates (x,y) to pixel q with coordinates (s,t) is a sequence of distinct pixels with coordinates (x0,y0), (x1,y1), …, (xn,yn), where (x0,y0) = (x,y), (xn,yn) = (s,t), and pixels (xi,yi) and (xi-1,yi-1) are adjacent for 1 ≤ i ≤ n.
• n is the length of the path.
• If (x0,y0) = (xn,yn), the path is closed.
• We can specify 4-, 8-, or m-paths depending on the type of adjacency specified.
• Connectivity:
• Let S represent a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting entirely of pixels in S.
• For any pixel p in S, the set of pixels that are connected to it in S is called a connected component of S. If S has only one connected component, then S is called a connected set.
• Region and Boundary:
• REGION: Let R be a subset of pixels in an image. We call R a region of the image if R is a connected set.
• BOUNDARY: The boundary (also called the border or contour) of a region R is the set of pixels in the region that have one or more neighbors that are not in R; that is, the border of a region is the set of pixels in the region that have at least one background neighbor.
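A connected component can be extracted by a breadth-first search over 4-adjacent pixels. This is a minimal sketch under the 4-adjacency definition above; the function name and the example set S are assumptions.

```python
from collections import deque

def connected_component(S, p):
    """Pixels of the set S (a set of (x, y) tuples) connected to p via
    paths of 4-adjacent pixels lying entirely in S."""
    if p not in S:
        return set()
    comp, frontier = {p}, deque([p])
    while frontier:
        x, y = frontier.popleft()
        # Visit the 4-neighbors that belong to S and are not yet collected.
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if q in S and q not in comp:
                comp.add(q)
                frontier.append(q)
    return comp

# Two separate 4-connected regions, so S is not a connected set.
S = {(0, 0), (0, 1), (1, 1), (5, 5)}
```

Running the search from (0,0) collects the three mutually connected pixels, while (5,5) forms a second component of its own.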
DISTANCE MEASURES
• For pixels p, q, and z with coordinates (x,y), (s,t), and (v,w) respectively, D is a distance function (or metric) if:
• D(p,q) ≥ 0, with D(p,q) = 0 iff p = q,
• D(p,q) = D(q,p), and
• D(p,z) ≤ D(p,q) + D(q,z).

• Euclidean Distance
• D4 distance (also called city-block distance)
• D8 distance (also called chessboard distance)
• The Euclidean distance between p and q is defined as:
• De(p,q) = [(x – s)² + (y – t)²]^(1/2)
• Pixels having a distance less than or equal to some value r from (x,y) are the points contained in a disk of radius r centered at (x,y).
• The D4 distance (also called city-block distance) between p
and q is defined as:
• D4 (p,q) = | x – s | + | y – t |
• Pixels having a D4 distance from (x,y), less than or equal to
some value r form a Diamond centered at (x,y)

• Example:
• The pixels with distance D4 ≤ 2 from (x,y) form the following contours of
constant distance.
• The pixels with D4 = 1 are the 4-neighbors of (x,y)
• The D8 distance (also called chessboard distance) between p
and q is defined as:
• D8 (p,q) = max(| x – s |,| y – t |)
• Pixels having a D8 distance from (x,y), less than or equal to
some value r form a square Centered at (x,y).

• Example:
• D8 distance ≤ 2 from (x,y) form the following contours of
constant distance.
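The three distance measures can be sketched side by side; the function names are illustrative.

```python
def euclidean(p, q):
    """De(p,q) = sqrt((x - s)**2 + (y - t)**2): points within r form a disk."""
    (x, y), (s, t) = p, q
    return ((x - s) ** 2 + (y - t) ** 2) ** 0.5

def d4(p, q):
    """City-block distance |x - s| + |y - t|: points within r form a diamond."""
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d8(p, q):
    """Chessboard distance max(|x - s|, |y - t|): points within r form a square."""
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))
```

Note that the pixels with D4 = 1 are exactly the 4-neighbors of p, and the pixels with D8 = 1 are exactly its 8-neighbors.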
IMAGE TRANSFORMS
Discrete Fourier Transform (DFT)
• The 1-D Discrete Fourier Transform (DFT) of f(x), x = 0, 1, …, M-1, is defined as

F(u) = (1/M) Σx=0..M-1 f(x) e^(-j2πux/M),  u = 0, 1, …, M-1

• The 1-D Inverse Discrete Fourier Transform (IDFT) is defined as

f(x) = Σu=0..M-1 F(u) e^(j2πux/M),  x = 0, 1, …, M-1

(The placement of the 1/M factor is a matter of convention; some texts place it on the inverse transform instead, or 1/√M on both.)

Discrete Fourier Transform (DFT)
• The 2-D Discrete Fourier Transform (DFT) is defined as

F(u,v) = (1/MN) Σx=0..M-1 Σy=0..N-1 f(x,y) e^(-j2π(ux/M + vy/N))

where f(x,y) is a digital image of size M×N, u = 0, 1, …, M-1, and v = 0, 1, …, N-1.
• The 2-D Inverse Discrete Fourier Transform (IDFT) is defined as

f(x,y) = Σu=0..M-1 Σv=0..N-1 F(u,v) e^(j2π(ux/M + vy/N))

where x = 0, 1, …, M-1 and y = 0, 1, …, N-1.
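The 2-D DFT can be computed with NumPy's FFT routines; this sketch assumes the convention with the 1/(MN) factor on the forward transform, and the wrapper names are illustrative.

```python
import numpy as np

def dft2(f):
    """2-D DFT with the 1/(M*N) factor on the forward transform,
    computed via the fast Fourier transform."""
    M, N = f.shape
    return np.fft.fft2(f) / (M * N)

def idft2(F):
    """Matching 2-D inverse DFT (no scaling factor on the inverse)."""
    M, N = F.shape
    return np.fft.ifft2(F) * (M * N)

f = np.arange(16, dtype=float).reshape(4, 4)
F = dft2(f)
# Under this convention, F(0, 0) is the average gray level of the image.
```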


DFT Properties
• Separability: the 2-D DFT can be computed as 1-D DFTs along the rows, followed by 1-D DFTs along the columns.
• Translation: shifting f(x,y) multiplies F(u,v) by a complex exponential, and vice versa.
• Periodicity and conjugate symmetry: F(u,v) = F(u+M, v+N), and F(u,v) = F*(-u,-v) for real f(x,y).
• Rotation: rotating f(x,y) by an angle θ rotates F(u,v) by the same angle.
• Convolution theorem: convolution in the spatial domain corresponds to multiplication in the frequency domain, and vice versa.
WALSH TRANSFORM
• For N = 2^n, the 1-D Walsh transform of f(x) is defined as follows:

W(u) = (1/N) Σx=0..N-1 f(x) Πi=0..n-1 (-1)^(bi(x)·b(n-1-i)(u))

where bk(z) is the k-th bit in the binary representation of z.
• The above is equivalent to the matrix product W = (1/N)·A·f, where A is the N×N Walsh matrix of ±1 entries.
• The 1-D Inverse Walsh transform is

f(x) = Σu=0..N-1 W(u) Πi=0..n-1 (-1)^(bi(x)·b(n-1-i)(u))

• The array formed by the inverse Walsh matrix is identical to the one formed by the forward Walsh matrix, apart from a multiplicative factor N.

2-D Walsh Transform
• The 2-D Walsh transform is a straightforward extension of the 1-D transform:

W(u,v) = (1/N) Σx=0..N-1 Σy=0..N-1 f(x,y) Πi=0..n-1 (-1)^(bi(x)·b(n-1-i)(u) + bi(y)·b(n-1-i)(v))

• The Inverse 2-D Walsh transform is identical in form to the forward 2-D Walsh transform.
HADAMARD TRANSFORM
• For N = 2^n, the 1-D Hadamard transform of f(x) is defined as

H(u) = (1/N) Σx=0..N-1 f(x) (-1)^(Σi=0..n-1 bi(x)·bi(u))

where bk(z) is the k-th bit in the binary representation of z.
• The Hadamard matrix of order 2N can be built from the matrix of order N by the recursion

H2N = | HN   HN |
      | HN  -HN |

starting from H1 = [1].
• The Hadamard transform uses the same ±1 basis functions as the Walsh transform, but arranged in a different order.
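The Kronecker recursion H_{2N} = [[H_N, H_N], [H_N, -H_N]] gives a compact way to build the Hadamard matrix and apply the transform; this is a minimal sketch, and the function names are assumptions.

```python
import numpy as np

def hadamard(n):
    """Hadamard matrix of order N = 2**n, built by the Kronecker recursion
    H_{2N} = [[H_N, H_N], [H_N, -H_N]] starting from H_1 = [1]."""
    H = np.array([[1.0]])
    core = np.array([[1.0, 1.0], [1.0, -1.0]])
    for _ in range(n):
        H = np.kron(core, H)
    return H

def hadamard_transform(f):
    """1-D Hadamard transform with the 1/N factor on the forward transform."""
    N = f.shape[0]
    H = hadamard(int(np.log2(N)))
    return H @ f / N

f = np.array([1.0, 2.0, 3.0, 4.0])
g = hadamard_transform(f)
```

Because the Hadamard matrix is symmetric and satisfies H·H = N·I, applying H to the forward coefficients recovers the original sequence, mirroring the forward/inverse symmetry noted for the Walsh transform.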
DISCRETE COSINE TRANSFORM (DCT)
• The 1-D Discrete Cosine Transform (DCT) is defined as

C(u) = α(u) Σx=0..N-1 f(x) cos[(2x+1)uπ / 2N]

for u = 0, 1, 2, …, N-1.
• The inverse DCT is defined as

f(x) = Σu=0..N-1 α(u) C(u) cos[(2x+1)uπ / 2N]

for x = 0, 1, 2, …, N-1.
• Where α(0) = √(1/N) and α(u) = √(2/N) for u ≠ 0.

2-D DISCRETE COSINE TRANSFORM (DCT)
• The 2-D Discrete Cosine Transform (DCT) is defined as

C(u,v) = α(u)α(v) Σx=0..N-1 Σy=0..N-1 f(x,y) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]

for u = 0, 1, 2, …, N-1 and v = 0, 1, 2, …, N-1.
• The inverse DCT is defined as

f(x,y) = Σu=0..N-1 Σv=0..N-1 α(u)α(v) C(u,v) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]

for x = 0, 1, 2, …, N-1 and y = 0, 1, 2, …, N-1, where α is as defined above.
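The 1-D DCT pair can be implemented directly from the definition, with the α(u) normalization making the transform orthonormal; a minimal sketch, with the function names assumed for the example.

```python
import numpy as np

def dct1(f):
    """1-D DCT: C(u) = a(u) * sum_x f(x) cos[(2x+1) u pi / (2N)],
    with a(0) = sqrt(1/N) and a(u) = sqrt(2/N) for u > 0."""
    N = f.shape[0]
    x = np.arange(N)
    C = np.empty(N)
    for u in range(N):
        a = np.sqrt(1.0 / N) if u == 0 else np.sqrt(2.0 / N)
        C[u] = a * np.sum(f * np.cos((2 * x + 1) * u * np.pi / (2 * N)))
    return C

def idct1(C):
    """Inverse DCT: f(x) = sum_u a(u) C(u) cos[(2x+1) u pi / (2N)]."""
    N = C.shape[0]
    u = np.arange(N)
    a = np.where(u == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N))
    return np.array([np.sum(a * C * np.cos((2 * x + 1) * u * np.pi / (2 * N)))
                     for x in range(N)])

f = np.array([1.0, 2.0, 3.0, 4.0])
```

By separability, the 2-D DCT can be obtained by applying `dct1` to every row and then to every column.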
DISCRETE WAVELET TRANSFORM (DWT)
1-D DWT
• The 1-D Discrete Wavelet Transform (DWT) of f(x), x = 0, 1, …, M-1, is given by the approximation and detail coefficients

Wφ(j0,k) = (1/√M) Σx f(x) φ(j0,k)(x)
Wψ(j,k) = (1/√M) Σx f(x) ψ(j,k)(x),  for j ≥ j0

where φ(j,k) and ψ(j,k) are the scaled and translated scaling and wavelet functions.
• The Inverse Discrete Wavelet Transform (IDWT) is

f(x) = (1/√M) Σk Wφ(j0,k) φ(j0,k)(x) + (1/√M) Σj≥j0 Σk Wψ(j,k) ψ(j,k)(x)

2-D DWT
• In two dimensions, one two-dimensional scaling function φ(x,y) and three two-dimensional wavelets, ψH(x,y), ψV(x,y), and ψD(x,y), are required; they measure intensity variations along columns (horizontal details), rows (vertical details), and diagonals, respectively.
• The 2-D Discrete Wavelet Transform (DWT) of an image f(x,y) of size M×N is given by the approximation coefficients Wφ(j0,m,n) and the three sets of detail coefficients Wψ^i(j,m,n), i = {H, V, D}, defined analogously with a 1/√(MN) factor.
• The Inverse Discrete Wavelet Transform (IDWT) reconstructs f(x,y) by summing the corresponding expansions over the approximation subband and the three detail subbands.
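One level of the 2-D DWT can be sketched with the Haar wavelet, the simplest choice of scaling and wavelet functions: lowpass/highpass filtering of the rows followed by the columns yields the approximation subband and the three detail subbands. This is a minimal sketch with the Haar wavelet assumed; libraries such as PyWavelets provide other wavelets and proper boundary handling.

```python
import numpy as np

def haar_dwt2(f):
    """One level of a 2-D Haar DWT: returns the approximation (LL) subband
    and the three detail subbands, each half the size in both dimensions."""
    # Row transform: pairwise averages (lowpass) and differences (highpass),
    # scaled by 1/sqrt(2) to keep the transform orthonormal.
    lo = (f[:, 0::2] + f[:, 1::2]) / np.sqrt(2)
    hi = (f[:, 0::2] - f[:, 1::2]) / np.sqrt(2)
    # Column transform on each half-width result.
    LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)   # approximation
    LH = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)   # one detail subband
    HL = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)   # one detail subband
    HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)   # diagonal details
    return LL, LH, HL, HH

f = np.ones((4, 4))
LL, LH, HL, HH = haar_dwt2(f)
```

For a constant image all three detail subbands are zero, as expected: there are no intensity variations for the wavelets to measure.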
