
NARAYANA ENGINEERING COLLEGE - NELLORE

DEPT: ELECTRONICS AND COMMUNICATION ENGINEERING

SUBJECT NAME: DIGITAL IMAGE PROCESSING

SUBJECT CODE: 15A04708

FACULTY: K. SELVAKUMARASAMY
ASSOCIATE PROFESSOR
ECE DEPT.

SYLLABUS
UNIT–I

Introduction to Digital Image Processing – Example fields of its usage – Image sensing and
acquisition – Image modeling – Sampling, quantization and digital image representation – Basic
relationships between pixels – Mathematical tools/operations applied on images – Imaging
geometry.

UNIT–II

2D Orthogonal and Unitary Transforms and their properties – Fast algorithms – Discrete Fourier
Transform – Discrete Cosine Transform – Walsh–Hadamard Transform – Hotelling Transform –
Comparison of properties of the above.

UNIT–III

Background – Enhancement by point processing – Histogram processing – Spatial filtering –
Enhancement in frequency domain – Image smoothing – Image sharpening – Color image
enhancement.
SYLLABUS
UNIT–IV

Degradation model – Algebraic approach to restoration – Inverse filtering – Least mean square
filters – Constrained least squares restoration – Blind deconvolution. Image segmentation: Edge
detection – Edge linking – Threshold-based segmentation methods – Region-based approaches –
Template matching – Use of motion in segmentation.

UNIT–V

Redundancies in images – Compression models – Information-theoretic perspective – Fundamental
coding theorem – Huffman coding – Arithmetic coding – Bit-plane coding – Run-length coding –
Transform coding – Image formats and compression standards.

TEXT BOOKS:

1. R. C. Gonzalez & R. E. Woods, “Digital Image Processing”, Addison Wesley/Pearson Education, 3rd Edition, 2010.

2. A. K. Jain, “Fundamentals of Digital Image Processing”, PHI.

REFERENCES:

1. Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, “Digital Image Processing using MATLAB”, Tata McGraw Hill, 2010.

2. S. Jayaraman, S. Esakkirajan, T. Veerakumar, “Digital Image Processing”, Tata McGraw Hill.

3. William K. Pratt, “Digital Image Processing”, John Wiley, 3rd Edition, 2004.
3
COURSE OBJECTIVES AND
OUTCOMES
 To study the image fundamentals and mathematical transforms necessary for
image processing.
 To Understand the concepts of image enhancement in spatial and frequency
domain.
 To Acquire knowledge on image compression methods.
 To Acquire knowledge on image segmentation methods.
 To Understand the concepts of image restoration

Understand the fundamental concepts of a digital image processing system.(BL-02)

Apply 2D filters for image enhancement in spatial and frequency domain. (BL-03)

Understand various image compression methods.(BL-02)

Apply segmentation methods on digital images .(BL-03)

Select the techniques for image restoration.(BL-03)


4
INTRODUCTION TO DIGITAL IMAGE PROCESSING

UNIT 1 CONTENTS:

INTRODUCTION TO DIGITAL IMAGE PROCESSING – EXAMPLE
FIELDS OF ITS USAGE – IMAGE SENSING AND ACQUISITION – IMAGE
MODELING – SAMPLING, QUANTIZATION AND DIGITAL IMAGE
REPRESENTATION – BASIC RELATIONSHIPS BETWEEN PIXELS –
MATHEMATICAL TOOLS/OPERATIONS APPLIED ON IMAGES – IMAGING
GEOMETRY.
WHAT IS A DIGITAL IMAGE?

▪ A digital image is a representation of a two-dimensional image as a finite set of
digital values, called picture elements or pixels. Pixel values typically represent gray
levels, colours, heights, etc.
▪ A digital image is an approximation of a real scene.

Fig 1.1: Digital Image


WHAT IS A DIGITAL IMAGE?

DEFINITION OF DIGITAL IMAGE:

An image may be defined as a two-dimensional function, ƒ(x,y), where x and y
are spatial (plane) coordinates, and the amplitude of ƒ at any pair of coordinates (x,y) is
called the intensity or gray level of the image at that point (x,y). When x, y, and the
intensity values of ƒ are finite, discrete quantities, we call the image a digital image.

1 pixel

Fig 1.2: Digital Image


WHAT IS A DIGITAL IMAGE?

Common image formats include:

– 1 sample per point (B&W or Grayscale)

– 3 samples per point (Red, Green, and Blue)

– 4 samples per point (Red, Green, Blue, and “Alpha”, a.k.a. Opacity)

Fig 1.3: For most of this course we will focus on grey-scale images
WHAT IS A DIGITAL IMAGE?

▪ Digital image processing is a method to perform operations on a digital image, in
order to get an enhanced digital image or to extract some useful information from it.

▪ It is the processing of images which are digital in nature by means of a digital computer.

▪ Digital image processing focuses on two major tasks:

▪ Improvement of pictorial information for human interpretation
▪ Processing of image data for storage, transmission and representation for
autonomous machine perception
WHAT IS A DIGITAL IMAGE?

The continuum from image processing to computer vision can be broken up into low-,
mid- and high-level processes.

Fig 1.4: Image Processing
HISTORY OF DIGITAL IMAGE PROCESSING

Early 1920s: One of the first applications of digital
imaging was in the newspaper industry.
▪ The Bartlane cable picture transmission service:
images were transferred by submarine cable
between London and New York.
▪ Pictures were coded for cable transfer and
reconstructed at the receiving end on a telegraph
printer.

Fig 1.5: Early digital image, 1920s
HISTORY OF DIGITAL IMAGE PROCESSING

Mid to late 1920s: Improvements to the Bartlane
system resulted in higher quality images.
▪ New reproduction processes based
on photographic techniques
▪ Increased number of tones in
reproduced images

Fig 1.6: Early digital image, mid to late 1920s
HISTORY OF DIGITAL IMAGE PROCESSING

▪ 1960s: Improvements in computing technology and the
onset of the space race led to a surge of work in digital
image processing.
▪ 1964: Computers were used to improve the quality of
images of the moon taken by the Ranger 7 probe.
▪ Such techniques were used in other space missions,
including the Apollo landings.

Fig 1.7: Moon image taken by the Ranger 7 probe minutes before impact
HISTORY OF DIGITAL IMAGE PROCESSING

▪ 1970s: Digital image processing begins to be used in
medical applications.
▪ 1979: Sir Godfrey N. Hounsfield & Prof. Allan
M. Cormack share the Nobel Prize in Medicine for the
invention of tomography, the technology behind
Computerised Axial Tomography (CAT) scans.

Fig 1.8: Typical head slice CAT image
HISTORY OF DIGITAL IMAGE PROCESSING

1980s - Today: The use of digital image processing techniques has exploded, and they
are now used for all kinds of tasks in all kinds of areas:
▪ Image enhancement/restoration

▪ Artistic effects

▪ Medical visualisation

▪ Industrial inspection

▪ Law enforcement

▪ Human computer interfaces
APPLICATIONS – IMAGING MODALITIES

Fig 1.9: EM spectrum arranged according to energy per photon
APPLICATIONS: IMAGE ENHANCEMENT

▪ One of the most common uses of DIP techniques: improve quality, remove noise, etc.

Fig 1.10: Image Enhancement
APPLICATIONS: THE HUBBLE TELESCOPE

Launched in 1990, the Hubble telescope can take
images of very distant objects. However, an
incorrect mirror made many of Hubble’s
images useless. Image processing techniques
were used to fix this.

Fig 1.11: Hubble Telescope Image


APPLICATIONS: ARTISTIC EFFECTS

Artistic effects are used to make
images more visually appealing, to
add special effects and to make
composite images.

Fig 1.12: Artistic Effects


APPLICATIONS: MEDICINE

Fig 1.13: X-ray Image   Fig 1.14: Gamma Image
APPLICATIONS: MEDICINE

▪ Radio frequencies
▪ Magnetic Resonance Imaging (MRI)

Fig 1.15: MRI Image

Fig 1.16: Ultrasound Image
APPLICATIONS: MEDICINE

Fig 1.17: 3D tomography and rendering with transparencies
APPLICATIONS: MEDICINE

Take a slice from an MRI scan of a canine heart, and find boundaries between types of tissue.
▪ Image with gray levels representing tissue density

▪ Use a suitable filter to highlight edges

Fig 1.18: Original image of dog heart   Fig 1.19: Edge detection image
APPLICATIONS: GIS

Geographic Information Systems

▪ Satellite imagery

▪ Terrain classification (LANDSAT)

▪ Meteorology (NOAA)

Fig 1.20: Geographic Information Systems
APPLICATIONS: GIS

Night-Time Lights of the World data set
(infrared)
▪ Global inventory of human
settlement
▪ Not hard to imagine the kind of
analysis that might be done
using this data

Fig 1.21: Geographic Information Systems
APPLICATIONS: INDUSTRIAL INSPECTION

Human operators are expensive, slow and
unreliable.

Make machines do the job instead.

Industrial vision systems are used in all
kinds of industries.

Can we trust them?

Fig 1.22: Industrial Inspection
APPLICATIONS: PCB INSPECTION

PRINTED CIRCUIT BOARD (PCB) INSPECTION

▪ Machine inspection is used to determine that all components are present and
that all solder joints are acceptable.
▪ Both conventional imaging and x-ray imaging are used.

Fig 1.23: PCB Images
APPLICATIONS: LAW ENFORCEMENT

Image processing techniques are used
extensively in law enforcement:
▪ Number plate recognition for speed
cameras/automated toll systems
▪ Fingerprint recognition
▪ Enhancement of CCTV images

Fig 1.24: Number Plate Recognition
Fig 1.25: Fingerprint Recognition
APPLICATIONS: HCI

Try to make human-computer interfaces more
natural:
▪ Face recognition

▪ Gesture recognition

Does anyone remember the user interface from
“Minority Report”?

These tasks can be extremely difficult.

Fig 1.26: Face Recognition
Fig 1.27: Gesture Recognition
KEY STAGES IN DIGITAL IMAGE PROCESSING

Fig 1.28: Stages in DIP
KEY STAGES IN DIGITAL IMAGE PROCESSING
IMAGE ACQUISITION:

The first process is to acquire a digital image. To do this we need
image sensor equipment having the ability to digitize the signal
produced by the sensor. The sensor could be a TV camera, a line-scan
camera, etc.

Fig 1.29: Image Acquisition
KEY STAGES IN DIGITAL IMAGE PROCESSING
IMAGE ENHANCEMENT:

It is a subjective area of image processing which is used to bring
out detail that is obscured or to highlight certain features of
interest in an image. Example: increasing the contrast of an image
for better vision.

Fig 1.30: Image Enhancement
KEY STAGES IN DIGITAL IMAGE PROCESSING
IMAGE RESTORATION:

It also deals with improving the appearance of an image, but it is
done using the mathematical or probabilistic methods of image
processing.

Fig 1.31: Image Restoration
KEY STAGES IN DIGITAL IMAGE PROCESSING
MORPHOLOGICAL PROCESSING:

It deals with tools for extracting image components that are useful in the
representation and description of the shape and boundary of objects. It is
mainly used in automated inspection applications.

Fig 1.32: Morphological Processing
KEY STAGES IN DIGITAL IMAGE PROCESSING
SEGMENTATION:

• It may be defined as partitioning an input image into its constituent parts or
objects. It is very important to distinguish between different objects in an
image, as in the case of systems employed for traffic control or crowd
control.
• In character recognition, the key role of segmentation is to extract
individual characters and words from the background.

Fig 1.33: Segmentation
KEY STAGES IN DIGITAL IMAGE PROCESSING
OBJECT RECOGNITION:

It is a process that assigns a label to an object based on the
information provided by its descriptors; the recognized object is then
interpreted by assigning a meaning to it.

Fig 1.34: Object Recognition
KEY STAGES IN DIGITAL IMAGE PROCESSING
REPRESENTATION & DESCRIPTION:

It is a process which transforms raw data into a form suitable for
subsequent computer processing. The first decision is to choose between
boundary representation and regional representation. Boundary
representation is used when the details of external shape characteristics are
important, whereas regional representation is used when the internal
properties are important.

Fig 1.35: Representation & Description
KEY STAGES IN DIGITAL IMAGE PROCESSING
IMAGE COMPRESSION:

This technique is used to reduce the storage required to save an image or the
bandwidth required to transmit it, which is most important in Internet
applications.

Fig 1.36: Image Compression
KEY STAGES IN DIGITAL IMAGE PROCESSING
COLOUR IMAGE PROCESSING:

As we know, to restore the natural characteristics of an image it is
necessary to preserve the color information associated with the image.
For this purpose we go for color image processing.

Fig 1.37: Colour Image Processing
FUNDAMENTAL STEPS IN DIP

Fig 1.38: Steps in DIP


• Color Image Processing: As we know, to restore the natural
characteristics of an image it is necessary to preserve the color
information associated with the image. For this purpose, we go
for color image processing.

• Wavelets and Multi-resolution: This is the foundation for
representing images in various degrees of resolution.
It is employed particularly for image data compression and for
pyramidal representation, where images are successively
subdivided into smaller regions.

• Knowledge Base: The function of the knowledge base is to guide
the operation of each processing module and control the
interaction between them. A feedback request through the
knowledge base to the segmentation stage for another ‘look’ is
an example of knowledge utilization in performing image
processing tasks.
COMPONENTS OF IMAGE PROCESSING SYSTEM

• Mid-1980s: Numerous models of image processing systems took
the form of substantial peripheral devices that attached to equally
substantial host computers.

• Late 1980s and early 1990s: Image processing hardware moved to
single boards compatible with industry-standard buses,
engineering workstation cabinets and personal computers.

• Till date: There has been a reduction in cost, miniaturization, and a
move toward application-specific systems.
COMPONENTS OF IMAGE PROCESSING SYSTEM

Image sensors: With reference to sensing, two elements are required to acquire a digital
image: 1) a physical device that is sensitive to the energy radiated by the object we wish to
image, and 2) specialized image processing hardware.

Fig 1.39: Components of a general-purpose image processing system (network, image
displays, computer, mass storage, hardcopy devices, specialized image processing
hardware, image processing software, image sensors, problem domain)
COMPONENTS OF IMAGE PROCESSING SYSTEM

Specialized image processing hardware: It consists of the digitizer and hardware
that performs other primitive operations, such as an arithmetic logic unit (ALU).
(See Fig 1.39.)
COMPONENTS OF IMAGE PROCESSING SYSTEM

Computer: It is a general-purpose computer and can range from a PC to a supercomputer,
depending on the application. In dedicated applications, specially designed
computers are sometimes used to achieve a required level of performance.
(See Fig 1.39.)
COMPONENTS OF IMAGE PROCESSING SYSTEM

Software: It consists of specialized modules that perform specific tasks. A well-designed
package also includes the capability for the user to write code. More sophisticated software
packages allow the integration of those modules. (See Fig 1.39.)
COMPONENTS OF IMAGE PROCESSING SYSTEM

Mass storage: An image of size 1024 x 1024 pixels, in which the intensity of each pixel is an
8-bit quantity, requires one megabyte of storage space if the image is not compressed.
Image processing applications fall into three principal categories of storage:
i) Short-term storage for use during processing
ii) On-line storage for relatively fast retrieval
iii) Archival storage, such as magnetic tapes and optical disks housed in jukeboxes
(See Fig 1.39.)
COMPONENTS OF IMAGE PROCESSING SYSTEM

Image displays: Image displays in use today are mainly color TV
monitors. These monitors are driven by the outputs of image and
graphics display cards that are an integral part of the computer system.
(See Fig 1.39.)
COMPONENTS OF IMAGE PROCESSING SYSTEM

Hardcopy devices: The devices for recording images include laser
printers, film cameras, heat-sensitive devices, inkjet units and digital
units such as optical and CD-ROM disks. Film provides the highest
possible resolution, but paper is the obvious medium of choice for
written applications. (See Fig 1.39.)
COMPONENTS OF IMAGE PROCESSING SYSTEM

• Networking: It is almost a default function in any computer
system in use today. Because of the large amount of data inherent
in image processing applications, the key consideration in image
transmission is bandwidth.
ELEMENTS OF VISUAL PERCEPTION

• The field of DIP is built on a foundation of mathematical and
probabilistic formulations.

• But human intuition and analysis play a central role in the
choice of techniques, based on subjective, visual judgments.

• Thus a basic understanding of human visual perception is a
necessary first step in understanding digital image processing.
STRUCTURE OF THE HUMAN EYE

Fig 1.40: Structure of the human eye
IMAGE FORMATION IN THE EYE

• In an ordinary photographic camera, the lens has a
fixed focal length, and focusing at various distances is
achieved by varying the distance between the lens and the
imaging plane, where the film is located.

• In the human eye the converse is true: the distance
between the lens and the imaging region (the retina) is
fixed, and the focal length needed to achieve proper focus is
obtained by varying the shape of the lens.
IMAGE FORMATION IN THE EYE

• The distance between the center of the lens and the retina
along the visual axis is approximately 17 mm.

• The range of focal lengths is approximately 14 mm
to 17 mm, the latter taking place when the eye is relaxed
and focused at distances greater than about 3 m.

• The geometry in the next figure illustrates how to obtain the
dimensions of an image formed on the retina.
IMAGE FORMATION IN THE EYE

Fig 1.41

• For example, suppose that a person is looking at a tree 15 m high at
a distance of 100 m.
• Letting h denote the height of that object in the retinal image, the
geometry yields 15/100 = h/17, or h = 2.55 mm.
IMAGE FORMATION IN THE EYE

• The retinal image is focused primarily on the region of the
fovea.
• Perception then takes place by the relative excitation of light
receptors, which transform radiant energy into electrical
impulses that are ultimately decoded by the brain.

How the human visual system works:
• Light energy is focused by the lens of the eye onto the sensors
in the retina.

• The sensors respond to the light by an electrochemical
reaction that sends an electrical signal to the brain (through
the optic nerve).

• The brain uses the signals to create neurological patterns
that we perceive as images.
IMAGE SENSING & ACQUISITION

▪ Images are generated by the combination of an “illumination” source and the
reflection or absorption of energy from that source by the elements of the “scene”
being imaged.
▪ There are three types of sensing arrangements used to acquire images:
1. Image sensing using a single sensor.
2. Image sensing using a sensor strip.
3. Image sensing using an array of sensors.

Fig 1.42: Sensor
IMAGE SENSING WITH A SINGLE SENSOR

▪ To generate a 2-D image using a single sensor, there have to be relative displacements
in both the x- and y-directions between the sensor and the area to be imaged.

Fig 1.43: 2-D Image


IMAGING WITH A SENSOR STRIP

▪ The strip provides imaging elements in one direction.

▪ Motion perpendicular to the strip provides imaging in the other direction.
▪ This is the type of arrangement used in most flatbed scanners.

Fig 1.44: Sensor Strip


IMAGING WITH ARRAY SENSORS

▪ Here individual sensors are arranged in the form of a 2-D array.

▪ This is also the predominant arrangement found in digital cameras.

Fig 1.45: Array Sensor


IMAGE SAMPLING AND QUANTIZATION
• However images are acquired, the objective is the same: to generate digital
images from sensed data.

• The output of most sensors is a continuous voltage waveform whose amplitude
and spatial behavior are related to the physical phenomenon being sensed.

• To create a digital image, we need to convert the continuous sensed data into
digital form.

• This involves two processes: sampling and quantization.

• Sampling the analog signal means instantaneously measuring the voltage of the
signal at fixed intervals in time.
• The value of the voltage at each instant is converted into a number and stored.

• The number represents the brightness of the image at that point. The grabbed
image is now a digital image and can be accessed as a two-dimensional
array of data.

• Each data point is called a pixel (picture element).

• The notation used to express a digital image: I(r,c)
• I(r,c) = the brightness of the image at point (r,c)
IMAGE SAMPLING AND QUANTIZATION

▪ Formation of a digital image from a continuous image basically involves two
steps: sampling and quantization.
▪ An image may be continuous with respect to the x and y coordinates and also in
amplitude.
▪ To convert it to digital form, we have to sample the function in both
coordinates and in amplitude.
▪ Digitizing the coordinate values is called sampling, whereas digitizing the
amplitude values is called quantization.

Fig 1.46(a): Continuous image projected onto a sensor array
Fig 1.46(b): Result of image sampling and quantization
IMAGE SAMPLING AND QUANTIZATION
• Consider a plot of amplitude (intensity level) values of the
continuous image along the line segment AB.
• To sample this function, we take equally spaced
samples along line AB.
• The spatial location of each sample is indicated by a
vertical tick mark in that part of the figure.
• The samples are shown as small white squares
superimposed on the function.
• The set of these discrete locations gives the
sampled function.
• However, the values of the samples still span a
continuous range of intensity values.
• The figure shows the intensity scale divided into
eight discrete intervals, ranging from black to
white.

Fig 1.47: Sampling and Quantization
IMAGE SAMPLING AND QUANTIZATION
• The vertical tick marks indicate the
specific value assigned to each of the
eight intensity intervals.
• The continuous intensity levels are
quantized by assigning one of the eight
values to each sample.
• The assignment is made depending on
the vertical proximity of a sample to a
tick mark.
• Starting at the top of the image and
carrying out this procedure line by line
produces a two-dimensional digital
image.

Fig 1.47: Sampling and Quantization
• In addition to the number of discrete levels used, the accuracy achieved in
quantization is highly dependent on the noise content of the sampled signal.
• In practice, the method of sampling is determined by the sensor
arrangement used to generate the image:
1. When an image is generated by a single sensing element combined with
mechanical motion, the output of the sensor is quantized in the manner
described above.
2. When a sensing strip is used for image acquisition, the number of sensors in
the strip establishes the sampling limitations in one image direction;
mechanical motion can be controlled in the other direction.
3. When a sensing array is used for image acquisition, there is no motion, and
the number of sensors in the array establishes the limits of sampling in both
directions.
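As a minimal illustrative sketch of the two steps described above (the signal, sample count and level count here are assumptions, not from the syllabus), sampling and quantization can be demonstrated on a 1-D "scan line":

    import numpy as np

    def digitize_line(f, length, n_samples, n_levels):
        """Sample a continuous function f on [0, length] and quantize its
        values (assumed to lie in [0, 1]) to n_levels discrete gray levels."""
        x = np.linspace(0, length, n_samples)        # sampling: discretize coordinates
        samples = f(x)                               # still continuous in amplitude
        levels = np.round(samples * (n_levels - 1))  # quantization: discretize amplitude
        return levels.astype(np.uint8)

    # 16 samples of a sinusoidal "scan line", quantized to 8 levels (0..7)
    line = digitize_line(lambda x: 0.5 + 0.5 * np.sin(x), 2 * np.pi, 16, 8)
    print(line)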
Fig 1.48

The quality of a digital image is determined to a large degree by the
number of samples and discrete intensity levels used in sampling and
quantization.
IMAGE MODEL
▪ An image is a function f(x, y).
▪ The amplitude of the image is nonzero and finite, i.e. 0 < f(x,y) < ∞.
▪ The function f(x, y) may be characterized by two components:

▪ The amount of source illumination incident on the scene being
viewed

▪ The amount of illumination reflected by the objects in the scene

▪ Illumination component = i(x,y)

▪ Reflectance component = r(x,y)

So, f(x,y) = i(x,y) . r(x,y)

where 0 < i(x,y) < ∞
and 0 < r(x,y) < 1
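As a minimal sketch (the illumination and reflectance values below are assumed, purely for illustration), the model can be exercised directly: an image is formed as the pointwise product of an illumination field and a reflectance field.

    import numpy as np

    rng = np.random.default_rng(0)
    i = np.full((4, 4), 90.0)                 # illumination: 0 < i(x,y) < infinity
    r = rng.uniform(0.01, 0.99, size=(4, 4))  # reflectance: 0 < r(x,y) < 1
    f = i * r                                 # image intensity f(x,y) = i(x,y) * r(x,y)
    print(f.min() > 0, f.max() < np.inf)      # amplitude is nonzero and finite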
WHAT ARE GRAY LEVELS?

▪ A gray level is the brightness of a pixel.

▪ It is the value associated with a pixel representing its lightness from black to
white:
l = f(x0, y0)

Lmin ≤ l ≤ Lmax

Lmin = imin . rmin and Lmax = imax . rmax

Fig 1.49: Gray Levels


REPRESENTATION OF DIGITAL IMAGES

A digital image can be represented as a function f(x, y). The same can be represented in
the form of an M x N matrix:

    f(x,y) = [ f(0,0)     f(0,1)     ...  f(0,N-1)
               f(1,0)     f(1,1)     ...  f(1,N-1)
               ...        ...        ...  ...
               f(M-1,0)   f(M-1,1)   ...  f(M-1,N-1) ]

Each element of this matrix is a pixel.
Spatial and Gray-level Resolution
• Sampling is the principal factor determining the spatial resolution of an image.
• Spatial resolution is the smallest discernible detail in an image.
• Suppose that we construct a chart with vertical lines of width W, with the space
between the lines also having width W.
• A line pair consists of one such line and its adjacent space.
• The width of a line pair is 2W, and there are 1/(2W) line pairs per unit distance.
• Resolution is simply the smallest number of discernible line pairs per unit
distance.
• Gray-level resolution similarly refers to the smallest discernible change in
gray level.
• We have considerable discretion regarding the number of samples used to
generate a digital image, but this is not true for the number of gray levels. Due
to hardware considerations, the number of gray levels is usually an integer
power of 2.
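A quick sketch of the gray-level side (illustrative, not from the slides): with 8-bit data, requantizing to 2^k levels amounts to keeping the k most significant bits of each pixel.

    import numpy as np

    def requantize(img, k):
        """Reduce an 8-bit image to 2**k gray levels by dropping low-order bits."""
        shift = 8 - k
        return np.uint8((img >> shift) << shift)

    img = np.arange(256, dtype=np.uint8).reshape(16, 16)   # 256-level ramp
    img16 = requantize(img, 4)   # 16 gray levels
    img2 = requantize(img, 1)    # 2 gray levels
    print(len(np.unique(img16)), len(np.unique(img2)))     # 16 2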
Spatial and Gray-level Resolution
The subsampling was accomplished by deleting the appropriate number of rows
and columns from the original image.

A 1024*1024, 8-bit image was subsampled down to 32*32 pixels; the number of
allowable gray levels was kept at 256. (a) The 1024*1024, 8-bit image. (b) The
512*512 image resampled into 1024*1024 pixels by row and column duplication.
(c) through (f) The 256*256, 128*128, 64*64, and 32*32 images resampled into
1024*1024 pixels.
(a) 452*374, 256-level image. (b)–(d) Image displayed in 128, 64, and 32 gray levels,
while keeping the spatial resolution constant. (e)–(h) Image displayed in 16, 8, 4, and
2 gray levels.
• An early study by Huang [1965] attempted to quantify experimentally the effects
on image quality produced by varying N and k simultaneously.
• Sets of three types of images were generated by varying N and k, and
observers were then asked to rank them according to their subjective quality.
Results were summarized in the form of so-called isopreference curves in the Nk-
plane.
• Each point in the Nk-plane represents an image having values of N and k equal to
the coordinates of that point.
• It was found in the course of the experiments that the isopreference curves tended
to shift right and upward, but their shapes in each of the three image categories
were similar. A shift up and right in the curves simply means larger values for
N and k, which implies better picture quality.

(a) Image with a low level of detail. (b) Image with a medium level of detail. (c) Image
with a relatively large amount of detail.
Representative isopreference curves for the three types of images.
• Pixel: A pixel is the smallest unit of a digital image or graphic that can be
displayed and represented on a digital display device. A pixel is also known
as a picture element.
• A pixel is the basic logical unit in digital graphics. Pixels are combined to
form a complete image, video, text or any visible thing on a computer
display.
• A pixel is represented by a dot or square on a computer monitor display
screen. Pixels are the basic building blocks of a digital image or display and
are addressed using geometric coordinates.

• Resolution refers to the number of pixels in an image.

• Resolution is sometimes identified by the width and height of the image as
well as the total number of pixels in the image.
• For example, an image that is 2048 pixels wide and 1536 pixels high
(2048 x 1536) contains 2048 x 1536 = 3,145,728 pixels (or 3.1 megapixels).
You could call it a 2048 x 1536 or a 3.1-megapixel image.
BASIC RELATIONSHIPS BETWEEN PIXELS

• Neighborhood

• Adjacency

• Connectivity

• Paths

• Regions and boundaries

BASIC RELATIONSHIPS BETWEEN PIXELS
• Neighbors of a pixel p at coordinates (x,y):

 4-neighbors of p, denoted by N4(p):
(x-1, y), (x+1, y), (x, y-1), and (x, y+1).

 4 diagonal neighbors of p, denoted by ND(p):
(x-1, y-1), (x+1, y+1), (x+1, y-1), and (x-1, y+1).

 8-neighbors of p, denoted N8(p):
N8(p) = N4(p) U ND(p)
NEIGHBORS OF A PIXEL

The neighborhood relation is used to tell which pixels are adjacent. It is useful for analyzing regions.

4-neighbors of p:

                (x,y-1)
    (x-1,y)        p        (x+1,y)
                (x,y+1)

N4(p) = {(x-1,y), (x+1,y), (x,y-1), (x,y+1)}

The 4-neighborhood relation considers only vertical and horizontal neighbors.

Note: q ∈ N4(p) implies p ∈ N4(q)

Fig 1.48: 4-neighbors of p
NEIGHBORS OF A PIXEL

Diagonal neighbors of p:

    (x-1,y-1)               (x+1,y-1)
                   p
    (x-1,y+1)               (x+1,y+1)

ND(p) = {(x-1,y-1), (x+1,y-1), (x-1,y+1), (x+1,y+1)}

The diagonal-neighborhood relation considers only diagonal neighbor pixels.

Fig 1.50: Diagonal neighbors of p
NEIGHBORS OF A PIXEL

8-neighbors of p:

    (x-1,y-1)   (x,y-1)   (x+1,y-1)
    (x-1,y)        p      (x+1,y)
    (x-1,y+1)   (x,y+1)   (x+1,y+1)

N8(p) = {(x-1,y-1), (x,y-1), (x+1,y-1), (x-1,y), (x+1,y), (x-1,y+1), (x,y+1), (x+1,y+1)}

The 8-neighborhood relation considers all neighbor pixels.

Fig 1.49: 8-neighbors of p
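These sets translate directly into code; a minimal sketch (the coordinate convention is assumed as above):

    def N4(x, y):
        """4-neighbors: vertical and horizontal."""
        return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

    def ND(x, y):
        """Diagonal neighbors."""
        return {(x - 1, y - 1), (x + 1, y - 1), (x - 1, y + 1), (x + 1, y + 1)}

    def N8(x, y):
        """8-neighbors: N8(p) = N4(p) U ND(p)."""
        return N4(x, y) | ND(x, y)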
BASIC RELATIONSHIPS BETWEEN PIXELS
• Adjacency: A pixel p is adjacent to pixel q if they are
connected. Two image subsets S1 and S2 are
adjacent if some pixel in S1 is adjacent to some pixel
in S2.
Let V be the set of intensity values used to define adjacency.

 4-adjacency: Two pixels p and q with values from V
are 4-adjacent if q is in the set N4(p).
 8-adjacency: Two pixels p and q with values from V
are 8-adjacent if q is in the set N8(p).

 m-adjacency: Two pixels p and q with values from V are m-adjacent
if
(i) q is in the set N4(p), or
(ii) q is in the set ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values
are from V.
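A minimal sketch of these three tests, reusing the N4/ND helpers above (img is a 2-D array and V a set of intensity values; the helper names are assumptions, not slide material):

    def in_V(img, p, V):
        """True if p lies inside img and its value is in V."""
        x, y = p
        return 0 <= x < len(img) and 0 <= y < len(img[0]) and img[x][y] in V

    def adjacent_4(img, p, q, V):
        return in_V(img, p, V) and in_V(img, q, V) and q in N4(*p)

    def adjacent_8(img, p, q, V):
        return in_V(img, p, V) and in_V(img, q, V) and q in N4(*p) | ND(*p)

    def adjacent_m(img, p, q, V):
        if not (in_V(img, p, V) and in_V(img, q, V)):
            return False
        if q in N4(*p):
            return True
        # diagonal case: allowed only if N4(p) and N4(q) share no pixel with value in V
        shared = {n for n in N4(*p) & N4(*q) if in_V(img, n, V)}
        return q in ND(*p) and not shared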
EXAMPLE

Consider two image subsets S1 and S2 shown in the
following figure. For V = {1}, determine whether these two
subsets are (a) 4-adjacent, (b) 8-adjacent, (c) m-adjacent.

        S1                S2
  1  1  1  1        1  1  1  0

  1  1  0  0        1  0  1  1

  1  1  0  1        0  0  1  1

  1  0  0  0        1  1  1  1
Consider two image subsets S1 and S2 shown in the
following figure. For V = {1}, determine whether these two
subsets are (a) 4-adjacent, (b) 8-adjacent, (c) m-adjacent.

          S1                   S2
  0  0  0  0  0       0  0  1  1  0

  1  0  0  1  0       0  1  0  0  1

  1  0  0  1  0       1  1  0  0  0

  0  0  1  1  1       0  0  0  0  0

  0  0  1  1  1       0  0  1  0  1
CONNECTIVITY, ADJACENCY
Case 1: V = {1}; Case 2: V = {0}          V = {0,1,2,3,4,5,6,7,8,9,10}

Binary image:             Gray-scale image:

  0  1  0  1               54  10 100   5

  0  0  1  0               81 150   2  34

  0  0  1  0              201 200   3  45

  1  0  0  0                7  70 147  52

(In each image, the marked pixel is not connected to any other pixel.)
EXAMPLES: ADJACENCY AND PATH
V = {1}

  0  1  1
  0  1  0
  0  0  1
(pixel coordinates run from (1,1) at the top left to (3,3) at the bottom right)

8-adjacency: the 8-paths from (1,3) to (3,3) are
(i) (1,3), (1,2), (2,2), (3,3)
(ii) (1,3), (2,2), (3,3)

m-adjacency: the m-path from (1,3) to (3,3) is
(1,3), (1,2), (2,2), (3,3)

In a computer we need to recognise the number, so a single (unambiguous) path is required:

  1  1  1
  1  0  0
  1  1  1
  0  0  1
  1  1  1
• Path
 A (digital) path (or curve) from pixel p with coordinates (x0, y0) to pixel q
with coordinates (xn, yn) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn)

where (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n.

 Here n is the length of the path.

 If (x0, y0) = (xn, yn), the path is a closed path.

 We can define 4-, 8-, and m-paths based on the type of adjacency used.
BASIC RELATIONSHIPS BETWEEN PIXELS
• Connected in S
Let S represent a subset of pixels in an image. Two pixels p with
coordinates (x0, y0) and q with coordinates (xn, yn) are said to be
connected in S if there exists a path (x0, y0), (x1, y1), …, (xn, yn)
where (xi, yi) ∈ S for all i, 0 ≤ i ≤ n.
• For every pixel p in S, the set of pixels in S that are connected to p is called
a connected component of S.
• If S has only one connected component, then S is called a connected set.
• A region R is a subset of pixels in an image such that all pixels in R form a
connected component; i.e., we call R a region of the image if R is a
connected set.
• Two regions, Ri and Rj, are said to be adjacent if their union forms a
connected set.
• Regions that are not adjacent are said to be disjoint.
BASIC RELATIONSHIPS BETWEEN PIXELS
• Boundary (or border)

 The boundary of a region R is the set of pixels in the region that
have one or more neighbors that are not in R.
 If R happens to be an entire image, then its boundary is defined as
the set of pixels in the first and last rows and columns of the image.

• Foreground and background

 An image contains K disjoint regions, Rk, k = 1, 2, …, K. Let Ru
denote the union of all the K regions, and let (Ru)c denote its
complement.
All the points in Ru are called the foreground;
all the points in (Ru)c are called the background.
QUESTION 1

• In the following arrangement of pixels, are the two regions (of
1s) adjacent? (if 8-adjacency is used)

  1  1  1     Region 1
  1  0  1
  0  1  0
  0  0  1     Region 2
  1  1  1
  1  1  1
DISTANCE MEASURES
• Given pixels p, q and z with coordinates (x, y), (s, t), (u, v) respectively,
the distance function D has the following properties:
a. D(p, q) ≥ 0 [D(p, q) = 0 if p = q]
b. D(p, q) = D(q, p)
c. D(p, z) ≤ D(p, q) + D(q, z)
DISTANCE MEASURES
The following are the different distance measures:
a. Euclidean Distance: the straight-line distance
between two pixels.
De(p, q) = [(x-s)^2 + (y-t)^2]^(1/2)

b. City Block Distance: this metric measures
the path between the pixels based on a 4-connected
neighborhood. Pixels whose edges touch are 1 unit
apart, and pixels diagonally touching are 2 units apart.
D4(p, q) = |x-s| + |y-t|

c. Chess Board Distance: this metric measures
the path between the pixels based
on an 8-connected neighborhood. Pixels whose edges
or corners touch are 1 unit apart.
D8(p, q) = max(|x-s|, |y-t|)
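The three measures are one-liners in code; a minimal sketch (helper names are hypothetical):

    def D_e(p, q):
        """Euclidean distance: straight-line distance between p and q."""
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def D_4(p, q):
        """City-block distance (4-connected neighborhood)."""
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def D_8(p, q):
        """Chessboard distance (8-connected neighborhood)."""
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    p, q = (0, 0), (3, 3)
    print(D_e(p, q), D_4(p, q), D_8(p, q))   # 4.242... 6 3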
Euclidean distance vs. city block distance

Euclidean distance vs. chessboard distance
APPLICATION OF THE DISTANCE MEASURE
• Imagine that the foreground regions in the input binary image
are made of some uniform slow-burning material.

• Light fires simultaneously at all points along the boundaries
of this region and watch the fire move into the interior.

• At points where fire travelling from two different
boundaries meets, the fire will extinguish itself, and the
points at which this happens form the so-called ‘quench line’.

• This line is the skeleton.

• The distance transform is an operator normally only applied to
binary images.
• The result of the transform is a grey-level image that looks
similar to the input image, except that the grey-level intensities of
points inside foreground regions are changed to show the
distance to the closest boundary from each point.

• Just as there are many different types of distance transform,
there are many types of skeletonization algorithms, all of which
produce slightly different results. However, the general effects
are all similar.

• The skeleton is useful because it provides a simple and compact
representation of a shape that preserves many of the topological
and size characteristics of the original shape.
• Thus, for instance, we can get a rough idea of the length of a shape by
considering just the end points of the skeleton and finding the maximally
separated pair of end points on the skeleton.
• We can distinguish many qualitatively different shapes from one another on
the basis of how many ‘triple points’ there are, i.e. points where at least
three branches of the skeleton meet.
Example: consider the arrangement of pixel values below, with p at corner (0,0)
(bottom left) and q at corner (3,3) (top right):

  3  1  2  1   <- q
  0  2  0  2
  1  2  1  1
  1  0  1  2
  ^
  p
Two more example grids, with P and q marked:

a)  2  P  3  2  6          b)  4  2  2  P
    6  2  3  6  2              4  3  2  1
    5  3  2  3  5              1  2  2  0
    2  4  3  5  2              1  3  1  0
    4  5  2  3  6  <- q (4,4)            <- q
INTRODUCTION TO MATHEMATICAL OPERATIONS IN DIP
• Array vs. Matrix Operation

An array operation is carried out on a pixel-by-pixel basis, whereas matrix
operations follow the rules of linear algebra. Let

    A = [ a11  a12 ]        B = [ b11  b12 ]
        [ a21  a22 ]            [ b21  b22 ]

Array (elementwise) product, written A .* B:

    A .* B = [ a11*b11   a12*b12 ]
             [ a21*b21   a22*b22 ]

Matrix product, written A*B:

    A*B = [ a11*b11 + a12*b21   a11*b12 + a12*b22 ]
          [ a21*b11 + a22*b21   a21*b12 + a22*b22 ]
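In NumPy the same distinction appears as * (array product) versus @ (matrix product); a minimal sketch with assumed values:

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])
    B = np.array([[5, 6],
                  [7, 8]])

    print(A * B)   # array product:  [[ 5 12] [21 32]]
    print(A @ B)   # matrix product: [[19 22] [43 50]]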
LINEAR VERSUS NONLINEAR OPERATIONS
• We have to check whether the operator relating the output image to the
input image is linear or nonlinear.
• Consider a general operator, H, that produces an output
image, g(x,y), for a given input image, f(x,y):

    H[f(x,y)] = g(x,y)

Then H is said to be a linear operator if

    H[ai*fi(x,y) + aj*fj(x,y)]
      = H[ai*fi(x,y)] + H[aj*fj(x,y)]     (additivity)
      = ai*H[fi(x,y)] + aj*H[fj(x,y)]     (homogeneity)
      = ai*gi(x,y) + aj*gj(x,y)

Example: the sum operator is linear; the max operator is nonlinear.
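A quick numeric check of the two examples (the input values are assumed for illustration):

    import numpy as np

    f1 = np.array([[0, 2], [2, 3]])
    f2 = np.array([[6, 5], [4, 7]])
    a1, a2 = 1, -1

    # Sum operator: H[a1*f1 + a2*f2] equals a1*H[f1] + a2*H[f2]
    print(np.sum(a1 * f1 + a2 * f2) == a1 * np.sum(f1) + a2 * np.sum(f2))  # True

    # Max operator: the two sides differ, so max is nonlinear
    print(np.max(a1 * f1 + a2 * f2), a1 * np.max(f1) + a2 * np.max(f2))    # -2 -4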
ARITHMETIC OPERATIONS

• Arithmetic operations between images are array
operations. The four arithmetic operations are
denoted as

    s(x,y) = f(x,y) + g(x,y)
    d(x,y) = f(x,y) – g(x,y)
    p(x,y) = f(x,y) × g(x,y)
    v(x,y) = f(x,y) ÷ g(x,y)
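One practical point worth noting (an implementation detail, not from the slides): with 8-bit images these operations should be computed at higher precision and clipped back to [0, 255], or the unsigned arithmetic wraps around. A minimal sketch:

    import numpy as np

    f = np.array([[200, 30]], dtype=np.uint8)
    g = np.array([[100, 60]], dtype=np.uint8)

    s = np.clip(f.astype(np.int16) + g, 0, 255).astype(np.uint8)  # sum, saturated
    d = np.clip(f.astype(np.int16) - g, 0, 255).astype(np.uint8)  # difference, clipped at 0
    p = f.astype(np.float64) * g                                  # product
    v = f.astype(np.float64) / np.maximum(g, 1)                   # division, guard against /0
    print(s, d)   # [[255  90]] [[100   0]]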
EXAMPLE: ADDITION OF NOISY IMAGES FOR NOISE REDUCTION

Noiseless image: f(x,y)

Noise: n(x,y) (at every pair of coordinates (x,y), the noise is uncorrelated and has
zero average value)

Corrupted image: g(x,y)

    g(x,y) = f(x,y) + n(x,y)

Reducing the noise by averaging a set of K noisy images, {gi(x,y)}:

    g_bar(x,y) = (1/K) * sum_{i=1..K} gi(x,y)
EXAMPLE: ADDITION OF NOISY IMAGES FOR NOISE REDUCTION

    g_bar(x,y) = (1/K) * sum_{i=1..K} gi(x,y)

Expected value of the average:

    E{g_bar(x,y)} = E{ (1/K) * sum_{i=1..K} [f(x,y) + ni(x,y)] }
                  = f(x,y) + E{ (1/K) * sum_{i=1..K} ni(x,y) }
                  = f(x,y)                 (the noise has zero average value)

Variance of the average:

    sigma^2_g_bar(x,y) = (1/K) * sigma^2_n(x,y)

so the noise variance decreases by a factor of K as the number of averaged
images increases.
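A minimal simulation of this effect (all values assumed): the standard deviation of the averaged noise falls as 1/sqrt(K).

    import numpy as np

    rng = np.random.default_rng(0)
    f = np.full((64, 64), 100.0)                                 # noiseless image
    K = 100
    g = [f + rng.normal(0.0, 20.0, f.shape) for _ in range(K)]   # K noisy observations
    g_bar = np.mean(g, axis=0)

    print(round(np.std(g[0] - f), 1))    # ~20.0 : noise in one image
    print(round(np.std(g_bar - f), 1))   # ~2.0  : after averaging, 20/sqrt(100)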
EXAMPLE: ADDITION OF NOISY IMAGES FOR NOISE REDUCTION

► In astronomy, imaging under very low light levels frequently causes
sensor noise to render single images virtually useless for analysis.
► In astronomical observations, the same scene is therefore imaged with the
sensors over long periods of time, and image averaging is then used to
reduce the noise.
AN EXAMPLE OF IMAGE SUBTRACTION: MASK MODE RADIOGRAPHY
Mask h(x,y): an X-ray image of a region of a patient’s body.
Live images f(x,y): X-ray images captured at TV rates after injection of a
contrast medium.
Enhanced detail: g(x,y) = f(x,y) - h(x,y)
The procedure gives a movie showing how the contrast medium propagates
through the various arteries in the area being observed.
IMAGE SUBTRACTION FOR ENHANCING DIFFERENCES

Fig 1.57(a): Infrared image of the Washington, D.C. area.
Fig 1.57(b): Image obtained by setting to zero the least significant bit of every pixel in (a).
Fig 1.57(c): Difference of the two images, scaled to the range [0,255] for clarity.
IMAGE MULTIPLICATION AND DIVISION FOR SHADING CORRECTION

Fig 1.58(a): Shaded SEM image of a tungsten filament and support, magnified approx. 130 times.
Fig 1.58(b): The shading pattern.
Fig 1.58(c): Product of (a) by the reciprocal of (b).

Fig 1.59(a): Digital dental X-ray image.
Fig 1.59(b): ROI mask for isolating teeth with fillings.
Fig 1.59(c): Product of (a) and (b).
SET AND LOGICAL OPERATIONS
• Let A be the elements of a gray-scale image.
The elements of A are triplets of the form (x, y, z), where x and y are
spatial coordinates and z denotes the intensity at the point (x, y):

    A = {(x, y, z) | z = f(x, y)}

• The complement of A is denoted Ac:

    Ac = {(x, y, K - z) | (x, y, z) ∈ A}
    K = 2^k - 1, where k is the number of intensity bits used to represent z

• The union of two gray-scale images (sets) A and B is defined as the set

    A ∪ B = {max_z(a, b) | a ∈ A, b ∈ B}
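For 8-bit images these definitions reduce to simple array expressions; a minimal sketch with assumed values:

    import numpy as np

    A = np.array([[0, 100], [200, 255]], dtype=np.uint8)
    B = np.array([[50, 50], [50, 50]], dtype=np.uint8)

    A_c = 255 - A                  # complement: K - z, with K = 2**8 - 1
    A_union_B = np.maximum(A, B)   # union: pointwise maximum of intensities
    print(A_c)
    print(A_union_B)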
LOGICAL OPERATIONS

▪ Illustration of logical operations
involving foreground (white)
pixels. Black represents binary
0s and white binary 1s. The dashed
lines are shown for reference only;
they are not part of the result.

Fig: Logical Operations


IMAGE GEOMETRY

Geometric transformation consists of two basic operations:
1. Spatial transformation of image coordinates.
2. Intensity interpolation that assigns intensity values to the
spatially transformed pixels.
The transformed image coordinates may be expressed as

    (x,y) = T{(v,w)}

One of the most commonly used spatial coordinate
transformations is the affine transform.
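In its usual matrix form (the standard form in the Gonzalez & Woods treatment), the affine transform can be written as

    [x  y  1] = [v  w  1] * [ t11  t12  0 ]
                            [ t21  t22  0 ]
                            [ t31  t32  1 ]

Choosing the tij appropriately yields scaling, rotation, translation or shear.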
INTENSITY ASSIGNMENT
• Forward Mapping

    (x, y) = T{(v, w)}

It is possible that two or more pixels can be transformed to the same location
in the output image.

• Inverse Mapping

    (v, w) = T^(-1){(x, y)}

The nearest input pixels determine the intensity of the output pixel value.
Inverse mappings are more efficient to implement than forward mappings.
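A minimal sketch of inverse mapping with nearest-neighbor intensity assignment, using rotation about the image center as the (assumed) transform T:

    import numpy as np

    def rotate_inverse_map(img, theta):
        """Rotate img by theta radians about its center, via inverse mapping."""
        rows, cols = img.shape
        out = np.zeros_like(img)
        cx, cy = (rows - 1) / 2.0, (cols - 1) / 2.0
        c, s = np.cos(theta), np.sin(theta)
        for x in range(rows):
            for y in range(cols):
                # apply T^(-1) (rotation by -theta) to each output coordinate
                v = c * (x - cx) + s * (y - cy) + cx
                w = -s * (x - cx) + c * (y - cy) + cy
                vi, wi = int(round(v)), int(round(w))
                if 0 <= vi < rows and 0 <= wi < cols:
                    out[x, y] = img[vi, wi]     # nearest input pixel's intensity
        return out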
IMAGE INTERPOLATION
• Interpolation — the process of using known data to estimate
unknown values,
e.g., in zooming, shrinking, rotating, and geometric correction.
• Interpolation (sometimes called resampling) — an imaging
method to increase (or decrease) the number of pixels in a digital
image.
Some digital cameras use interpolation to produce a larger image
than the sensor captured, or to create digital zoom.

Nearest-neighbor interpolation assigns each new location the value of the
nearest pixel in the original image:

    f1(x2, y2) = f(round(x2), round(y2)) = f(x1, y1)
    f1(x3, y3) = f(round(x3), round(y3)) = f(x1, y1)
BICUBIC INTERPOLATION
• The intensity value assigned to point (x,y) is obtained by the following equation:

    f3(x, y) = sum_{i=0..3} sum_{j=0..3} a_ij * x^i * y^j

• The sixteen coefficients a_ij are determined by using the sixteen nearest neighbors.
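In practice one rarely solves for the sixteen coefficients by hand; a minimal usage sketch, assuming SciPy is available (ndimage.zoom with order=3 performs cubic spline interpolation, closely related to bicubic):

    import numpy as np
    from scipy import ndimage

    img = np.arange(16, dtype=np.float64).reshape(4, 4)
    up_nearest = ndimage.zoom(img, 2, order=0)  # nearest-neighbor interpolation
    up_cubic = ndimage.zoom(img, 2, order=3)    # cubic spline interpolation
    print(up_nearest.shape, up_cubic.shape)     # (8, 8) (8, 8)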
EXAMPLES: INTERPOLATION