Module 1-DIP

Digital image processing

Sub Code: 21EC732

By
Dr. A Chrispin Jiji
Associate Professor
Department of ECE
Cambridge Institute of Technology, Bangalore
Text Book & Reference Book
Textbook:
 Digital Image Processing - Rafael C. Gonzalez and Richard E. Woods, PHI, 3rd Edition, 2010.
 Fundamentals of Digital Image Processing - A. K. Jain, PHI Learning Pvt. Ltd., 2014.

Reference Books:
 Digital Image Processing - S. Jayaraman, S. Esakkirajan, T. Veerakumar, Tata McGraw Hill, 2014.
Module-1
Digital Image Fundamentals:
1. What is Digital Image Processing?
2. Origins of Digital Image Processing
3. Examples of fields that use DIP
4. Fundamental Steps in Digital Image Processing
5. Components of an Image Processing System
6. Elements of Visual Perception
7. Image Sensing and Acquisition
8. Image Sampling and Quantization
9. Some Basic Relationships Between Pixels
Teaching Learning Process:
1. Chalk & Talk
2. Power Point Presentation
3. YouTube videos
4. Videos on Image Processing applications

Self-study topics: Arithmetic & Logical operations

Practical topics: Problems on Basic Relationships between Pixels


1. What is Digital Image Processing?
Key Points:
 What is an Image?
 What is a Digital Image?
 Types of Digital images
 What is Image Processing ?
 Types of Image Processing ?
 What is Digital Image Processing?
 Types of Digital Image Processing (3 types of
computerized process)
 Several fields deal with images
 Why Digital Image Processing
What is an Image?
 An image is a representation of something or someone.
 Examples: any drawing, painting, photograph, etc.
 Images are a very powerful tool in communication.
 An image may be black & white or color.


 An image is defined as a two-dimensional function f(x,y)
- where x and y are spatial (plane) coordinates
- the amplitude of f at any pair of coordinates (x,y) is called the
intensity or gray level of the image at that point
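The definition above translates directly into code: a grayscale image is a 2-D array whose entry at (x, y) is the gray level at that point. A minimal NumPy sketch (the 4x4 values are made up for illustration):

```python
import numpy as np

# A grayscale image as a 2-D function f(x, y): the array entry at
# (x, y) is the intensity (gray level) at that point.
f = np.array([
    [ 10,  20,  30,  40],
    [ 50,  60,  70,  80],
    [ 90, 100, 110, 120],
    [130, 140, 150, 160],
], dtype=np.uint8)

x, y = 2, 1            # spatial (plane) coordinates
intensity = f[x, y]    # gray level of the image at (x, y)
print(intensity)       # 100
```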



What is a Digital Image?
 When x, y, f(x,y) are all finite, discrete quantities, we call the
image a Digital Image
 Digital Image is composed of a finite number of elements,
each of which has a particular location and value.
 These elements are referred to as
- Picture elements
- Image elements
- Pels
- Pixels
 Pixel is the most widely used term



Types of Digital images

What is Image Processing ?
• Image processing is manipulation of images by a brain or
computer
• Image Processing is a method to convert an image into
digital form and perform some operations on it, in
order to get an enhanced image or to extract some
useful information from it
• It is a type of signal processing in which the input is an image
and the output may be an image or characteristics associated
with that image



Types of Image Processing

- Analog image processing: the images are manipulated by electrical
means, by varying the electrical signal. A common example is the
television image.
- Digital image processing: deals with developing a digital system that
performs operations on a digital image. It helps in manipulating
digital images by using computers.


What is Digital Image Processing?
 Digital image processing is a method to perform some
operations on a digital image, in order to get an enhanced
image or to extract some useful information from it
 It is a type of signal processing in which the input is an image and
the output may be an image or characteristics/features associated with that
image.
 The main motivation or purpose behind DIP is:
- Improvement of picture information for human interpretation
- Processing of image data for storage, transmission and representation
for machine perception



Types of Digital Image Processing (3 types of
computerized process)
 Low-level : input, output are images
Primitive operations such as image preprocessing to reduce noise,
contrast enhancement, and image sharpening
 Mid-level : inputs may be images, outputs are attributes extracted
from those images
- Segmentation
- Description of objects
- Classification of individual objects
 High-level : image analysis
It involves making sense of recognized
objects and performing functions associated
with vision.
For example: automatic character recognition,
military recognition, autonomous navigation, etc.
Several fields deal with images
 Computer Graphics : the creation of images.
 Image Processing : the enhancement or other manipulation of the
image – the result of which is usually another images.
 Computer Vision: the analysis of image content.

 Computer Vision, Image Processing and Computer Graphics often
work together to get amazing results.
Machine/Computer vision
 Machine vision or computer vision deals with
developing a system in which the input is an image
and the output is some information.
 For example: Developing a system that scans
human face and opens any kind of lock. This system
would look something like this.



Computer graphics
 Computer graphics deals with the formation of
images from object models, rather than images
captured by some device.
 For example: Object rendering. Generating an
image from an object model. Such a system would
look something like this.



Artificial intelligence
 Artificial intelligence is more or less the study of
putting human intelligence into machines.
 Artificial intelligence has many applications in
image processing.
 For example: developing computer-aided diagnosis
systems that help doctors in interpreting images of
X-ray, MRI, etc., and then highlighting conspicuous
sections to be examined by the doctor.



Why Digital Image Processing
Interest in Digital Image processing methods stems
from two principal application areas
 Improvement of pictorial information for human
interpretation
 Processing of image data for storage, transmission
and representation for autonomous machine
perception



2. Origins of Digital Image Processing

Key Points:
• Early 1920s
• Mid to late 1920s
• 1960s
• 1964
• 1970
• 1979
• 1980s to today
3. Examples of fields that use DIP:
Key Points:
 Image Enhancement
 Hubble Telescope
 Artistic Effects
 Medicine
 Geographical Information System
 Industrial Inspection
 Printed Circuit Board (PCB) Inspection
 Law Enforcement
 Human Computer Interface
Image Enhancement



Hubble Telescope



Artistic Effects



Medicine



Industrial Inspection



Law Enforcement



Human Computer Interface



4. Fundamental steps in DIP
Key Points:
 Image Acquisition
 Image Enhancement
 Image Restoration
 Color Image Processing
 Wavelets and Multiresolution Processing
 Image Compression
 Morphological Processing
 Segmentation
 Representation and Description
 Object Recognition
 Knowledge Base
Knowledge base
 Knowledge about a problem domain is coded into
image processing in the form of knowledge database
 The knowledge base controls the interaction between
different modules of an image processing system.



i) Image acquisition is the first process:
to acquire a digital image. In this step, the image is
captured by a camera and is digitized if it is not
already in digital form.



ii) Image Enhancement is the process of
manipulating an image so that the result is more
suitable than the original image.
It is subjective.



iii) Image Restoration (To bring to original condition)
 It is the process of improving the appearance (reducing
blur, noise etc) of an image by using mathematical or
probabilistic model
 It is objective, because restoration techniques are based on
mathematical or probabilistic models of image degradation
 It identifies the degradation process and attempts to reverse
it.
 Almost similar to image enhancement, but more
objective.



iv) Color Image Processing is an area that has been
gaining in importance because of the significant increase
in the use of digital images over the Internet.



v) Wavelets are the foundation for representing images
in various degrees of resolution



vi)Compression deals with techniques for reducing the
storage required to save an image or the bandwidth to
transmit it.



vii) Morphological processing (meaning structures or
shapes)
Deals with tools for extracting image components, structures or
shapes that are useful in the representation and
description of shape.



viii) Segmentation is the process of partitioning a digital
image into multiple segments or image objects. The goal of
segmentation is to simplify and/or change the
representation of an image into something that is more
meaningful and easier to analyze. Here the computer tries to
separate objects from the image.



ix) Representation & Description
Representation (showing):
 Deals with converting the data into a suitable form for
computer processing
 Make a decision whether the data should be represented as a
boundary or as a complete region. It almost always follows
the output of a segmentation stage.
 Boundary Representation: Focus on external shape
characteristics, such as corners and inflections
 Region Representation: Focus on internal properties, such as
texture or skeleton shape

 Choosing a representation is only part of the solution for
transforming raw data into a form suitable for subsequent
computer processing (mainly recognition)
Description (explaining):
 Extracting attributes or features
 Also called feature selection; deals with extracting
attributes that result in some information of interest.



x) Object Recognition
 The process that assigns label to an object based on the
information provided by its description
 Identifying a specific object in a digital image.

 Applications in the field of monitoring and surveillance,
medical analysis, robot localization and navigation, etc.



Advantages of Digital Images
 Less noise sensitive

 More easily recoverable after damage

 Displayed on computer monitors

 Storage: CD, DVD

 Transmission via Internet

 Compression- save storage space or speedy communication



5.Components of an Image Processing System
Key Points
 Image Sensor
 Specialized Image Processing Hardware
 Computer
 Image Processing Software
 Hardcopy
 Mass Storage
 Image Displays



Image Sensor:
Two elements are required to acquire digital images
- Physical device that is sensitive to the energy radiated by the
object we wish to image
- Digitizer-is a device for converting the output of physical
sensing device into digital form
Example: Digital video camera
Sensor: produces an electrical output proportional to light
intensity
Digitizer: converts these outputs to digital data
 Image sensors sense the intensity, amplitude, coordinates and
other features of the images and pass the result to the image
processing hardware. It includes the problem domain.
Specialized Image processing hardware:
 Consists of digitizer + hardware that perform other
primitive operations such as ALU, which performs
arithmetic and logical operation in parallel on entire image
 Example: Averaging images for the purpose of noise
reduction
 This type of hardware is sometimes called front-end
subsystem
 Important Characteristics is speed

 This unit performs functions that require fast data


throughput(is a measure of how many units of information
a system can process in a given amount of time or amount
of data actually transmitted and received in a given period)
e.g. digitizing and averaging video images at 30 frames/s, which
the typical main computer cannot handle
Computer:
 In an image processing system is a general purpose
computer and can range from PC to supercomputer
 Achieve required level of performance

 In a general-purpose image processing system,
any well-equipped PC-type machine is suitable for offline
image processing tasks
Software:
 Consists of specialized modules that perform specific task

 A well-designed package also includes the capability for
the user to write code that, as a minimum, utilizes the
specialized modules
Mass Storage:
 A must in image processing applications
 An image of size 1024x1024 pixels, in which the intensity
of each pixel is an 8-bit quantity, requires one megabyte of
storage space if the image is not compressed
 For thousands of images, storage is a challenging task

Principal categories:
1. Short-term storage for use during processing, e.g.
computer memory
2. On-line storage for relatively fast recall, e.g. magnetic disk
or optical media storage
3. Archival storage characterized by infrequent access,
e.g. magnetic tape and optical disc
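The one-megabyte figure above can be checked with a quick calculation (a minimal sketch):

```python
# Storage for an uncompressed 1024x1024 image with 8 bits per pixel.
width, height, bits_per_pixel = 1024, 1024, 8
size_bytes = width * height * bits_per_pixel // 8
size_mb = size_bytes / 2**20
print(size_mb)  # 1.0 megabyte
```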
Image displays:
 Color TV monitors

 Monitors are driven by the output of image and graphics
display cards that are an integral part of the computer
system
Hardcopy:
 Recording images

 Examples: laser printers, film cameras, optical and CD-
ROM discs
Networking:
Communicate with remote sites via internet



6. Elements of visual perception

• Objective: To help a human observer perceive the
visual information in an image.
• Therefore, it is important to understand the human
visual system.
• It consists of
- Eye (image sensor or camera)

- Optic nerve (transmission path), and

- Brain (image information processing unit or


computer).
i) Structure of human Eye



Human Eye:
 Diameter: approximately 20 mm

 3 membranes enclose the eye

- Cornea & sclera

- Choroid

- Retina



Cornea:
 Tough transparent tissue that covers the anterior
portion of the eye
Sclera (white part of the eye):
 Opaque, fibrous, protective outer membrane that
encloses remainder of the eye



Choroid
 A thin layer of tissue that is part of middle layer of the wall of the
eye, between the sclera (white outer layer of the eye) and the retina
(the inner layer of nerve tissue at the back of the eye)
 Contains blood vessels that serve as a major source of nutrition to the
eye (brings oxygen and nutrients to the eye)
 Heavily pigmented to reduce extraneous light entrance and
backscatter within the optical globe.
 At its anterior extreme, the choroid is divided into the ciliary body and the
iris diaphragm, which controls the amount of light that enters the pupil
 The front of the iris contains visible pigment and the back contains black
pigment
 The central opening (the pupil) varies in diameter from 2 to 8mm.



The Lens
 Made up of fibrous cells and is suspended by fibers
that attach it to the ciliary body.
 It is slightly yellow and absorbs approx. 8% of the
visible light spectrum.
 The lens focuses light from objects onto the retina



The Retina
 The retina lines the entire posterior portion.
 The retina is covered with light receptors
- Cones (6-7 million per eye) and
- Rods (75-150 million per eye)
Cones are concentrated around the fovea and are very
sensitive to colour
Rods are more spread out and are sensitive to low levels
of illumination



Cones: (Red 65%, Green 33%,Blue 2%)
 6 – 7 millions located primarily in the central portion
of the retina
 Highly sensitive to color
 Photopic (bright-light) vision: vision with cones
 Color receptors, high resolution in the fovea, less
sensitive to light

Rods
 75- 150 millions distributed over the retinal surface.
 Not involved in color vision and sensitive to low
illumination
 Scotopic (dim-light) vision: vision with rods
 Color blind, much more sensitive to light (night
vision), lower resolution
Receptor Distribution
 It is radially symmetric about the fovea.

 Cones are most dense in the center of the fovea, while
rods increase in density from the center out to
approximately 20° off axis and then decrease.



The Fovea (Fovea Centralis)
 Fovea means a small cavity or depression; Centralis means central or in
the middle
 Function: To allow for sharp and focused vision
 The fovea is circular (1.5 mm in diameter) but can be assumed to be a
square sensor array (1.5 mm x 1.5 mm), dense with cones. The density of
cones there is about 150,000 elements/mm², i.e. about 337,000 cones in the fovea.
 Photoreceptors around fovea responsible for spatial vision (still
images). Photoreceptors around the periphery responsible for
detecting motion.



Blind spot:
 Point on retina where optic nerve emerges, devoid of
photoreceptors.



Activity 1:
Draw an image similar to that below on a piece of paper
(the dot and cross are about 6 inches apart)

Close your right eye and focus on the cross with your left
eye
Hold the image about 20 inches away from your face and
move it slowly towards you
The dot should disappear!
ii) Image Formation in the Eye
• The eye lens (if compared to an optical lens) is flexible.
• It gets controlled by the fibers of the ciliary body and to
focus on distant objects it gets flatter (and vice versa).
 Focal length: Distance between the center of the lens and
the retina
– varies from 14 mm to 17 mm (refractive power of lens
goes from minimum to maximum).



Classical optical theory:
- A ray passes through the centre C of the lens.
- h is the height of the object's image on the retina (note that it is
located close to the fovea).
- Perception then takes place by the relative excitation of light
receptors, which transform radiant energy into electrical
impulses that are ultimately decoded by the brain.
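Under this model the retinal image height h of an object of height H at distance d follows from similar triangles: H/d = h/17, where 17 mm is the distance from the lens center to the retina when the eye is focused on a far object. A small sketch, using the classic example of a 15 m tree viewed from 100 m:

```python
# Similar triangles for the eye: H / d = h / 17, with H the object
# height, d its distance (same units), and 17 mm the lens-to-retina
# distance when focused on a distant object.
def retinal_height_mm(object_height_m, distance_m, focal_mm=17.0):
    return focal_mm * object_height_m / distance_m

print(retinal_height_mm(15, 100))  # 2.55 (mm) for a 15 m tree at 100 m
```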
iii) Brightness Adaptation and discrimination
Brightness adaptation:
 The range of light intensity levels to which the human
visual system can adapt is enormous
- about 10^10 from the scotopic threshold to the glare limit
- about 10^6 from the photopic threshold to the glare limit
 The subjective brightness is a logarithmic function of the light
intensity incident on the eye.
 Brightness adaptation level: the current sensitivity level of
the visual system.
 Transition from scotopic to photopic vision is gradual over
the range from about -3 to -1 mL in log scale
(mL = millilambert)
Brightness discrimination:
• Ability of the eye to discriminate between changes in
light intensity at any specific adaptation level.
• Background intensity: I
• ΔI: increment of illumination applied for a short duration
against the background intensity I
• ΔIc: the increment of illumination when the increment is
visible half the time against the background intensity I
• The Weber ratio is given by ΔIc / I
• A small value of ΔIc / I implies that a small percentage
change in intensity is visible, representing good
brightness discrimination
• A large value of ΔIc / I implies that a large percentage
change is required for discrimination, representing
poor brightness discrimination
• Typically, brightness discrimination is poor at low
levels of illumination and improves at higher levels
of background illumination



 Brightness discrimination is poor at low levels of
illumination.
 The two branches in the curve indicate that at low
levels of illumination vision is carried out by the
rods, whereas at high level by the cones.



Perceived brightness:
 Three phenomena demonstrate that Perceived brightness is not a
simple function of actual intensity.
a) Mach band
 Visual system tends to under/overshoot around the boundary of
regions of different intensities
 Perceive brightness that is strongly scalloped near the
boundaries. These scalloped bands are called Mach bands

FIGURE: Illustration of the Mach band effect.



b) Simultaneous contrast
 A region’s perceived brightness does not depend simply
on its intensity.
 All the center squares have exactly same intensity, they
appear to the eye to become darker as the background gets
lighter



c) Optical illusions
 The eye “ fills in ” non-existing information or
wrongly perceives geometrical properties of objects.



8. Image Sampling and Quantization:
Conversion of analog signal to digital signal:
 The output of most of the image sensors is an analog
signal, and we cannot apply digital processing on it
because we cannot store it. It requires infinite memory
to store a signal that can have infinite values.
 So, we have to convert an analog signal into a digital
signal.
 To create an image which is digital, we need to convert
continuous data into digital form.
 There are two steps in which it is done.
a) Sampling
b) Quantization
 Sampling and quantization are the two important
processes used to convert continuous analog image
into digital image
 Image Sampling refers to discretization of spatial
coordinates whereas image quantization refers to
discretization of gray level values or amplitude
values.
 Given a continuous image f(x,y), digitizing the
coordinate value is called sampling and digitizing the
amplitude (intensity) value is called quantization.
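The two operations can be sketched on a toy array: sampling coarsens the (x, y) grid, quantization coarsens the gray levels. The 8x8 image and the helper names `sample`/`quantize` are illustrative, not from the slides:

```python
import numpy as np

# Sampling: keep every k-th pixel along each axis (digitizes the
# coordinates more coarsely).
def sample(img, k):
    return img[::k, ::k]

# Quantization: map 256 gray levels down to a few discrete levels
# (digitizes the amplitude).
def quantize(img, levels):
    step = 256 // levels
    return (img // step) * step

# Toy 8x8 "image" whose intensities sweep 0..255.
img = np.linspace(0, 255, 64).astype(np.uint8).reshape(8, 8)
small = sample(img, 2)     # 4x4: fewer spatial samples
coarse = quantize(img, 4)  # only 4 gray levels remain: 0, 64, 128, 192
print(small.shape)
print(np.unique(coarse))
```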



i) Basic concepts in sampling & quantization
 The basic idea behind converting an analog signal
to its digital signal is to convert both of its axis
f(x,y) and x,y into a digital format
 Since an image is continuous not just in its co-
ordinates x,y but also in its intensity or grey level
f(x,y)
 Digitizing of co-ordinates is known as sampling.
Digitizing the amplitude is known as quantization.



a) Sampling:
 The term sampling refers to taking samples
 We digitize (x,y) coordinates in sampling
 It is done on the independent variable, e.g. x in the equation y = sin(x)
 There are some random variations in the signal. These variations are
due to noise. In sampling we reduce noise by taking samples.
 The more samples we take, the better the quality of the image; the
more noise is removed, and vice versa.
 More samples eventually mean collecting more data; in the
case of an image, it means more pixels.
 Sampling only the (x,y) coordinates does not convert the image to
digital format; the f(x,y) axis must be sampled too, which is known
as quantization.
b) Quantization
 It is the counterpart of sampling; it is done on the y axis.
 Quantizing an image means actually dividing a signal into quanta
(partitions).
 On the y axis, we have amplitudes. So, digitizing the amplitudes is
known as Quantization.
 When we sample an image, we actually gather a lot of values, and
in quantization, we set levels to these values.
 This can be clearer in the image below.



Two approaches to quantization
 Rounding the sample to the closest integer (e.g. round 3.14
to 3).
 Creating a quantizer table that generates a staircase
pattern of values based on a step size.
For example, let's choose to represent each sample by 4
bits. There are an infinite number of voltages between -10
and 10, so we will have to assign a range of voltages to each
4-bit codeword.
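The 4-bit example can be sketched as a uniform quantizer; the function name and the clamping at the ends are illustrative assumptions:

```python
# Uniform 4-bit quantizer for voltages in [-10, 10]: 16 codewords,
# each covering one step-size-wide range of voltages.
def quantize_4bit(v, lo=-10.0, hi=10.0, bits=4):
    levels = 2 ** bits            # 16 codewords
    step = (hi - lo) / levels     # 1.25 V per codeword
    code = int((v - lo) / step)   # staircase: which range v falls in
    return min(max(code, 0), levels - 1)

print(quantize_4bit(-10.0))  # 0
print(quantize_4bit(0.0))    # 8
print(quantize_4bit(9.99))   # 15
```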



ii)Representing Digital images

iii) Spatial & Intensity Resolution
 Resolution: Total number of pixels in an image.

Spatial Resolution:
 Spatial resolution concerns the clarity of an image, which
cannot be determined by the pixel count alone; the number of
pixels by itself does not fix the level of visible detail.
 Spatial Resolution is defined as

- the smallest discernible detail in an image (or)

- number of independent pixels values per inch.

Measuring spatial resolution
 Spatial resolution refers to clarity, so for different devices,
different measures have been devised to measure it.
 For example,
- Pixels per inch

- Dots per inch

- Lines per inch

Pixels per inch: PPI is a measure used for display devices such as
tablets, mobile phones, etc.
Dots per inch: DPI is usually used for printers, e.g. laser printers.
Lines per inch: LPI is usually used for halftone printing.
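The PPI of a display follows from its pixel dimensions and diagonal size; a minimal sketch (the 1920x1080 panel at 6.1 inches is a made-up example):

```python
import math

# PPI of a display: diagonal length in pixels divided by the
# diagonal length in inches.
def pixels_per_inch(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

print(round(pixels_per_inch(1920, 1080, 6.1)))  # 361
```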
Dots per inch:
 It is a measure of spatial resolution of printers.

 In the case of printers, dpi means how many dots of ink
are printed per inch when an image gets printed out from
the printer.
 There may be many dots per inch used for printing one
pixel.
 The reason behind this is that most color printers use the
CMYK color model.
 The ink colors are limited.

 The printer has to choose from these colors to make the color
of each pixel, whereas within a PC you have hundreds of
thousands of colors.
 The higher the dpi of the printer, the higher the
quality of the printed document or image on paper.
 Usually some of the laser printers have dpi of 300 and
some have 600 or more.
 Example

Newspaper- 75dpi
Magazines-133 dpi
Brochures- 175 dpi
Book page-2400 dpi



Lines per inch:
 It refers to lines of dots per inch. The resolution of half
tone screen is measured in lines per inch.
 The following table shows some of the lines per inch
capacity of the printers.





9. Basic Relationships between Pixels
i. Neighbors of a pixel
ii. Adjacency
iii. Path
iv. Connectivity
v. Region & Boundary
vi. Distance Measures
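The distance measures listed under vi) can be sketched directly; D4 (city-block), D8 (chessboard) and De (Euclidean) between pixels p = (x1, y1) and q = (x2, y2):

```python
# D4: sum of absolute coordinate differences (city-block distance).
def d4(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

# D8: maximum of absolute coordinate differences (chessboard distance).
def d8(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

# De: ordinary Euclidean distance.
def de(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

p, q = (0, 0), (3, 4)
print(d4(p, q), d8(p, q), de(p, q))  # 7 4 5.0
```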



1) Define 4, 8 and m-adjacency. Compute the
length of shortest 4, 8, m-path between p and q
in the image segment by considering
a) V={0,1}
b) V={1,2}

i). N4(p)



ii). N8(p)



iii). Nm(p)



b) V={1,2}

i). N4(p)



ii). N8(p)



iii). Nm(p)



2) Define 4, 8 and m-adjacency. Compute the
length of shortest 4, 8, m-path between p and q in
the image segment by considering V={2,3,4}

i). N4(p)



ii). N8(p)



iii). Nm(p)

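Shortest-path answers like those in problems 1 and 2 can be checked mechanically with a breadth-first search over pixels whose values lie in V. This is a sketch covering 4- and 8-paths only (m-adjacency needs an extra diagonal condition); the 4x4 grid below is a hypothetical example segment, not the one in the slides' figures:

```python
from collections import deque

# BFS for the length of the shortest 4- or 8-path from p to q through
# pixels whose values lie in V. Returns None if no such path exists.
def shortest_path(grid, p, q, V, neighbors):
    rows, cols = len(grid), len(grid[0])
    if grid[p[0]][p[1]] not in V or grid[q[0]][q[1]] not in V:
        return None
    seen, queue = {p}, deque([(p, 0)])
    while queue:
        (r, c), d = queue.popleft()
        if (r, c) == q:
            return d
        for dr, dc in neighbors:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen and grid[nr][nc] in V):
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return None  # p and q are not connected through V

N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # 4-neighbors
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # 8-neighbors
grid = [[3, 1, 2, 1],
        [2, 2, 0, 2],
        [1, 2, 1, 1],
        [1, 0, 1, 2]]                 # hypothetical image segment
p, q = (3, 0), (0, 3)                 # bottom-left to top-right
print(shortest_path(grid, p, q, {1, 2}, N4))  # 6
print(shortest_path(grid, p, q, {1, 2}, N8))  # 4
```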


3) Consider the two image subsets S1 and S2 as shown in the figure.
For V={1}, determine whether these two subsets are i) 4-adjacent
ii) 8-adjacent and iii) m-adjacent.



4) A common measure of transmission for digital data is the
baud rate, defined as the number of bits transmitted per
second. Generally, transmission is accomplished in packets
consisting of a start bit, a byte (8 bits) of information and a
stop bit. Using these facts, find how many minutes it would
take to transmit a 2048x2048 image with 256 intensity
levels using a 33.6K baud modem.

Soln:
The total amount of data (including the start and stop bits) in an 8-bit,
2048x2048 image is (2048)² x [8 + 2] bits.
The total time required to transmit this image over a 33.6K baud link is
(2048)² x [8 + 2] / (33.6 x 1000) s ≈ 1248.3 s, i.e. about 20.8 minutes.
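The arithmetic in the solution can be checked in a few lines:

```python
# 2048x2048 pixels, 256 levels -> 8 bits/pixel, 10-bit packets,
# 33.6 kbaud link.
pixels = 2048 * 2048
bits = pixels * (8 + 2)        # start bit + 8 data bits + stop bit
seconds = bits / 33_600
print(round(seconds))          # 1248 seconds
print(round(seconds / 60, 1))  # 20.8 minutes
```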
