Dip Unit 1
FUNDAMENTALS OF DIGITAL
IMAGE PROCESSING
21CSE251T-DIP
II B. TECH AIML
CONTENTS
• Steps in Digital Image Processing
• Components
• Elements of Visual Perception
• Image Sensing and Acquisition
• Image Sampling and Quantization.
• Digitizing the coordinate values is called sampling.
• Digitizing the amplitude values is called quantization.
• The samples are shown as small dark squares superimposed on the
function, and their (discrete) spatial locations are indicated by
corresponding tick marks at the bottom of the figure.
• The set of dark squares constitutes the sampled function. However,
the values of the samples still span (vertically) a continuous range
of intensity values.
• In order to form a digital function, the intensity values also must be
converted (quantized) into discrete quantities.
• The vertical gray bar in Fig(c) depicts the intensity scale divided into
eight discrete intervals, ranging from black to white. The vertical
tick marks indicate the specific value assigned to each of the eight intervals.
(a) Continuous image projected onto a sensor array. (b) Result of image sampling and quantization.
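The sketch below illustrates both steps on a made-up continuous scene; the grid size, the scene function, and the eight-level choice (matching the slide) are assumptions for illustration only.

import numpy as np

def scene(x, y):
    # Hypothetical continuous scene with intensities in [0, 1].
    return 0.5 + 0.5 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

# Sampling: evaluate the continuous scene only at discrete grid coordinates.
rows, cols = 64, 64
ys, xs = np.meshgrid(np.linspace(0, 1, rows), np.linspace(0, 1, cols), indexing="ij")
sampled = scene(xs, ys)            # amplitudes are still continuous values

# Quantization: map each amplitude onto one of eight discrete intensity levels.
levels = 8
quantized = np.floor(sampled * levels).clip(0, levels - 1).astype(np.uint8)

print(sampled.dtype, quantized.dtype, quantized.max())   # float64 uint8 7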
Image Sampling and Quantization
• When a sensing strip is used for image acquisition,
the number of sensors in the strip establishes the
number of samples in one direction of the resulting
image, and mechanical motion establishes the number
of samples in the other.
• Quantization of the sensor outputs completes the
process of generating a digital image. When a sensing
array is used for image acquisition, no motion is
required. The number of sensors in the array
establishes the limits of sampling in both directions.
What are a Sensing Strip and a Sensing Array?
• Sensing Strip:
• A sensing strip is a linear array of sensors arranged in a single row.
• It captures one line of the image at a time as the sensor and the object
move relative to each other.
• Commonly used in scanners, where the strip moves across the
document to capture the entire image.
• Sensing Array:
• A sensing array consists of a 2D grid of sensors, capturing an entire
image in one exposure.
• Each sensor in the array corresponds to a pixel in the image, allowing
for the capture of the full spatial information in one go.
• Commonly used in digital cameras, webcams, and smartphones.
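As a rough sketch of the two acquisition modes above (the scene function and sensor counts are invented for illustration), a sensing strip delivers one line per step of motion, while a sensing array delivers the whole grid in a single exposure:

import numpy as np

def scene(x, y):
    # Hypothetical continuous scene at normalized coordinates (x, y).
    return 0.5 + 0.5 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

rows, cols = 48, 64                     # assumed numbers of sensors
xs = np.linspace(0.0, 1.0, cols)
ys = np.linspace(0.0, 1.0, rows)

# Sensing strip: the strip fixes the samples along one direction;
# mechanical motion (the loop over ys) supplies the other direction.
strip_image = np.array([scene(xs, y) for y in ys])

# Sensing array: a full 2D grid of sensors captures every pixel at once,
# so no motion is required.
array_image = scene(xs[None, :], ys[:, None])

print(strip_image.shape, np.allclose(strip_image, array_image))   # (48, 64) True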
SOME BASIC RELATIONSHIPS BETWEEN
PIXELS
• A pixel p at coordinates (x, y) has two horizontal and two vertical
neighbors with coordinates
(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)
• This set of pixels, called the 4-neighbors of p, is denoted N4(p).
• The four diagonal neighbors of p have coordinates
(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)
and are denoted ND(p). These neighbors, together with the 4-neighbors,
are called the 8-neighbors of p, denoted by N8(p).
• The set of image locations of the neighbors of a point p is called the
neighborhood of p.
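A small helper, as a sketch of the neighbor sets defined above (the function names and the 0-indexed (x, y) convention are assumptions):

def n4(x, y):
    # 4-neighbors of p = (x, y): horizontal and vertical neighbors.
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    # Diagonal neighbors of p = (x, y).
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    # 8-neighbors: the 4-neighbors together with the diagonal neighbors.
    return n4(x, y) + nd(x, y)

def inside(coords, width, height):
    # Border pixels have neighbors that fall outside the image; drop them.
    return [(u, v) for (u, v) in coords if 0 <= u < width and 0 <= v < height]

print(inside(n8(0, 0), width=5, height=5))   # only 3 of the 8 neighbors survive at a corner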
• Adjacency
• Definition: Two elements (e.g., pixels, nodes, or regions) are
adjacent if they share a common edge or vertex.
• Types:
• 4-adjacency: Two elements share a common side.
• 8-adjacency: Two elements share a common side or corner.
• Example:
ABC
DEF
GHI
• 4-adjacency: Pixel E is adjacent to D, F, B, and H.
• 8-adjacency: Pixel E is adjacent to D, F, B, H, and also A, C, G, I.
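A sketch of the two geometric tests, treating each letter of the grid above as a pixel position with E at the centre (the coordinate convention and function names are assumptions):

def four_adjacent(p, q):
    # True if p and q share a common side (4-adjacency).
    (px, py), (qx, qy) = p, q
    return abs(px - qx) + abs(py - qy) == 1

def eight_adjacent(p, q):
    # True if p and q share a common side or corner (8-adjacency).
    (px, py), (qx, qy) = p, q
    return p != q and max(abs(px - qx), abs(py - qy)) == 1

E, D, A = (1, 1), (0, 1), (0, 0)
print(four_adjacent(E, D), four_adjacent(E, A))     # True False
print(eight_adjacent(E, D), eight_adjacent(E, A))   # True True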
m-Adjacency (Mixed Adjacency)
• Rules of m-Adjacency:
• Two pixels p and q with values from a set V (e.g., V = {1} in a binary image) are m-adjacent if:
1. q is in N4(p), or
2. q is in ND(p) and the set N4(p) ∩ N4(q) contains no pixels whose values are from V.
• Example:
1 0 1
0 1 0
1 0 1
• m-adjacency eliminates the ambiguity of having multiple 8-adjacency paths between diagonally placed 1s such as these.
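A sketch of the m-adjacency test on the binary example above (the row-major indexing img[y][x] and the helper names are assumptions):

def n4(x, y):
    # 4-neighbors of (x, y), as defined earlier.
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def m_adjacent(img, p, q, V={1}):
    # Both pixels must have values from V.
    (px, py), (qx, qy) = p, q
    if img[py][px] not in V or img[qy][qx] not in V:
        return False
    if abs(px - qx) + abs(py - qy) == 1:             # rule 1: q is in N4(p)
        return True
    if max(abs(px - qx), abs(py - qy)) == 1:         # rule 2: q is in ND(p) and
        shared = set(n4(px, py)) & set(n4(qx, qy))   # no shared 4-neighbor is in V
        return all(img[v][u] not in V
                   for (u, v) in shared
                   if 0 <= v < len(img) and 0 <= u < len(img[0]))
    return False

img = [[1, 0, 1],
       [0, 1, 0],
       [1, 0, 1]]
print(m_adjacent(img, (1, 1), (0, 0)))   # True: the shared 4-neighbors are both 0
print(m_adjacent(img, (0, 0), (2, 0)))   # False: neither 4- nor diagonally adjacent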
Connectivity
• Two pixels p and q are connected in a subset S of the image if there exists a path between them consisting entirely of pixels in S.
Connected Set
• If every pixel in S is connected to every other pixel in S, then S is called a connected set.
Region
• A connected set R of pixels is called a region of the image.
What is the Use of all these terms?
• In medical imaging, we might identify a tumor as a connected region of specific pixel intensities.
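A minimal sketch of extracting such a region by flood fill, assuming a binary image, 4-connectivity, and a seed pixel inside the region (all names are illustrative):

from collections import deque

def connected_component(img, seed, V={1}):
    # Return all pixels 4-connected to the seed whose values lie in V.
    height, width = len(img), len(img[0])
    sx, sy = seed
    if img[sy][sx] not in V:
        return set()
    region, queue = {seed}, deque([seed])
    while queue:
        x, y = queue.popleft()
        for (u, v) in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if 0 <= u < width and 0 <= v < height and (u, v) not in region and img[v][u] in V:
                region.add((u, v))
                queue.append((u, v))
    return region

img = [[0, 1, 1, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 1]]
print(sorted(connected_component(img, (1, 0))))   # the 3-pixel component in the top-left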
Boundary (Border or Contour) of a Region R
• The boundary of a region R consists of the pixels in R that are
adjacent to the background (complement of R).
• These are pixels that have at least one neighboring pixel that does not
belong to R.
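A direct sketch of this definition, assuming the region R is given as a set of (x, y) pixel coordinates and 4-neighbors are used to test contact with the background:

def boundary(region):
    # Pixels of R that have at least one 4-neighbor outside R.
    def n4(x, y):
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return {p for p in region if any(q not in region for q in n4(*p))}

# A filled 3x3 square: only the centre pixel has all four neighbors inside R,
# so the boundary is the outer ring of 8 pixels.
R = {(x, y) for x in range(3) for y in range(3)}
print(len(boundary(R)))   # 8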