
Computer Imaging and Digital Image Processing

Definition of computer imaging:
Acquisition and processing of visual information by computer.

Computer imaging can be divided into two main categories:
• Computer Vision: applications where the output is for use by a computer.
• Image Processing: applications where the output is for use by humans.

These two categories are not totally separate and distinct.
• They overlap each other in certain areas.

Computer Vision
• Does not involve a human in the visual loop.
• Image analysis involves the examination of image data to facilitate solving a vision problem.
Computer Vision
• The image analysis process involves two other topics:
– Feature extraction: acquiring higher-level image information (shape, color).
– Pattern classification: using higher-level image information to identify objects within the image.
Computer Vision
• Examples of computer vision applications:
– Quality control (e.g., circuit board inspection).
– Handwritten character recognition.
– Biometric verification (fingerprint, retina, DNA, signature, etc.).
– Satellite image processing.
– Skin tumor diagnosis.
Image Processing
• Processed images are to be used by humans.
• Among the major topics are:
– Image restoration.
– Image enhancement.
– Image compression.
Image Processing
• Image restoration:
– The process of taking an image with some known, or estimated, degradation and restoring it to its original appearance.
– Done by performing the reverse of the degradation process on the image.
– Example: correcting distortion in the optical system of a telescope.
Image Processing
[Figure: an example of image restoration]
Image Processing
• Image enhancement:
– Improving an image visually by taking advantage of the human visual system's response.
– Examples: improving contrast, image sharpening, and image smoothing.
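A minimal sketch of one such enhancement, linear contrast stretching, assuming NumPy and an 8-bit grayscale image held in a 2-D array (the function name and test data are illustrative):

import numpy as np

def stretch_contrast(img):
    # Linearly rescale so the darkest pixel maps to 0 and the
    # brightest to 255 (assumes an 8-bit grayscale array).
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:                 # flat image: nothing to stretch
        return img.copy()
    out = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return out.astype(np.uint8)

# A low-contrast image occupying only levels 100-150:
img = np.random.randint(100, 151, size=(4, 4), dtype=np.uint8)
print(stretch_contrast(img))     # values now span the full 0-255 range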
Image Processing
[Figure: an example of image enhancement]
Image Processing
• Image compression:
– Reduces the amount of data required to represent an image by:
• Removing data that is visually unnecessary.
• Taking advantage of the redundancy inherent in most images.
– Examples: JPEG, MPEG, etc.
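As an illustration of exploiting redundancy (a toy sketch, not the actual JPEG/MPEG algorithms, which are far more elaborate), run-length encoding replaces repeated pixel values with (value, count) pairs:

def rle_encode(row):
    # Run-length encode a sequence of pixel values as (value, count) pairs.
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1      # extend the current run
        else:
            runs.append([v, 1])   # start a new run
    return [tuple(r) for r in runs]

# A row with long runs compresses well: 10 values become 3 pairs.
print(rle_encode([0, 0, 0, 0, 255, 255, 255, 128, 128, 128]))
# [(0, 4), (255, 3), (128, 3)]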
Components of Digital Image Processing
What is Computer Vision?
• Computer vision is the science and technology of machines that see.
• Concerned with the theory for building artificial systems that obtain information from images.
• The image data can take many forms, such as a video sequence, depth images, views from multiple cameras, or multi-dimensional data from a medical scanner.
What is Computer Vision?
• Deals with the development of the theoretical and algorithmic basis by which useful information about the 3D world can be automatically extracted and analyzed from a single 2D image or from multiple 2D images of the world.
Components of a computer vision system
• Camera
• Lighting
• Computer
• Scene
• Scene Interpretation
(Srinivasa Narasimhan's slide)
Computer vision vs. human vision
[Figure: "What we see" vs. "What a computer sees"]
Computer Vision, Also Known As ...
• Image Analysis
• Scene Analysis
• Image Understanding
Some Related Disciplines
• Image Processing
• Computer Graphics
• Pattern Recognition
• Robotics
• Artificial Intelligence
Image Processing
• Image Enhancement
Image Processing (cont'd)
• Image Restoration (e.g., correcting out-of-focus images)
Image Processing (cont'd)
• Image Compression
Computer Graphics
• Geometric modeling
Computer Vision
• Make computers understand images and video.
– What kind of scene is it?
– Where are the cars?
– How far is the building?
Robotic Vision
• Application of computer vision in robotics.
• Some important applications include:
– Autonomous robot navigation
– Inspection and assembly
Pattern Recognition
• Has a very long history (research work in this field
started in the 60s).
• Concerned with the recognition and classification of 2D
objects mainly from 2D images.
• Many classic approaches only worked under very
constrained views (not suitable for 3D objects).
• It has triggered much of the research which led to
today’s field of computer vision.
• Many pattern recognition principles are used extensively
in computer vision.
Artificial Intelligence
• Concerned with designing systems that are intelligent
and with studying computational aspects of intelligence.
• It is used to analyze scenes by computing a symbolic
representation of the scene contents after the images
have been processed to obtain features.
• Many techniques from artificial intelligence play an
important role in many aspects of computer vision.
• Computer vision is considered a sub-field of artificial
intelligence.
Why is Computer Vision Difficult?
• It is a many-to-one mapping
– A variety of surfaces with different material and geometrical properties, possibly under different lighting conditions, could lead to identical images.
– The inverse mapping has no unique solution (a lot of information is lost in the transformation from the 3D world to the 2D image).
• It is computationally intensive.
• We do not understand the recognition problem.
Practical Considerations
• Impose constraints to recover the scene
– Gather more data (images)
– Make assumptions about the world
• Computability and robustness
– Is the solution computable using reasonable
resources?
– Is the solution robust?
• Industrial computer vision systems work very
well
– Make strong assumptions about lighting conditions
– Make strong assumptions about the position of
objects
– Make strong assumptions about the type of
objects
An Industrial Computer Vision System
The Three Stages of Computer Vision
• low-level: image → image
• mid-level: image → features
• high-level: features → analysis
• Low-level processing
– Standard procedures are applied to improve image quality.
– These procedures do not require any intelligent capabilities.
Low-Level
[Figure: sharpening and blurring examples]
Spatial Frequency Resolution
• To understand the concept of spatial frequency, we must first understand the concept of resolution.
• Resolution: the ability to separate two adjacent pixels.
– If we can see two adjacent pixels as being separate, then we can say that we resolve the two.
Spatial Frequency Resolution
• Spatial frequency: how rapidly the signal
changes in space.
Spatial Frequency Resolution
• If we increase the frequency, the stripes
get closer until they finally blend
together.
Brightness Adaptation
• If fewer gray levels are used, we will observe false contours (bogus lines).
• These result from gradually changing light intensity not being accurately represented.
Brightness Adaptation
[Figure: image with 8 bits/pixel (256 gray levels, no false contours) vs. image with 3 bits/pixel (8 gray levels, contains false contours)]
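A small sketch (assuming NumPy; names are illustrative) of how such a comparison image can be produced: requantizing a smooth 8-bit ramp to 3 bits leaves only 8 levels, and the abrupt steps between them are what we perceive as false contours:

import numpy as np

def requantize(img, bits):
    # Reduce an 8-bit image to the given number of bits per pixel,
    # keeping the result on the 0-255 scale for display.
    step = 256 // (2 ** bits)
    return (img // step) * step

gradient = np.arange(256, dtype=np.uint8)   # smooth 8-bit ramp
coarse = requantize(gradient, 3)            # only 8 gray levels remain
print(np.unique(coarse))                    # [  0  32  64  96 128 160 192 224]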
Brightness Adaptation
• An interesting phenomenon that our vision system exhibits related to brightness is called the Mach Band Effect.
• This creates an optical illusion.
• When there is a sudden change in intensity, our vision system's response overshoots the edge.
Brightness Adaptation
• This accentuates edges and helps us to distinguish and separate objects within an image.
• Combined with our brightness adaptation response, this allows us to see outlines even in dimly lit areas.
Brightness Adaptation
• An illustration of the Mach Band Effect.
• Observe the edges between the different brightness levels.
• The edges seem to "stand out" a bit compared to the rest of the image.
Temporal Resolution
• Related to how we respond to visual
information as a function of time.
– Useful when considering video and motion in
images.
– Can be measured using flicker sensitivity.
• Flicker sensitivity refers to our ability to
observe a flicker in a video signal
displayed on a monitor.
Temporal Resolution
• The cutoff frequency is about 50 hertz (cycles per second).
– We will not perceive any flicker in a video signal above 50 Hz.
– TV uses a refresh rate of around 60 Hz.
• The brighter the lighting, the more sensitive we are to changes.
Image Representation
• Digital image I(r, c) is represented as a
two-dimensional array of data.
• Each pixel value corresponds to the
brightness of the image at point (r, c).
• This image model is for monochrome (one
color, or black and white) image data.
Image Representation
• Multiband images (color, multispectral) can be modeled by a different I(r, c) function for each separate band of brightness information.
• Types of images that we will discuss:
– Binary
– Gray-scale
– Color
– Multispectral
Binary Images
• Takes only two values:
– Black and white (0 and 1)
– Requires 1 bit/pixel
• Used when the only information required is
shape or outline info. For example:
– To position a robotic gripper to grasp an
object.
– To check a manufactured object for
deformations.
– For facsimile (FAX) images.
Binary Images
Binary Images
• Binary images are often created from gray-scale images via a threshold operation:
– White ('1') if the pixel value is larger than the threshold.
– Black ('0') if it is less.
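A minimal sketch of this threshold operation, assuming NumPy and an 8-bit grayscale array (the threshold of 128 is an arbitrary choice for illustration):

import numpy as np

def to_binary(gray, threshold=128):
    # 1 where the pixel value exceeds the threshold, 0 otherwise.
    return (gray > threshold).astype(np.uint8)

gray = np.array([[ 10, 200],
                 [130,  50]], dtype=np.uint8)
print(to_binary(gray))
# [[0 1]
#  [1 0]]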
Gray-Scale Images
• Also referred to as monochrome or one-color images.
• Contain only brightness information; no color information.
• Typically contain 8 bits/pixel, which corresponds to 256 (0 to 255) different brightness (gray) levels.
Gray-Scale Images
• Why 8 bits/pixel?
– Provides more than adequate brightness resolution.
– Provides a "noise margin" by allowing approximately twice as many gray levels as required.
– The byte (8 bits) is the standard small unit in computers.
Gray-Scale Images
• However, there are applications, such as medical imaging or astronomy, that require 12 or 16 bits/pixel.
– Useful when a small section of the image is enlarged.
– Allows the user to repeatedly zoom into a specific area of the image.
Color Images
• Modeled as three-band monochrome image data.
• The values correspond to the brightness in each spectral band.
• Typical color images are represented as red, green and blue (RGB) images.
Color Images
• Using the 8-bit standard model, a color image has 24 bits/pixel:
– 8 bits for each of the three color bands (red, green and blue).
Color Images
• For many applications, RGB is transformed to a
mathematical space that decouples (separates) the
brightness information from color information.
• The transformed images would have a:
– 1-D brightness or luminance.
– 2-D color space or chrominance.
• This creates a more people-oriented way of describing
colors.
Color Images
• One example is the
hue/saturation/lightness (HSL) color
transform.
– Hue: The color (green, blue, orange, etc.).
– Saturation: How much white is in the color
(pink is red with more white, so it is less
saturated than pure red).
– Lightness: The brightness of the color.
Color Images
• Most people can relate to this method of describing color.
– "A deep, bright orange" would have a large intensity (bright), a hue of orange, and a high saturation (deep).
– It is easy to picture this color in the mind.
– If we define this color in terms of its RGB components, R = 245, G = 110, B = 20, we have no idea what this color looks like.
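This can be checked with Python's standard colorsys module (which uses the HLS ordering, with all channels normalized to [0, 1]); the slide's RGB triple does indeed come out as a saturated orange:

import colorsys

r, g, b = 245 / 255, 110 / 255, 20 / 255   # the slide's RGB color
h, l, s = colorsys.rgb_to_hls(r, g, b)     # note: returns hue, lightness, saturation
print(f"hue = {h * 360:.0f} deg, lightness = {l:.2f}, saturation = {s:.2f}")
# hue = 24 deg, lightness = 0.52, saturation = 0.92  ->  a deep, bright orange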
Color Images
• In addition to HSL, there are various other
formats used for representing color
images:
– YCrCb
– SCT (Spherical Coordinate Transform)
– PCT (Principal Component Transform)
– CIE XYZ
– L*u*v*
– L*a*b*
Color Images
• One color space can be converted to another color space by using equations.
• Example: converting the RGB color space to the YCrCb color space.
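As a sketch of such a conversion, the full-range BT.601 equations used by JFIF map 8-bit R, G, B values to one luminance and two chrominance components (the 128 offset centers the chrominance values):

def rgb_to_ycrcb(r, g, b):
    # BT.601 full-range (JFIF) RGB -> Y, Cr, Cb conversion.
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    return y, cr, cb

# A mid gray has no chrominance: both Cr and Cb come out as 128.
print(rgb_to_ycrcb(128, 128, 128))   # (128.0, 128.0, 128.0)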
Multispectral Images
• Typically contain information outside the normal human perceptual range.
– Infrared, ultraviolet, X-ray, acoustic or radar data.
• They are not really images in the usual sense (not representing a scene in the visible world, but rather other information, such as depth).
• Values are represented in visual form by mapping the different spectral bands to RGB.
Multispectral Images
• Sources include satellite systems, underwater sonar systems, airborne radar, infrared imaging systems, and medical diagnostic imaging systems.
• The number of bands into which the data are divided depends on the sensitivity of the imaging sensor.
Digital Image File Formats
• BIN.
• PPM.
• GIF (Graphics Interchange Format).
• TIFF (Tagged Image File Format).
• JFIF (JPEG File Interchange Format).
Digital Image File Formats
• GIF (Graphics Interchange Format):
– Commonly used on the WWW.
– Limited to a maximum of 8 bits/pixel (256 colors).
– The bits are used as an input to a lookup table.
– Allows a type of compression called LZW.
– The image header is 13 bytes long.
Digital Image File Formats
• TIFF (Tagged Image File Format):
– Allows a maximum of 24 bits/pixel.
– Supports several types of compression: RLE, LZW, and JPEG.
– The header is of variable size and is arranged in a hierarchical manner.
– Designed to allow the user to customize it for specific applications.
Digital Image File Formats
• JFIF (JPEG File Interchange Format):
– Allows images compressed with the JPEG algorithm to be used on many different computer platforms.
– Contains a Start of Image (SOI) and an application (APP0) marker that serve as a file header.
– Used extensively on the WWW.
Digital Image File Formats
• Sun Raster file format:
– Defined to allow any number of bits per pixel.
– Supports RLE compression and color lookup tables.
– Contains a 32-byte header, followed by the image data.
Digital Image File Formats
• SGI file format:
– Handles up to 16 million colors.
– Supports RLE compression.
– Contains a 512-byte header, followed by the image data.
– The majority of the bytes in the header are not used, presumably to allow for future extensions.
Digital Image File Formats
• EPS (Encapsulated PostScript):
– Not a bitmap image. The file contains text.
– It is a language that supports more than just
images. Commonly used in desktop
publishing.
– Directly supported by many printers (in the
hardware itself).
– Commonly used for data interchange across
hardware and software platforms.
– The files are very big.
What is Digital Image Processing?
Digital Image
— a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude of f at the point (x, y) is called the intensity or gray level at that point.
Digital Image Processing
— processing digital images by means of a computer; it covers low-, mid-, and high-level processes:
low-level: inputs and outputs are images
mid-level: outputs are attributes extracted from input images
high-level: an ensemble of recognition of individual objects
Pixel
— the elements of a digital image
Image Sampling and Quantization
• Sampling: digitizing the coordinate values.
• Quantization: digitizing the amplitude values.
Representing Digital Images
• The representation of an M × N numerical array as

f(x, y) =
\begin{bmatrix}
f(0,0) & f(0,1) & \cdots & f(0,N-1) \\
f(1,0) & f(1,1) & \cdots & f(1,N-1) \\
\vdots & \vdots & & \vdots \\
f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1)
\end{bmatrix}
Representing Digital Images
• The representation of an M × N numerical array as

A =
\begin{bmatrix}
a_{0,0} & a_{0,1} & \cdots & a_{0,N-1} \\
a_{1,0} & a_{1,1} & \cdots & a_{1,N-1} \\
\vdots & \vdots & & \vdots \\
a_{M-1,0} & a_{M-1,1} & \cdots & a_{M-1,N-1}
\end{bmatrix}
Representing Digital Images
• The representation of an M × N numerical array in MATLAB (1-based indexing)

f(x, y) =
\begin{bmatrix}
f(1,1) & f(1,2) & \cdots & f(1,N) \\
f(2,1) & f(2,2) & \cdots & f(2,N) \\
\vdots & \vdots & & \vdots \\
f(M,1) & f(M,2) & \cdots & f(M,N)
\end{bmatrix}
• The smallest element resulting from the discretization of the space is called a pixel (picture element).
• For 3-D images, this element is called a voxel (volumetric pixel).
• A combination of the words "volumetric" and "pixel," a voxel is the smallest perceptible cube in a 3D image. Unlike pixels, which have only length and breadth, voxels have a third dimension: depth.
Representing Digital Images
• Discrete intensity interval [0, L−1], where L = 2^k.
• The number b of bits required to store an M × N digitized image:

b = M × N × k
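For example, a 1024 × 1024 image with k = 8 bits/pixel requires b = 1024 × 1024 × 8 = 8,388,608 bits, or one megabyte of storage.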
Spatial and Intensity Resolution

• Spatial resolution
— A measure of the smallest discernible detail in an
image
— stated with line pairs per unit distance, dots (pixels) per
unit distance, dots per inch (dpi)

• Intensity resolution
— The smallest discernible change in intensity level
— stated with 8 bits, 12 bits, 16 bits, etc.
Spatial and Intensity Resolution
[Figures: effects of varying spatial and intensity resolution]
Basic Relationships Between Pixels

• Neighborhood

• Adjacency

• Connectivity

• Paths

• Regions and boundaries

• Distance
Basic Relationships Between Pixels
• Neighbors of a pixel p at coordinates (x, y):
– 4-neighbors of p, denoted N4(p):
(x−1, y), (x+1, y), (x, y−1), and (x, y+1).
– 4 diagonal neighbors of p, denoted ND(p):
(x−1, y−1), (x+1, y+1), (x+1, y−1), and (x−1, y+1).
– 8-neighbors of p, denoted N8(p):
N8(p) = N4(p) ∪ ND(p)
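A minimal sketch of these neighbor sets in Python (pixels are (x, y) tuples; image bounds are ignored for brevity):

def n4(p):
    # 4-neighbors of pixel p = (x, y).
    x, y = p
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(p):
    # Diagonal neighbors of pixel p = (x, y).
    x, y = p
    return {(x - 1, y - 1), (x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1)}

def n8(p):
    # 8-neighbors: the union of N4(p) and ND(p).
    return n4(p) | nd(p)

print(sorted(n8((1, 1))))   # the 8 pixels surrounding (1, 1)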
Basic Relationships Between Pixels
• Adjacency
Let V be the set of intensity values.
– 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
– 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
• 8-adjacency example
• Two pixels p and q are called 8-adjacent if both have values from V and q is in N8(p).
• Here, p and q1 are 8-adjacent because p and q1 are 8-neighbors and both are 1. p and q2 are also 8-adjacent because they are 8-neighbors and both are 1. p and q3 are not 8-adjacent because p is 1 and q3 is 0.
Basic Relationships Between Pixels
• Adjacency
Let V be the set of intensity values.
– m-adjacency: two pixels p and q with values from V are m-adjacent if
(i) q is in the set N4(p), or
(ii) q is in the set ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
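A sketch of the m-adjacency test (the grid is the 3 × 3 example used below, stored as a dict from (row, column) to value; names are illustrative):

def n4(p):
    x, y = p
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(p):
    x, y = p
    return {(x - 1, y - 1), (x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1)}

def m_adjacent(p, q, img, V):
    # m-adjacency: q in N4(p), or q in ND(p) with no shared 4-neighbor
    # whose value is in V (this removes ambiguous 8-paths).
    if img.get(p) not in V or img.get(q) not in V:
        return False
    if q in n4(p):
        return True
    return q in nd(p) and not any(img.get(r) in V for r in n4(p) & n4(q))

img = {(1, 1): 0, (1, 2): 1, (1, 3): 1,
       (2, 1): 0, (2, 2): 2, (2, 3): 0,
       (3, 1): 0, (3, 2): 0, (3, 3): 1}
V = {1, 2}
print(m_adjacent((1, 3), (2, 2), img, V))   # False: shared 4-neighbor (1, 2) is in V
print(m_adjacent((1, 2), (2, 2), img, V))   # True: (2, 2) is a 4-neighbor of (1, 2)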
Basic Relationships Between Pixels
• Path
– A (digital) path (or curve) from pixel p with coordinates (x0, y0) to pixel q with coordinates (xn, yn) is a sequence of distinct pixels with coordinates
(x0, y0), (x1, y1), …, (xn, yn)
where (xi, yi) and (xi−1, yi−1) are adjacent for 1 ≤ i ≤ n.
– Here n is the length of the path.
– If (x0, y0) = (xn, yn), the path is a closed path.
– We can define 4-, 8-, and m-paths based on the type of adjacency used.
Examples: Adjacency and Path
V = {1, 2}; pixel coordinates run from (1,1) at the top left to (3,3) at the bottom right:

0 1 1
0 2 0
0 0 1

The 8-paths from (1,3) to (3,3):
(i) (1,3), (1,2), (2,2), (3,3)
(ii) (1,3), (2,2), (3,3)

The m-path from (1,3) to (3,3):
(1,3), (1,2), (2,2), (3,3)
Basic Relationships Between Pixels
• Connected in S
Let S represent a subset of pixels in an image. Two pixels p with coordinates (x0, y0) and q with coordinates (xn, yn) are said to be connected in S if there exists a path
(x0, y0), (x1, y1), …, (xn, yn)
where, for every i with 0 ≤ i ≤ n, (xi, yi) ∈ S.

• Two pixels are connected if there is a path between them. Similarly, two subsets S1 and S2 are connected (adjacent) if some pixel in S1 is adjacent to some pixel in S2.

(a) (b)
Consider two image subsets S1 and S2 in fig 1.59(a)
and (b) V = {1}. As pixel p in sub image S1 and pixel
q in sub image S2 have a value 1 and are 8-adjacent,
thus S1 and S2 are 8-adjacent but not 4-
adjacent,because p and q are not 4-adjacent for V =
{1}.
Path
• A digital path from pixel p having coordinates (x, y) to pixel q with coordinates (u, v) is a sequence of connected pixels (x, y), (x0, y0), (x1, y1), (x2, y2), …, (u, v).
• The length of the path is the count of connected pixels.
• If the first pixel is the same as the last pixel, that is (x, y) = (u, v), it is called a closed path.
• The path length between p and q in the figure is 5.

Figure: example showing a path between pixels p and q
Region
• R is a subset of pixels in an image. If every pixel in R is connected to every other pixel in R, then R is called a region. The figure illustrates a region in an image. The boundary of a region is defined as the set of pixels in the region that have one or more neighbors that are not in R. The boundary is the edge of the region.
Connected Component
• S is a subset of an image. Two pixels p and q are said to be connected in S if there exists a path between p and q that passes entirely through S. Fig 1.62 shows connected components in an image.
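A minimal sketch of extracting connected components with a breadth-first flood fill over 8-neighbors, assuming NumPy and a binary (0/1) image:

import numpy as np
from collections import deque

def label_components(img):
    # Label the 8-connected components of a binary image; each
    # foreground pixel receives the id of its component.
    labels = np.zeros(img.shape, dtype=int)
    rows, cols = img.shape
    current = 0
    for r in range(rows):
        for c in range(cols):
            if img[r, c] == 1 and labels[r, c] == 0:
                current += 1
                labels[r, c] = current
                queue = deque([(r, c)])
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and img[ny, nx] == 1 and labels[ny, nx] == 0):
                                labels[ny, nx] = current
                                queue.append((ny, nx))
    return labels

img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1]])
print(label_components(img))   # two components, labeled 1 and 2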
Distance Measure
• Distance measures are extensively used in clustering techniques. Consider three pixels p, q and z, shown in fig 1.63, with coordinates (x, y), (s, t) and (v, w) respectively. Many distance-measuring formulas can be defined in image processing. For any formula to qualify as a distance measure, the following conditions must be satisfied:
1. Distance should be a non-negative number:
Dis(p, q) ≥ 0, and Dis(p, q) = 0 if p = q.
2. The distance between p and q should be the same as the distance between q and p:
Dis(p, q) = Dis(q, p)
3. The distance between any two points p and z should be less than or equal to the sum of the distances between p and q and between q and z:
Dis(p, z) ≤ Dis(p, q) + Dis(q, z)
• The most commonly used distance measures are the following:
1. Euclidean distance between p and q:
Dis_e(p, q) = √((x − s)² + (y − t)²)
2. City-block distance (Dis4) between p and q:
Dis_city(p, q) = Dis4(p, q) = |x − s| + |y − t|
3. Chessboard distance (Dis8) between p and q:
Dis_chess(p, q) = Dis8(p, q) = max(|x − s|, |y − t|)
Example 1.
For V = {0, 1}, find the length of the shortest 4-, 8- and m-paths between p and q. Repeat for V = {1, 2} for the given image.

Solution
• 4-path: the path starts from p but does not reach q, as no 4-path exists between q and the previous pixel.
• 8-path: not unique. In the figure, two paths are shown, one with dotted arrows; the shortest path has length 4.
• m-path: unique.
Q1. Let p and q be the pixels at coordinates (10, 15) and (15, 25) respectively. Find out which distance measure gives the minimum distance between the pixels.
Minkowski Distance
• The generalized metric distance:
Dis_λ(p, q) = (|x − s|^λ + |y − t|^λ)^(1/λ)
• When λ = 1 it behaves as the city-block distance.
• When λ = 2 it behaves as the Euclidean distance.
• As λ → ∞ it approaches the Chebyshev (chessboard) distance, which is a special case of the Minkowski distance.
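A sketch evaluating these measures for the pixels in Q1 above, p = (10, 15) and q = (15, 25); the Minkowski form subsumes the other distances:

def minkowski(p, q, lam):
    # Generalized Minkowski distance: lam = 1 gives city-block,
    # lam = 2 Euclidean; as lam grows it approaches the chessboard distance.
    return (abs(p[0] - q[0]) ** lam + abs(p[1] - q[1]) ** lam) ** (1 / lam)

p, q = (10, 15), (15, 25)
print(minkowski(p, q, 1))                        # city-block: 15.0
print(minkowski(p, q, 2))                        # Euclidean:  ~11.18
print(max(abs(p[0] - q[0]), abs(p[1] - q[1])))   # chessboard: 10  (the minimum)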
Basic Relationships Between Pixels
Let S represent a subset of pixels in an image.
• For every pixel p in S, the set of pixels in S that are connected to p is called a connected component of S.
• If S has only one connected component, then S is called a connected set.
• We call R a region of the image if R is a connected set.
• Two regions, Ri and Rj, are said to be adjacent if their union forms a connected set.
• Regions that are not adjacent are said to be disjoint.
Basic Relationships Between Pixels
• Boundary (or border)
– The boundary of a region R is the set of pixels in the region that have one or more neighbors that are not in R.
– If R happens to be an entire image, then its boundary is defined as the set of pixels in the first and last rows and columns of the image.
• Foreground and background
– Suppose an image contains K disjoint regions, Rk, k = 1, 2, …, K. Let Ru denote the union of all K regions, and let (Ru)^c denote its complement.
– All the points in Ru are called the foreground; all the points in (Ru)^c are called the background.
Question 1
• In the following arrangement of pixels, are the two regions (of 1s) adjacent? (if 8-adjacency is used)

1 1 1   (Region 1)
1 0 1
0 1 0
0 0 1   (Region 2)
1 1 1
1 1 1
Question 2
• In the following arrangement of pixels, are the two parts (of 1s) adjacent? (if 4-adjacency is used)

1 1 1   (Part 1)
1 0 1
0 1 0
0 0 1   (Part 2)
1 1 1
1 1 1
• In the following arrangement of pixels, the two regions (of 1s) are disjoint (if 4-adjacency is used):

1 1 1   (Region 1)
1 0 1
0 1 0
0 0 1   (Region 2)
1 1 1
1 1 1
• In the following arrangement of pixels, the two regions (of 1s) are disjoint (if 4-adjacency is used):

1 1 1   (foreground)
1 0 1
0 1 0
0 0 1   (background)
1 1 1
1 1 1
Question 3
• In the following arrangement of pixels, the circled point is part of the boundary of the 1-valued pixels if 8-adjacency is used. True or false?

0 0 0 0 0
0 1 1 0 0
0 1 1 0 0
0 1 1 1 0
0 1 1 1 0
0 0 0 0 0
Question 4
• In the following arrangement of pixels, the circled point is part of the boundary of the 1-valued pixels if 4-adjacency is used. True or false?

0 0 0 0 0
0 1 1 0 0
0 1 1 0 0
0 1 1 1 0
0 1 1 1 0
0 0 0 0 0
Distance Measures
• Given pixels p, q and z with coordinates (x, y), (s, t) and (u, v) respectively, the distance function D has the following properties:
a. D(p, q) ≥ 0 [D(p, q) = 0 iff p = q]
b. D(p, q) = D(q, p)
c. D(p, z) ≤ D(p, q) + D(q, z)

Distance Measures
The following are the different distance measures:
a. Euclidean distance:
De(p, q) = [(x − s)² + (y − t)²]^(1/2)
b. City-block distance:
D4(p, q) = |x − s| + |y − t|
c. Chessboard distance:
D8(p, q) = max(|x − s|, |y − t|)
Question 5
• In the following arrangement of pixels, what is the value of the chessboard distance between the two circled points?

0 0 0 0 0
0 0 1 1 0
0 1 1 0 0
0 1 0 0 0
0 0 0 0 0
0 0 0 0 0
Question 6
• In the following arrangement of pixels, what is the value of the city-block distance between the two circled points?

0 0 0 0 0
0 0 1 1 0
0 1 1 0 0
0 1 0 0 0
0 0 0 0 0
0 0 0 0 0
Question 7
• In the following arrangement of pixels, what is the value of the length of the m-path between the two circled points?

0 0 0 0 0
0 0 1 1 0
0 1 1 0 0
0 1 0 0 0
0 0 0 0 0
0 0 0 0 0
Question 8
• In the following arrangement of pixels, what is the value of the length of the m-path between the two circled points?

0 0 0 0 0
0 0 1 1 0
0 0 1 0 0
0 1 0 0 0
0 0 0 0 0
0 0 0 0 0
