Lec 4

Digital Image Processing

Neighborhood Operations in Images

Objectives

❖ To understand the basic relationships between pixels in an image
❖ To study neighborhoods, adjacency, connectivity, paths, regions and boundaries of an image
Basic Relationships Between Pixels

Neighborhood

Adjacency

Connectivity

Paths

Regions and boundaries


Conventional indexing method: the origin (0,0) is at the top-left corner,
x increases to the right and y increases downward. The 3×3 neighborhood of
a pixel (x,y):

(x-1,y-1)  (x,y-1)  (x+1,y-1)
(x-1,y)    (x,y)    (x+1,y)
(x-1,y+1)  (x,y+1)  (x+1,y+1)


Neighbors of a Pixel

The neighborhood relation identifies the pixels adjacent to a given pixel.
It is useful for analyzing regions.

4-neighbors of p = (x,y):

N4(p) = { (x-1,y), (x+1,y), (x,y-1), (x,y+1) }

The 4-neighborhood relation considers only the vertical and horizontal
neighbors.

Note: q ∈ N4(p) implies p ∈ N4(q).
Neighbors of a Pixel (cont.)

Diagonal neighbors of p = (x,y):

ND(p) = { (x-1,y-1), (x+1,y-1), (x-1,y+1), (x+1,y+1) }

The diagonal-neighborhood relation considers only the diagonal neighbor
pixels.
Neighbors of a Pixel (cont.)

8-neighbors of p = (x,y):

N8(p) = { (x-1,y-1), (x,y-1), (x+1,y-1),
          (x-1,y),            (x+1,y),
          (x-1,y+1), (x,y+1), (x+1,y+1) }

The 8-neighborhood relation considers all eight neighbors.
Basic Relationships Between Pixels

Neighbors of a pixel p at coordinates (x,y):

➢ 4-neighbors of p, denoted by N4(p):
  (x-1,y), (x+1,y), (x,y-1), and (x,y+1)
➢ 4 diagonal neighbors of p, denoted by ND(p):
  (x-1,y-1), (x+1,y+1), (x+1,y-1), and (x-1,y+1)
➢ 8-neighbors of p, denoted by N8(p):
  N8(p) = N4(p) ∪ ND(p)
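A minimal Python sketch of these three definitions (function names are
illustrative; a real implementation would also discard coordinates that
fall outside the image):

def n4(x, y):
    """4-neighbors of p = (x, y): the horizontal and vertical neighbors."""
    return [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]

def nd(x, y):
    """Diagonal neighbors of p = (x, y)."""
    return [(x - 1, y - 1), (x + 1, y - 1), (x - 1, y + 1), (x + 1, y + 1)]

def n8(x, y):
    """8-neighbors: N8(p) = N4(p) ∪ ND(p)."""
    return n4(x, y) + nd(x, y)

print(n4(2, 3))   # [(1, 3), (3, 3), (2, 2), (2, 4)]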
Connectivity

Connectivity is derived from the neighborhood relation. Two pixels are
connected if they are in the same class V (i.e., the same color or the same
range of intensity values) and they are neighbors of one another.

For p and q from the same class V:

• 4-connectivity: p and q are 4-connected if q ∈ N4(p)
• 8-connectivity: p and q are 8-connected if q ∈ N8(p)
• mixed connectivity (m-connectivity): p and q are m-connected if
  q ∈ N4(p), or
  q ∈ ND(p) and N4(p) ∩ N4(q) contains no pixels with values in V
Adjacency

A pixel p is adjacent to a pixel q if they are connected.

Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to
some pixel in S2.

We can define types of adjacency (4-adjacency, 8-adjacency or m-adjacency)
depending on the type of connectivity, as sketched below.
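A hedged Python sketch of the three adjacency tests, assuming the image is
a 2-D array indexed as img[row][col] and V is the set of class values
(function names are mine):

def in_class(img, p, V):
    """True if p lies inside the image and its value belongs to V."""
    r, c = p
    return 0 <= r < len(img) and 0 <= c < len(img[0]) and img[r][c] in V

def n4(p):
    r, c = p
    return {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)}

def nd(p):
    r, c = p
    return {(r - 1, c - 1), (r - 1, c + 1), (r + 1, c - 1), (r + 1, c + 1)}

def adjacent_4(img, p, q, V):
    return in_class(img, p, V) and in_class(img, q, V) and q in n4(p)

def adjacent_8(img, p, q, V):
    return in_class(img, p, V) and in_class(img, q, V) and q in (n4(p) | nd(p))

def adjacent_m(img, p, q, V):
    """q in N4(p), or q in ND(p) with no shared 4-neighbor of class V."""
    if not (in_class(img, p, V) and in_class(img, q, V)):
        return False
    if q in n4(p):
        return True
    return q in nd(p) and not any(in_class(img, s, V) for s in n4(p) & n4(q))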
Path

A path from pixel p at (x,y) to pixel q at (s,t) is a sequence of distinct
pixels

(x0,y0), (x1,y1), (x2,y2), …, (xn,yn)

such that

(x0,y0) = (x,y) and (xn,yn) = (s,t),

and (xi,yi) is adjacent to (xi-1,yi-1) for i = 1, …, n.

We can define types of path (4-path, 8-path or m-path) depending on the
type of adjacency.
Path (cont.)

An 8-path from p to q results in some ambiguity: more than one 8-path may
connect the same pair of pixels. The m-path from p to q resolves this
ambiguity.
Examples: Adjacency and Path
V = {1, 2}

Pixel values, with (row, column) coordinates:

(1,1)=0  (1,2)=1  (1,3)=1
(2,1)=0  (2,2)=2  (2,3)=0
(3,1)=0  (3,2)=0  (3,3)=1

(In the original figures, arrows mark the 8-adjacent and m-adjacent
connections among these pixels.)

The 8-paths from (1,3) to (3,3):
(i)  (1,3), (1,2), (2,2), (3,3)
(ii) (1,3), (2,2), (3,3)

The m-path from (1,3) to (3,3):
(1,3), (1,2), (2,2), (3,3)
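A short breadth-first search over the example grid above, sketched in
Python with 0-based (row, col) indexing, so the 1-based (1,3) and (3,3) of
the example become (0,2) and (2,2). BFS returns one shortest path, which
here matches 8-path (ii):

from collections import deque

def find_path(img, start, goal, V, neighbors):
    """Return one shortest path from start to goal through pixels of class V."""
    rows, cols = len(img), len(img[0])
    if img[start[0]][start[1]] not in V:
        return None
    prev = {start: None}                      # also serves as the visited set
    queue = deque([start])
    while queue:
        p = queue.popleft()
        if p == goal:                         # walk the predecessor chain back
            path = []
            while p is not None:
                path.append(p)
                p = prev[p]
            return path[::-1]
        for q in neighbors(*p):
            r, c = q
            if 0 <= r < rows and 0 <= c < cols and img[r][c] in V and q not in prev:
                prev[q] = p
                queue.append(q)
    return None                               # no path exists

def n8(r, c):                                 # 8-neighborhood of (r, c)
    return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]

img = [[0, 1, 1],
       [0, 2, 0],
       [0, 0, 1]]
print(find_path(img, (0, 2), (2, 2), V={1, 2}, neighbors=n8))
# -> [(0, 2), (1, 1), (2, 2)], i.e. 8-path (ii) above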
Distance

For pixels p, q and z with coordinates (x,y), (s,t) and (u,v) respectively,
D is a distance function (or metric) if

• D(p,q) ≥ 0  (D(p,q) = 0 if and only if p = q)
• D(p,q) = D(q,p)
• D(p,z) ≤ D(p,q) + D(q,z)

Example: Euclidean distance

De(p,q) = [(x - s)² + (y - t)²]^(1/2)
Distance Measures

The Euclidean distance between p = (x,y) and q = (s,t) is defined as

De(p,q) = [(x - s)² + (y - t)²]^(1/2)

The pixels having a distance less than or equal to some value r from (x,y)
are the points contained in a disk of radius r centered at (x,y).
Distance Measures

The D4 distance (also called city-block distance) between p and q is
defined as

D4(p,q) = |x - s| + |y - t|

The pixels having a D4 distance less than or equal to some value r from
(x,y) form a diamond centered at (x,y).
Distance (cont.)

The D4 distance (city-block distance) is defined as

D4(p,q) = |x - s| + |y - t|

Pixels with D4 ≤ 2 from the center pixel:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with D4 = 1 are the 4-neighbors of p.
Distance (cont.)

The D8 distance (chessboard distance) is defined as

D8(p,q) = max(|x - s|, |y - t|)

Pixels with D8 ≤ 2 from the center pixel:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of p.
Distance Measures

The D8 distance (also called chessboard distance) between p and q is
defined as

D8(p,q) = max(|x - s|, |y - t|)

The pixels having a D8 distance less than or equal to some value r from
(x,y) form a square centered at (x,y). D8 is the larger of the horizontal
and vertical components of the displacement from p to q.
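The three distance measures transcribed directly into Python (standard
library only; the point values are illustrative):

import math

def d_e(p, q):   # Euclidean distance
    (x, y), (s, t) = p, q
    return math.hypot(x - s, y - t)

def d_4(p, q):   # city-block distance
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d_8(p, q):   # chessboard distance
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

p, q = (0, 0), (3, 4)
print(d_e(p, q), d_4(p, q), d_8(p, q))   # 5.0 7 4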
Region and Boundary

Region
Let R be a subset of pixels in an image. We call R a region of the image if
R is a connected set.

Boundary
The boundary (also called the border or contour) of a region R is the set
of pixels in the region that have one or more neighbors that are not in R.
Region and Boundary
If R happens to be an entire image, then its boundary is defined as the set
of pixels in the first and last rows and columns of the image.

This extra definition is required because an image has no neighbors beyond
its border.

Normally, when we refer to a region we are referring to a subset of an
image, and any pixels in the boundary of the region that happen to coincide
with the border of the image are included implicitly as part of the region
boundary.
Arithmetic Operations

Arithmetic operations between images are array operations, carried out
between corresponding pixel pairs. The four arithmetic operations are
denoted as

s(x,y) = f(x,y) + g(x,y)
d(x,y) = f(x,y) – g(x,y)
p(x,y) = f(x,y) × g(x,y)
v(x,y) = f(x,y) ÷ g(x,y)
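A minimal NumPy sketch of the four operations (array contents and sizes are
illustrative); note the cast to a wider integer type so sums, differences
and products do not wrap around in uint8:

import numpy as np

f = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
g = np.random.randint(1, 256, (4, 4), dtype=np.uint8)   # nonzero, to allow division

fi, gi = f.astype(np.int32), g.astype(np.int32)          # widen to avoid wrap-around
s = np.clip(fi + gi, 0, 255).astype(np.uint8)            # s(x,y) = f(x,y) + g(x,y)
d = np.clip(fi - gi, 0, 255).astype(np.uint8)            # d(x,y) = f(x,y) - g(x,y)
p = fi * gi                                              # p(x,y) = f(x,y) * g(x,y); rescale before display
v = fi / gi                                              # v(x,y) = f(x,y) / g(x,y), float result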
Set and Logical Operations

Let A be the set of elements of a gray-scale image. The elements of A are
triplets of the form (x, y, z), where x and y are spatial coordinates and z
denotes the intensity at the point (x, y):

A = { (x, y, z) | z = f(x, y) }

The complement of A is denoted Ac:

Ac = { (x, y, K - z) | (x, y, z) ∈ A }

K = 2^k - 1, where k is the number of intensity bits used to represent z.
Set and Logical Operations

The union of two gray-scale images (sets) A and B is defined as the set

A ∪ B = { max_z(a, b) | a ∈ A, b ∈ B }

where the maximum is taken over the intensity values z.
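A sketch of the complement and union with NumPy, assuming k = 8 intensity
bits; the intersection would analogously be the element-wise minimum:

import numpy as np

K = 2**8 - 1                                   # 255 for 8-bit images
A = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
B = np.random.randint(0, 256, (4, 4), dtype=np.uint8)

A_complement = K - A           # (x, y, K - z) for every (x, y, z) in A
A_union_B = np.maximum(A, B)   # element-wise maximum of the intensities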
Color Image Processing

• In automated image analysis, color is a powerful descriptor that
  simplifies object identification and extraction.
• The human eye can distinguish between thousands of color shades and
  intensities, but only about 20-30 shades of gray. Hence, the use of color
  in human image processing is very effective.
• Color image processing consists of two parts: pseudo-color processing
  and full-color processing.
• In pseudo-color processing, (false) colors are assigned to a monochrome
  image. For example, objects with different intensity values may be
  assigned different colors, which enables easy identification/recognition
  by humans.
• In full-color processing, images are acquired with full-color
  sensors/cameras. This has become common in the last decade or so, due to
  the easy and cheap availability of color sensors and hardware.
Color Fundamentals

➢ Cones in the retina are responsible for color perception in the human
  eye.
➢ The six to seven million cones in the human eye can be divided into
  three categories: red-light-sensitive cones (65%), green-light-sensitive
  cones (33%) and blue-light-sensitive cones (2%).
➢ The combination of the responses of these different receptors gives us
  our color perception.
➢ This is called the tri-stimulus model of color vision.
Color Image Processing

➢ Due to the absorption characteristics of the human eye, all colors
  perceived by humans can be considered a variable combination of the
  so-called three primary colors:
  ✓ Red (R) (700 nm)
  ✓ Green (G) (546.1 nm)
  ✓ Blue (B) (435.8 nm)
➢ Note that these specific color wavelengths are used mainly for
  standardization.
➢ The primary colors, when added, produce the secondary colors:
  ✓ Magenta (red + blue)
  ✓ Cyan (green + blue)
  ✓ Yellow (red + green)
Additive vs. Subtractive Color Systems

Additive color:
• involves light emitted directly from a source
• mixes various amounts of red, green and blue light to produce other
  colors
• combining one of these additive primary colors with another produces the
  additive secondary colors cyan, magenta, yellow
• combining all three primary colors produces white

Subtractive color:
• starts with an object that reflects light and uses colorants to subtract
  portions of the white light illuminating the object to produce other
  colors
• if an object reflects all the white light back to the viewer, it appears
  white
• if an object absorbs (subtracts) all the light illuminating it, it
  appears black
Color Space

• Color space: a 3-dimensional coordinate system for specifying color.
• A color is specified as a single point in the color space.
• Commonly used color spaces:
  • RGB (Red, Green, Blue): used for screen displays (e.g., monitors)
  • CMYK (Cyan, Magenta, Yellow, Black): used for printing
  • HSI (Hue, Saturation, Intensity): used for manipulation of color
    components
  • L*a*b*, L*u*v* ("perceptual" color spaces): distances in color space
    correspond to distances in human color perception
Color Terms

• Hue: the dominant wavelength, e.g., red roses, a green traffic light
• Saturation: relative purity, i.e., the amount of white light mixed in,
  e.g., bright orange vs. pink milk
• Chromaticity: hue and saturation taken together
• Intensity (brightness): the amount of achromatic light, i.e., light
  apart from color; gray level, black & white
• Tristimulus values: the amounts of red, green and blue needed to form a
  particular color
• Color gamut: the range of colors
RGB Space

A 24-bit color cube is composed of 256 × 256 × 256 = 16,777,216 colors.
RGB Space

• When light is mixed, wavelengths combine (add).

• The Red-Green-Blue (RGB) model is used most often for additive models:
  the primaries are added to produce other colors.

• It can't produce all visible colors.

• It is contained within CIE XYZ.

• It is well suited to imaging, since display hardware uses three color
  phosphors.
Color Image Processing

Safe colors: a subset of colors that are likely to be reproduced
faithfully, reasonably independently of viewer hardware capabilities.
CMY Model

• Paint/ink/dye subtracts color from white light and reflects the rest.

• Mixing paint pigments subtracts multiple colors.

• The Cyan-Magenta-Yellow (CMY) model is the most common subtractive
  model. It is the same as RGB except that white is at the origin and
  black is at the extent of the diagonal.

• It is very important for printing devices: the page is already white, so
  we can't add to it!
CMY Model

[C]   [1]   [R]
[M] = [1] - [G]
[Y]   [1]   [B]

White minus blue minus green = red: removing the blue component (yellow
ink) and the green component (magenta ink) from white light leaves red.
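A direct transcription of this conversion, assuming channel values
normalized to [0, 1]:

def rgb_to_cmy(r, g, b):
    return (1.0 - r, 1.0 - g, 1.0 - b)

def cmy_to_rgb(c, m, y):
    return (1.0 - c, 1.0 - m, 1.0 - y)

print(rgb_to_cmy(1.0, 0.0, 0.0))   # pure red -> (0.0, 1.0, 1.0): no cyan, full magenta and yellow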


HSI Model

• The intensity component (I) is decoupled from the color components
  (H and S).
• This makes the model ideal for developing image processing algorithms.
• H and S are closely related to the way the human visual system perceives
  colors.
• HSI allows us to separate intensity and chromaticity: the color is
  decomposed (a conversion sketch follows below).
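The slides do not give the HSI conversion formulas; the sketch below uses
the standard RGB-to-HSI conversion (as in Gonzalez & Woods) for normalized
values in [0, 1], with hue in degrees:

import math

def rgb_to_hsi(r, g, b, eps=1e-8):
    i = (r + g + b) / 3.0                           # intensity: channel average
    s = 1.0 - 3.0 * min(r, g, b) / (r + g + b + eps)  # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    h = theta if b <= g else 360.0 - theta          # hue angle
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))   # pure red -> H = 0, S = 1, I = 1/3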
(Figures: hue and saturation in the HSI color space, and an image shown
with its hue, saturation and intensity components.)
(Figures: color image processing examples and the color components of an
image.)

• Pseudo-color image processing


• Assign color to monochrome images
• Intensity slicing
• Gray level to color transformation
• Spatial domain approach – three different transformation
functions
• Frequency domain approach – three different filters
• Full-color image processing
• Color image enhancement and restoration
• Color compensation
Example of Pseudocolor Image Processing
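A minimal intensity-slicing sketch: the gray-level range is split into four
bands and each band is assigned a color. The band boundaries and the colors
are illustrative, not taken from the slides:

import numpy as np

gray = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # monochrome input

# four slices of the [0, 255] range, each mapped to one RGB color
colors = np.array([[0, 0, 255],     # darkest band -> blue
                   [0, 255, 0],     # -> green
                   [255, 255, 0],   # -> yellow
                   [255, 0, 0]],    # brightest band -> red
                  dtype=np.uint8)
band = gray // 64                   # band index 0..3 per pixel
pseudo = colors[band]               # (64, 64, 3) pseudo-color image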
Basics of Full-Color Image Processing

Full-color image processing approaches fall into two major categories:
✓ In the first category, we process each component image individually and
  then form a composite processed color image from the individually
  processed components.
✓ In the second category, we work with color pixels directly.
Color Transformation

As for color transformations: the pixel values here are triplets or
quartets (i.e., groups of three or four values, one per color component).
(Figures: color complements, histogram processing, a color image before and
after smoothing, and color image sharpening.)
Color Image Segmentation
Segmentation in HSI:
• Saturation is used as a masking image in order to further isolate
  regions of interest in the hue image.
• The intensity image is used less frequently for segmentation of color
  images because it carries no color information.
Segmentation in RGB

Two distance measures between a pixel's color vector z and a prototype
(average) color a are commonly used (both sketched below):

• the Euclidean distance between the points z and a
• the Mahalanobis distance between the points z and a, which accounts for
  the spread (covariance) of the sample colors
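A hedged sketch of both distance tests with NumPy; the reference color a,
the threshold, and the covariance estimate are all illustrative:

import numpy as np

img = np.random.rand(64, 64, 3)          # RGB image, values in [0, 1]
a = np.array([0.8, 0.1, 0.1])            # reference ("reddish") color
C = np.cov(img.reshape(-1, 3).T)         # 3x3 covariance of the sample pixels

diff = img - a                                     # z - a at every pixel
d_euclid = np.linalg.norm(diff, axis=-1)           # ||z - a||
Cinv = np.linalg.inv(C)
d_mahal = np.sqrt(np.einsum('...i,ij,...j->...', diff, Cinv, diff))

mask = d_euclid <= 0.3                   # boolean segmentation mask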
Color Edge Detection

Let r, g and b be unit vectors along the R, G and B axes of RGB color
space.

Processing the three individual planes separately and then forming a
composite gradient image can yield erroneous results; a vector-based
alternative is sketched below.
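One standard vector-based remedy is Di Zenzo's color gradient, which treats
the three channels as a single vector field. The slides do not give the
formulas, so this is an illustrative implementation:

import numpy as np

def color_gradient(img):
    """img: float array (H, W, 3). Returns the gradient magnitude per pixel."""
    gx = np.stack([np.gradient(img[..., c], axis=1) for c in range(3)], -1)
    gy = np.stack([np.gradient(img[..., c], axis=0) for c in range(3)], -1)
    gxx = (gx * gx).sum(-1)          # per-pixel dot products over the channels
    gyy = (gy * gy).sum(-1)
    gxy = (gx * gy).sum(-1)
    theta = 0.5 * np.arctan2(2 * gxy, gxx - gyy)   # direction of maximum change
    f = 0.5 * ((gxx + gyy) + (gxx - gyy) * np.cos(2 * theta)
               + 2 * gxy * np.sin(2 * theta))
    return np.sqrt(np.maximum(f, 0.0))             # clamp tiny negatives from rounding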
(Figures: color edge detection in RGB space.)
(Figures: examples of noise in color images.)