Scilab PDF
Image Processing
2014 WS
Uwe Stilla
[email protected]
Ludwig Hoegner
[email protected]
0.1 Information
Lecture (Wednesday, 08:00 – 09:30): Prof. Stilla
Participation necessary
Basis: printed slides
Completion with own notes necessary
Questions welcome
Exercises: L. Hoegner
Partially during the lecture
Some additional exercise courses for programming
Answering questions
Exam:
Calculations, working on images
Question block – true / false statements
Introduction
Digital image characterization
Image transformation
Segmentation
Binary image processing
Vectorization and geometric primitives
Feature extraction
Image Processing
1 Introduction
2014 WS
Uwe Stilla
[email protected]
Ludwig Hoegner
[email protected]
Introduction
Digital image characterization
Image transformation
Segmentation
Binary image processing
Vectorization and geometric primitives
Feature extraction
1.1 Motivation
Digital image processing
operates on pictorial information in digital form
uses digital circuits, computer processors and software to carry out the operations
Aim
better or more illustrative representation
information extraction (as automatic as possible)
Information extraction requires
knowledge about imaging process (real world → image) and
interpretation process of humans (image → objects)
therefore, image processing involves methods of
signal processing
pattern recognition and object recognition
Prime example
Human eye / visual perception:
2 images → 3D description of the visible scene in object space
Dream of a “visually equipped” robot (very old idea)
Figure: gray value scale 0–255 (inverted brightness).
Figure: image in / description out – the fields and their relations: image processing (image → image), computer graphics (description → image, generative), image analysis (image → description), everything else: computer vision.
Image analysis is also called: image understanding, machine vision, image interpretation.
General aim of image analysis: generation of an image or scene description.
Figure: processing chain of image analysis: scene → mapping → sampled image → preprocessing → segmented image → description, supported by knowledge (model).
Object modeling
model-based image analysis / model-driven image analysis
Description of image formation through: illumination modeling, sensor modeling, scene or object modeling
Attributive description: specification of the object through its global properties (e.g. building: attribute list such as height, surface area, building volume, ...)
Structural description: specification of the general object layout through its components (e.g. building = walls + roofs + ...)
Specification hierarchy
Ordering of the knowledge according to a certain level of specification
The specification levels could be linked through statements such as
„is_a_kind_of“
Example:
A family house is_a_kind_of residential house. A residential house
is_a_kind_of building. A building is_a_kind_of construction.
1.5.4 Processing layers
processing layer | operations | processed data
high level | recognition | symbolic objects
medium level | feature extraction | features
low level | image manipulation | pixels
Image Processing
2 Digital image characterization
2014 WS
Uwe Stilla
[email protected]
Ludwig Hoegner
[email protected]
Figure: image coordinate notations – B = b(r,c) with row r and column c, and B = b(x,y) with different positions of the origin (e.g. top left, bottom left, top right) and the corresponding gray value functions b(·,·) and b′(·,·); gray value axis from black to white.
Figure: gray value axis from black to white.
After sampling and quantization the digital image is generated. The number of gray values is k = 2^L, where L = number of the converter bits. In order to save memory, the data is usually compressed.
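As an illustration (not from the slides), a minimal Python/NumPy sketch of quantization to k = 2^L gray levels; the function name and the example ramp are made up:

import numpy as np

def quantize(image, L):
    """Quantize a float image in [0, 1] to k = 2**L gray levels (L converter bits)."""
    k = 2 ** L                                   # number of gray levels
    levels = np.clip((image * k).astype(int), 0, k - 1)
    return levels                                # integer gray values 0 .. k-1

# tiny example: a horizontal ramp quantized with L = 3 bits (k = 8 levels)
ramp = np.linspace(0.0, 1.0, 16).reshape(1, -1)
print(quantize(ramp, L=3))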
Figure: image stacks – spatial sequence B_i(x,y,z_i), B_{i+1}(x,y,z_{i+1}), B_{i+2}(x,y,z_{i+2}) along z, and temporal sequence B_i(x,y,t_i), B_{i+1}(x,y,t_{i+1}), B_{i+2}(x,y,t_{i+2}) along t.
File formats
Band-sequential (BSQ) or band-interleaved storage
monochrome and color images
Figure: quadrant decomposition with Q0(k−1), Q1(k−1), Q2(k−1), Q3(k−1) and example code sequences (2021, 1220, 1000, 1010, 2211, 1010, 0001).
x_i = x_0 + \sum_{j=1}^{i} a_{jx}, \qquad y_i = y_0 + \sum_{j=1}^{i} a_{jy}

Coding: (8 parameters)
Variance σ_B²
The variance is the gray values' mean-squared deviation from the mean.
Standard deviation σ_B

\sigma_B^2 = \frac{1}{n_{rc}} \sum_{r=1}^{n_r}\sum_{c=1}^{n_c} \bigl(b(r,c) - \mu_B\bigr)^2
           = \frac{1}{n_{rc}} \sum_{r=1}^{n_r}\sum_{c=1}^{n_c} \bigl(b(r,c)^2 - 2\mu_B\, b(r,c) + \mu_B^2\bigr)
           = \frac{1}{n_{rc}} \Bigl(\sum_{r=1}^{n_r}\sum_{c=1}^{n_c} b(r,c)^2 - 2\mu_B \sum_{r=1}^{n_r}\sum_{c=1}^{n_c} b(r,c) + n_{rc}\,\mu_B^2\Bigr)
           = \frac{1}{n_{rc}} \sum_{r=1}^{n_r}\sum_{c=1}^{n_c} b(r,c)^2 - 2\mu_B^2 + \mu_B^2
           = \frac{1}{n_{rc}} \sum_{r=1}^{n_r}\sum_{c=1}^{n_c} b(r,c)^2 - \mu_B^2
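A minimal NumPy sketch of these formulas, assuming a 2-D array b of gray values; it evaluates both the definition and the rearranged form:

import numpy as np

def mean_and_variance(b):
    """Mean and variance of the gray values of image b (2-D array)."""
    n_rc = b.size                                # n_r * n_c pixels
    mu = b.sum() / n_rc                          # mean gray value
    var_def = ((b - mu) ** 2).sum() / n_rc       # definition: mean squared deviation
    var_alt = (b.astype(float) ** 2).sum() / n_rc - mu ** 2   # rearranged form
    return mu, var_def, var_alt                  # var_def == var_alt (up to rounding)

b = np.array([[1, 2, 3], [4, 5, 6]], dtype=float)
print(mean_and_variance(b))                      # standard deviation = sqrt(variance)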
Absolute histogram P_B(g): \sum_{g_i=g_{min}}^{g_{max}} P_B(g_i) = n_{rc}
Relative histogram: p_B(g) = \frac{1}{n_{rc}} P_B(g), \qquad \sum_{g_i=g_{min}}^{g_{max}} p_B(g_i) = 1, \qquad 0 \le p_B(g_i) \le 1
The histogram of an image gives reliable information about brightness and contrast, although all spatial information is lost.
Graphical representation of histograms: bar charts
Mean: \mu_B = \sum_{g_i=g_{min}}^{g_{max}} g_i\, p_B(g_i)

Variance: \sigma_B^2 = \frac{1}{n_{rc}} \sum_{g_i=g_{min}}^{g_{max}} (g_i - \mu_B)^2\, P_B(g_i) = \sum_{g_i=g_{min}}^{g_{max}} (g_i - \mu_B)^2\, p_B(g_i)

Cumulative histogram: S_B(g) = \sum_{g_i=g_{min}}^{g} P_B(g_i), \qquad S_B(g_{max}) = n_{rc}
Cumulative relative histogram s_B(g): s_B(g) = \frac{1}{n_{rc}} S_B(g), \qquad s_B(g_{max}) = 1
Figure: example relative histogram p_B(g) and cumulative relative histogram s_B(g) over the gray values 0–255.
Entropy:
H = \sum_{g_i=g_{min}}^{g_{max}} p_B(g_i)\, \log_2 \frac{1}{p_B(g_i)} = -\sum_{g_i=g_{min}}^{g_{max}} p_B(g_i)\, \log_2 p_B(g_i)
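A sketch of the relative histogram and the entropy in NumPy (hypothetical helper; gray values are assumed to be integers 0..255):

import numpy as np

def histogram_and_entropy(b, k=256):
    """Relative histogram p_B(g) and entropy H of an integer gray-value image b."""
    P = np.bincount(b.ravel(), minlength=k)      # absolute histogram P_B(g)
    p = P / b.size                               # relative histogram, sums to 1
    nz = p[p > 0]                                # 0 * log2(0) is treated as 0
    H = -np.sum(nz * np.log2(nz))                # entropy in bits per pixel
    return p, H

b = np.random.randint(0, 256, size=(64, 64))     # roughly uniform -> H close to 8 bits
print(histogram_and_entropy(b)[1])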
Example
homogeneous image B = (b(x,y)) = g → H = 0
dual image (p_B(g1) = 0.5; p_B(g2) = 0.5) → H = 1 bit
p_B(g) uniformly distributed → H = log_2 k (maximum)

W_{B,φ}(g1,g2) =
0 0 1 1 2 3
0 0 0 1 2 3
0 0 1 2 3 3
0 1 1 2 3 3
1 2 2 3 3 3
2 2 3 3 3 3
(examples: low contrast / high contrast)

Exercise: Draw the gray value changes of a line and determine the spatial frequency in x direction.
2.4.1 Covariance

\sigma_{B_1 B_2} = \frac{1}{n_{rc}} \sum_{r=1}^{n_r}\sum_{c=1}^{n_c} \bigl(b_1(r,c) - \mu_1\bigr)\bigl(b_2(r,c) - \mu_2\bigr)
                 = \frac{1}{n_{rc}} \sum_{r=1}^{n_r}\sum_{c=1}^{n_c} b_1(r,c)\, b_2(r,c) - \mu_1 \mu_2

Correlation coefficient:

r_{B_1 B_2} = \frac{\sigma_{B_1 B_2}}{\sigma_{B_1}\, \sigma_{B_2}}
            = \frac{\sum_{r}\sum_{c} \bigl(b_1(r,c) - \mu_1\bigr)\bigl(b_2(r,c) - \mu_2\bigr)}{\sqrt{\sum_{r}\sum_{c} \bigl(b_1(r,c) - \mu_1\bigr)^2 \; \sum_{r}\sum_{c} \bigl(b_2(r,c) - \mu_2\bigr)^2}}

not defined for \sigma_{B_1} = 0 or \sigma_{B_2} = 0
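A sketch of the correlation coefficient and of a simple correlation image computed by sliding a template over a test image (plain NumPy, all names made up; a direct, unoptimized loop):

import numpy as np

def correlation_coefficient(b1, b2):
    """Normalized correlation coefficient of two equally sized gray-value windows."""
    b1 = b1.astype(float) - b1.mean()            # subtract the means mu1, mu2
    b2 = b2.astype(float) - b2.mean()
    denom = np.sqrt((b1 ** 2).sum() * (b2 ** 2).sum())
    if denom == 0:                               # sigma_B1 = 0 or sigma_B2 = 0
        return None                              # coefficient not defined
    return (b1 * b2).sum() / denom               # value in [-1, +1]

def correlation_image(test, template):
    """Slide the template over the test image and store r for every position."""
    tr, tc = template.shape
    out = np.zeros((test.shape[0] - tr + 1, test.shape[1] - tc + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            val = correlation_coefficient(test[r:r + tr, c:c + tc], template)
            out[r, c] = 0.0 if val is None else val
    return out                                   # the maximum marks the best match

# usage: corr = correlation_image(test_image, template); the argmax gives the best fit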
Template
Special case: the reference (or template) has only 2 gray values. The computation of the corresponding correlation coefficient can then be simplified.
Not all pixel pairs b1(r,c) (from the sliding window B1 in the test image) and b2(r,c) (from the template B2) are used in the computation of the correlation. Only certain areas that remain after applying a mask to the template, or after zeroing some values of the template image, are investigated. This is called masked correlation.
Example of a masked template B2:
black area: gray value g1
gray area: gray value g2
white area: the corresponding pixels are not used in the computation of the correlation.
Figure (masked correlation): reference (template), test image, correlation image.
2.4.2 Correlation coefficient – example: character recognition
Mask-based correlation
Search with one template for the position in the test image where the template and the character in the test image fit best
Mask-based correlation
Search with different templates for the template that matches the test image best
Image Processing
3 Image transformation
2014 WS
Uwe Stilla
[email protected]
Ludwig Hoegner
[email protected]
Introduction
Digital image characterization
Image transformation
Segmentation
Binary image processing
Vectorization and geometric primitives
Feature extraction
Zooming
Affine transformation
Projective transformation
Procedure (projection of input pixels)
Each pixel of the input image is projected into the output image, where the gray values are interpolated.
Disadvantage
When few pixels are available in the input image, the corresponding pixels in the output image are assigned “NO VALUE” (holes in the output image).
Procedure (projection of output pixels)
Each pixel of the output image is projected back into the input image and assigned the gray value interpolated there.
Advantage
No uncovered pixels
Often used in digital photogrammetry
Figure: bilinear interpolation in the unit square with corner gray values g(0,0), g(1,0), g(0,1), g(1,1) and interpolation position (x″, y″):

b′(x″, y″) = g(0,0)(1−x″)(1−y″) + g(1,0) x″(1−y″) + g(0,1)(1−x″) y″ + g(1,1) x″ y″
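A small sketch of this bilinear interpolation formula (own helper, not from the slides):

def bilinear(g00, g10, g01, g11, x, y):
    """Bilinear interpolation: g(i, j) are the corner gray values of the unit
    square, (x, y) is the interpolation position with 0 <= x, y <= 1."""
    return (g00 * (1 - x) * (1 - y) +            # each corner weighted by the opposite area
            g10 * x * (1 - y) +
            g01 * (1 - x) * y +
            g11 * x * y)

# example: the value in the middle of the square is the mean of the four corners
print(bilinear(10, 20, 30, 40, 0.5, 0.5))        # -> 25.0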
Figure: gray value transfer functions b′ = f(b) over b (0–255).
The images can thus be normalized to a given mean and standard deviation.
b′(x,y) = 0, if [b(x,y) − k2] · k1 < 0
b′(x,y) = [b(x,y) − k2] · k1, otherwise
b′(x,y) = 255, if [b(x,y) − k2] · k1 > 255

with k1 = \frac{255}{b_{max} - b_{min}}, \qquad k2 = b_{min}

Figure: transfer function of the linear stretch (b′ over b, with b_min, b_max); f_max = f(b_max), f_min = f(b_min).
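A sketch of the linear stretch with clipping to [0, 255] (NumPy; k1 and k2 as defined above):

import numpy as np

def linear_stretch(b, b_min, b_max):
    """Linear gray-value stretch b' = (b - k2) * k1, clipped to [0, 255]."""
    k1 = 255.0 / (b_max - b_min)                 # slope
    k2 = b_min                                   # offset
    b_prime = (b.astype(float) - k2) * k1
    return np.clip(b_prime, 0, 255).astype(np.uint8)

b = np.array([[128, 140, 160]], dtype=np.uint8)
print(linear_stretch(b, b_min=128, b_max=160))   # -> [[0, 95, 255]]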
Through, e.g.
Exponential transformation
Logarithmic transformation
Sigmoid transformation
Tangent transformation
Figures: examples of nonlinear transfer functions b′ = f(b) over b (0–255), e.g. with α = 0.9.
Segmentation by a static threshold b_s (gray value image → 2-level image, e.g. images of scanned drawings or text):

b′(x,y) = 255, if b(x,y) ≥ b_s
b′(x,y) = 0, else

Figures: transfer functions for thresholding (e.g. b_s = 128), for a window stretch (b_min = 128, b_max = 160), and for a piecewise transfer function with breakpoints b1, b2, b3, b4.
Histogram linearization or equalization
b′(r,c) = f_n(b(r,c)), with f_n chosen so that s_{B′}(g) ≈ g / g_max
(i.e. the cumulative histogram of the output becomes approximately linear, the output histogram p_{B′}(g) approximately uniform)
Figure: input histogram p_B(g) → transformation with f_n → output histogram p_{B′}(g).
Advantage: simple implementation, since the scaling function can be derived directly from the gray-value distribution (input histogram).
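A sketch of histogram equalization via the cumulative relative histogram s_B(g) (NumPy, 8-bit gray values assumed; the LUT realizes the scaling function f_n):

import numpy as np

def equalize(b):
    """Histogram equalization: map g -> round(255 * s_B(g)) so that the
    cumulative histogram of the output is approximately linear."""
    P = np.bincount(b.ravel(), minlength=256)    # absolute histogram
    s = np.cumsum(P) / b.size                    # cumulative relative histogram s_B(g)
    lut = np.round(255 * s).astype(np.uint8)     # scaling function f_n as a look-up table
    return lut[b]                                # apply the LUT to every pixel

b = np.random.randint(100, 156, size=(32, 32)).astype(np.uint8)  # low-contrast image
print(equalize(b).min(), equalize(b).max())      # spread over (almost) the full range 0..255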
Figures: example histograms and cumulative histograms (gray values 0–255).
LUT | r | g | b
0 | r0 | g0 | b0
1 | r1 | g1 | b1
... | ... | ... | ...
5 | 000 | 101 | 189
... | ... | ... | ...
255 | r255 | g255 | b255
3.2.2.2 Convolution
Input image B with b(r,c)
Filter matrix h(k,l) of size K × L with K, L odd
(filter mask, filter kernel, impulse response)

k \in \Bigl[-\frac{K-1}{2}, \frac{K-1}{2}\Bigr], \qquad l \in \Bigl[-\frac{L-1}{2}, \frac{L-1}{2}\Bigr]

Often the sum of the weights (= elements of the filter matrix) is normalized:

\sum_{k=-\frac{K-1}{2}}^{\frac{K-1}{2}} \; \sum_{l=-\frac{L-1}{2}}^{\frac{L-1}{2}} h(k,l) = 1

Convolution:

b'(r,c) = \sum_{k=-\frac{K-1}{2}}^{\frac{K-1}{2}} \; \sum_{l=-\frac{L-1}{2}}^{\frac{L-1}{2}} b(r-k, c-l)\; h(k,l)

Properties: commutative, associative, distributive, compatible with scalar multiplication
Example: what does this filter do?

h(k,l) = 1/9 [ 1 1 1 ; 1 1 1 ; 1 1 1 ], \qquad b' = b * h

(3 × 3 mean filter)
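A direct (unoptimized) sketch of the discrete convolution, applied to the 3 × 3 mean filter and a step edge similar to the example rows that follow; border pixels are simply omitted:

import numpy as np

def convolve2d(b, h):
    """Discrete 2-D convolution b' = b * h (valid region only, no border handling)."""
    K, L = h.shape                               # odd filter size K x L
    h_flip = h[::-1, ::-1]                       # convolution mirrors the mask
    out = np.zeros((b.shape[0] - K + 1, b.shape[1] - L + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(b[r:r + K, c:c + L] * h_flip)
    return out

h_mean = np.ones((3, 3)) / 9.0                   # 3 x 3 mean (box) filter
b = np.tile([20.0] * 5 + [80.0] * 5, (5, 1))     # step edge from 20 to 80
print(convolve2d(b, h_mean)[0])                  # the edge is smoothed over 3 columns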
3.2.2.3 Effect of mean operators - edge
B_R: 20 20 20 20 20 20 20 20 20 80 80 80 80 80 80 80 80 80 80
H_R3 = 1/3 [1 1 1]
B_R * H_R3: (edge smoothed over 3 pixels)
H_R5 = 1/5 [1 1 1 1 1]
B_R * H_R5: (edge smoothed over 5 pixels)

B_R: 20 20 20 20 26 20 20 20 20 80 80 80 80 50 80 80 50 80 80
H_R = 1/3 [1 1 1]
B_R * H_R: (noise amplitudes reduced)
Gaussian filter:

h_g(k,l) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{(k-k_m)^2 + (l-l_m)^2}{2\sigma^2}}, \quad where k_m, l_m = coordinates of the central pixel

Normalization: h(k,l) = \frac{h_g(k,l)}{\sum_{k,l} h_g(k,l)}

Figure: 1-D Gaussian over k = −3 ... 3.
Figure: 2-D Gaussian filter masks over k, l = −15 ... 15.
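A sketch that builds a normalized Gaussian filter mask from the formula above; mask size and σ are free parameters:

import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized Gaussian filter mask h(k, l) of odd size 'size'."""
    m = size // 2                                # central pixel at (k_m, l_m) = (m, m)
    k, l = np.mgrid[-m:m + 1, -m:m + 1]          # offsets from the centre
    h = np.exp(-(k ** 2 + l ** 2) / (2.0 * sigma ** 2))
    return h / h.sum()                           # weights sum to 1

h = gaussian_kernel(7, sigma=1.0)
print(h.sum(), h[3, 3])                          # 1.0 and the (largest) central weight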
B_x: 20 20 20 20 20 20 20 21 32 50 68 79 80 80 80 80 80 80 80
H_x = [−1 0 1]
B′_x = B_x * H_x
H_PREWITT_x = [ -1 0 1 ; -1 0 1 ; -1 0 1 ], \qquad H_PREWITT_y = [ -1 -1 -1 ; 0 0 0 ; 1 1 1 ]

H_SOBEL_x = [ -1 0 1 ; -2 0 2 ; -1 0 1 ], \qquad H_SOBEL_y = [ -1 -2 -1 ; 0 0 0 ; 1 2 1 ]

b_SOBEL_MAG = \sqrt{b_{SOBEL\_x}^2 + b_{SOBEL\_y}^2}, \qquad b_SOBEL_DIR = \arctan\frac{b_{SOBEL\_y}}{b_{SOBEL\_x}}
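A sketch of the Sobel gradient magnitude and direction; scipy.ndimage is assumed to be available for the convolution, and the border mode is an arbitrary choice:

import numpy as np
from scipy import ndimage                        # assumed available for the convolution

H_SOBEL_X = np.array([[-1, 0, 1],
                      [-2, 0, 2],
                      [-1, 0, 1]], dtype=float)
H_SOBEL_Y = H_SOBEL_X.T                          # transposed mask for the y direction

def sobel_mag_dir(b):
    """Gradient magnitude and direction from the two Sobel masks."""
    bx = ndimage.convolve(b.astype(float), H_SOBEL_X, mode='nearest')
    by = ndimage.convolve(b.astype(float), H_SOBEL_Y, mode='nearest')
    mag = np.sqrt(bx ** 2 + by ** 2)             # b_SOBEL_MAG
    direction = np.arctan2(by, bx)               # b_SOBEL_DIR (arctan2 avoids division by 0)
    return mag, direction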
b′_ROBERTS = max( |b(x+1, y+1) − b(x, y)| , |b(x, y+1) − b(x+1, y)| )
b′_COMPASS = max( b′_N, b′_E, b′_S, b′_W, b′_NE, b′_SE, b′_NW, b′_SW )
The eight compass masks H_NW, H_N, H_NE, H_W, H_E, H_SW, H_S, H_SE (Northwest, North, Northeast, West, East, Southwest, South, Southeast) are 3 × 3 masks with weights ±1 and a center weight of −2, rotated in steps of 45°.
\nabla^2 b(x,y) = \frac{\partial^2 b(x,y)}{\partial x^2} + \frac{\partial^2 b(x,y)}{\partial y^2}

H = H_x + H_y = [ 0 0 0 ; 1 -2 1 ; 0 0 0 ] + [ 0 1 0 ; 0 -2 0 ; 0 1 0 ] = [ 0 1 0 ; 1 -4 1 ; 0 1 0 ]

Often different masks are used, with the signs reversed or with other binomial representations:

H_1 = [ 0 1 0 ; 1 -4 1 ; 0 1 0 ], \quad H_2 = [ 1 1 1 ; 1 -8 1 ; 1 1 1 ], \quad H_3 = [ -1 2 -1 ; 2 -4 2 ; -1 2 -1 ], \quad H_4 = [ 1 2 1 ; 2 -12 2 ; 1 2 1 ]
B_x: 20 20 20 20 20 20 20 21 32 50 68 79 80 80 80 80 80 80 80
H_x = [1 −2 1]
B′_x = B_x * H_x
The values of the resulting image cannot be displayed immediately because of positive and negative values → scaling to the range 0 to 255.
The Laplace operator is very sensitive to noise.
H_1 = [ 0 1 0 ; 1 -4 1 ; 0 1 0 ], \qquad H_2 = [ 1 1 1 ; 1 -8 1 ; 1 1 1 ]
LoG: \nabla^2 G(x,y) = \frac{\partial^2 G(x,y)}{\partial x^2} + \frac{\partial^2 G(x,y)}{\partial y^2}, \qquad G(x,y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2+y^2}{2\sigma^2}}

\frac{\partial G(x,y)}{\partial x} = -\frac{x}{2\pi\sigma^4}\, e^{-\frac{x^2+y^2}{2\sigma^2}}

\frac{\partial^2 G(x,y)}{\partial x^2} = \frac{1}{2\pi\sigma^4} \Bigl(\frac{x^2}{\sigma^2} - 1\Bigr) e^{-\frac{x^2+y^2}{2\sigma^2}}

\frac{\partial^2 G(x,y)}{\partial y^2} = \frac{1}{2\pi\sigma^4} \Bigl(\frac{y^2}{\sigma^2} - 1\Bigr) e^{-\frac{x^2+y^2}{2\sigma^2}}

\nabla^2 G(x,y) = \frac{1}{2\pi\sigma^4} \Bigl(\frac{x^2+y^2}{\sigma^2} - 2\Bigr) e^{-\frac{x^2+y^2}{2\sigma^2}}
Figure: LoG masks for σ = 4 and σ = 5.
Effect (highpass filter): emphasizing edges and details, detecting contours, suppression of homogeneous areas, big differences of neighboring pixel values are emphasized, weighted decrease of low spatial frequencies.
Example
Further decomposition:
[1 1] * [1 1] = [1 2 1]
[1 2 1] * [1 1] = [1 3 3 1]
etc.
Linear increase of the computation effort with n_hx: 2·n_hx multiplications, n_hx − 2 additions
3.2.2.8 Computational effort: binomial filter decomposition (2)

Binomial filter: h(k,l) = \frac{1}{16} [ 1 2 1 ; 2 4 2 ; 1 2 1 ]

Example – computation of the convolution without decomposition:
b′(15,93) = \frac{1}{16}(1·113 + 2·116 + 1·104 + 2·99 + 4·101 + 2·125 + 1·0 + 2·107 + 1·105) ≈ 101.3 ≈ 101

Computation after decomposition (using [1 2 1] = [1 1] * [1 1]):
first pass b_1 = B * [1 1]: 113+116 = 229, 116+104 = 220, 99+101 = 200, 101+125 = 226, ...
second pass b_1 * [1 1]: 229+220 = 449, 200+226 = 426, ...
(the remaining pass in the vertical direction and the division by 16 give the same result, ≈ 101)
Example image region (5 × 5):
98 91 19 18 19
92 96 57 16 10
88 69 31 22 11
42 39 36 13 15
35 33 32 17 12

Rank sequence of the central 3 × 3 window: {13, 16, 22, 31, 36, 39, 57, 69, 96}
lowest rank → erosion: 13, middle rank → median: 36, highest rank → dilatation: 96
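A sketch of the three rank operators applied to the 3 × 3 window of the example above:

import numpy as np

def rank_filter_window(window):
    """Erosion, median and dilatation of a gray-value window as rank operators."""
    ranks = np.sort(window.ravel())              # rank sequence of the window
    return ranks[0], ranks[len(ranks) // 2], ranks[-1]   # lowest, middle, highest rank

w = np.array([[96, 57, 16],
              [69, 31, 22],
              [39, 36, 13]])                     # central 3 x 3 window of the example
print(rank_filter_window(w))                     # -> (13, 36, 96)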
Addition (averaging of n images for noise reduction):

\bar{b}(r,c) = \frac{1}{n} \sum_{k=1}^{n} b_k(r,c)

Subtraction:
Capturing object and background (b1) and background alone (b2), then subtracting
Movement detection by subtraction of two images captured at different times
B = B1 − B2 + 128
Color composite and color transformation (e.g. gray value image, NDVI image B_NDVI; input images B1, B2)
Problem: Aliasing
Remarks
Because of its favorable frequency properties, the Gaussian filter is often used (Gaussian pyramid).
Instead of discrete levels, a continuous scale factor can be used → scale-space representation.
The total memory needed to store an image pyramid generated with a sub-sampling factor of 2 is only about 1/3 larger than the memory required for the original image.
Image Processing
4 Segmentation
2014 WS
Uwe Stilla
[email protected]
Ludwig Hoegner
[email protected]
Introduction
Digital image characterization
Image transformation
Segmentation
Binary image processing
Vectorization and geometric primitives
Feature extraction
4 Segmentation (1)
Segmentation
Definition (Haralick und Shapiro, 1992):
”An image segmentation is the partition of an image into a set of
nonoverlapping regions whose union is the entire image. The purpose of
segmentation is to decompose the image into parts that are meaningful
with respect to a particular application“
In image analysis, certain objects or relevant areas often have to be described quantitatively. For this purpose the objects have to be extracted from the whole image. Formally speaking: the pixels of region R1 have to be grouped using a uniformity condition p and distinguished from region R2.
Goal of segmentation:
Splitting of images into meaningful image parts
(R1: object, R2: background, p: uniformity condition)
4 Segmentation (3)
Approaches:
Point-based approach: Search for all points or point sets that satisfy criterion p.
Check each pixel whether it satisfies a specified (homogeneity) criterion and assign the pixel to the corresponding region.
For this check, an attribute of the pixel is needed. Often the gray value or color of the pixel is used, but others are possible too, such as the local neighborhood or global mathematical models.
Region-based approach: Determine the crossover, i.e. the border of the area of validity of p.
Segmentation operations can combine several criteria to check the pixel attributes (e.g. gradient strength and gradient direction). Depending on the rules for combining the criteria, pixels can be assigned to different regions. Resulting overlapping regions should be separated afterwards.
In this chapter mainly gray values are used as examples. The methods can be used for other features without limitation.
Figures: original image; smoothed image; segmentation with global threshold; segmentation with dynamic threshold.
Figure (threshold with hysteresis): reliable foreground g ≥ s1 = 220 (black); unreliable gray values s1 ≥ g ≥ s2 (black); foreground with assigned unreliable gray values.
Remark: Only region growing algorithms are introduced here. There are many other, more complex, methods of region-based segmentation.
Figure (region growing): examined region, current region, segmented region (black).
Initialisation – maximum difference Δg_S,(r,c)max = 18; segmented region (black).
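A minimal region-growing sketch (all names made up); as one possible uniformity criterion it uses the gray-value difference to the seed pixel, with a 4-neighborhood:

import numpy as np
from collections import deque

def region_growing(b, seed, delta_g_max):
    """Grow a region from 'seed' over 4-neighbors whose gray value differs
    from the seed value by at most delta_g_max."""
    region = np.zeros(b.shape, dtype=bool)
    g_seed = float(b[seed])
    queue = deque([seed])
    region[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):    # 4-neighborhood
            rr, cc = r + dr, c + dc
            if (0 <= rr < b.shape[0] and 0 <= cc < b.shape[1]
                    and not region[rr, cc]
                    and abs(float(b[rr, cc]) - g_seed) <= delta_g_max):
                region[rr, cc] = True
                queue.append((rr, cc))
    return region                                # boolean mask of the segmented region

b = np.array([[10, 12, 50],
              [11, 13, 52],
              [49, 51, 53]])
print(region_growing(b, seed=(0, 0), delta_g_max=18).astype(int))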
Local approaches
Determination of local function properties:
Example with a local maximum, but transferable to other properties like minimum, inflection point, etc.
Global approaches
Determination of global function properties
Example with the Hough transformation
Main application:
Postprocessing of edge-filtered images
Edge strength and gradient direction are given per pixel
Test whether the investigated pixel x0 is a local maximum in gradient direction (where necessary, interpolation of neighboring gray values for x−1 and x1, see graphic).
Pixels that are not maxima are suppressed.
Thinning of blurred edges = Non-Maximum Suppression
y = g(x)
Parabola: y = a + bx + cx², y′ = b + 2cx, y″ = 2c

Fitting through the three points x = −1, 0, 1 with values y_{−1}, y_0, y_1:

y_0 = a + b·x_0 + c·x_0² = a  →  a = y_0
y_1 = a + b·x_1 + c·x_1² = a + b + c
y_{−1} = a + b·x_{−1} + c·x_{−1}² = a − b + c

→  b = ½ (y_1 − y_{−1}) = y_0′,  c = ½ (y_{−1} − 2y_0 + y_1) = ½ y_0″
y′ = b + 2c·x_S = 0  →  x_S = −\frac{b}{2c} = −\frac{y_0′}{y_0″}
x_S is a local maximum if y″ = 2c < 0 and x_S lies within the borders of the investigated pixel.
Applications: as before, but with higher accuracy
Camera calibration, object measurements in industrial image processing, extraction of roads from airborne images.
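A sketch of the sub-pixel extremum position computed from three neighboring values y_−1, y_0, y_1 with the parabola coefficients derived above:

def subpixel_maximum(y_m1, y0, y1):
    """Sub-pixel extremum position x_S from a parabola through
    (-1, y_m1), (0, y0), (1, y1); returns None if there is no valid maximum."""
    b = 0.5 * (y1 - y_m1)                        # first derivative y0'
    c = 0.5 * (y_m1 - 2.0 * y0 + y1)             # half of the second derivative y0''
    if c >= 0:                                   # y'' = 2c must be negative for a maximum
        return None
    x_s = -b / (2.0 * c)                         # x_S = -y0' / y0''
    return x_s if abs(x_s) <= 0.5 else None      # within the borders of the investigated pixel

print(subpixel_maximum(10.0, 20.0, 16.0))        # maximum slightly right of the centre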
original
4.3.1.2 2D case
Features from image functions
Test whether the investigated pixel is a local maximum (minimum) within a predefined neighborhood (2D Non-Maximum Suppression):
For the segmentation of linear structures: the gray value must describe a local maximum in horizontal and/or vertical direction
Test of 4 pixels
For the segmentation of local maxima: the gray value must be a local maximum within a 3 × 3 or bigger neighborhood
Test of 8 pixels
Applications: Non-Maximum Suppression when no information about the direction is available
4-neighborhood, 8-neighborhood
Example:
valley lines and ridge lines from a DTM with differential geometry, e.g. watersheds of glaciers and creeks.
Hough transformation:
Parameter description of a line with
d = r · sin(α) + c · cos(α)
Determination of α and d per pixel:
α from edge detection,
d from the line parameterisation
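A sketch of the line Hough transformation with the parameterisation d = r·sin(α) + c·cos(α); the accumulator resolution (1° steps, 1-pixel d bins) is an arbitrary choice:

import numpy as np

def hough_lines(edge_mask, n_alpha=180):
    """Accumulate votes in (alpha, d) space for every edge pixel (r, c)."""
    rows, cols = np.nonzero(edge_mask)           # edge pixels from a previous edge detection
    alphas = np.deg2rad(np.arange(n_alpha))      # 0 .. 179 degrees
    d_max = int(np.ceil(np.hypot(*edge_mask.shape)))
    acc = np.zeros((n_alpha, 2 * d_max + 1), dtype=int)
    for r, c in zip(rows, cols):
        d = r * np.sin(alphas) + c * np.cos(alphas)          # d = r sin(a) + c cos(a)
        acc[np.arange(n_alpha), np.round(d).astype(int) + d_max] += 1
    return acc                                   # maxima in acc correspond to lines

edges = np.eye(16, dtype=bool)                   # a perfect diagonal line
acc = hough_lines(edges)
print(np.unravel_index(acc.argmax(), acc.shape)) # strongest (alpha, d) cell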
Disadvantage
For functions with many parameters the maxima search can be very complex (ellipse: 5 parameters → 5-D histogram).
Image Processing
5 Binary image processing
2014 WS
Uwe Stilla
[email protected]
Ludwig Hoegner
[email protected]
Introduction
Digital image characterization
Image transformation
Segmentation
Binary image processing
Vectorization and geometric primitives
Feature extraction
Example (figure, binary images B1 and B2): e8(B1) = , e4(B2) = , e8(B2) =
Erosion (3 × 3, logical AND of the neighborhood):
b_b′(x,y) = b_b(x−1,y−1) ∧ b_b(x−1,y) ∧ b_b(x−1,y+1) ∧ b_b(x,y−1) ∧ b_b(x,y) ∧ b_b(x,y+1) ∧ b_b(x+1,y−1) ∧ b_b(x+1,y) ∧ b_b(x+1,y+1)
Dilatation (3 × 3, logical OR of the neighborhood)
Bright objects (g=1) on dark background (g=0) will grow;
for binary images b_b′(x,y), small dark objects (g=0) disappear, e.g. with a 3 × 3 neighborhood:
b_b′(x,y) = b_b(x−1,y−1) ∨ b_b(x−1,y) ∨ b_b(x−1,y+1) ∨ b_b(x,y−1) ∨ b_b(x,y) ∨ b_b(x,y+1) ∨ b_b(x+1,y−1) ∨ b_b(x+1,y) ∨ b_b(x+1,y+1)
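A sketch of binary erosion and dilatation with a 3 × 3 structuring element in NumPy; border pixels are left at 0 for brevity:

import numpy as np

def erode3x3(bb):
    """Binary erosion: output pixel is 1 only if all 9 pixels of its 3x3
    neighborhood are 1 (logical AND)."""
    out = np.zeros_like(bb)
    core = np.ones_like(bb[1:-1, 1:-1])
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            core = core & bb[1 + dr:bb.shape[0] - 1 + dr, 1 + dc:bb.shape[1] - 1 + dc]
    out[1:-1, 1:-1] = core
    return out

def dilate3x3(bb):
    """Binary dilatation: output pixel is 1 if at least one of the 9 pixels
    of its 3x3 neighborhood is 1 (logical OR)."""
    out = np.zeros_like(bb)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out[1:-1, 1:-1] |= bb[1 + dr:bb.shape[0] - 1 + dr, 1 + dc:bb.shape[1] - 1 + dc]
    return out

bb = np.zeros((7, 7), dtype=np.uint8)
bb[2:5, 2:5] = 1                                  # a 3 x 3 bright object
print(erode3x3(bb).sum(), dilate3x3(bb).sum())    # the object shrinks to 1, grows to 25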
Figure: a) section of image B with structuring element M, b) erosion B ⊖ M, c) dilatation B ⊕ M, d) median.
Duality (for a symmetric structuring element M):
complement(B ⊖ M) = complement(B) ⊕ M
complement(B ⊕ M) = complement(B) ⊖ M
Erosion Dilatation
BM BM
Opening Closing
BM BM
B M = (B M) M B M = (B M) M
B B
BM BM
(BM)M (BM)M
B = B / B M
=BBM
M =BBM
Erosion: B ⊖ M; contour: ∂B = B / (B ⊖ M)
M, B ⊖ nM
disadvantage:
(B ⊖ nM) ⊕ M
The rasterized image is removed in layers around the object until only the topologically connected line remains (= centerline).
Iterative execution of the erosion with different structuring elements.
Figure: a) original image, b) segmentation with histogram threshold, c) closing and opening.
Figure (a b c d e f): The segmented image (b) is processed with the thinning algorithms of Hilditch (c), Tsuruoka (d), Deutsch (e) and Tamura (f).
The results show the different influence of disturbances of the contour on the algorithms and the similar influence of inclusions.
One possible solution to reduce the disturbances of the segmented image are morphological filling operations like dilatation and erosion.
Figure: segmentation with simple threshold s0; erosion and dilatation; centerline.
654333333456 333333333333
543222222345 322222222223
432111111234 1 321111111123 111
321 ** ** ** 123 1*1 321 ** ** ** 123 1 *1
432111111234 1 321111111123 111
543222222345 322222222223
654333333456 333333333333
Chamfer (3,4) distance: 4/3 ≈ √2,
i.e. the Euclidean distance along the diagonal is approximated.
2 11 10 9 9 9 9 9 9 10 11 12
11 8 7 6 6 6 6 6 6 7 8 11
10 7 4 3 3 3 3 3 3 4 7 10 434
9 6 3 * * * * * * 3 6 3 3*3
10 7 4 3 3 3 3 3 3 4 7 10 434
11 8 7 6 6 6 6 6 6 7 8 11
12 11 10 9 9 9 9 9 9 10 11 12
Image Processing
6 Vectorization and geometric primitives
2014 WS
Uwe Stilla
[email protected]
Ludwig Hoegner
[email protected]
Introduction
Digital image characterization
Image transformation
Segmentation
Binary image processing
Vectorization and geometric primitives
Feature extraction
Output:
Vector data like polygons, straight lines, circles and ellipses
→ so-called geometric “primitives”
Goal:
Data reduction with focus on relevant features
Extraction of more significant structural and topological information, such as start and end points of a polygon, neighborhood relations of regions, etc.
Contour 1 (r,c): (0,0), (1,1), (1,2), (2,3), (3,3)
Contour 2 (r,c): (3,3), (4,4), (5,5)
Contour 3 (r,c): (3,3), (3,2), (4,1), (5,1)
Figure: skeleton as binary image → vectorization: contours → approximation with polygons
Results
Contours with end nodes, inner nodes and cross nodes
Approach
Linking
Simple Linking
Direction dependent Linking
Chain-Code (Freeman-Code)
Approach:
Determination of start points: search the image for end points and cross points of the skeleton (e.g. with masks)
Linking: follow the skeleton from the start point until the next start point is reached
Stop when all start points are processed
Advantage:
Explicit description of the skeleton, no loss of information
Disadvantage:
Needs much memory
Approach:
Determination of start points: like simple linking
Linking: check whether the difference of the directions of neighboring pixels is smaller than a given threshold.
smaller (or equal): linking
greater: end of the contour and definition of a new start point (e.g. in the gap between the two pixels), continuation with a new contour
Stop when all start points are processed
Advantage:
Explicit description of the skeleton, no loss of information
Better adaptation to the image information
Disadvantage:
Needs much memory
Possible fragmentation at contours with uncertain definition of the direction, e.g. in the direct neighborhood of cross points
Disadvantage:
Less explicit
Time consuming, because the original coordinates have to be reconstructed for many algorithms and applications (e.g. displaying a contour).
Result
Always closed contours
Problem
Surrounding polygons of neighboring regions do not necessarily touch (see regions A and B) → changes of the topology
Solution
Store the coordinates of the outer pixel edges instead of the pixel coordinates
→ more memory space, but normally the preservation of the topology is more important.
Figure: regions A and B
Data structure for contours:
Contour 1: coordinates (r/c): .. , .. , ..; neighboring regions: A
Contour 2: coordinates (r/c): .. , .. , ..; neighboring regions: A, B
Contour 3: ...
Data structure for regions:
Region A: coordinates (r/c): 0/2, 0/3, 1/1, 1/2, ..; contours: 1, 2, 3
Region B: coordinates (r/c): .. , .. , ..; contours: 5, 4, 2
Figure: contours 1–5 around regions A and B
Split contours at cross points; eliminate redundant segments (contour „2“)
Contour approximation
With polygons
With line segments
With curve and ellipse segments
Combined
Figure: contour points p_b ... p_e and approximating segments s.
Advantages
Fast and efficient
The resulting polygon points are a subset of the contour points
→ no shift of the polygon
Disadvantages:
The enclosed surface area generally gets smaller for closed contours
Polygon points are not always at the corners of a contour (= points of maximum curvature)
Ramer algorithm
Sometimes oversegmentation
Sequential split algorithm
Approximation of the contour sequentially with line segments
Conditions
Minimal length n_S of the line segments n_kl of the contour
Stop when the quality criterion d_appr is reached
Figure: sequential splitting of a contour into line segments (contour points p_b ... p_e, split indices k = 1, 17, 33 and l = 33, 47).
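A sketch of the Ramer idea described above (recursive splitting at the contour point farthest from the chord until the quality criterion d_appr is met); plain Python, contour given as a list of (r, c) points:

import math

def ramer(points, d_appr):
    """Approximate a contour by a polygon: keep splitting at the point with
    the largest perpendicular distance to the chord while it exceeds d_appr."""
    if len(points) <= 2:
        return list(points)
    (r1, c1), (r2, c2) = points[0], points[-1]
    chord = math.hypot(r2 - r1, c2 - c1)
    def dist(p):                                 # perpendicular distance to the chord
        r, c = p
        if chord == 0:
            return math.hypot(r - r1, c - c1)
        return abs((r2 - r1) * (c1 - c) - (r1 - r) * (c2 - c1)) / chord
    i_max = max(range(1, len(points) - 1), key=lambda i: dist(points[i]))
    if dist(points[i_max]) <= d_appr:
        return [points[0], points[-1]]           # the chord is a good enough approximation
    left = ramer(points[:i_max + 1], d_appr)     # split at the farthest point
    right = ramer(points[i_max:], d_appr)
    return left[:-1] + right                     # the farthest point appears only once

contour = [(0, 0), (1, 1), (2, 2), (3, 2), (4, 2), (5, 2)]
print(ramer(contour, d_appr=0.5))                # -> [(0, 0), (2, 2), (5, 2)]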
Adjustment:
Observations: contour points x_i, y_i
Unknowns: parameters of the line a, b, c
Condition: a² + b² = 1
Approximate values: deduced from the contour points
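A sketch of this adjustment as orthogonal regression: with the constraint a² + b² = 1, the normal vector (a, b) is the eigenvector of the smallest eigenvalue of the point covariance matrix (NumPy; this is one common way to solve it, not necessarily the one used in the lecture):

import numpy as np

def fit_line(x, y):
    """Fit a*x + b*y + c = 0 with a^2 + b^2 = 1 (orthogonal least squares).
    Observations: contour points (x_i, y_i); unknowns: a, b, c."""
    xm, ym = x.mean(), y.mean()
    cov = np.cov(np.vstack([x - xm, y - ym]))    # 2 x 2 covariance of the points
    eigval, eigvec = np.linalg.eigh(cov)         # eigenvalues in ascending order
    a, b = eigvec[:, 0]                          # normal vector = smallest eigenvalue
    c = -(a * xm + b * ym)                       # the line passes through the centroid
    return a, b, c

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 0.5 * x + 1.0                                # exact line y = 0.5 x + 1
a, b, c = fit_line(x, y)
print(a * x + b * y + c)                         # residuals are (close to) zero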
Determination of start and end points of circle and ellipse segments as for lines.
Image Processing
7 Feature extraction
2014 WS
Uwe Stilla
[email protected]
Ludwig Hoegner
[email protected]
Image Processing
Introduction
Digital image characterization
Image transformation
Segmentation
Binary image processing
Vectorization and geometric primitives
Feature extraction
b f h
b_{rr} = \frac{\partial^2 b}{\partial r^2}, \qquad b_{cc} = \frac{\partial^2 b}{\partial c^2}, \qquad b_{rc} = \frac{\partial^2 b}{\partial r\, \partial c}

\alpha(r,c) = \frac{1}{2} \arctan\frac{b_{rc}(r,c)}{t(r,c)} \quad for \; b_{cc}(r,c) \ge b_{rr}(r,c)

\alpha(r,c) = \frac{1}{2} \arctan\frac{b_{rc}(r,c)}{t(r,c)} + \frac{\pi}{2} \quad for \; b_{cc}(r,c) < b_{rr}(r,c)

with \; t(r,c) = \frac{1}{2}\bigl(b_{cc}(r,c) - b_{rr}(r,c)\bigr)
Detection of line points by fitting a 1-D parabola in the direction α and determining the subpixel-accurate position (see chapter 4.3.1.1). The curvature (2nd derivative) is a measure of the salience of the line at this position.