eCognition Developer
Reference Book

Trimble Documentation
11 June 2014
Typeset by Wikipublisher
Contents
1 Introduction
  1.1 Symbols and Expressions
    1.1.1 Basic Mathematical Notations
    1.1.2 Image Layer and Scene
    1.1.3 Region
    1.1.4 Pixel Set
    1.1.5 Image Object
    1.1.6 Image Objects Hierarchy
    1.1.7 Class-Related Set
  1.2 Coordinate Systems Used in eCognition Software
    1.2.1 Pixel Coordinate System
    1.2.2 User Coordinate System
  1.3 Image Layer Related Features
    1.3.1 Scene
    1.3.2 Maps
    1.3.3 Region
    1.3.4 Image Layer
    1.3.5 Image Layer Intensity on Pixel Sets
  1.4 Image Object Related Features
    1.4.1 Image Object
    1.4.2 Image Object Hierarchy
  1.5 Class-Related Features
    1.5.1 Class-Related Sets
  1.6 Shape-Related Features
    1.6.1 Parameters
    1.6.2 Expression
    1.6.3 Shape Approximations Based on Eigenvalues
    1.6.4 Elliptic Approximation
  1.7 About Algorithms
    1.7.1 Creating a Process
    1.7.2 Specifying Algorithm Parameters
3 Segmentation Algorithms
  3.1 Chessboard Segmentation
    3.1.1 Supported Domains
    3.1.2 Algorithm Parameters
  3.2 Quadtree-Based Segmentation
    3.2.1 Supported Domains
    3.2.2 Algorithm Parameters
    3.2.3 Thematic Layer Weights
  3.3 Contrast Split Segmentation
    3.3.1 Supported Domains
    3.3.2 Settings
    3.3.3 Advanced Settings
  3.4 Multiresolution Segmentation
    3.4.1 Supported Domains
    3.4.2 Level Settings
    3.4.3 Segmentation Settings
    3.4.4 Composition of Homogeneity Criterion
  3.5 Spectral Difference Segmentation
    3.5.1 Supported Domains
    3.5.2 Level Settings
    3.5.3 Segmentation Settings
  3.6 Multi-Threshold Segmentation
    3.6.1 Supported Domains
    3.6.2 Level Settings
  3.7 Contrast Filter Segmentation
    3.7.1 Supported Domains
    3.7.2 Chessboard Settings
    3.7.3 Input Parameters
    3.7.4 Shape Criteria Settings
    3.7.5 Classification Parameters
6 Template Matching
  6.1 Overview Template Editor
  6.2 Select Samples
  6.3 Generate Templates
  6.4 Test Template
    6.4.1 Subset Selection
    6.4.2 Template Selection
    6.4.3 Test
    6.4.4 Test Parameters
    6.4.5 Review Targets
    6.4.6 Template Quality
  6.5 Negative Templates
    8.6.2 Algorithm Parameters
    14.4.1 Supported Domains
    14.4.2 Algorithm Parameters
    14.4.3 Kernel Parameters
    14.4.4 Layers Parameters
  14.5 Layer Normalization
    14.5.1 Supported Domains
    14.5.2 Algorithm Parameters
    14.5.3 Layers Parameters
  14.6 Median Filter
    14.6.1 Supported Domains
    14.6.2 Kernel Parameters
    14.6.3 Layers Parameters
  14.7 Sobel Operation Filter
    14.7.1 Supported Domains
    14.7.2 Kernel Parameters
    14.7.3 Layers Parameters
  14.8 Pixel Freq. Filter
    14.8.1 Supported Domains
    14.8.2 Kernel Parameters
    14.8.3 Layers Parameters
  14.9 Pixel Min/Max Filter (Prototype)
    14.9.1 Supported Domains
    14.9.2 Algorithm Parameters
    14.9.3 Kernel Parameters
    14.9.4 Layers Parameters
  14.10 Edge Extraction Lee Sigma
    14.10.1 Supported Domains
    14.10.2 Algorithm Parameters
  14.11 Edge Extraction Canny
    14.11.1 Supported Domains
    14.11.2 Algorithm Parameters
  14.12 Edge 3D Filter
    14.12.1 Supported Domains
    14.12.2 Algorithm Parameters
    14.12.3 Kernel Parameters
    14.12.4 Layer Parameters
  14.13 Surface Calculation
    14.13.1 Supported Domains
    14.13.2 Algorithm Parameters
  14.14 Layer Arithmetics
    14.14.1 Supported Domains
    14.14.2 Algorithm Parameters
  14.15 Line Extraction
    14.15.1 Supported Domains
    14.15.2 Algorithm Parameters
  14.16 Abs. Mean Deviation Filter (Prototype)
    14.16.1 Supported Domains
    14.16.2 Algorithm Parameters
    14.16.3 Kernel Parameters
    14.16.4 Layers Parameters
  14.17 Contrast Filter (Prototype)
    18.15.2 Algorithm Parameters
  18.16 Change Visible Layers
    18.16.1 Supported Domains
    18.16.2 Algorithm Parameters
  18.17 Change Visible Map
    18.17.1 Supported Domains
    18.17.2 Algorithm Parameters
  18.18 Show Slide
    18.18.1 Algorithm Parameters
  18.19 Ask Question
    18.19.1 Supported Domains
    18.19.2 Algorithm Parameters
  18.20 Set Project State
    18.20.1 Supported Domains
    18.20.2 Algorithm Parameters
  18.21 Save/Restore Project State
    18.21.1 Supported Domains
    18.21.2 Algorithm Parameters
  18.22 Show HTML Help
    18.22.1 Supported Domains
    18.22.2 Algorithm Parameters
  18.23 Configure Image Equalization
    18.23.1 Supported Domains
    18.23.2 Algorithm Parameters
  18.24 Create/Update Class
    18.24.1 Supported Domains
    18.24.2 Algorithm Parameters
21 Export Algorithms
  21.1 Export Classification View
    21.1.1 Supported Domains
    21.1.2 Algorithm Parameters
  21.2 Export Current View
    21.2.1 Supported Domains
    21.2.2 Algorithm Parameters
    21.2.3 Slices Parameters
    21.2.4 Frames Parameters
  21.3 Export Thematic Raster Files
    21.3.1 Supported Domains
    21.3.2 Algorithm Parameters
  21.4 Export Existing Vector Layer
    21.4.1 Supported Domains
    21.4.2 Algorithm Parameters
  21.5 Export Domain Statistics
    21.5.1 Supported Domains
    21.5.2 Algorithm Parameters
    21.5.3 Statistical Operations
  21.6 Export Project Statistics
    21.6.1 Supported Domains
    21.6.2 Algorithm Parameters
  21.7 Export Object Statistics
    21.7.1 Supported Domains
    21.7.2 Algorithm Parameters
    21.7.3 Report Parameters
  21.8 Export Object Statistics for Report
23 About Features
  23.1 About Features as a Source of Information
    23.1.1 Conversions of Feature Values
  23.2 Object Features

24 Vector Features
  24.1 Vector object attribute
    24.1.1 Editable Parameters

25 Object Features: Customized
  25.1 Create Customized Features
  25.2 Arithmetic Customized Features
  25.3 Relational Customized Features
    25.3.1 Relations Between Surrounding Objects
    25.3.2 Relational Functions
  25.4 Finding Customized Features
  27.1 Mean
    27.1.1 Brightness
    27.1.2 Layer 1/2/3
    27.1.3 Max. Diff.
  27.2 Quantile
    27.2.1 Editable Parameters
    27.2.2 Parameters
    27.2.3 Feature Value Range
  27.3 Mode
    27.3.1 Editable Parameters
  27.4 Standard Deviation
    27.4.1 Layer 1/2/3
  27.5 Skewness
    27.5.1 Layer Values
  27.6 Pixel Based
    27.6.1 Ratio
    27.6.2 Min. Pixel Value
    27.6.3 Max. Pixel Value
    27.6.4 Mean of Inner Border
    27.6.5 Mean of Outer Border
    27.6.6 Border Contrast
    27.6.7 Contrast to Neighbor Pixels
    27.6.8 Edge Contrast of Neighbor Pixels
    27.6.9 StdDev. to Neighbor Pixels
    27.6.10 Circular Mean
    27.6.11 Circular StdDev
    27.6.12 Circular Std Dev/Mean
  27.7 To Neighbors
    27.7.1 Mean Diff. to Neighbors
    27.7.2 Mean Diff. to Neighbors (Abs)
    27.7.3 Mean Diff. to Darker Neighbors
    27.7.4 Mean Diff. to Brighter Neighbors
    27.7.5 Number of Brighter Objects
    27.7.6 Number of Darker Objects
    27.7.7 Rel. Border to Brighter Neighbors
  27.8 To Superobject
    27.8.1 Mean Diff. to Superobject
    27.8.2 Ratio to Superobject
    27.8.3 Std. Dev. Diff. to Superobject
    27.8.4 Std. Dev. Ratio to Superobject
  27.9 To Scene
    27.9.1 Mean Diff. to Scene
    27.9.2 Ratio To Scene
  27.10 Hue, Saturation, Intensity
    27.10.1 HSI Transformation
  32.5 Number of Sub-Objects
    32.5.1 Parameters
    32.5.2 Expression
    32.5.3 Feature Value Range
  32.6 Number of Sublevels
    32.6.1 Parameters
    32.6.2 Expression
    32.6.3 Feature Value Range
35 Class-Related Features
  35.1 Relations to Neighbor Objects
    35.1.1 Existence of
    35.1.2 Number Of
    35.1.3 Border To
    35.1.4 Rel. Border to
    35.1.5 Rel. Area of
    35.1.6 Distance to
    35.1.7 Mean Diff. to
    35.1.8 Overlap of two Objects
  35.2 Relations to Sub-Objects
    35.2.1 Existence Of
    35.2.2 Number of
    35.2.3 Area of
    35.2.4 Rel. Area of
    35.2.5 Clark Aggregation Index
  35.3 Relations to Superobjects
    35.3.1 Existence of
  35.4 Relations to Classification
    35.4.1 Membership to
    35.4.2 Classified as
    35.4.3 Classification Value of
    35.4.4 Class Name
    35.4.5 Class Color
    35.4.6 Assigned Class
37 Scene Features
  37.1 Scene Variables
    37.1.1 Editable Parameters
  37.2 Class-Related
    37.2.1 Number of Classified Objects
    37.2.2 Number of Samples Per Class
    37.2.3 Area of Classified Objects
    37.2.4 Layer Mean of Classified Objects
    37.2.5 Layer Std. Dev. of Classified Objects
    37.2.6 Statistic of Object Value
    37.2.7 Class Variables
  37.3 Scene-Related
    37.3.1 Existence of Object Level
    37.3.2 Existence of Image Layer
    37.3.3 Existence of Thematic Layer
    37.3.4 Existence of Map
    37.3.5 Mean of Scene
    37.3.6 Standard deviation
    37.3.7 Smallest Actual Pixel Value
    37.3.8 Largest Actual Pixel Value
39 Region Features
  39.1 Region-Related
    39.1.1 Number of Pixels in Region
    39.1.2 T Extent
    39.1.3 T Origin
    39.1.4 X Extent
    39.1.5 X Origin
    39.1.6 Y Extent
    39.1.7 Y Origin
    39.1.8 Z Extent
    39.1.9 Z Origin
  39.2 Layer-Related
    39.2.1 Mean
    39.2.2 Standard Deviation
  39.3 Class-Related
41 Metadata
  41.1 Active Slice Metadata item
    41.1.1 Editable Parameters
  41.2 Image Layer Metadata item
    41.2.1 Editable Parameters
  41.3 Metadata Item
    41.3.1 Editable Parameters
42 Feature Variables
  42.1 Feature Variable
    42.1.1 Editable Parameters

43 Widget Parameters for Architect Action Libraries
  43.1 Add Checkbox
  43.2 Add Drop-down List
  43.3 Add Button
  43.4 Add Radio Button Row
  43.5 Add Toolbar
  43.6 Add Editbox
  43.7 Add Editbox With Slider
  43.8 Add Select Class
  43.9 Add Select Feature
  43.10 Add Select Multiple Features
  43.11 Add Select File
  43.12 Add Select Level
  43.13 Add Select Image Layer
  43.14 Add Select Thematic Layer
  43.15 Add Select Folder
  43.16 Add Slider
  43.17 Add Edit Layer Names
  43.18 Add Layer Drop-down List
  43.19 Add Manual Classification Buttons
  43.20 Add Select Array Items
44 General Reference
  44.1 Use Variables as Features
  44.2 About Metadata as a Source of Information
    44.2.1 Convert Metadata and Add it to the Feature Tree
  44.3 General Reference
    44.3.1 Rendering a Displayed Image
Acknowledgments
  The Visualization Toolkit (VTK) Copyright
  ITK Copyright
  python/tests/test_doctests.py
  src/Verson.rc
  src/gt_wkt_srs.cpp
1 Introduction
eCognition Documentation is divided into two different books: the User Guide and the
Reference Book.
This Reference Book is a list of algorithms and features in which you can look up how
they are calculated, in case you need detailed background information and explanations
of supported parameters and domains. If you want to understand more of the calculations
and what is behind them, for example why Multiresolution Segmentation leads to good
results or how distance is calculated in eCognition, then the Reference Book is the right
choice to work through.
The User Guide is a good starting point for getting familiar with workflows in eCognition
software. It introduces the world of object-based image analysis, where classification
and feature extraction offer new possibilities and lead an image analyst to new horizons.
The User Guide is meant as a manual for starting to use eCognition software, realizing
simple workflows as well as advanced classification concepts. Work through its chapters
to improve your image analysis knowledge and to get an idea of a new language of image
analysis, one that uses not only object-based but also combined object- and pixel-based
approaches.
For additional help, please also refer to our:

- User community, including e.g. guided tour examples based on data provided, the
  latest webinars and the possibility of rule set exchange
  (http://www.ecognition.com/community)
- Training team, offering open trainings as well as in-company and customized
  trainings (http://www.ecognition.com/learn/trainings)
- Consultancy unit, helping you to solve your image analysis challenges, e.g. in
  feasibility studies, or delivering the complete solution development for your
  projects (http://www.ecognition.com/products/consulting-services)
- Support team (http://www.ecognition.com/support)

Additional geospatial products and services provided by Trimble can be found at
http://www.trimble.com/imaging/. Installation and licensing guides can be found in
the installation directory, in the document InstallationGuide.pdf.
This chapter will now introduce some symbols, expressions and basics needed to understand the calculations described in the following chapters of this book.
1.1 Symbols and Expressions

1.1.1 Basic Mathematical Notations
∴              Therefore
∅              Empty set
a ∈ A          a is an element of a set A
b ∉ B          b is not an element of a set B
A ∪ B          Union of the sets A and B
A ∩ B          Intersection of the sets A and B
A \ B          Set A without the elements of set B
#A             Number of elements in the set A
⇒              It follows
∀              For all
⇔              Equivalent
∪_{i=1..n}     Union over an index i = 1, …, n
[a, b]         Interval with {x | a ≤ x ≤ b}
1.1.2 Image Layer and Scene

k = 1, …, K        Image layer k
t = 1, …, T        Thematic layer t
(x, y, z, t)       Coordinates of a pixel/voxel
c_k^max            Brightest possible intensity value of image layer k
c_k^min            Darkest possible intensity value of image layer k
c_k^range          Data range of image layer k, c_k^range = c_k^max − c_k^min
c_k(x, y, z, t)    Intensity value of image layer k at the pixel/voxel (x, y, z, t)
N4(x, y)           4-neighborhood of the pixel (x, y)
N8(x, y)           8-neighborhood of the pixel (x, y)
N6(x, y, z)        6-neighborhood of the voxel (x, y, z)
N26(x, y, z)       26-neighborhood of the voxel (x, y, z)
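To make the neighborhood definitions concrete, here is a minimal sketch in Python (illustrative only, not part of the Reference Book; the function names n4 and n8 are assumptions):

```python
def n4(x, y):
    """4-neighborhood: the pixels sharing an edge with (x, y)."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def n8(x, y):
    """8-neighborhood: N4 plus the four diagonal neighbors of (x, y)."""
    return n4(x, y) + [(x + 1, y + 1), (x + 1, y - 1),
                       (x - 1, y + 1), (x - 1, y - 1)]
```

The 3D neighborhoods generalize this: N6 adds the two face-connected voxels in the z-direction to N4, and N26 contains all voxels of the surrounding 3 × 3 × 3 block except (x, y, z) itself.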
1.1.3 Region

R                      Region
[R_x, R_y, R_z, R_t]   Extent of the region R
c_k^max(R)             Brightest intensity value of image layer k within the region R
c_k^min(R)             Darkest intensity value of image layer k within the region R
c̄_k(R)                 Mean intensity of image layer k within the region R
σ_k(R)                 Standard deviation of intensity values of image layer k within the region R
1.1.4 Pixel Set

S              Set of pixels
c̄_k(S)         Mean intensity of image layer k of all pixels in S
σ_k(S)         Standard deviation of intensity values of image layer k of all pixels in S
c̄(S)           Brightness
w_k^B          Brightness weight of image layer k
Δ̄_k(v, O)      Mean difference of an image object v to the image objects in a set O
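As an illustration of how these symbols combine (a sketch assuming the standard weighted-mean form of the brightness feature, not a quotation from a later chapter), the brightness of a pixel set S is the mean of the layer mean intensities, weighted by the brightness weights:

```latex
\bar{c}(S) = \frac{1}{\sum_{k=1}^{K} w_k^B} \sum_{k=1}^{K} w_k^B \, \bar{c}_k(S)
```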
1.1.5 Image Object
u, v           Image object
P_v            Set of pixels of the image object v
#P_v           Total number of pixels in P_v
c̄_k(v)         Mean intensity of image layer k of all pixels forming the image object v
σ_k(v)         Standard deviation of intensity values of image layer k of all pixels forming the image object v
P_v^Inner      Inner border pixels of v
P_v^Outer      Outer border pixels of v
v_i(P)         Image object at level i to which the pixel P belongs
m(v)           Class of the image object v
B_v            Bounding box of v
B_v(d)         Bounding box of v enlarged by the distance d
x_min(v)       Minimum x coordinate of v
y_min(v)       Minimum y coordinate of v
b(v, u)        Length of the common border between the image objects v and u
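A minimal sketch (illustrative Python, not from the Reference Book; the function name object_layer_stats and the [y, x] array layout are assumptions) of how c̄_k(v) and σ_k(v) follow from the pixel set P_v:

```python
import numpy as np

def object_layer_stats(layer, pixels_v):
    """Mean and population standard deviation of the layer-k intensities
    over the pixel set P_v of an image object v.

    layer:    2D array of intensity values c_k, indexed as layer[y, x]
    pixels_v: iterable of (x, y) tuples forming P_v
    """
    values = np.array([layer[y, x] for (x, y) in pixels_v], dtype=float)
    return values.mean(), values.std()
```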
1.1.6 Image Objects Hierarchy

U_v(d)         Superobject of an image object v at a hierarchical distance d
S_v(d)         Set of sub-objects of an image object v at a hierarchical distance d
N_v(d)         Set of neighbor objects of v within a distance d
e(u, v)        Neighborhood relation (edge) between the image objects u and v
1.1.7 Class-Related Set

m              A class, m ∈ M
φ(v, m)        Fuzzy membership value of the image object v to the class m
φ̄(v, m)        Classification value of the image object v for the class m
P_i(R, m)      Set of pixels within the region R belonging to image objects of level i classified as m
1.2 Coordinate Systems Used in eCognition Software

1.2.1 Pixel Coordinate System
The pixel coordinate system is used to identify pixel positions within an image. It is used
for calculating position features such as x-center and y-center.
This coordinate system is oriented from bottom to top and from left to right. The origin
position is (0, 0), which is at the bottom-left corner of the image. The coordinate is
defined by the offset of the bottom-left corner of the pixel from the origin.
1.2.2 User Coordinate System

The user coordinate system enables the use of geocoding information within a scene. The values of the user coordinate system are calculated from the pixel coordinate system. In the user interface, the user coordinate system is referred to simply as the coordinate system.

This coordinate system is defined by geocoding information:

• The bottom-left X position
• The bottom-left Y position
• Resolution: the size of a pixel in coordinate system units. For example, if the coordinate system is metric, the resolution is the size of a pixel in meters; if the coordinate system is lat/long, the resolution is the size of a pixel in degrees
• Coordinate system name
• Coordinate system type

The origin of the coordinate system is at the bottom-left corner of the image (x0, y0). The coordinate defines the position of the bottom-left corner of the pixel within the user coordinate system.
To convert a value from the pixel coordinate system to the user coordinate system and
back, the following transformations are valid, where (x, y) are the coordinates in user
coordinate system and u is the pixel size in units:
x = x0 + x_pixel · u        x_pixel = (x − x0) / u
y = y0 + y_pixel · u        y_pixel = (y − y0) / u
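As a minimal illustration of these transformations, the following Python sketch converts between the two systems; the origin (x0, y0) and the pixel size u are made-up example values, not defaults of the software.

    def pixel_to_user(x_pixel, y_pixel, x0, y0, u):
        """Convert pixel coordinates to user (geocoded) coordinates."""
        return x0 + x_pixel * u, y0 + y_pixel * u

    def user_to_pixel(x, y, x0, y0, u):
        """Convert user (geocoded) coordinates back to pixel coordinates."""
        return (x - x0) / u, (y - y0) / u

    # Example: a metric coordinate system with 0.5 m pixels
    x, y = pixel_to_user(120, 80, x0=350000.0, y0=5600000.0, u=0.5)
    assert user_to_pixel(x, y, 350000.0, 5600000.0, 0.5) == (120.0, 80.0)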
1.3 Image Layer Related Features

1.3.1 Scene
A scene is a collection of combined input image data, as represented in eCognition software. A scene comprises at least one image layer; in addition, it can include further image layers and thematic layers. You are likely to encounter this concept when importing image or other data into eCognition software.

A scene can also include metadata, such as image creation information or geo-information, and hold settings such as layer aliases, unit information, or geocoding. Within eCognition software, image layers, thematic layers, and metadata are loaded by reference to the respective data files. Each scene is represented by a related map.
A 2D image in a 3D data set is called a slice.
A 2D image in a time series data set is called a frame.
A 4D data set consists of a series of frames where each frame is a 3D data set.
Depending on the related data set, a scene can be one of the following:
Data Set   Scene
2D image   2D scene
3D data set   3D scene, consisting of slices
4D data set   4D scene, consisting of frames and slices
Extents

A scene has an origin (x0, y0, z0, t0), the extent s_x in the x-direction, the extent s_y in the y-direction, the extent s_z in the z-direction, and the extent s_t in the t-direction. The expression is (s_x, s_y, s_z, s_t), where:

s_x = #pixels_x · u
s_y = #pixels_y · u
s_z = #slices · u_slices
s_t = #frames · u_frames
Coordinates

x_geo = x0 + x_pxl · u
y_geo = y0 + y_pxl · u
z_geo = z0 + z_slices · u_slices
t_geo = t0 + t_frames · u_frames
Figure 1.4. Coordinates of a multidimensional scene with four slices and three frames
Layers
Scenes can consist of an arbitrary number of image layers (k = 1, ..., K) and thematic
layers (t = 1, ..., T ) .
1.3.2 Maps
A map represents the combination of a scene and an image object hierarchy. It is the
structure that represents the data that the rule set operates on. eCognition software can
deal with multiple maps and provides algorithms to rescale and copy maps, as well as to
synchronize image objects between them.
1.3.3 Region

Dimension   Expression
2D   (x_G, y_G), [R_x, R_y]
3D   (x_G, y_G, z_G), [R_x, R_y, R_z]
4D   (x_G, y_G, z_G, t_G), [R_x, R_y, R_z, R_t]
1.3.4 Image Layer
A scene refers to at least one image layer of an image file. Image layers are referenced by a layer name (string), which is unique within a map. The pixel/voxel value, that is, the layer intensity of an image layer k at pixel/voxel (x, y, z, t), is denoted as c_k(x, y, z, t). The dynamic range of image layers is represented as follows:
• c_k^min is the smallest possible intensity value of an image layer k
• c_k^max is the largest possible intensity value of an image layer k
• c_k^range is the data range of image layer k, with c_k^range = c_k^max − c_k^min

The dynamic range depends on the image layer data type. The supported image layer data types¹ are:
Type   c_k^min   c_k^max   c_k^range
8-bit unsigned   0   255   256
16-bit unsigned   0   65535   65536
16-bit signed   −32767   32767   65535
32-bit unsigned   0   4294967295   4294967296
32-bit signed   −2147483647   2147483647   4294967295
32-bit float   1.17 · 10^−38   3.40 · 10^38   n/a

1. Full support for image layer data types is dependent on drivers.
The mean intensity of image layer k over the whole scene is

c̄_k = (1 / (s_x · s_y)) Σ_(x,y) c_k(x, y)

and the standard deviation of image layer k over the whole scene is

σ_k = sqrt( (1 / (s_x · s_y)) Σ_(x,y) (c_k(x, y) − c̄_k)² )
Neighborhood

On raster pixels/voxels there are two ways to define the 3D neighborhood: as a 6-neighborhood or a 26-neighborhood.
1.3.5 Image Layer Intensity on Pixel Sets

The mean intensity of image layer k over a pixel set S is

c̄_k(S) = (1 / #S) Σ_((x,y) ∈ S) c_k(x, y)

An overall intensity measurement is given by the brightness, which is the weighted mean value of c̄_k(S) over the selected image layers:

c̄(S) = (1 / w^B) Σ_(k=1..K) w_k^B · c̄_k(S)

If v is an image object and O a set of other image objects, then the mean difference of the objects within O to the image object v is calculated by:

Δ_k(v, O) = (1 / w) Σ_(u ∈ O) w_u · (c̄_k(v) − c̄_k(u))
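A small Python sketch of these three formulas follows; the layer arrays, pixel sets and weights are hypothetical inputs, and the function names are illustrative, not part of any eCognition API.

    import numpy as np

    def mean_intensity(layer, pixels):
        """Mean intensity c_k(S) of image layer values over a pixel set S."""
        return np.mean([layer[y, x] for (x, y) in pixels])

    def brightness(layers, weights, pixels):
        """Weighted mean of the per-layer means: the brightness of S."""
        total = sum(weights)
        return sum(w * mean_intensity(l, pixels)
                   for l, w in zip(layers, weights)) / total

    def mean_difference(layer, obj_pixels, other_objects, obj_weights):
        """Weighted mean difference of object v to the objects in a set O."""
        cv = mean_intensity(layer, obj_pixels)
        total = sum(obj_weights)
        return sum(w * (cv - mean_intensity(layer, p))
                   for p, w in zip(other_objects, obj_weights)) / total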
1.4 Image Object Related Features

1.4.1 Image Object
An image object is a set of pixels. The set of pixels of an image object v is denoted as Pv .
Image Object Size
At a basic level, the size of an image object is measured by the number of pixels in image
object v. The number of pixels is denoted by #Pv , the cardinality of the respective pixel
set.
Bounding Box of an Image Object

The bounding box B_v of an image object v is the most common form of measurement of the extent of an image object in a scene. It is the smallest rectangular area that encloses all pixels of v along the x- and y-axes. The bounding box is therefore defined by the minimum and maximum values of the x and y coordinates of an image object v: x_min(v), x_max(v) and y_min(v), y_max(v).

The extended bounding box B_v(d) is created by enlarging the bounding box by the same number of pixels d in all directions.
Two pixels are considered to be spatially connected if they neighbor each other in the pixel raster. On 2D raster images, the neighborhood between pixels can be defined as a 4- or 8-neighborhood. With reference to the green pixel, the light-blue pixels are the 4-neighborhood; in combination with the dark-blue pixels, they form the 8- or diagonal neighborhood of the green pixel. eCognition software uses the 4-neighborhood for all spatial neighborhood calculations and assumes image objects to be sets of spatially connected pixels.
Two image objects are considered neighbors if they contain pixels that neighbor each other according to the 4-neighborhood principle. In other words, two image objects u and v are considered to neighbor each other if there is at least one pixel (x, y) ∈ P_v and one pixel (x′, y′) ∈ P_u such that (x′, y′) is part of N4(x, y). The set of all image objects neighboring v is denoted by N_v:

N_v = {u ∈ V_i : ∃(x, y) ∈ P_v, ∃(x′, y′) ∈ P_u : (x′, y′) ∈ N4(x, y)}

The entire border line between u and v is called the neighborhood relation and is represented as e(u, v). The neighborhood relations between image objects are automatically determined and maintained by the eCognition software.
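The following Python sketch shows one way to test this 4-neighborhood relation between two pixel sets; the helper names are illustrative only, not the software's internals.

    def n4(x, y):
        """The 4-neighborhood of a pixel (x, y)."""
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    def are_neighbors(pixels_u, pixels_v):
        """True if objects u and v contain at least one 4-neighboring pixel pair."""
        pu = set(pixels_u)
        return any(n in pu for (x, y) in pixels_v for n in n4(x, y))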
Border of an Image Object

The border of image object slices is counted by the number of the elementary pixel borders along the common border. Image objects are basically pixel sets; the number of pixels belonging to an image object v and its pixel set P_v is denoted by #P_v.
Inner Border

The set of all pixels in P_v belonging to the inner border of an image object v is defined by P_v^Inner, with:

P_v^Inner = {(x, y) ∈ P_v : ∃(x′, y′) ∈ N4(x, y) : (x′, y′) ∉ P_v}

Outer Border

The set of all pixels belonging to the outer border of an image object v is defined by P_v^Outer, with:

P_v^Outer = {(x, y) ∉ P_v : ∃(x′, y′) ∈ N4(x, y) : (x′, y′) ∈ P_v}
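Under the same assumptions, a short sketch of both border sets, reusing the n4 helper from the neighborhood sketch above:

    def inner_border(pv):
        """Pixels of v that have at least one 4-neighbor outside of v."""
        pv = set(pv)
        return {p for p in pv if any(n not in pv for n in n4(*p))}

    def outer_border(pv):
        """Pixels outside of v that have at least one 4-neighbor inside of v."""
        pv = set(pv)
        return {n for p in pv for n in n4(*p) if n not in pv}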
There is no special image object type to represent the movement of an image object in
time. Instead, image object links are used to track identical image objects at different
points in time (a time series).
Disconnected Image Objects
In addition to normal 2D and 3D image objects, eCognition Developer 9.0 allows you
to work with image objects that are spatially disconnected. While a connected image
object covers one contiguous region of a scene, a disconnected image object can consist
of arbitrary pixels in two or more potentially disconnected parts within a scene.
It is important to know that image objects can be defined as disconnected even though they are not disconnected in reality. An image object is defined as disconnected if it lacks information about its spatial connectivity. The major motivation for this lack of information is the high calculation effort that is necessary to ensure spatial connectivity. If, for example, you remove some pixels from an image object, it may divide into several parts. To connect the resulting image objects, a special algorithm needs to analyze the remaining objects and separate them properly into several sub-image objects. If the resulting image objects may remain disconnected, they can simply be marked as disconnected; there is no need for such an analysis.
Disconnected image objects are therefore useful for fast object processing when spatial information is not relevant. If you use threshold segmentation, for example, to separate an image into two disconnected image objects, you can save a lot of computing time and memory, since the number of image objects is considerably reduced.
Figure 1.12. Disconnected image objects (left) and spatially connected image object (right)
The main purpose of working with disconnected image objects is the representation of simple pixel crowds, for which several properties that can be measured for normal image objects might not make sense or are meaningless. You should therefore note that the following information is not available for disconnected image objects:

• Neighborhood relations and all features based on these relations
• Shape features
• Polygons
1.4.2 Image Object Hierarchy

Image objects are organized into levels, where the objects on each level i create a partition of the scene S. This can be described using the following fundamental conditions. The totality of all image objects covers the entire scene:

⋃_(v ∈ V_i) P_v = {(x, y)}

and image objects do not overlap:

P_u ∩ P_v = ∅ for all u, v ∈ V_i with u ≠ v

The image object levels are hierarchically structured. This means that each image object on a lower level is completely contained in exactly one image object of a higher level:

∀ i < j, ∀ v ∈ V_i, ∃ u ∈ V_j : P_v ⊆ P_u
Level Distance
The level distance represents the hierarchical distance between image objects on different
levels in the image object hierarchy. Starting from the current image object level, the
level distance indicates the hierarchical distance of image object levels containing the
respective image objects (sub-objects or superobjects).
Since each image object has exactly one or zero superobjects on a higher level, the superobject of v at a level distance d can be denoted as U_v(d). Similarly, all sub-objects at a level distance d are denoted as S_v(d).
Spatial Distance
The spatial distance represents the distance between image objects on the same level in
the image object hierarchy. If you want to analyze neighborhood relations between image
objects on the same image object level in the image object hierarchy, the feature distance
expresses the spatial distance (in pixels) between the image objects. The default value is 0 (in other words, only neighbors that have a mutual border are considered). The set of all neighbors within a distance d is denoted by N_v(d).
Many features enable you to enter a spatial distance parameter. Distances are usually measured in pixel units. Because exact distance measurements
between image objects are very processor-intensive, eCognition software uses approximation approaches to estimate the distance between image objects.
Distance Measurements
There are two different approaches: the center of gravity and the smallest enclosing rectangle. You can configure the default distance calculations.
Center of Gravity

The center of gravity approximation measures the distance between the centers of gravity of two image objects. This measure can be computed very efficiently, but it can be inaccurate for large image objects.
Figure 1.14. Distance calculation between image objects. (The black line is the center of
gravity approximation. The red line is the smallest enclosing rectangle approximation)
We recommend using the center of gravity distance for most applications although the
smallest enclosing rectangle may give more accurate results. A good strategy for exact
distance measurements is to use center of gravity and try to avoid large image objects, for
example, by creating border objects. To avoid performance problems, restrict the total
number of objects involved in distance calculations to a small number.
You can edit the distance calculation in the algorithm parameters of the Set Rule Set
Options algorithm and set the Distance Calculation option to your preferred value.
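A minimal Python sketch of the center-of-gravity approximation follows; the function names are illustrative, not the software's internal implementation.

    import numpy as np

    def center_of_gravity(pixels):
        """Mean x and y of all pixels of an object."""
        arr = np.asarray(pixels, dtype=float)
        return arr.mean(axis=0)

    def cog_distance(pixels_u, pixels_v):
        """Center-of-gravity approximation of the object distance (in pixels)."""
        return float(np.linalg.norm(center_of_gravity(pixels_u) -
                                    center_of_gravity(pixels_v)))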
1.5 Class-Related Features

1.5.1 Class-Related Sets
Let M = {m1, . . . , ma} be the set of classes, with m being a specific class, m ∈ M. Each image object has a fuzzy membership value μ(v, m) to class m. In addition, each image object also carries the stored membership value μ̄(v, m) that was computed during execution of the last classification algorithm. By restricting a set of image objects O to only the image objects that belong to class m, various class-related features can be computed:

N_v(d, m) = {u ∈ N_v(d) : μ̄(u, m) = 1}   Neighbors of class m
S_v(d, m) = {u ∈ S_v(d) : μ̄(u, m) = 1}   Sub-objects of class m
V_i(m) = {u ∈ V_i : μ̄(u, m) = 1}   Image objects of class m on level i
U_v(d, m) = {u ∈ U_v(d) : μ̄(u, m) = 1}   Superobjects of class m
P_i(R, m) = {p ∈ R : m(v_i(p)) = m}   Pixels of class m

Example: the mean difference of layer k to the neighbor image objects within a distance d that belong to a class m is defined as Δ_k(v, N_v(d, m)).
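A hedged Python sketch of such a class-restricted computation, reusing the mean_intensity helper from the section 1.3.5 sketch; representing neighbors as a list of dicts is an assumption made purely for illustration.

    def mean_difference_to_class(layer, v_pixels, neighbors, m):
        """Mean difference of layer k between object v and its neighbors of class m."""
        of_class = [u["pixels"] for u in neighbors if u["class"] == m]
        if not of_class:
            return 0.0  # no neighbor of class m within the distance
        cv = mean_intensity(layer, v_pixels)
        return sum(cv - mean_intensity(layer, p) for p in of_class) / len(of_class)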
1.6 Shape-Related Features
Many of the eCognition form features are based on the statistics of the spatial distribution of the voxels that form a 3D image object. As a central tool to work with these statistics, eCognition Developer 9.0 uses the covariance matrix:

1.6.1 Parameters

1.6.2 Expression

        Var(X)    Cov(XY)   Cov(XZ)
C =     Cov(XY)   Var(Y)    Cov(YZ)
        Cov(XZ)   Cov(YZ)   Var(Z)
Another frequently used technique to derive information about the form of image objects is the bounding box approximation. Such a bounding box can be calculated for each image object, and its geometry can be used as a first clue to the shape of the image object itself. The main information provided by the bounding box is its length a, its width b, its thickness c, its volume a · b · c and its degree of filling f, which is the volume V filled by the image object divided by the total volume a · b · c of the bounding box.
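A small sketch of these bounding-box quantities, assuming unit voxels so that the filled volume V equals the number of voxels; names are illustrative.

    import numpy as np

    def bounding_box_stats(voxels):
        """Return a, b, c of the axis-aligned bounding box and the
        degree of filling f = V / (a * b * c)."""
        arr = np.asarray(voxels)
        extents = arr.max(axis=0) - arr.min(axis=0) + 1
        a, b, c = sorted(extents, reverse=True)     # length, width, thickness
        f = len(arr) / (a * b * c)
        return a, b, c, f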
1.6.3 Shape Approximations Based on Eigenvalues

The center of an image object v is:

x_center = (1 / #P_v) Σ_(x,y,z) x
y_center = (1 / #P_v) Σ_(x,y,z) y
z_center = (1 / #P_v) Σ_(x,y,z) z

The entries of the covariance matrix are computed from the pixel coordinates:

C_yy = (1 / #P_v) Σ_(x,y,z) y_i² − ((1 / #P_v) Σ_(x,y,z) y_i)² = E(y²) − E(y)²
C_zz = (1 / #P_v) Σ_(x,y,z) z_i² − ((1 / #P_v) Σ_(x,y,z) z_i)² = E(z²) − E(z)²
C_xy = (1 / #P_v) Σ_(x,y,z) x_i y_i − (1 / #P_v) Σ_(x,y,z) x_i · (1 / #P_v) Σ_(x,y,z) y_i = E(xy) − E(x) E(y)
C_xz = (1 / #P_v) Σ_(x,y,z) x_i z_i − (1 / #P_v) Σ_(x,y,z) x_i · (1 / #P_v) Σ_(x,y,z) z_i = E(xz) − E(x) E(z)
C_yz = (1 / #P_v) Σ_(x,y,z) y_i z_i − (1 / #P_v) Σ_(x,y,z) y_i · (1 / #P_v) Σ_(x,y,z) z_i = E(yz) − E(y) E(z)
1.6.4 Elliptic Approximation

The elliptic approximation uses the eigenvalues (λ1, λ2, λ3) of the covariance matrix and computes an ellipsoid with one axis along the eigenvector e1 with length a, one along the eigenvector e2 with length b, and one along the eigenvector e3 with length c. The formula is a : b : c = λ1 : λ2 : λ3.
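A minimal Python sketch of this approximation using numpy; the biased covariance estimate and descending eigenvalue ordering are assumptions of this illustration.

    import numpy as np

    def elliptic_approximation(voxels):
        """Eigenvalues and eigenvectors of the coordinate covariance matrix."""
        arr = np.asarray(voxels, dtype=float)            # shape (N, 3): x, y, z
        cov = np.cov(arr, rowvar=False, bias=True)       # the 3x3 matrix C above
        eigenvalues, eigenvectors = np.linalg.eigh(cov)  # ascending order
        return eigenvalues[::-1], eigenvectors[:, ::-1]  # largest axis first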
1.7 About Algorithms
1.7.1 Creating a Process
A single process can be created using the Edit Process dialog box, in which you can define:

• The method of the process, chosen from an algorithm list, for example the Multiresolution Segmentation algorithm or a classification algorithm
• The domain on which the algorithm should be performed, for example the pixel level, an image object level or a vector layer
• Detailed algorithm parameter settings, varying depending on the algorithm selected
1.7.2 Specifying Algorithm Parameters
To edit the values of parameters, select the parameter name or its value by clicking on it. Depending on the type of value, change the value in one of the following ways:

• Edit values directly within the value field
• Click the ellipsis button located inside the value field; a dialog box opens, enabling you to configure the value
• Select the value from a drop-down list. For many parameters, you can select a feature by clicking the From Feature item, or select an existing variable. To create a new variable, type a name for the new variable and click OK, or press Enter to open the Create Variable dialog box for additional settings
2 Process-Related Operation Algorithms

The Process-Related Operation algorithms are used to control other processes.
2.1 Execute Child Processes

Execute all child (subordinate) processes of a process. To define a process using this algorithm as a parent process, right-click it and choose the Insert Child command from the context menu.
2.1.1 Supported Domains
Execute
Use the Execute Child Processes algorithm in conjunction with the Execute domain to
structure your process tree. A process with this setting serves as a parent process, providing a container for a sequence of functionally related child processes.
Pixel Level
Applies the algorithm at the pixel level. Typically used for initial segmentations.
Image Object Level
Use the Execute Child Processes algorithm in conjunction with other domains (for example, the image object level domain) to loop over a set of objects. All contained child
processes will be applied to the objects in the respective domain. In this case the child
processes usually use one of the following as image object domain: current image object,
neighbor object, super object, sub objects.
Current Image Object
Applies the algorithm to the current internally selected image object of the parent process.
Neighbor Image Object

Applies the algorithm to all neighbors of the current internally selected image object of the parent process. The size of the neighborhood is defined by the Distance parameter.
Super Object

Applies the algorithm to the superobject of the current internally selected image object of the parent process. The number of levels above in the image object level hierarchy is defined by the Level Distance parameter.

Sub Object

Applies the algorithm to the sub-objects of the current internally selected image object of the parent process. The number of levels below in the image object level hierarchy is defined by the Level Distance parameter.
Maps
Applies the algorithm to all specied maps of a project. You can select this domain in
parent processes with the Execute Child Process algorithm to set the context for child
processes that use the map parameter From Parent.
Linked Objects
Applies the algorithm to all linked image objects of the current internally selected image
object of the parent process.
Image Object List
Applies the algorithm to all image objects that were collected with the algorithm Update
Image Object List.
Array
Options for the array function are visible when Array is selected as the domain. For more information on arrays, please consult the eCognition Developer User Guide > Advanced Rule Set Concepts > Arrays, and have a look at the examples in our user community: http://community.ecognition.com/home/Arrays%20Example%20%231.zip/view or http://community.ecognition.com/home/Arrays%20Example%20%232.zip/view
• Array: Select a predefined array
• Array Type: Displays the array type (this is not editable)
• Variable: The variable holding the value of the current array item when the process is executed
• Index Variable: The scene variable holding the index of the current array item when the process is executed
Vectors
Applies the algorithm to a thematic vector layer that can be selected in Domain > Parameter > Thematic Vector Layer.
Vectors (Multiple Layers)

Applies the algorithm to a set of thematic vector layers that can be selected in the Select Thematic Layers dialog window, opened by clicking the ellipsis button located inside the value field of Domain > Parameter > Thematic Vector Layer.
2.2 Execute Child as Series

When items are exported during a multi-map analysis they are held in memory, which can often lead to out-of-memory problems during intensive analyses. The Execute Child as Series algorithm allows partial results to be exported as soon as they are available, freeing up memory for other tasks.
2.2.1 Supported Domains
Execute; Image Object Level; Current Image Object; Neighbor Image Object; Sub Object; Super Object; Linked Objects; Maps; Image Object List
2.2.2 Algorithm Parameters

Series Name

Enter the series name; variables and strings are possible. All export algorithms receive a new parameter, Export Series (Yes/No).
2.3 If

If the conditions in the If algorithm are connected, all the defined conditions must be true to enter the Then path; otherwise the Else path is taken:

• If no condition is defined, the Then path is chosen
• If there is more than one Then or Else condition, all are executed
2.3.1 Supported Domains
Execute; Pixel Level; Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Maps; Image Object List; Array
2.4 Throw

The Throw algorithm generates an error message (the Exception Message parameter) if a certain condition occurs, for example if the number of objects exceeds a defined number.
2.4.1 Supported Domains

Execute; Pixel Level; Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Maps; Image Object List; Array
2.5 Catch

2.5.1 Supported Domains

Execute; Pixel Level; Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Maps; Image Object List; Array
2.6 Set Rule Set Options
The Set Rule Set Options algorithm lets you control certain settings for the rule set, or
parts of rule sets. For example, you may want to apply particular settings to analyze large
objects and change them to analyze small objects. In addition, because the settings are
part of the rule set and not on the client, they are preserved when the rule set is run on a
server.
You can save the rule set or the associated project to preserve the settings chosen in this
algorithm, because saving the project also saves the rule set. The current settings can be
displayed under Tools - Options - Project Settings.
2.6.1 Supported Domains
Execute
Reference Book
11 June 2014
2.6.2 Algorithm Parameters
If the value is No, settings apply globally, persisting after completion of execution. If Yes
is selected, changes apply only to child processes.
Distance Calculation

When a value is undefined with respect to a condition you have specified, the software can evaluate it as false, or perform the evaluation based on a value of zero:

• Yes: Evaluate any condition with an undefined feature as false (this is the default)
• No: Assign the value zero to an undefined feature before evaluating any condition with an undefined feature
• Default: Reverts values to their defaults when the rule set is saved
• Keep Current: Maintains the existing settings when the rule set is saved
Polygons Base Polygon Threshold
This value sets the degree of abstraction for the base polygons. The default is 1.25.
Polygons Shape Polygon Threshold

This value determines the degree of abstraction for the shape polygons. Shape polygons are independent of the topological structure and consist of at least three points. The threshold for shape polygons can be changed at any time without the need to recalculate the base vectorization. The default value is 1.
Remove Slivers is used to avoid intersections of edges of adjacent polygons and self-intersections of polygons. Sliver removal becomes necessary with higher threshold values for base polygon generation. Note that the processing time to remove slivers is high, especially for low thresholds, where it is not needed anyway.
• No: Enables the intersection of polygon edges and self-intersections
• Yes: Avoids the intersection of edges of adjacent polygons and self-intersections of polygons
• Default: Reverts values to their defaults when the rule set is saved
• Keep Current: Maintains the existing settings when the rule set is saved
Update Topology

Selecting Yes saves temporary layers to your hard drive. The default values in eCognition Developer 9.0 are:

• Yes for new rule sets
• No for rule sets created with eCognition 8.8 or earlier
• Yes for rule sets created with eCognition 9.0
Polygon Compatibility Mode
Set this option to use the compatibility option for image layer resampling. After changing this value, the image objects may need to be recreated to reflect the changes.
Switch to classification view after process execution
This parameter sets the maximum laser point distance to the sensor. The distance unit is
according to the unit of the point cloud projection.
3 Segmentation Algorithms
Segmentation algorithms are used to subdivide entire images at a pixel level, or specific image objects from other domains, into smaller image objects.

Trimble provides several different approaches to segmentation, ranging from very simple algorithms, such as chessboard and quadtree-based segmentation, to highly sophisticated methods such as multiresolution segmentation and contrast filter segmentation.

Segmentation algorithms are required whenever you want to create new image object levels based on image layer information. They are also a valuable tool for refining existing image objects by subdividing them into smaller pieces for more detailed analysis.
3.1 Chessboard Segmentation

The Chessboard Segmentation algorithm splits the pixel domain or an image object domain into square image objects. A square grid of fixed size, aligned to the left and top borders of the image, is applied to all objects in the domain, and each object is cut along these gridlines.
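A toy Python sketch of the chessboard cut on a label image; size plays the role of the Object Size parameter, and the implementation is illustrative rather than the software's own.

    import numpy as np

    def chessboard(height, width, size):
        """Return a label array where each square tile gets a unique id."""
        labels = np.zeros((height, width), dtype=int)
        tiles_per_row = -(-width // size)            # ceiling division
        for y in range(height):
            for x in range(width):
                labels[y, x] = (y // size) * tiles_per_row + (x // size)
        return labels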
3.1.1 Supported Domains
Pixel Level; Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
3.1.2 Algorithm Parameters

Object Size

Object Size defines the size of the square grid in pixels. Variables are rounded to the nearest integer.
Level Name

In the Level Name field, enter the name of a new image object level. This parameter is only available if the domain Pixel Level is selected in the process dialog.
Overwrite Existing Level
This parameter is only available when Pixel Level is selected. It allows you to automatically delete an existing image level above the pixel level and replace it with a new level
created by the segmentation.
Thematic Layer Usage

In the Thematic Layers field, specify the thematic layers that are to be considered in addition for segmentation. Each thematic layer used for segmentation will cause further splitting of image objects while enabling consistent access to its thematic information. You can segment an image using more than one thematic layer; the results are image objects representing proper intersections between the thematic layers. If you want to produce image objects based exclusively on thematic layer information, you can select a chessboard size larger than your image size.
3.2 Quadtree-Based Segmentation

The Quadtree-Based Segmentation algorithm splits the pixel domain or an image object domain into a quadtree grid formed by square objects.

A quadtree grid consists of squares with side lengths that are powers of two, aligned to the left and top borders of the image. The grid is applied to all objects in the domain, and each object is cut along the gridlines. The quadtree structure is built so that each square has the maximum possible size while fulfilling the homogeneity criterion defined by the mode and scale parameters. The maximum square object size is 256 × 256, or 65,536 pixels.
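A hedged sketch of the Color-mode splitting rule, assuming a square layer with a power-of-two side length; names and structure are illustrative only.

    import numpy as np

    def quadtree(layer, scale):
        """Label a power-of-two square layer by recursive quadtree splitting."""
        out = np.zeros(layer.shape, dtype=int)
        counter = [0]

        def split(x, y, size):
            block = layer[y:y + size, x:x + size]
            # Stop when the max-min color difference is below the scale value
            if size == 1 or float(block.max()) - float(block.min()) < scale:
                out[y:y + size, x:x + size] = counter[0]
                counter[0] += 1
                return
            half = size // 2
            for dy in (0, half):
                for dx in (0, half):
                    split(x + dx, y + dy, half)

        split(0, 0, layer.shape[0])
        return out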
Figure 3.2. Result of quadtree-based segmentation with mode color and scale 40
3.2.1 Supported Domains

Pixel Level; Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
3.2.2 Algorithm Parameters

Mode

• Color: The maximal color difference within each square image object is less than the Scale value
• Super Object Form: Each square image object must completely fit into the superobject. This mode only works with an additional upper image object level

Scale

Scale defines the maximum color difference within each selected image layer inside square image objects. It is only used in conjunction with the Color mode.
Level Name

In the Level Name field, enter the name of a new image object level. This parameter is only available if the domain Pixel Level is selected in the process dialog.
Overwrite Existing Level
This parameter is only available when Pixel Level is selected. It allows you to automatically delete an existing image level above the pixel level and replace it with a new level
created by the segmentation.
Image Layer Weights

Image layers can be weighted depending on their importance or suitability for the segmentation result. The higher the weight assigned to an image layer, the more weight will be given to that layer's pixel information during the segmentation process. Consequently, image layers that do not contain the information intended for representation by the image objects should be given little or no weight. For example, when segmenting a geographical LANDSAT scene using multiresolution segmentation or spectral difference segmentation, the segmentation weight for the spatially coarser thermal layer should be set to 0 in order to avoid deterioration of the segmentation result by the blurred transition between image objects of this layer.
1. In the Algorithm Parameters area, expand the Image Layer Weights list and set the weights of the image layers to be considered by the algorithm. You can use both of the following methods:
   • Select an image layer and edit the weight value
   • Select Image Layer Weights and click the ellipsis button located inside the value field to open the Image Layer Weights dialog box
2. Select an image layer in the list. To select multiple image layers, press Ctrl.
3. Enter a new weight in the New Value text box and click Apply.
Options

• Click the Calculate Stddev button to check the image layer dynamics. The calculated standard deviations of the image layer values for each single image layer are listed in the Stddev. column
• To search for a specific layer, type the name into the Find text box
3.2.3 Thematic Layer Weights
In the Thematic Layers field, specify the thematic layers that are to be considered in addition for segmentation. Each thematic layer used for segmentation will cause additional splitting of image objects while enabling consistent access to its thematic information. You can segment an image using more than one thematic layer; the results are image objects representing proper intersections between the thematic layers. If you want to produce image objects based exclusively on thematic layer information, you can select a chessboard size larger than your image size.
3.3 Contrast Split Segmentation

The Contrast Split Segmentation algorithm segments an image or image object into dark and bright regions. It is based on a threshold that maximizes the contrast between the resulting bright objects (consisting of pixels with values above the threshold) and dark objects (consisting of pixels with values below the threshold).

The algorithm evaluates the optimal threshold separately for each image object in the domain. If the pixel level is selected in the domain, the algorithm first executes a chessboard segmentation, then performs the split on each square.

The algorithm achieves the optimization by considering different pixel values as potential thresholds. The test thresholds range from the minimum threshold to the maximum threshold, with intermediate values chosen according to the step size and stepping type parameters. If a test threshold satisfies the minimum dark area and minimum bright area criteria, the contrast between bright and dark objects is evaluated. The test threshold producing the largest contrast is chosen as the best threshold and used for splitting.
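The following Python sketch mimics this threshold search in the object-difference contrast mode; the parameter names mirror the settings described below, but the implementation is illustrative.

    import numpy as np

    def best_split_threshold(pixels, t_min=0, t_max=255, step=1,
                             min_dark=0.0, min_bright=0.0):
        """Scan candidate thresholds; keep the one maximizing the contrast."""
        pixels = np.asarray(pixels, dtype=float)
        best_t, best_contrast = None, -np.inf
        for t in np.arange(t_min, t_max + 1, step):
            dark, bright = pixels[pixels < t], pixels[pixels >= t]
            if (dark.size == 0 or bright.size == 0 or
                    dark.size / pixels.size < min_dark or
                    bright.size / pixels.size < min_bright):
                continue  # minimum-area criteria not satisfied
            contrast = bright.mean() - dark.mean()   # object difference mode
            if contrast > best_contrast:
                best_t, best_contrast = t, contrast
        return best_t, best_contrast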
3.3.1 Supported Domains

Pixel Level; Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
3.3.2 Settings

Chessboard Tile Size

This field is available only if the pixel level is selected in the domain. Enter the chessboard tile size (the default is 1,000).

Level Name

Select or enter the level to contain the results of the segmentation. Available only if the pixel level is selected in the domain.
Overwrite Existing Level

This parameter is only available when Pixel Level is selected. It allows you to automatically delete an existing image level above the pixel level and replace it with a new level created by the segmentation.
Minimum Threshold
Enter the minimum gray value to be considered for splitting. The algorithm calculates the
threshold for gray values from the minimum threshold value to the maximum threshold
value (the default is 0).
Maximum Threshold
Enter the maximum gray value to be considered for splitting. The algorithm calculates the
threshold for gray values from the minimum threshold value to the maximum threshold
value (the default is 255).
Step Size

Enter the step size by which the threshold increases from the minimum threshold to the maximum threshold. The value is either added to the threshold or multiplied by the threshold, according to the selection in the Stepping Type field. The algorithm recalculates a new best threshold each time the threshold is changed by application of the values in the Step Size and Stepping Type fields, until the maximum threshold is reached. Higher values for the step size tend to execute more quickly; smaller values tend to achieve a split with a larger contrast between bright and dark objects.
Stepping Type

Select whether the step size value is added to the current threshold or multiplied by it.

Class for Bright Objects

Create a class for image objects brighter than the threshold or select one from the drop-down list. Image objects are not classified if the value in the Execute Splitting field is No.
Class for Dark Objects

Create a class for image objects darker than the threshold or select one from the drop-down list. Image objects are not classified if the value in the Execute Splitting field is No.
3.3.3 Advanced Settings
Contrast Mode
Select the method the algorithm uses to calculate contrast between bright and dark objects.
The algorithm calculates possible borders for image objects, where a is the mean of the bright border pixels and b is the mean of the dark border pixels:

• Edge Ratio: (a − b) / (a + b)
• Edge Difference: a − b
• Object Difference: the difference between the mean of all bright pixels and the mean of all dark pixels
Execute Splitting
Select Yes to split objects with the best-detected threshold. Select No to simply compute
the threshold without splitting.
Variable for Best Threshold
Enter a scene variable to store the computed pixel value threshold that maximizes the
contrast.
Variable for Best Contrast
Enter a scene variable to store the computed contrast between bright and dark objects
when splitting with the best threshold. The computed value is different for each contrast
mode.
Minimum Relative Area Dark
Enter the minimum relative dark area. Segmentation into dark and bright objects only
occurs if the relative dark area is higher than the value entered. Only thresholds that
lead to a relative dark area larger than the value entered are considered as best thresholds.
Setting this value to a number greater than 0 may increase speed of execution.
Minimum Relative Area Bright
Enter the minimum relative bright area. Only thresholds that lead to a relative bright area
larger than the value entered are considered as best thresholds. Setting this value to a
number greater than 0 may increase speed of execution.
Minimum Contrast
Enter the minimum contrast value threshold. Segmentation into dark and bright objects
only occurs if a contrast higher than the value entered can be achieved.
Minimum Object Size

Enter the minimum object size in pixels that can result from segmentation. Only larger objects are segmented; smaller objects are merged with neighbors randomly. (The default value of 1 effectively deactivates this option.)
3.4 Multiresolution Segmentation

Figure 3.4. Result of multiresolution segmentation with scale 10, shape 0.1 and compactness 0.5
The multiresolution segmentation algorithm consecutively merges pixels or existing image objects. It is thus a bottom-up segmentation algorithm, based on a pairwise region merging technique. Multiresolution segmentation is an optimization procedure which, for a given number of image objects, minimizes the average heterogeneity and maximizes their respective homogeneity.
The segmentation procedure works according to the following rules, representing a mutual-best-fitting approach:

1. The segmentation procedure starts with single image objects of one pixel each and repeatedly merges them in pairs, in several loops, into larger units, as long as an upper threshold of homogeneity is not exceeded locally. This homogeneity criterion is defined as a combination of spectral homogeneity and shape homogeneity. You can influence this calculation by modifying the scale parameter: higher values for the scale parameter result in larger image objects, smaller values in smaller image objects.
2. As the first step of the procedure, the seed looks for its best-fitting neighbor for a potential merger.
3. If the best fitting is not mutual, the best candidate image object becomes the new seed image object and finds its best-fitting partner.
4. When the best fitting is mutual, the image objects are merged.
5. In each loop, every image object in the image object level is handled once.
6. The loops continue until no further merger is possible.
Figure 3.5. Each image object uses the homogeneity criterion to determine the best neighbor to merge with

Figure 3.6. If the first image object's best neighbor (red) does not recognize the first image object (gray) as its best neighbor, the algorithm moves on (red arrow), with the second image object finding its best neighbor

Figure 3.7. This branch-to-branch hopping repeats until mutual best-fitting partners are found
The procedure continues with the next image object's best neighbor. The procedure iterates until no further image object mergers can be realized without violating the maximum allowed homogeneity of an image object.

With any given average size of image objects, multiresolution segmentation yields good abstraction and shaping in any application area. However, it has higher memory requirements and significantly slower performance than some other segmentation techniques, and is therefore not always the best choice.
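A highly simplified Python sketch of the mutual-best-fitting loop follows; fit stands in for the homogeneity cost of a merge (lower is better), and the object representation is hypothetical. Real multiresolution segmentation combines spectral and shape heterogeneity in this cost.

    def mutual_best_fit(seed, neighbors_of, fit):
        """Hop from seed to its best-fitting neighbor until a mutual pair is found."""
        current, visited = seed, set()
        while current not in visited:
            visited.add(current)
            candidate = min(neighbors_of(current), key=lambda u: fit(current, u))
            if min(neighbors_of(candidate), key=lambda u: fit(candidate, u)) == current:
                return current, candidate        # mutual best fit: merge these two
            current = candidate                  # branch-to-branch hopping
        return None                              # cycle without a mutual pair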
Figure 3.8. If the homogeneity of the new image object does not exceed the scale parameter, the merge is performed
3.4.1 Supported Domains
3.4.2 Level Settings
Level Name

The Level Name field lets you define the name for the new image object level. It is only available if a new image object level will be created by the algorithm. To create new image object levels, either use the domain Pixel Level in the process dialog or set the Level Usage parameter to Create Above or Create Below.
Overwrite Existing Level
This parameter is only available when Pixel Level is selected. It allows you to automatically delete an existing image level above the pixel level and replace it with a new level
created by the segmentation.
Level Usage

Select one of the available modes from the drop-down list. The algorithm is applied according to the mode, based on the image object level specified by the domain. This parameter is not visible if the pixel level is selected as the domain in the Edit Process dialog box.

• Use Current: applies multiresolution segmentation to the existing image object level. Objects can be merged and split depending on the algorithm settings
• Use Current (Merge Only): applies multiresolution segmentation to the existing image object level. Objects can only be merged. Usually this mode is used together with stepwise increases of the scale parameter
• Create Above: creates a copy of the image object level as superobjects
• Create Below: creates a copy of the image object level as sub-objects
3.4.3 Segmentation Settings
Image Layer Weights

Image layers can be weighted depending on their importance or suitability for the segmentation result. The higher the weight assigned to an image layer, the more weight will be given to that layer's pixel information during the segmentation process. Consequently, image layers that do not contain the information intended for representation by the image objects should be given little or no weight. For example, when segmenting a geographical LANDSAT scene using multiresolution segmentation or spectral difference segmentation, the segmentation weight for the spatially coarser thermal layer should be set to 0 in order to avoid deterioration of the segmentation result by the blurred transition between image objects of this layer.
• In the Algorithm Parameters area, expand the Image Layer Weights list and set the weights of the image layers to be considered by the algorithm. Select an image layer and edit the respective weight value
• Alternatively, you can insert the weights in the value field of the Image Layer Weights parameter, separated by commas, e.g. 0,0,1
• You can also use a variable as a layer weight
Compatibility Mode

Thematic Layer Usage

Specify the thematic layers that are to be candidates for segmentation. Each thematic layer used for segmentation will lead to additional splitting of image objects while enabling consistent access to its thematic information. You can segment an image using more than one thematic layer; the results are image objects representing proper intersections between the thematic layers.
Scale Parameter

The scale parameter is an abstract term that determines the maximum allowed heterogeneity for the resulting image objects. For heterogeneous data, the resulting objects for a given scale parameter will be smaller than in more homogeneous data. By modifying the value in the Scale Parameter field you can vary the size of image objects.
TIP: Always produce image objects of the biggest possible scale that still distinguishes different image regions (as large as possible and as fine as necessary). There is some tolerance concerning the scale of the image objects representing an area of a consistent classification, due to the equalization achieved by the classification. The separation of different regions is more important than the scale of the image objects.
3.4.4 Composition of the Homogeneity Criterion

The object homogeneity to which the scale parameter refers is defined in the Composition of Homogeneity Criterion field. In this context, homogeneity is used as a synonym for minimized heterogeneity. Internally, three criteria are computed: color, smoothness, and compactness. These three criteria for heterogeneity may be applied in many ways, although in most cases the color criterion is the most important for creating meaningful objects. However, a certain degree of shape homogeneity often improves the quality of object extraction, because the compactness of spatial objects is associated with the concept of image shape. The shape criteria are therefore especially helpful in avoiding highly fractured image object results in strongly textured data (for example, radar data).
Shape

The value of the Shape field modifies the relationship between the shape and color criteria: by modifying the shape criterion,¹ you define the color criterion (color = 1 − shape). In effect, by decreasing the value assigned to the Shape field, you define to which percentage the spectral values of the image layers contribute to the entire homogeneity criterion.

1. A high value for the shape criterion operates at the cost of spectral homogeneity. However, spectral information is, in the end, the primary information contained in image data. Using the shape criterion too intensively may therefore reduce the quality of segmentation results.
This is weighted against the percentage of shape homogeneity, which is defined in the Shape field. Changing the weight for the shape criterion to 1 would result in objects optimized only for spatial homogeneity. However, the shape criterion cannot have a value larger than 0.9, because without the spectral information of the image the resulting objects would not be related to the spectral information at all. The slider bar adjusts the amount of color and shape to be used for the segmentation.

In addition to spectral information, the object homogeneity is optimized with regard to the object shape, as defined by the Compactness parameter.
Compactness

The compactness criterion is used to optimize image objects with regard to compactness. This criterion should be used when different image objects that are rather compact are separated from non-compact objects only by a relatively weak spectral contrast. Use the slider bar to adjust the degree of compactness to be used for the segmentation.
3.5 Spectral Difference Segmentation

The Spectral Difference Segmentation algorithm merges neighboring image objects according to their mean image layer intensity values. Neighboring image objects are merged if the difference between their mean layer intensities is below the value given by the maximum spectral difference.

This algorithm is designed to refine existing segmentation results by merging spectrally similar image objects produced by previous segmentations. It cannot be used to create new image object levels based on the pixel level domain.
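A minimal sketch of the merge criterion; the dictionary-based object representation and edge list are assumptions made for illustration.

    def merge_candidates(objects, edges, max_spectral_difference):
        """Return neighbor pairs whose mean-intensity difference is below the limit.

        objects: dict mapping object id -> mean layer intensity
        edges: list of (u, v) pairs of neighboring object ids
        """
        return [(u, v) for (u, v) in edges
                if abs(objects[u] - objects[v]) < max_spectral_difference]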
3.5.1 Supported Domains

3.5.2 Level Settings
Level Usage

Define the image object level to be used: the current one, or a new one to be created above the current level.

Level Name

Define the name of the image object level to be created. Not available if you use the current image object level.
3.5.3 Segmentation Settings

Maximum Spectral Difference

Define the maximum spectral difference, in gray values, between image objects that is used during the segmentation. If the difference is below this value, neighboring objects are merged.
Image Layer Weights

Enter weighting values; the higher the weight assigned to an image layer, the more weight will be given to that layer's pixel information during the segmentation process. You can also use a variable as a layer weight.
Thematic Layer Usage

Specify the thematic layers that are to be considered in addition for segmentation. Each thematic layer used for segmentation will lead to additional splitting of image objects, while enabling consistent access to its thematic information. You can segment an image using more than one thematic layer; the results are image objects representing proper intersections between the thematic layers.
3.6 Multi-Threshold Segmentation

Multi-Threshold Segmentation splits the domain based on pixel values. It creates image objects and classifies them based on user-created thresholds. It can also be used to create unclassified image objects based on pixel value thresholds.
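A small Python sketch of the threshold binning, using illustrative class names; not the software's own implementation.

    import numpy as np

    def multi_threshold(layer, thresholds, classes):
        """Assign each pixel the class of its threshold bin.

        thresholds must be ascending; len(classes) == len(thresholds) + 1.
        """
        bins = np.digitize(layer, thresholds)
        return np.asarray(classes, dtype=object)[bins]

    labels = multi_threshold(np.array([[10, 120], [200, 40]]),
                             thresholds=[50, 150],
                             classes=["dark", "medium", "bright"])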
3.6.1 Supported Domains
Pixel Level; Image Object Level; Current Image Object; Neighbor Image Object; Super
Object; Sub Objects; Linked Objects
3.6.2 Level Settings
Level Name
Available only if the domain is pixel level. Select an existing level or enter a name.
Overwrite Existing Level
This parameter is only available when Pixel Level is selected. It allows you to automatically delete an existing image level above the pixel level and replace it with a new level
created by the segmentation.
Image Layer

Ensure Connected Objects

Select the type of image objects that are created (the domain must be pixel level):

• If Yes, all image objects are marked as connected
• If No, image objects can be marked as disconnected
Merge Image Objects First

This parameter specifies the behaviour if the Ensure Connected Objects parameter is set to Yes. It is only available if you select a domain that represents existing image objects.

• If Yes, all image objects with the same classification are first merged into one disconnected object and then separated again into connected image objects. By using this option you might lose some borders between image objects that existed before executing the algorithm
• If No, only disconnected image objects are further separated into connected image objects where necessary
Min Object Size
Thresholds

You can set multiple thresholds to classify image objects based on pixel values. An additional threshold field is added as each threshold is created. Multiple thresholds must be in ascending order.

Class 1

Pixels below the threshold will be classified as the class selected or entered.

Threshold 1

Enter a pixel value below which pixels will be classified as defined in the Class field. Alternatively, you can select a feature or a variable. To create a new variable, type a name for the new variable and click OK to open the Create Variable dialog box.
3.7 Contrast Filter Segmentation

Pixel filters detect potential objects by contrast and gradient and create suitable object primitives. An integrated reshaping operation modifies the shape of image objects to help form coherent and compact image objects. The resulting pixel classification is stored in an internal thematic layer. Each pixel is classified as one of the following classes: No Object, Object in First Layer, Object in Second Layer, Object in Both Layers and Ignored by Threshold. Finally, a chessboard segmentation is used to convert this thematic layer into an image object level.

As a first step, contrast filter segmentation improves overall image analysis performance substantially.
3.7.1 Supported Domains
Pixel Level
3.7.2 Chessboard Settings

Object Size

Level Name

Overwrite Existing Level

This parameter is only available when Pixel Level is selected. It allows you to automatically delete an existing image level above the pixel level and replace it with a new level created by the segmentation.
Thematic Layer
3.7.3 Input Parameters

These parameters are identical for the first and second layers.

Layer

Choose the image layers to analyze. You can disable the second image layer by selecting No Layer. If you select No Layer, the other parameters in the second layer group are inactive.
Scale 1–4

Several scales can be defined and analyzed at the same time. If at least one scale tests positive, the pixels are classified as image objects. The default value is no scale, indicated by a scale value of 0. To define a scale, edit the scale value.

The scale value n defines an outer square (gray) centered on the current pixel, with a side length derived from |n|. The mean value of the pixels inside this outer square is compared with the mean value of the pixels inside a smaller inner square (red), also centered on the current pixel; for |n| ≤ 3 the inner value is just the value of the pixel itself.

Select a positive scale value to find objects that are brighter than their surroundings at the given scale; select a negative scale value to find darker objects.
Gradient
Use additional minimum gradient criteria for objects. Using gradients can increase the
computing time for the algorithm. Set this parameter to 0 to disable the gradient criterion.
Lower Threshold
Pixels with layer intensity below this threshold are assigned to the Ignored by Threshold
class.
Upper Threshold
Pixels with layer intensity above this threshold are assigned to the Ignored by Threshold
class.
3.7.4 Shape Criteria

If you expect coherent and compact image objects, the shape criteria parameters provide an integrated reshaping operation that modifies the shape of image objects by cutting off protruding parts and filling indentations and hollows.

Shape Criteria Value

Protruding parts of image objects are declassified if a direct line crossing the protruding part is smaller than or equal to the Shape Criteria value. Indentations and hollows of image objects are classified as part of the image object if a direct line crossing the hollow is smaller than or equal to the Shape Criteria value. If you do not want any reshaping, set the value to 0.
Working on Class

Select a class of image objects for reshaping. The pixel classification can be transferred to the image object level using the classification parameters.
3.7.5 Classification Parameters

Select Yes or No to use or disable the classification parameters. If you select No, the other parameters in the group are inactive.
No Objects

Pixels failing to meet the defined filter criteria are assigned the selected class.
Ignored by Threshold

Pixels with a layer intensity below or above the threshold values are assigned the selected class.

Object in First Layer

Pixels that match the filter criteria in the first layer, but not the second layer, are assigned the selected class.

Objects in Both Layers

Pixels that match the filter criteria in both layers are assigned the selected class.

Objects in Second Layer

Pixels that match the scale value in the second layer, but not the first layer, are assigned the selected class.
4 Basic Classification Algorithms

Classification algorithms analyze image objects according to defined criteria and assign them to the class that best meets these criteria.
4.1 Assign Class

The Assign Class algorithm assigns all objects of the domain to the class specified by the Use Class parameter. The membership value for the assigned class is set to 1 for all objects, independent of the class description. The second- and third-best classification results are set to 0.
4.1.1 Use Class
Select the class for the assignment from the drop-down list. You can also create a new
class for the assignment.
4.2 Classification

The Classification algorithm evaluates the membership value of an image object against a list of selected classes. The classification result of the image object is updated according to the class evaluation result. The three best classes are stored in the image object classification result. Classes without a class description are assumed to have a membership value of 1.
4.2.1 Active Classes

Erase Old Classification, if There Is No New Classification

• If you select Yes and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is deleted
• If you select No and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is kept
Use Class Description

• If Yes is selected, class descriptions are evaluated for all classes. The image object is assigned to the class with the highest membership value
• If No is selected, class descriptions are ignored. This option delivers valuable results only if Active Classes contains exactly one class

If you do not use the class description, we recommend using the Assign Class algorithm instead.
4.3 Hierarchical Classification

4.3.1 Active Classes

4.3.2 Use Class-Related Features

Enable to evaluate all class-related features in the class descriptions of the selected classes. If disabled, these features are ignored.
4.4 Remove Classification

4.4.1 Classes
5 Advanced Classification Algorithms

Advanced classification algorithms classify image objects that fulfill special criteria, such as being enclosed by another image object, or being the smallest or largest object in a set.
5.1 Find Domain Extrema

Find Domain Extrema classifies image objects with the smallest or largest feature values within the domain, according to an image object feature.

Figure 5.1. Result of Find Domain Extrema, with Extrema Type set to Maximum and Feature set to Area
5.1.1 Supported Domains
Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub
Objects; Linked Objects; Image Object List
5.1.2 Extrema Settings

Extrema Type

Choose Minimum to classify image objects with the smallest feature values and Maximum for image objects with the largest feature values.
Feature
Select the feature used to determine the extreme value.
Accept Equal Extrema
This parameter defines the behavior of the algorithm if more than one image object fulfills the extreme condition. If enabled, all image objects will be classified; if not, none of the image objects will be classified.
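A compact sketch of this behavior (feature is assumed to be a callable returning an object's feature value; all names are illustrative, not an eCognition API):

    # Classify the object(s) with the extreme feature value in the domain.
    def find_domain_extrema(objects, feature, extrema_type="Maximum",
                            accept_equal_extrema=True):
        values = [feature(o) for o in objects]
        target = max(values) if extrema_type == "Maximum" else min(values)
        hits = [o for o, v in zip(objects, values) if v == target]
        if len(hits) > 1 and not accept_equal_extrema:
            return []    # several equal extrema and equals are not accepted
        return hits      # these image objects receive the classification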
4.02 Compatibility Mode
Select Yes from the Value field to enable compatibility with older software versions (version 4.02 and older). This parameter will be removed in future versions.
5.1.3 Classification Settings
Specify the classification that will be applied to all image objects fulfilling the extreme condition. At least one class needs to be selected in the active class list for this algorithm.
Active Classes
Choose the list of active classes for the classification.
Erase Old Classification If There Is No New Classification
If you select Yes and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is deleted.
If you select No and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is kept.
Use Class Description
If Yes is selected, class descriptions are evaluated for all classes. The image object is assigned to the class with the highest membership value.
If No is selected, class descriptions are ignored. This option delivers valuable results only if Active Classes contains exactly one class.
If you do not use the class description, we recommend you use the Assign Class algorithm
instead.
5.2 Find Local Extrema
Find Local Extrema classifies image objects that fulfill a local extrema condition, according to image object features within a search domain in their neighborhoods. Image objects with either the smallest or the largest feature value within a specific neighborhood will be classified according to the classification settings.
5.2.1 Supported Domains
Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
5.2.2 Search Settings
The search settings let you specify a search domain for the neighborhood around the
image object.
Class Filter
Choose the classes to be searched. Image objects will be part of the search domain if they are classified with one of the classes selected in the class filter. Always add the class selected for the classification to the search class filter. Otherwise cascades of incorrect extrema resulting from the reclassification during the execution of the algorithm may appear.
Search Range
Define the search range in pixels. All image objects with a distance below the given search range will be part of the search domain. Use the drop-down arrows to select zero or positive numbers.
Connected
Enable to ensure that all image objects in the search domain are connected with the analyzed image object via other objects in the search range.
Compatibility Mode
Select Yes from the Value field to enable compatibility with older software versions (version 3.5 and older). This parameter will be removed in future versions.
5.2.3 Conditions
Extrema Type
Choose Minimum for classifying image objects with the smallest feature values and Maximum for classifying image objects with the largest feature values.
Feature
Select the feature used to evaluate the extrema condition.
Accept Equal Extrema
This parameter defines the behavior of the algorithm if more than one image object fulfills the extrema condition:
Do Not Accept Equal Extrema means no image objects will be classified
Accept Equal Extrema means all image objects will be classified
Accept First Equal Extrema will result in the classification of the first of the image objects.
5.2.4 Classification Settings
Specify the classification that will be applied to all image objects fulfilling the extreme condition. At least one class needs to be selected in the active class list for this algorithm.
Active Classes
Choose the list of active classes for the classification.
Erase Old Classification If There Is No New Classification
If you select Yes and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is deleted.
If you select No and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is kept.
Use Class Description
If Yes is selected, class descriptions are evaluated for all classes. The image object is assigned to the class with the highest membership value.
If No is selected, class descriptions are ignored. This option delivers valuable results only if Active Classes contains exactly one class.
If you do not use the class description, we recommend you use the Assign Class algorithm
instead.
5.3 Find Enclosed by Class
Find and classify image objects that are completely enclosed by image objects belonging to certain classes.
If an image object is located at the border of the image, it will not be found and classified by Find Enclosed by Class. The shared part of the outline with the image border will not be recognized as the enclosing border.
Figure 5.2. Left: Input of Find Enclosed by Class: Domain: Image Object Level, Class Filter:
N0, N1. Enclosing class: N2. Right: Result of Find Enclosed by Class: Enclosed objects get
classified with the class enclosed. Note that the objects at the upper image border are not
classified as enclosed
5.3.1 Supported Domains
Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
5.3.2 Search Settings
Enclosing Classes
Choose the classes of image objects that can enclose the image objects to be found.
Compatibility Mode
Select Yes from the Value field to enable compatibility with older software versions (versions 3.5 and 4.0). This parameter will be removed in future versions.
5.3.3 Classification Settings
Choose the classes that should be used to classify enclosed image objects.
Active Classes
Choose the list of active classes for the classification.
Erase Old Classification If There Is No New Classification
If you select Yes and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is deleted.
If you select No and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is kept.
Use Class Description
If Yes is selected, class descriptions are evaluated for all classes. The image object is assigned to the class with the highest membership value.
If No is selected, class descriptions are ignored. This option delivers valuable results only if Active Classes contains exactly one class.
If you do not use the class description, we recommend you use the Assign Class algorithm
instead.
5.4 Find Enclosed by Image Object
Find and classify image objects that are completely enclosed by image objects from the domain. Enclosed image objects located at the image border will be found and classified by the Find Enclosed by Image Object algorithm. The shared part of the outline with the image border will be recognized as the enclosing border.
In figure 5.3, the inputs for the left-hand image in Find Enclosed by Image Object are domain: image object level, class filter: N2. On the right-hand image, using the same algorithm, enclosed objects are classified with the class enclosed. Note that the objects at the upper image border are classified as enclosed.
5.4.1 Classification Settings
Choose the class that will be used to classify enclosed image objects.
Active Classes
Choose the list of active classes for the classification.
Erase Old Classification If There Is No New Classification
If you select Yes and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is deleted.
If you select No and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is kept.
Use Class Description
If Yes is selected, class descriptions are evaluated for all classes. The image object is assigned to the class with the highest membership value.
If No is selected, class descriptions are ignored. This option delivers valuable results only if Active Classes contains exactly one class.
If you do not use the class description, we recommend you use the Assign Class algorithm
instead.
5.5 Connector
Classify the image objects that make up the shortest connection between the current image object and another image object that meets the conditions described by the connection
settings.
The process starts to search from the current image object along objects that meet the conditions as specified by Connect Via and Super Object Mode Via, until it reaches an image object that meets the conditions specified by Connect To and Super Object Mode To. All image objects that are part of the resulting connection are assigned to the selected class.
You can define the maximum search range under Search Range In Pixels.
5.5.1 Connector Settings
Connect Via
Choose the classes whose image objects you want to be searched when determining the connection. Image objects assigned to other classes will not be taken into account.
Super Object Mode Via
Limit the number of image objects that are taken into account for the connection by specifying a superordinate object. Choose one of the following options:
Don't Care: Use any image object
Different Super Object: Use only image objects with a different superobject than the seed object
Same Super Object: Use only image objects with the same superobject as the seed object
Connect To
Choose the classes whose image objects you want to be searched when determining a
destination for the connection. Image objects assigned to other classes will not be taken
into account.
Super Object Mode To
Limit the number of image objects that are taken into account for the destination of the connection by specifying a superordinate object. Choose one of the following options:
Don't Care: Use any image object
Different Super Object: Use only image objects with a different superobject than the seed object
Same Super Object: Use only image objects with the same superobject as the seed object.
Search Range
Define the maximum search range in pixels.
5.5.2 Classification Settings
Choose the class that should be used to classify the connecting objects.
Erase Old Classification If There Is No New Classification
If you select Yes and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is deleted.
If you select No and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is kept.
Use Class Description
If Yes is selected, class descriptions are evaluated for all classes. The image object is assigned to the class with the highest membership value.
If No is selected, class descriptions are ignored. This option delivers valuable results only if Active Classes contains exactly one class.
If you do not use the class description, we recommend you use the Assign Class algorithm
instead.
5.6
5.6.1 Supported Domains
Execute; Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
5.6.2 Algorithm Parameters
Use Class
Create 3D objects
Slices Up
Describes how many slices up the object is copied. Use -1 for no limit.
Slices Down
Describes how many slices down the object is copied. Use -1 for no limit.
5.7
Generate membership functions for classes by looking for the best separating features based upon sample training.
5.7.1 Supported Domains
Execute; Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
5.7.2
For Target Samples
Select a class or create a new class that provides samples for the target class (the class to be trained).
For Rest Samples
Select a class or create a new class that provides samples for the rest of the domain.
5.7.3
For Target Samples Into
Select a class or create a new class that receives membership functions after optimization for target. If set to unclassified, the target sample class is used.
For Rest Samples Into
Select a class or create a new class that receives inverted similarity membership functions after optimization for target. If set to unclassified, the rest sample class is used.
When inserting new membership functions into the active class, choose whether to clear all existing membership functions or only those from the input feature space:
No, Only Clear if Associated with Input Feature Space: Clear membership functions only from the input feature space when inserting new membership functions into the active class.
Yes, Always Clear All Membership Functions: Clear all membership functions when inserting new membership functions into the active class.
5.7.4
Input Feature Space
Input set of descriptors from which a subset will be chosen. Click the ellipsis button to open the Select Multiple Features dialog box. The Ensure Selected Features are in Standard Nearest Neighbor Feature Space checkbox is selected by default.
Minimum Number of Features
5.7.5
Enter a number greater than 0 to decrease weighting with increasing distance. Enter 0 to
weight all distances equally. The default is 2.
Simplification Factor Mode
The simplification factor is based on the ratio cur dimension / total dim.
5.7.6
False Positives Object List
Select an existing image object list variable, to which image objects associated with false positive samples are added. If a value is not set, no output occurs. Please note that samples stored in the workspace have no associated objects, so they cannot be included in the list.
False Negatives Object List
Select an existing image object list variable, to which image objects associated with false negative samples are added. If a value is not set, no output occurs. Please note that samples stored in the workspace have no associated objects, so they cannot be included in the list.
Separated Positives Object List (For Selected Feature Space)
This list contains image objects associated with positive samples, which are separated
from other positive samples by false positives, using the generated feature space.
Separated Positives Object List (From For Target Samples Into Class)
This list contains image objects associated with positive samples, which are separated
from other positive samples by false positives, using a feature space from For Target
Samples Into.
Show Info in Message Console
5.8 Classifier
The Classifier algorithm lets you apply machine-learning functions to your analysis in a two-step process: first, a classifier is trained using the classified objects of the domain as training samples. The trained classifier is stored as a string variable in the configuration settings. In a second step, the trained classifier from the first step is applied to the domain, classifying the image objects according to the trained parameters.
For a more detailed description and an example project of selected classifiers, please refer to our user community: http://community.ecognition.com/home/CART%20-%20SVM%20Classifier%20Example.zip/view
The Classifier algorithm can be applied either object-based or image layer-based (pixel-based).
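The two-step train/apply workflow can be illustrated with OpenCV's machine-learning module, which the footnote at the end of this chapter references; the feature values and class IDs below are arbitrary placeholders:

    import numpy as np
    import cv2

    # Step 1 - train: classified objects of the domain provide the samples.
    features = np.float32([[0.2, 11.0], [0.3, 9.5], [0.8, 2.1], [0.9, 1.8]])
    labels = np.int32([0, 0, 1, 1])    # class IDs of the training objects

    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_RBF)
    svm.train(features, cv2.ml.ROW_SAMPLE, labels)

    # Step 2 - apply: the stored configuration classifies unseen objects.
    _, predictions = svm.predict(np.float32([[0.25, 10.0], [0.85, 2.0]]))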
5.8.1 Supported Domains
Execute; Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
5.8.2 General Parameters
Operation
5.8.3 Configuration
You can enter or select a string variable to store or load the classifier configuration.
Use Samples Only
Choose Yes or No depending on whether you want to use sample image objects only.
5.8.4
5.8.5
Select a classifier to train from Decision Tree, Random Trees, Bayes, KNN or SVM. The following options are displayed when selecting the Algorithm classifier - Parameter Operation Train - Type:
KNN
Features: Select the features for classification.
Normalize: Change to Yes to normalize the selected features (default No).
Random Trees
Depth: Maximum tree depth (default 0).
Min sample count: Minimum number of samples per node (default 0).
Use surrogates: Use surrogates for missing data. If Yes, surrogate splits will be built to be able to work with missing data.
Max categories: Cluster possible values of a categorical variable into K < max_categories clusters (default 16).
Active variables: The size of the randomly selected subset of features at each tree node that is used to find the best split(s) (default 0; if set to 0, the size will be set to the square root of the total number of features).
Max tree number: The maximum number of trees (default 50).
Forest accuracy: Sufficient accuracy of the trained forest in % (default 0.01).
Termination criteria type: The type of learning termination criteria; decides whether training is stopped by the max number of trees, the forest accuracy, or both (default both).
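These parameters map closely onto OpenCV's ml module (referenced in the footnote at the end of this chapter). A sketch of the corresponding configuration, with arbitrary example data:

    import numpy as np
    import cv2

    rtrees = cv2.ml.RTrees_create()
    rtrees.setMaxDepth(10)          # Depth
    rtrees.setMinSampleCount(2)     # Min sample count
    rtrees.setUseSurrogates(False)  # Use surrogates
    rtrees.setMaxCategories(16)     # Max categories
    rtrees.setActiveVarCount(0)     # Active variables (0 = sqrt of feature count)
    # Termination criteria type: max number of trees, forest accuracy, or both
    rtrees.setTermCriteria((cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,
                            50,     # Max tree number
                            0.01))  # Forest accuracy

    samples = np.float32(np.random.rand(20, 3))       # placeholder features
    responses = np.int32(np.random.randint(0, 2, 20)) # placeholder class IDs
    rtrees.train(samples, cv2.ml.ROW_SAMPLE, responses)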
5.8.6 Configuration
Select the string variable to load and apply your classifier configuration.
5.8.7
5.8.8 Configuration
5.8.9
Type
Select from:
Bayes
KNN
SVM
The following options are displayed when selecting the Algorithm classifier - Parameter Operation Train - Type:
Decision Tree
Operation. Select from:
Query Parameter:
  Depth
  Min Sample Count
  Cross validation folds
  Max categories
Importance Feature Array: Creates an array that contains importance features based on a trained classifier.
Importance Value Array: Creates a double array that contains only importance values within the importance feature array.
Export Importance Table: Exports all features of the input feature space with corresponding importance values.
Query Node:
  Id: Specify the tree node to query properties from.
  Is Leaf: Specify the variable to receive a property.
  Assigned Class: Specify the class variable to receive a property.
  Left Child Node: Specify the variable to receive a property.
  Right Child Node: Specify the variable to receive a property.
  Split Feature Index: Specify the variable to receive a property.
  Split Threshold: Specify the variable to receive a property.
1. docs.opencv.org/modules/ml/doc/ml.html
6 Template Matching
High quality templates are critical for achieving optimal results with the Template Matching algorithm (p 162). To facilitate the generation of such templates, eCognition offers a sophisticated template editor. In this editor the user can conveniently select and adjust sample representations of the template in the image. A fully automatic algorithm is then applied to extract generic templates that capture what the samples have in common. It is further possible to test the generated templates on a subregion, giving the user a quick overview of the number and quality of targets detected. Targets that the user identifies as correct are automatically added as samples, thus increasing the sample number and allowing for iterative refinement of template quality. To open the dialog select View > Windows > Template Editor. The Template Matching algorithm (p 162) can be applied in the Process Tree window.
See also the template matching demo projects provided in the eCognition community: http://www.ecognition.com/community.
6.1
To create a new template open the Template Editor and click the folder button. You can create several templates for a given project if there are different kinds of targets you would like to find on a given image. The generated templates are stored with your project when you save it. Note that activating the Select sample mode without an existing template will open the Create New Template dialog automatically.
To select the root folder to store the templates that you generate, select the Browse button. The default location is the root folder of your workspace directory; if you work with projects, your templates are stored in the image folder by default. It is generally not necessary to change this folder location later on, but if you do, you may need to regenerate your templates using the Generate Templates tab.
To delete a template including its samples, select the respective button.
To select the zoom size of selected samples, templates or targets, use the Thumbnail zoom in the respective tab.
6.2 Select Samples
Use the Select samples tab to select examples of your template in the image. Selected samples are stored within the project when you save your project.
To select samples you must activate the Select Samples button. The mouse pointer changes to precision selection mode and the samples can be inserted by a single click into the image. The extent of the sample used for template calculation can be inserted in the field Size (in pixels). A red rectangle of the selected sample size is drawn around the sample's center point. You can also adapt the sample size after the selection of samples. The field Context lets you adjust the size of the displayed image context of your thumbnail preview for better visual evaluation. The context is not considered in the template calculation. The image layer mixing of the thumbnail preview corresponds with the settings of the main view.
To delete samples right click and select Delete Sample(s) either in the main view (single
selection only) or in the thumbnail preview window of the Template Editor (multiple
selection possible).
To adjust the exact position of the sample crosshair for selected samples:
1. In the Template Editor dialog activate the button Adjust Center and click on a
thumbnail to obtain an enlarged visualization of the sample. Now you can shift the
crosshair.
2. In the main view move the mouse over the crosshair so that it changes from red to
black. (The select samples button must be active.)
To obtain convincing results you should try to capture the variability that the template displays in actual images. If variability cannot be addressed in a single image, the user is advised to create a multi-scene project and collect samples from different scenes. Note that it is important for template generation that samples are all centered on the same corresponding location.
6.3 Generate Templates
This tab allows you to generate template images. You can select the layer for which the template is generated, and a group size parameter. When the group size is set to 0 (the default value), the generated template reflects the mean of all samples. For group size > 0 the algorithm attempts to find subgroups in the samples and creates a separate template for each subgroup. To generate templates click on the Generate Template button. All templates generated are stored separately according to the image layer name selected and the group size chosen, e.g. \New Template\Layer2_Group5. (If you want to visualize the templates properly outside eCognition, please select an image viewer that supports 32-bit floating-point image files.)
The generated templates are also displayed in the thumbnail view of the Template Matching dialog. The average correlation coefficient between the template and all samples is displayed at the bottom of the window. For template groups, the maximum correlation between each sample and any of the templates is considered before averaging across samples.
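For group size 0, template generation and the reported average correlation can be sketched as follows (random arrays stand in for real sample patches; the subgroup-finding algorithm used for group size > 0 is not published, so it is omitted):

    import numpy as np

    def generate_template(samples):
        # Group size 0: the template is the mean of all sample patches.
        return np.mean(np.stack(samples), axis=0)

    def correlation(a, b):
        # Normalized cross-correlation coefficient of equally sized patches.
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float(np.mean(a * b))

    samples = [np.random.rand(21, 21) for _ in range(5)]  # placeholder patches
    template = generate_template(samples)
    avg_corr = np.mean([correlation(template, s) for s in samples])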
6.4 Test Template
In this tab you can test a given template on a subregion and identify the targets that are detected with a user-defined threshold. You can then assign the targets as correct or false to obtain an estimate of error rates.
6.4.1 Subset Selection
To select a subset for testing activate the Select Region button and then either draw a
rectangle in the main view or insert coordinates. To obtain meaningful results for the
Template Quality select a region that includes most of your samples (see Template Quality
on page 72).
6.4.2 Template Selection
Select the layer and group size used when generating the template.
6.4.3 Test
To start the template matching algorithm, select Execute Test. If Update template is activated, the current template is regenerated based on the current samples before the test is executed. This means that targets that have been classified as correct in a previous test run are reflected in the template as well.
6.4.4 Test Parameters
By changing the values for the Ground Truth tolerance (in pixels) you can define how accurately a found candidate must match a sample to be identified as correct. A higher value means more candidates will be found.
Threshold r for candidates. (Trimble recommends starting with the sample correlation value of the Generate Templates tab on page 68. If you insert low values the computation time increases, as more candidates will be considered.)
6.4.5 Review Targets
After executing, you can assign the test candidates as Correct, False or Not Sure using the respective buttons. The results are shown in the three small windows below: Unclassified, Correct and False. Targets that are classified as correct are automatically added to the samples (see Select Samples tab on page 67). The assignments are stored with the project after saving and do not have to be repeated in subsequent tests.
6.4.6 Template Quality
This section of the dialog provides information on the number of correct, false, unclassified and missed targets and the corresponding false alarm and missed target rates in percent. Note: the missed target rate is accurate only if you have identified all possible samples in your test region. (Otherwise, the software cannot know if targets were missed.)
6.5 Negative Templates
In this tab, you can create negative templates. Negative templates allow you to improve
your results by eliminating false positives.
Before creating a negative template, the user should optimize the actual positive (normal) template. The final test with the positive template should be done on a large subregion, using a relatively low threshold, so that many false positives are obtained. Also make sure that all unclassified items that do not reflect correct targets are indeed classified as false.
In the Negative Templates tab click on the Generate Templates button to generate a negative template or template group. The template is generated according to the same algorithm as the positive template, but instead of using the selected samples as a base, it
uses the false positives of the last test as a base. You can create negative templates with
different group sizes, but you cannot change the layer, as the layer always corresponds to
the layer of the current (positive) template. The negative templates are stored inside the
folder of the corresponding positive template.
The functionality is otherwise identical to the Generate Templates tab on page 68.
7 Variables Operation Algorithms
Variable operation algorithms are used to modify the values of variables. They provide
different methods for performing computations based on existing variables and image
object features, and for storing the results within variables.
7.1 Timer
The Timer algorithm adds to a variable the elapsed time of its sub-processes (with the exception of customized algorithms). It is useful for debugging and improving performance
in rule sets.
7.1.1 Supported Domains
Execute
7.1.2 Algorithm Parameters
Timer Variable
Select the variable to which the process execution time and all its subprocesses will be
added.
7.2
This algorithm calculates a random value and stores it in a variable. To initialize (seed) the randomization, set this variable to 0 (outside a loop).
7.2.1 Algorithm Parameters
Variable
Variable to store the computed random value. Please initialize this randomization algorithm by setting this variable to 0.
7.3 Update Variable
7.3.1 Supported Domains
Execute; Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List; Array
7.3.2 Algorithm Parameters
Variable Type
Object, scene, feature, class, level, image layer, thematic layer, or map variables must be selected as the variable type. Select the variable assignment, according to the variable type selected in the Variable Type field. To select a variable assignment, click in the field and do one of the following depending on the variable type:
For object variables, use the drop-down arrow to open the Select Single Feature dialog box and select a feature or create a new feature variable
For scene variables, use the drop-down arrow to open the Select Single Feature dialog box and select a feature or create a new feature variable
For feature variables, use the ellipsis button to open the Select Single Feature dialog box and select a feature or create a new feature variable
For class variables, use the drop-down arrow to select from existing classes or create a new class
For level variables, use the drop-down arrow to select from existing levels
For image layer variables, select the image layer you want to use for the update operation
For thematic layer variables, select the thematic layer you want to use for the update operation
For map variables, select the map name you want to use for the update operation
For array variables, select the array you want to use for the update operation.
Variable
Select an existing variable or enter a name to create a new one. If you have not already created a variable, the Create Variable dialog box will open. Only scene variables can be used in this field.
Operation
This field displays only for object and scene variables. Select one of the arithmetic operations:
=   Assign a value
+=  Increase by value
-=  Decrease by value
x=  Multiply by value
/=  Divide by value
Assignment
If Scene Variable or Object Variable is selected, you can assign either a value or a feature.
This setting enables or disables the remaining parameters. If Image Layer Variable or
Thematic Layer Variable is selected, you can assign either a layer or index.
Value
This field displays only for Scene and Object variables. If you have selected to assign by value, you may enter a value or a variable. To enter text use quotes. The numeric value of the field or the selected variable will be used for the update operation.
Feature
This field displays only for scene and object variables. If you have chosen to assign by feature you can select a single feature. The feature value of the current image object will be used for the update operation.
Comparison Unit
This field displays only for Scene and Object variables. If you have chosen to assign by feature, and the selected feature has units, then you may select the unit used by the process. If the feature has coordinates, select Coordinates to provide the position of the object within the original image or select Pixels to provide the position of the object within the currently used scene.
Arithmetic Expression
For all variables, you may assign an arithmetic expression to calculate a value.
Array Item
The Array Item parameter appears when by array item is selected in the Assignment
parameter. You may create a new array item or assign a Domain.
7.4 Compute Statistical Value
Perform a statistical operation on the feature distribution within a domain and store the result in a scene variable.
7.4.1 Supported Domains
Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
7.4.2 Active Classes
7.4.3 Algorithm Parameters
Variable
Select an existing variable or enter a name to create a new one. If you have not already created a variable, the Create Variable dialog box will open. Only scene variables can be used in this field.
Operation
Select one of the statistical operations listed in the table below, Available Operations for Compute Statistical Value Algorithm.
Parameter
If you have selected the quantile operation, specify the percentage threshold.
Feature
Select the feature that is used to perform the statistical operation. (This parameter is not
used if you select Number as your operation.)
Available Operations for Compute Statistical Value Algorithm:
Number: Return the number of objects in the selected domain
Sum: Return the sum of the feature values from all objects of the selected domain
Maximum: Return the maximum feature value from all objects of the selected domain
Minimum: Return the minimum feature value from all objects of the selected domain
Mean: Return the mean feature value of all objects from the selected domain
Standard Deviation: Return the standard deviation of the feature value from all objects of the selected domain
Median: Return the median feature value from all objects of the selected domain
Quantile: Return the feature value, where a specified percentage of objects from the selected domain have a smaller feature value
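The operations above can be reproduced with numpy as a sketch (the values list stands in for the feature values of the domain's objects; the Parameter argument is only used for Quantile):

    import numpy as np

    OPERATIONS = {
        "Number":             len,
        "Sum":                np.sum,
        "Maximum":            np.max,
        "Minimum":            np.min,
        "Mean":               np.mean,
        "Standard Deviation": np.std,
        "Median":             np.median,
    }

    def compute_statistical_value(values, operation, parameter=None):
        if operation == "Quantile":
            # parameter: percentage of objects with a smaller feature value
            return np.percentile(values, parameter)
        return OPERATIONS[operation](values)

    area_values = [120.0, 80.0, 450.0, 95.0]   # hypothetical feature values
    result = compute_statistical_value(area_values, "Quantile", 75)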
Unit
7.5 Compose Text
Assigns text parts to a string variable. The content of the variable will be replaced by the text assigned with the parameters Text Prefix and Text Suffix.
7.5.1 Supported Domains
Execute; Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
7.5.2 Algorithm Parameters
Result
Enter the name of the variable that will store the composed text.
Text Prefix
Edit the beginning of the composed text: the text suffix is attached afterwards. You can enter one of the following:
Text (between quotation marks)
A number value (without quotation marks)
A variable (without quotation marks)
Text Suffix
Edit the end of the composed text: it is attached after the text prefix. You can enter one of the following:
Text (between quotation marks)
A number value (without quotation marks)
A variable (without quotation marks)
Examples
For example, composing the text "class" with the value of a variable ID results in the composed text class1.
7.6 Update Region
Modify a region variable. You can resize or move the region defined by a region variable, or enter its coordinates. Alternatively, you can use the coordinates of an image object bounding box or the active pixel for a region update.
7.6.1 Supported Domains
Execute; Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Array
7.6.2 Algorithm Parameters
Variable
Select an existing region variable or create a new one. To create a new variable, type a
name for the new variable and click OK to open the Create Region Variable dialog box
for further settings.
Mode
Select the operation for modifying the region defined by the region variable (see table 7.1, Update Region Algorithm Modes).
Table 7.1. Update Region Algorithm Modes
Set min/max coordinates: Set each existing coordinate of the region by entering values for minimum and maximum coordinates, see below.
Set by origin/extent: Set each existing coordinate of the region by entering values for the Origin and the Extent, see below.
Move
Resize
From object: Use coordinates of the bounding box including all image objects in the domain.
From array: Select a user-defined array (for more information, please consult the user guide).
Check bounds: A region can be fully or partly outside the scene, for example after initializing a region variable from a main map to use it in a rescaled map. This mode makes sure that the region is fitted within the scene specified in the process domain. Examples:
If a region (100, 100), [9999, 9999] should be applied to a scene of (500, 500), select Check bounds to truncate the region to (100, 100), [500, 500]
If a region (100, 100), [9999, 9999] should be applied to a scene of (500, 500), select Check bounds to truncate the region to (0, 0), [500, 500].
Active Pixel
Assign: Set the region to the same values as the region specified in Assign Region.
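A sketch of the Check bounds clamping (one plausible reading of the mode: the origin is kept inside the scene and the extent is truncated at the scene border; not the exact internal implementation):

    def check_bounds(region_origin, region_extent, scene_extent):
        # Clamp the origin into the scene, then cut the extent at the border.
        origin = tuple(max(0, min(o, s))
                       for o, s in zip(region_origin, scene_extent))
        extent = tuple(min(e, s - o)
                       for o, e, s in zip(origin, region_extent, scene_extent))
        return origin, extent

    # A region (100, 100), [9999, 9999] applied to a 500 x 500 scene is cut
    # off at the scene border:
    origin, extent = check_bounds((100, 100), (9999, 9999), (500, 500))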
7.7 Update Image Object List
The Update Image Object List algorithm allows you to modify list variables, add objects, remove objects, and clear and compact lists.
7.7.1 Supported Domains
Execute; Image Object Level; Current Image Object; Neighbor Image Object; Superobject; Sub-objects; Linked Objects
7.7.2 Algorithm Parameters
Variable
Select an existing image object list variable or create a new one. The computed value will
be stored in the image object list variable.
Mode
7.8 Update Feature List
Update Feature List lets you create a list that contains features, which can later be exported (for example as project statistics). The feature list can contain any combination of features, all of which can be added and removed from a list. Features can also be added and removed between lists and entire lists can be deleted.
7.8.1 Supported Domains
Execute
7.9 Automatic Threshold
Figure 7.2. Result of multi-threshold segmentation with rule set as described above
The threshold can either be determined for an entire scene or for individual image objects. Depending on this setting, it is stored in a scene variable or in an image object variable. You can use the variables as a parameter for multi-threshold segmentation.
Using the automatic threshold algorithm in combination with multi-threshold segmentation allows you to create fully automated and adaptive image analysis algorithms based on threshold segmentation. A manual definition of fixed thresholds is not necessary.
The algorithm uses a combination of histogram-based methods and the homogeneity measurement of multi-resolution segmentation to calculate a threshold dividing the selected set of pixels into two subsets, so that heterogeneity is increased to a maximum.
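The exact criterion, which folds the multiresolution homogeneity measure into a histogram analysis, is internal to eCognition; the sketch below uses Otsu's between-class variance as an analogous histogram-based split criterion, returning both a threshold and a separability score similar in spirit to the Quality parameter described below:

    import numpy as np

    def automatic_threshold(pixels, bins=256):
        hist, edges = np.histogram(pixels, bins=bins)
        p = hist.astype(float) / hist.sum()
        centers = (edges[:-1] + edges[1:]) / 2
        best_t, best_sep = centers[0], -1.0
        for i in range(1, bins):
            w0, w1 = p[:i].sum(), p[i:].sum()
            if w0 == 0 or w1 == 0:
                continue
            m0 = (p[:i] * centers[:i]).sum() / w0
            m1 = (p[i:] * centers[i:]).sum() / w1
            sep = w0 * w1 * (m0 - m1) ** 2   # between-class variance
            if sep > best_sep:
                best_sep, best_t = sep, centers[i]
        return best_t, best_sep   # threshold and a quality-like measure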
7.9.1 Supported Domains
Pixel Level; Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
7.9.2 Algorithm Parameters
Image Layer
Select the image layer you want to be used for automatic threshold determination.
Value Range
Define the value range that is taken into account for automatic threshold determination.
Entire Value Range: the complete threshold value range is taken into account.
Restricted Value Range: the threshold values within the specified interval (Min. Value to Max. Value) are taken into account.
Min. Value
If the value range parameter is set to Restricted Value Range, define a minimum value for the automatically determined threshold.
Max. Value
If the value range parameter is set to Restricted Value Range, define a maximum value for the automatically determined threshold.
Threshold
Select the variable where the threshold value is to be saved. You can either select a numeric scene variable or a numeric image object variable (type Double). Saving threshold
values in scene variables is most useful in combination with the Pixel Level domain. Saving threshold values in image object variables allows you to differentiate between the
image objects of different domains.
Quality
Select the variable in which a quality control parameter for the threshold value is to be stored. The quality control parameter reflects the internal criterion according to which the threshold is determined. The higher its value, the more distinct the segmentation into image objects. You can use the quality control parameter to implement fully automated segmentation processes splitting up the domain into image objects that meet the predefined requirements.
Selecting options 1-3 will display the Add Value parameter. Select only once for a single value; selecting always allows duplicates.
8.1 Remove Objects
Merge image objects in the image object domain. Each image object is merged into
the neighbor image object with the largest common border. This algorithm is especially
helpful for clutter removal.
8.1.1 Supported Domains
Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
8.1.2 Algorithm Parameters
Target Class
Click the ellipsis button to open the Edit Classification Filter dialog box and select the target classes to which the image objects in the image object domain will be merged.
Show Advanced Parameters
Enter a value in Border Threshold: only image objects with a common border length longer than or equal to this threshold will be merged.
If Use Legacy Mode is set to Yes, the algorithm will not look for a common superobject. (If a superobject level exists, objects may not be completely removed.)
Merging by color activates the following parameters:
If Use Threshold has a value of Yes, the Color Threshold parameter is activated.
In Layer Usage, select the layers to be analyzed.
Enter a value in Color Threshold: only image objects with a color difference smaller than or equal to this threshold will be merged with neighboring image objects. For all image objects in the domain, the sum of absolute mean layer differences over all selected layers is calculated: SUM_ALL_LAYERS(ABS(MeanColor(object) - MeanColor(neighbour))). An object is merged with the neighboring object where the resulting value is the smallest (minimum color difference).
If Use Legacy Mode is set to Yes, the algorithm will not look for a common superobject. (If a superobject level exists, objects may not be completely removed.)
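The minimum color difference criterion can be sketched as follows (per-layer mean intensities are hypothetical inputs; this is not an eCognition API):

    import numpy as np

    def color_difference(obj_means, nb_means):
        # SUM_ALL_LAYERS(ABS(MeanColor(object) - MeanColor(neighbour)))
        return float(np.sum(np.abs(np.asarray(obj_means) - np.asarray(nb_means))))

    def best_merge_partner(obj_means, neighbors, color_threshold=None):
        # neighbors: list of (neighbor_id, per-layer mean intensities)
        diffs = [(color_difference(obj_means, m), nid) for nid, m in neighbors]
        best_diff, best_id = min(diffs)
        if color_threshold is not None and best_diff > color_threshold:
            return None       # no neighbor is similar enough to merge with
        return best_id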
8.2 Merge Region
Figure 8.1. Result of merge region algorithm on all image objects classified as Seed
8.2.1 Supported Domains
Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
8.2.2 Algorithm Parameters
Use Thematic Layers
Enable to keep borders defined by thematic layers that were active during the initial segmentation of this image object level.
8.3 Grow Region
Enlarge image objects defined in the image object domain by merging them with neighboring image objects (candidates) that match the criteria specified in the parameters.
The grow region algorithm works in sweeps. That means each execution of the algorithm merges all direct neighboring image objects according to the parameters. To grow image objects into a larger space, you may use the Loop While Something Changes checkbox or specify a specific number of cycles; see the sketch after the figure below.
Figure 8.2. Result of grow region algorithm on image objects of class Seed and candidate class
not Vegetation.
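A sketch of the sweep behaviour (the domain, neighborhood and classification machinery are hypothetical stand-ins):

    def grow_region(seed_objects, candidate_classes, neighbors_of, reclassify):
        # One pass of the inner loop corresponds to one sweep; the outer loop
        # mimics the 'Loop While Something Changes' checkbox.
        changed = True
        while changed:
            changed = False
            for seed in list(seed_objects):
                for neighbor in neighbors_of(seed):
                    if neighbor not in seed_objects and \
                            neighbor.cls in candidate_classes:
                        reclassify(neighbor)   # candidate merged into the seed
                        seed_objects.add(neighbor)
                        changed = True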
8.3.1 Supported Domains
Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
8.3.2 Algorithm Parameters
Candidate Classes
Choose the classes of image objects that can be candidates for growing the image object.
Fusion Super Objects
Choose an optional feature to define a condition that neighboring image objects need to fulfill in addition to be merged into the current image object.
Use Thematic Layers
Enable to keep borders defined by thematic layers that were active during the initial segmentation of this image object level.
8.4 Convert to Sub-objects
Split all image objects of the image object domain into their sub-objects.
8.4.1 Supported Domains
Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
8.4.2 Algorithm Parameters
None
8.5 Convert Image Objects
Convert image objects to a specified type, based on the differences among image object types with regard to their spatial connectivity and dimensions.
8.5.1 Supported Domains
Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects
8.5.2 Algorithm Parameters
Convert all image objects in the image object domain to image objects of the specified type:
Connected 2D: Convert all image objects in the image object domain to connected 2D image objects. The conversion is performed only for image objects that are not of the type connected 2D.
Connected 3D: Convert all image objects in the image object domain to connected 3D image objects. Internally, all image objects in the image object domain are converted to connected 2D image objects. Connected 3D image objects are then created based on the overlaps of connected 2D image objects over the slices. Connected 3D image objects are continuous with respect to slices. They may have several disconnected parts within a single slice. A special situation occurs if you have multiple image object levels and some parts of a connected 3D image object belong to different superobjects. In that case, the superobjects are merged automatically. If merging is not possible, then single disconnected superobjects are generated.
Disconnected: Convert all image objects in the image object domain to disconnected image objects. The algorithm tries to create a single image object per class. If some 3D image objects to be merged belong to different superobjects, the conversion works according to the Fusion of Superobjects settings; see below.
Fusion of Superobjects
This parameter enables you to specify the effect on superobjects. If this parameter is set to Yes, superobjects will also be fused; this will usually have the effect that affected superobjects will be converted to disconnected objects. If this parameter is set to No, the superobjects stay untouched; in this case, fusion of the active image objects is restricted by the extent of the superobjects.
If the value is Yes, superobjects are merged. If superobjects cannot be merged into a single connected image object, they are merged into single disconnected image objects (default).
If the value is No, superobjects are not merged and only image objects having the same superobject are merged. Consequently, there can be several disconnected image objects per class.
8.6
Cut all image objects within the image object domain that overlap the border of a given
region. Each image object to be cut is split into two image objects, called pieces: one is
completely within the region and one is completely outside.
8.6.1 Supported Domains
Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects
8.6.2 Algorithm Parameters
Region
Select or enter the name of an existing region. Alternatively, you can enter the coordinates of a region specified by its origin (xG, yG), which is the lower left corner, and its size [Rx, Ry, Rz, Rt]. The input pattern is: (xG, yG, zG, tG), [Rx, Ry, Rz, Rt]. Alternatively, you can select a variable. To create a new variable, type a name for the new variable and click OK or press Enter to open the Create Variable dialog box for further settings.
Object Type Inner Pieces
Define the type of all image objects (pieces) located inside the region, no matter if they are cut or not. Available values include Keep current.
Classify Inner Pieces
Select Yes to classify all image objects (pieces) located inside the region, no matter if they are cut or not.
Object Type Outer Pieces
Define the type of all cut image objects (pieces) located outside the region. The same settings are available as for Object Type Inner Pieces.
Classify Outer Pieces
Select Yes to classify all cut image objects (pieces) located outside the region.
Class for Inner Pieces
Select or create a class. To create a new class, type a name for the new class and click OK to open the Create Class dialog box.
Class for Outer Pieces
Select or create a class. To create a new class, type a name for the new class and click OK to open the Create Class dialog box.
9.1
9.1.1 Supported Domains
Execute; Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
9.1.2 Algorithm Parameters
Measures the relative border length of the cut line for both resulting objects. Cutting is
only executed if the measured value is below this threshold for both objects. Use a zero
value to disable this feature. This parameter is only used for contraction mode.
Maximum Cut Point Distance
Measures the distance of the cutting points on the object border. Cutting is only performed if this value is below this threshold. Enter a zero value to disable this feature.
Maximum Border Length
Restricts the maximum border length of the smaller objects. Enter a zero value to disable
this feature.
9.2
9.2.1 Supported Domains
Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub
Objects; Linked Objects; Image Object List
9.2.2 Image Layer Weights
Enter weighting values: the higher the weight assigned to an image layer, the more weight will be given to that layer's pixel information during the segmentation process. You can also use a variable as a layer weight.
9.2.3 Scale Parameter
Candidate Classes
9.2.4 Shape
9.3 Image Object Fusion
Define a variety of growing and merging methods and specify in detail the conditions for the merger of the current image object with neighboring objects.
Image object fusion uses the term seed for the current image object. All neighboring image objects of the current image object are potential candidates for a fusion (merging). The image object that would result by merging the seed with a candidate is called the target image object.
A class filter enables users to restrict the potential candidates by their classification. For each candidate, the fitting function will be calculated. Depending on the fitting mode (see note 1 below), one or more candidates will be merged with the seed image object. If no candidate meets all fitting criteria, no merge will take place.
Figure 9.1. Example for image object fusion with seed image object S and neighboring objects A, B, C and D
9.3.1 Supported Domains
Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
9.3.2 Candidate Settings
Enable Candidate Classes
Select Yes to activate candidate classes. If the candidate classes are disabled, the algorithm will behave like a region merging.
1. If you do not need a fitting function, we recommend that you use the algorithms Merge Region and Grow Region. They require fewer parameters for configuration and provide higher performance.
Candidate Classes
Choose the candidate classes you wish to consider. If the candidate classes are distinct
from the classes in the domain (representing the seed classes), the algorithm will behave
like a growing region.
9.3.3 Fitting Function
The fusion settings specify the detailed behavior of the Image Object Fusion algorithm.
Fitting Mode
Choose the fitting mode (see table 9.1, Fitting Mode Options for Image Object Fusion Algorithm).
Table 9.1. Fitting Mode Options for Image Object Fusion Algorithm
All fitting: Merges all candidates that match the fitting criteria with the seed.
First fitting: Merges the first candidate that matches the fitting criteria with the seed.
Best fitting: Merges the candidate that matches the fitting criteria in the best way with the seed.
All best fitting: Merges all candidates that match the fitting criteria in the best way with the seed.
Best fitting if mutual: Merges the best candidate if it is calculated as the best for both of the two image objects (seed and candidate) of a combination.
Search mutual fitting: Executes a mutual best fitting search starting from the seed. The two image objects fitting best for both will be merged. Note: the image objects that are finally merged may not be the seed and one of the original candidates but other image objects with an even better fitting.
Fitting Function Threshold
Select the feature and the condition you want to optimize. The closer a seed candidate pair matches the condition, the better the fitting.
Use Absolute Fitting Value
Enable to ignore the sign of the fitting values. All fitting values are treated as positive numbers independent of their sign.
9.3.4 Weighted Sum
Define the fitting function. The fitting function is computed as the weighted sum of feature values. The feature selected in Fitting Function Threshold will be calculated for the seed, the candidate, and the target image object. The total fitting value will be computed by the formula: Fitting Value = (Target * Weight) + (Seed * Weight) + (Candidate * Weight). To disable the feature calculation for any of the three objects, set the according weight to 0.
Target Value Factor
Typical weight combinations (Target, Seed, Candidate): 1, 0, 0; 0, 1, 0; 0, 0, 1; 2, 1, 1.
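The weighted sum, including the Use Absolute Fitting Value option described above, in sketch form (an illustrative function, not an eCognition API):

    def fitting_value(target_val, seed_val, candidate_val,
                      target_weight, seed_weight, candidate_weight,
                      use_absolute=False):
        # Fitting Value = (Target * Weight) + (Seed * Weight) + (Candidate * Weight)
        t, s, c = target_val, seed_val, candidate_val
        if use_absolute:   # 'Use Absolute Fitting Value'
            t, s, c = abs(t), abs(s), abs(c)
        return t * target_weight + s * seed_weight + c * candidate_weight

    # Weights (1, 0, 0) evaluate the feature on the merged target object only:
    value = fitting_value(0.42, 0.55, 0.38, 1, 0, 0)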
9.3.5 Merge Settings
This parameter defines the behaviour when the seed and candidate objects selected for merging have different superobjects. If enabled, the superobjects will be merged with the sub-objects. If disabled, the merge will be skipped.
Thematic Layers
Specify the thematic layers that are to be considered in addition for segmentation. Each
thematic layer used for segmentation will lead to additional splitting of image objects
while enabling consistent access to its thematic information. You can segment an image
using more than one thematic layer. The results are image objects representing proper
intersections between the thematic layers.
Compatibility Mode
Select Yes from the Value field to enable compatibility with older software versions (versions 3.5 and 4.0). This parameter will be removed in future versions.
9.3.6 Classification Settings
Erase Old Classification If There Is No New Classification
If you select Yes and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is deleted.
If you select No and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is kept.
Use Class Description
If Yes is selected, class descriptions are evaluated for all classes. The image object is assigned to the class with the highest membership value.
If No is selected, class descriptions are ignored. This option delivers valuable results only if Active Classes contains exactly one class.
If you do not use the class description, we recommend you use the Assign Class algorithm
instead.
9.4 Border Optimization
Change the image object shape by either adding sub-objects from the outer border to the
image object or removing sub-objects from the inner border of the image object.
9.4.1 Supported Domains
Image Object Level; Current Image Object; Neighbor Image Object; Super Object; Sub Objects; Linked Objects; Image Object List
9.4.2
Candidates
Choose the classes you wish to consider for the sub-objects. Sub-objects need to be classified with one of the selected classes to be considered by the border optimization.
Destination
Choose the classes you wish to consider for the neighboring objects of the current image object. To be considered by the Dilatation, sub-objects need to be part of an image object classified with one of the selected classes. To be considered by the Erosion, sub-objects need to be moveable to an image object classified with one of the selected classes. This parameter has no effect for the Extraction.
Operation
Dilatation: Removes all Candidate sub-objects from its Destination superobject inner border and merges them to the neighboring image objects of the current image object.
Erosion: Removes all Candidate objects from its Seed superobject inner border and merges them to the neighboring image objects of the Destination domain.
Extraction: Splits an image object by removing all sub-objects of the Candidate domain from the image objects of the Seed domain.
9.4.3 Classification Settings
Erase Old Classification If There Is No New Classification
If you select Yes and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is deleted.
If you select No and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is kept.
Use Class Description
If Yes is selected, class descriptions are evaluated for all classes. The image object is assigned to the class with the highest membership value.
If No is selected, class descriptions are ignored. This option delivers valuable results only if Active Classes contains exactly one class.
If you do not use the class description, we recommend you use the Assign Class algorithm instead.
9.5 Morphology
Smooth the border of image objects by the pixel-based binary morphology operations
Opening or Closing. This algorithm refers to image processing techniques based on mathematical morphology.
9.5.1 Supported Domains
Image Object Level; Current Image Object; Neighbor Image Object; Superobject; Linked Object; Image Object List
9.5.2 Morphology Settings
Operation
Decide between the two basic operations, Opening and Closing. Conceptually, imagine using Opening for sanding image objects and Closing for coating image objects. Both will result in a smoothed border of the image object. Open Image Object removes pixels from an image object. Opening is defined as the area of an image object that can completely contain the mask. The area of an image object that cannot completely contain the mask is separated.
Close Image Object adds surrounding pixels to an image object. Closing is defined as the complementary area to the surrounding area of an image object that can completely contain the mask. The area near an image object that cannot completely contain the mask is filled, which is comparable to coating. Smaller holes inside the area are filled.
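As a rough illustration of these two operations (a minimal Python sketch using SciPy's generic binary morphology, not the algorithm's actual implementation), the structuring element below plays the role of the mask described in the next section:

import numpy as np
from scipy import ndimage

# Binary mask of one image object: a rectangle with a thin spur and a small hole.
obj = np.zeros((9, 12), dtype=bool)
obj[2:7, 2:10] = True
obj[4, 10] = True          # one-pixel spur that Opening will sand off
obj[4, 5] = False          # small hole that Closing will fill

mask = ndimage.generate_binary_structure(2, 1)  # 3x3 cross-shaped structuring element

opened = ndimage.binary_opening(obj, structure=mask)  # keeps only areas that can contain the mask
closed = ndimage.binary_closing(obj, structure=mask)  # fills areas near the object that cannot contain the mask

print((obj & ~opened).sum(), "pixel(s) sanded off by Opening")
print((closed & ~obj).sum(), "pixel(s) coated/filled by Closing")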
Mask
Define the shape and size of the mask you want. The mask is the structuring element on which the mathematical morphology operation is based. In the Value field, the chosen Mask pattern will be represented on one line. To define the binary mask, click the ellipsis button. The Edit Mask dialog box opens.
Select Yes from the Value field to enable compatibility with older software versions (versions 3.5 and 4.0). This parameter will be removed in future versions.
9.5.3 Classification Settings
When the operation Open Image Object is active, a classification will be applied to all image objects sanded from the current image object. When using the Close Image Object operation, the current image object will be classified if it gets modified by the algorithm.
Active Classes
If you select Yes and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is deleted.
If you select No and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is kept.
Use Class Description
If Yes is selected, class descriptions are evaluated for all classes. The image object is assigned to the class with the highest membership value.
If No is selected, class descriptions are ignored. This option delivers valuable results only if Active Classes contains exactly one class.
If you do not use the class description, we recommend you use the Assign Class algorithm instead.
9.6 Watershed Transformation
Calculate an inverted distance map based on the inverted distances for each pixel to the image object border. Afterwards, the minima are flooded by increasing the level (inverted distance). Where the individual catchment basins touch each other (watersheds), the image objects are split.
The watershed transformation algorithm is commonly used to separate image objects from others. Image objects to be split must already be identified and classified.
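As a rough illustration of the idea (not the eCognition implementation), the sketch below splits one connected object at its waist by flooding the inverted distance map; it assumes SciPy and scikit-image are available and places the two flood markers by hand:

import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

# Two overlapping blobs forming one connected object.
obj = np.zeros((40, 80), dtype=bool)
yy, xx = np.mgrid[0:40, 0:80]
obj |= (yy - 20) ** 2 + (xx - 25) ** 2 < 14 ** 2
obj |= (yy - 20) ** 2 + (xx - 55) ** 2 < 14 ** 2

# Distance of each object pixel to the object border.
dist = ndimage.distance_transform_edt(obj)

markers = np.zeros(obj.shape, dtype=int)
markers[20, 25] = 1        # seed in the left blob
markers[20, 55] = 2        # seed in the right blob

# Flooding the inverted distance map splits the object at the watershed line.
labels = watershed(-dist, markers, mask=obj)
print(np.unique(labels))   # 0 (background) plus two separated objects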
9.6.1 Supported Domains
Image Object Level; Current Image Object; Neighbor Image Object; Superobject; Linked
Object; Image Object List
9.6.2 Watershed Settings
Length Factor
The Length Factor is the maximal length of a plateau that is merged into a catchment basin. Use the toggle arrows in the Value field to change the maximal length. The Length Factor must be greater than or equal to zero.
9.6.3 Classification Settings
Active Classes
If you select Yes and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is deleted.
If you select No and the membership value of the image object is below the acceptance threshold (see classification settings) for all classes, the current classification of the image object is kept.
Use Class Description
If Yes is selected, class descriptions are evaluated for all classes. The image object is assigned to the class with the highest membership value.
If No is selected, class descriptions are ignored. This option delivers valuable results only if Active Classes contains exactly one class.
If you do not use the class description, we recommend you use the Assign Class algorithm instead.
10 Pixel-Based Object Reshaping Algorithms
Pixel-based reshaping algorithms modify the shape of image objects by adding or removing pixels/voxels according to given criteria.
Growing
Grow each seed image object. The starting extents of the seed image objects are lost.
Figure 10.1. Sample starting classification (left) and after 10x growing (right)

Coating
Add a new image object around each seed image object and grow it. The seed image objects continue to exist as separate image objects with unchanged extent.
Figure 10.2. Sample starting classification (left) and after 10x coating (right)

Shrinking
Add a new image object inside each candidate image object and grow it. A candidate image object shrinks by the extent of the new image object. This mode works similarly to coating, but inside the candidate image objects.
Figure 10.3. Sample starting classification (left) and after 10x shrinking (right). Shrinking works from outside the candidate image objects to the center
Select a class to be assigned to the new image objects. This feature is available for
Coating or Shrinking modes, but not for Growing.
Preserve Current Object Type
Select the type of image object to determine how image objects of the target class are
merged.
Value Description
Yes: Newly created image objects are 2D-connected. Other image objects maintain their connectivity (default).
No: Newly created image objects are disconnected. Any overwritten objects or split sub-objects are disconnected.
Select a candidate class. Image objects that are selected in the domain are automatically
excluded from the candidates.
Threshold Condition
Define an additional threshold condition to define the candidate domain. Only pixels that belong to image objects that fulfill the threshold condition are considered for resizing.
Select any image layer to be used for the pixel layer constraint.
Operation
<   Less than
<=  Less than or equal to
=   Equal to
>   Greater than
>=  Greater than or equal to
Reference
Select the type of value used for comparing to the pixel layer intensity value. An image layer must be selected.
Value Description
Absolute value: Compare with an absolute layer intensity value that you can define; see Value below. It can also be represented by a variable.
Value of current pixel: Compare with the layer intensity value of the current pixel. You can define a Tolerance; see below.
Value
Enter a layer intensity value used as the threshold for the comparison operation. Alternatively, you can select a feature or a variable. To create a new variable, type a name for the new variable and click OK to open the Create Variable dialog box for further settings. An absolute value has to be selected as the reference option.
Tolerance
Enter a value used as the tolerance for the threshold Value of the comparison operation. The value of the current pixel has to be selected as the reference option. Alternatively, you can select a feature or a variable.
Tolerance Mode
Select a calculation mode for the Tolerance value of the threshold value of the comparison operation. The value of the current pixel must be selected as the reference option.
Value Description
Absolute: The Tolerance value represents an absolute value.
Percentage: The Tolerance value represents a percentage. For example, 20 means a tolerance of 20% of the threshold value.
Figure 10.4. Sample classification after 10x growing without (left) and with (right) surface
tension
Figure 10.5. Sample classification after 10x coating without (left) and with (right) surface
tension
Figure 10.6. Sample classification after 10x shrinking without (left) and with (right) surface
tension
Additionally, you can edit the settings so that the shapes of image objects are smoothed with no significant growing or shrinking.
Surface tension uses the Relative Area of Classified Objects region feature of pixels of a given class to optimize the image object shape while resizing. Within a cube of given size (figure 10.8) around the current candidate pixel, the ratio of the relative area of seed pixels to all pixels inside the box is calculated.
If the result satisfies a given comparison operation (figure 10.9) against a given value, the current candidate is classified as seed; otherwise it keeps its current classification.
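A minimal sketch of this box test (illustrative Python, not the product code): for a candidate pixel, count the seed pixels inside a square box and compare the ratio against a threshold.

import numpy as np

def passes_surface_tension(seed_mask, y, x, box=5, value=0.5):
    """Ratio of seed pixels to all pixels in a box-by-box window around (y, x)."""
    r = box // 2
    window = seed_mask[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
    ratio = window.mean()          # relative area of seed pixels inside the box
    return ratio >= value          # here: the ">=" comparison operation

seed = np.zeros((20, 20), dtype=bool)
seed[5:15, 5:15] = True
# A candidate pixel just outside the seed block is grown only if the box ratio passes.
print(passes_surface_tension(seed, 10, 15, box=5, value=0.5))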
Figure 10.7. Sample classification after 10x coating without (left) and with (right) 10x smoothing
Figure 10.8. Example of calculation of the surface tension of seed objects (gray) based on pixels within a box
Reference
Choose to use either seed pixels of the current image object or pixels of a given class for the surface tension calculation. Your choice influences smoothing while resizing.
None: No surface tension reference is used.
Object: Surface tension is calculated based on image objects: within the calculation box, only pixels of the candidate image object are taken into account. This allows you to smooth the border of each seed image object without taking neighboring seed image objects into account.
Figure 10.9. Object-based surface tension calculation while growing keeps image objects more separated
Class: Surface tension is calculated based on a given class (figure 10.10); within the calculation box, all pixels of a given candidate class are taken into account. This enables you to smooth the border around neighboring image objects of the same class.
Figure 10.10. Class-based surface tension calculation while growing smooths multiple image
objects of a class
Class Filter
Select a candidate class used to measure the relative area. A class must be defined as Reference.
Operation
<   Less than
<=  Less than or equal to
=   Equal to
>   Greater than
>=  Greater than or equal to
Value
Enter a value for the surface tension calculation. Alternatively, you can select a feature
or a variable. To create a new variable, type a name for the new variable and click OK to
open the Create Variable dialog box for further settings.
Box Size in X and Y
Enter the pixel size of the square box around the current candidate pixel in the x- and y-directions to use for the surface tension calculation. The integer value must be odd; even integer values are changed to the next higher integer. You can enter a value as a threshold. Alternatively, you can select a feature or a variable. To create a new variable, type a name for the new variable and click OK to open the Create Variable dialog box for further settings. The default value is 5.
Box Size in Z
Min Object Size
The Minimum Object Size parameter stops an object from shrinking when it reaches a minimum size. It is very useful for preventing a shrinking object from disappearing (that is, attaining a size of zero).
Max Object Size
The Maximum Object Size parameter stops an object from growing when it reaches a
maximum size. It is very useful for preventing a growing object from leaking.
Figure 10.11. Example of calculation of the class (gray) density based on pixels within a box
If the class density satisfies the given comparison operations against a given value, the current candidate pixel/voxel is classified as the target class; otherwise it keeps its current classification. Image objects of the target class are merged with each execution of the algorithm.
Select a class to be assigned to image objects according to the density criteria below.
Preserve Current Object Type
Select the type of image object to determine how image objects of the target class are
merged.
Value Description
Yes: Newly created image objects are 2D-connected. Other image objects maintain their connectivity (default).
No: Newly created image objects are disconnected. Any overwritten objects or split sub-objects are disconnected.
X, Y and Z Directions
Define the search order around active pixels. You can do this for each co-ordinate axis separately:
Value options include Yes and Only Positive, restricting the search along the respective axis.
Figure 10.13. Example of 10x growing in x-direction restricted to the positive direction
If you work with a three-dimensional scene that has a different voxel size in the z-direction compared to the x- and y-directions, you can choose to take the voxel resolution into account. This ensures that image objects are resized evenly in all directions. Resizing must be enabled for the z-direction.
Value Description
No: The voxel resolution is not taken into account.
Yes: The growing takes the extent of the voxel into account. For each growing step in the z-direction, the number of growing steps in the xy-direction equals the ratio of the voxel dimensions, for example the xy extent to the z extent.
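For example (a small illustrative calculation with assumed voxel sizes): if a voxel is 1 unit wide in xy but 4 units deep in z, each growing step in z is matched by four steps in xy.

# Assumed voxel dimensions (illustrative values).
xy_size, z_size = 1.0, 4.0

# Growing steps in xy per single growing step in z:
steps_xy_per_z = z_size / xy_size
print(steps_xy_per_z)   # 4.0 -> the object grows evenly in scene units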
Value Description
None: No class density reference is used.
Object: Class density is calculated based on the image object that contains the current candidate pixel/voxel. Within the calculation box, only pixels/voxels of this image object are taken into account. This allows you to smooth the border of the image object without taking neighboring image objects into account.
Class: Class density is calculated based on a given class (see below). Within the calculation box, all pixels/voxels of the given class are taken into account. This allows you to smooth the border around neighboring image objects of the same class.
Class Filter
Select a candidate class to use for calculation of the class density. A class must be defined as Reference.
Operation
<   Less than
<=  Less than or equal to
=   Equal to
>   Greater than
>=  Greater than or equal to
Value
Enter a value, or select a feature or a variable, used as the threshold for the class density calculation. To create a new variable, type a name for the new variable and click OK to open the Create Region Variable dialog box for further settings.
Box Size in X and Y
Enter the pixel/voxel size of the square box around the current candidate pixel/voxel in the x- and y-directions, used for the class density calculation. The integer value must be an odd number; even values are changed to the next higher integer.
You can enter a value, or select a feature or a variable, used as the threshold. To create a new variable, type a name for the new variable and click OK or press Enter to open the Create Region Variable dialog box for further settings. The default value is 1.
Box Size in Z

Value Description
Pixel Count: Detects basic pixel patterns in the 4-neighborhood of a pixel and reclassifies pixels where a pattern occurs.
Corner Pixels: Detects the corner pixels of an image object. In conjunction with the Connect filter, it can be used to cut image objects according to their shape.
Connect: Creates a pixel connection between two linked image objects. Can be used to connect pixels detected by the Corner Pixels or Hole Cutting Pixels filter and thus to cut image objects.
Inner Border: Reclassifies all neighboring pixels of an image object that touch an inner border of the object.
Outer Border: Reclassifies pixels at the outer border of an image object.
Hole Cutting Pixels: Detects pairs of pixels at the inner and outer borders of image objects. In conjunction with the Connect filter, it can be used to cut holes out of image objects.

Value Description
Yes: Ensures that the image object type of processed image objects does not change due to pixel removal.
No: Changes the type of all modified image objects to disconnected, regardless of their current state. You can use the Convert Image Objects algorithm (p 87) to make sure processed image objects are assigned the desired image object type after processing.
Filter Type
Select a filter.

Pixel Count
The Pixel Count filter detects basic pixel patterns in the 4-neighborhood of a pixel and reclassifies all pixels in the domain where a pattern occurs. It can be useful to connect border pixels to a 4-connected structure or to detect pixel bridges in image objects. In addition to the basic parameters mentioned above, you can set the following parameters:

Corner Pixels
The Corner Pixels filter detects the corners of an image object specified by the image object domain. The detected corners can be concave or convex. The filter enables you to further connect detected concave corner pixels using image object links. In conjunction with the Connect filter mode, these connections can be used to create lines along which the image objects can be cut according to their shape. In addition to the basic parameters mentioned above, you can set the following parameters:
Connect
The Connect filter creates a pixel line between two linked image objects. It checks all image objects specified in the image object domain for image object links to another image object. If an image object link exists, it creates a one-pixel line between the two image objects according to the filter parameters and deletes the image object link. This mode is specifically designed to be used in conjunction with the Corner Pixels filter detection of concave corners.
In addition to the basic parameters mentioned above, you can set the following parameters:
Candidate Class Filter specifies a class filter for candidate objects. Only pixels that belong to an object of one of the specified classes will be taken into account for the connection.
Exclude Border Pixels (1) specifies whether pixels located at the border of areas defined by the Candidate Class Filter parameter are taken into account for the connection:
If the value is Yes, border pixels are not taken into account
If the value is No, all pixels of the specified areas are taken into account
Distance Mode specifies how distances between two pixels are calculated (a sketch follows the footnote below):
Spatial Distance determines the shortest connection along the candidate pixels that can be found between two image objects. Uses predefined values 1 for 4-connected pixels and the square root of 2 for diagonally connected pixels to calculate pixel distance.
Color Contrast determines the shortest connection along the candidate pixels that can be found between two image objects. Uses the difference between the pixel intensities of two pixels for an image layer to calculate pixel distance.
Image Layer specifies the layer used to calculate pixel distances. It is only available if the Distance Mode parameter is set to Color Contrast.
(1) If you enable the Exclude Border Pixels parameter, it might not be possible to establish a valid connection between two image objects. To avoid endless loops or other unwanted effects, make sure that the rule set you use can handle this situation properly.
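A sketch of such a shortest-connection search in the Spatial Distance mode (illustrative Python with names of our own choosing, not the product code): 4-neighbor steps cost 1, diagonal steps cost the square root of 2, and the path is restricted to candidate pixels.

import heapq
import math

def shortest_connection(candidate, start, goal):
    """Dijkstra over candidate pixels; steps cost 1 (4-neighbors) or sqrt(2) (diagonal)."""
    rows, cols = len(candidate), len(candidate[0])
    steps = [(-1, 0, 1.0), (1, 0, 1.0), (0, -1, 1.0), (0, 1, 1.0),
             (-1, -1, math.sqrt(2)), (-1, 1, math.sqrt(2)),
             (1, -1, math.sqrt(2)), (1, 1, math.sqrt(2))]
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == goal:
            break
        if d > dist.get((y, x), float("inf")):
            continue
        for dy, dx, cost in steps:
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols and candidate[ny][nx]:
                nd = d + cost
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(heap, (nd, (ny, nx)))
    # Reconstruct the one-pixel line from goal back to start.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path))

grid = [[1, 1, 0, 1],
        [0, 1, 1, 1],
        [1, 0, 1, 0],
        [1, 1, 1, 1]]
print(shortest_connection(grid, (0, 0), (3, 3)))

In the Color Contrast mode, the per-step cost would instead be the intensity difference between the two pixels on a chosen image layer.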
Inner Border
The Inner Border filter reclassifies all neighboring pixels of an image object that touch an inner border of the object. This filter is useful to detect holes within image objects. There are no additional parameters for this filter.

Hole Cutting Pixels
The Hole Cutting Pixels filter detects inner border pixels with close proximity to the outer border and related pixels on the outer border. Both pixels are linked so that you can use the Connect filter to cut the detected inner holes out of the object. To avoid structures that are quite similar to a nearly closed hole, the filter always detects two pairs of inner and outer pixels that have a maximum distance to each other.
Mode Parameters
Corner Type
Value options include All, Concave and Convex.
Figure 10.18. Image object (left) and its results after detection of concave (middle) and convex (right) corner pixels
Leg Length
For each border pixel, the filter analyzes the angle of the border by creating an angle between the pixel and the adjacent border pixels to the left and to the right. The leg length describes the length of the angle legs in border pixels. Increase the leg length value to apply a smoothing to the real image object border. Typical values lie within the range 2 to 9.

Minimum Angle
Specify the minimum angle (based on the leg length) that must be present at a border pixel so that it will be considered as a corner. If you enter a value of 0, every border pixel with an angle different from 180° is considered a potential corner. The larger the value, the more acute the minimum angle that is needed for a corner to be accepted.
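A sketch of the angle test (illustrative Python under our own simplifications, not the product code): given an ordered list of border pixels, take the neighbors leg_length steps to the left and right and measure the angle at the center pixel.

import math

def border_angle(border, i, leg_length):
    """Angle (degrees) at border pixel i, with legs of leg_length border pixels."""
    cy, cx = border[i]
    ly, lx = border[(i - leg_length) % len(border)]
    ry, rx = border[(i + leg_length) % len(border)]
    a1 = math.atan2(ly - cy, lx - cx)
    a2 = math.atan2(ry - cy, rx - cx)
    ang = abs(math.degrees(a1 - a2)) % 360.0
    return min(ang, 360.0 - ang)

def is_corner(border, i, leg_length=3, minimum_angle=35.0):
    # A straight border gives roughly 180 degrees; the more the angle deviates
    # from 180, the sharper the corner.
    return abs(180.0 - border_angle(border, i, leg_length)) >= minimum_angle

# Closed border of a 4x4 square: the pixel at index 3 sits on a 90-degree corner.
border = [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 3), (3, 3),
          (3, 2), (3, 1), (3, 0), (2, 0), (1, 0)]
print(is_corner(border, 3))   # True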
Figure 10.19. Angle measurement at a border pixel with leg length 3 pixels (approx. 45°)

Connect Corners for Cutting
Value Description
Yes: Detects and connects matching corners using image object links. The connections can be used as input for further processing with the Connect filter. Unconnected corners are not instantiated.
No: Corners are detected but not connected.
Specify the maximum length of the direct connection between two corner pixels. Corners are only linked for connection if their distance is below this threshold. Values are measured in pixels. A value of 0 will disable this restriction.
Cut Length/Border Length
This parameter is only available if the Corner Type parameter is set to Concave and the Connect Corners for Cutting parameter is set to Yes. Specify the maximum ratio of the direct distance between two corners to the distance along the object border. Corners are only linked for connection if the ratio is below this threshold. Values must lie within the range 0 to 1. A value of 1 will link all corner pairs; a value of 0 will reject all corner pairs.
The example below shows the result of concave corner pixel detection with a leg length of 3 and a minimum angle of 35°. The Connect Corners for Cutting parameter is set to Yes; a cut length/border length ratio of up to 0.3 is allowed.
The selected pixel objects (marked in red and green) are connected using an image object
link.
Reference
Figure 10.20. Result of Pixel Corner Pixels filter processing with the settings described above
Class Filter
This parameter is only available if the Reference parameter is set to Class. Specify the class filter used for pixel pattern detection in Class reference mode.
Pixel Pattern
Value Description
2 (angle): Exactly two neighboring pixels forming an angle with the center pixel. Reclassifies pixels that are part of a diagonal pixel bridge. Use this mode to convert 8-connected pixel structures into 4-connected pixel structures.
2 (line): Exactly two neighboring pixels forming a line with the center pixel. Reclassifies pixels (except end points) being part of lines with a width of one pixel.

The example below shows the result of a pixel count filter processing on the blue area using the 2 (angle) pixel pattern. The orange pixels are used as a reference.
Figure 10.21. Result of Pixel Count filter processing with the settings described in table 10.14
In the course of pixel processing, the orange pixel structure is converted into a 4-connected structure that can be merged into one connected image object.
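A sketch of the 2 (angle) test (illustrative Python, not the product code): a pixel whose 4-neighborhood contains exactly two set neighbors that form an angle, rather than a straight line, is part of a diagonal bridge and gets reclassified.

import numpy as np

def two_angle_pixels(mask):
    """Pixels with exactly two 4-neighbors set, forming an angle (not a line)."""
    m = np.pad(mask.astype(int), 1)
    up, down = m[:-2, 1:-1], m[2:, 1:-1]
    left, right = m[1:-1, :-2], m[1:-1, 2:]
    count = up + down + left + right
    line = ((up & down) | (left & right)).astype(bool)   # the two neighbors are opposite
    return mask & (count == 2) & ~line

# Diagonal staircase: reclassifying the angle pixels 4-connects the structure.
mask = np.eye(5, dtype=bool) | np.eye(5, k=1, dtype=bool)
print(two_angle_pixels(mask).astype(int))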
Candidate Class Filter
Class filter for candidate objects. Only pixels that belong to an object of one of the specified classes will be considered for the shortest path.

Exclude Border Pixels
When this option is set to Yes, the connection must not go along pixels at the border of the selected class.

Search in Z Direction

Distance Mode
Select the way pixel distances are computed: using spatial distance, color contrast or color value.
11 Linking Operation Algorithms
Use linking operations algorithms to work with image object links. Image object links can
be used to group multiple image objects in different areas of the image without creating
a common superobject.
For z = 1, slices n and n + 1 are scanned and some overlap area is determined. Usually the parameters x, y and t are zero. It may be useful to have values different from zero if structures such as road networks or time series are to be analyzed. For x, y, t ≠ 0, the overlap region is calculated with respect to the shifted template.
Select a class that will be assigned to the new links. This class represents the group name of the links and is used to compute statistics of the image objects belonging to this link class.
Specify the classes of candidate image objects to be linked with the seed image objects.
Threshold Condition
Specify an additional condition for candidate objects. Only image objects meeting the specified condition will be considered for linking.
Map
You can specify a different map from the map selected in the image object domain. In this case, image object links between different maps will be created. This can be useful to express object relations between maps.
Candidate PPO
Select the PPO level to use for the objects from the next parent process. This parameter
is only valid if domain and candidate maps are the same.
Overlap Calculation Description
Relative to larger object: The ratio of the overlap area to the area of the larger object (between seed and target object) is calculated
Relative to smaller object: The ratio of the overlap area to the area of the smaller object is calculated
Relative to current object: The ratio of the overlap area to the area of the current object is calculated
Relative to candidate object: The ratio of the overlap area to the area of the candidate object is calculated
X-Position Shift
Min. Required Overlap
Lower threshold for image object overlap. A link will only be created if the calculated overlap exceeds the specified threshold. Use 0 to disable this parameter.
If the overlap calculation is set to Do Not Use Overlap, Min. Required Overlap is not available
If the overlap calculation is set to Absolute [in pixels], enter any integer
If the overlap calculation is set to any other option, enter a percentage represented by a float number between 0 and 1.
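The overlap options above amount to dividing the shared pixel count by a reference area; a minimal sketch (illustrative Python, not the product code) follows.

def overlap(seed_pixels, candidate_pixels, mode="relative to larger object"):
    """seed_pixels / candidate_pixels: sets of (y, x) tuples."""
    shared = len(seed_pixels & candidate_pixels)
    if mode == "absolute":
        return shared                                   # compare against an integer
    if mode == "relative to larger object":
        return shared / max(len(seed_pixels), len(candidate_pixels))
    if mode == "relative to smaller object":
        return shared / min(len(seed_pixels), len(candidate_pixels))
    raise ValueError(mode)

seed = {(y, x) for y in range(4) for x in range(4)}          # 16 px
cand = {(y, x) for y in range(2, 6) for x in range(2, 6)}    # 16 px, 4 px shared
print(overlap(seed, cand))   # 0.25 -> a link is created if Min. Required Overlap < 0.25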
Figure 11.2. Incoming and outgoing links over multiple time frames
This parameter is only available if the map specified in the image object domain is different from the map specified in the candidate object domain.
If you don't know the transformation parameters, you can use the Image Registration algorithm to perform an automatic affine registration and store the transformation matrix in a parameter set. Using this transformation information, you can then link objects that relate to each other in the original maps.
Class Filter
Classification filter for links to delete. Only links grouped under the selected classes will be deleted.
Classification filter for linked image objects. Only links that link to an object of one of the selected classes will be deleted.
Select whether the copied level is placed above or below the input level specified by the domain.
Select or edit an image object level to be changed, and select or edit the new name for the
level. If the new name is already assigned to an existing level, that level will be deleted.
This algorithm does not change names already existing in the process tree.
Define a region within the source map. Select or enter the name of an existing region. Alternatively, you can enter the coordinates of a region specified by its origin (xG, yG), which is the lower left corner, and its size [Rx, Ry, Rz, Rt]. The input pattern is (xG, yG, zG, tG), [Rx, Ry, Rz, Rt].
Alternatively, you can select a variable. To create a new variable, type a name for the new variable and click OK or press Enter to open the Create Variable dialog box for further settings.
The map to be created by copying. Select a map name from the drop-down list or enter
a new name. If you select an existing map, the copy will overwrite it. Alternatively, you
can create a map variable.
Use Variable As Scale
Specify the scale for copying a map using a variable, rather than defining a numerical value.
Scale Relative to Main Map
If you choose to resample, the Scale will refer to the original image data. If you choose
the default Use Current Scene Scale, the map copy will have the same scale as the map
(or part of a map) being copied. For example, if the main map is copied to map2 with the
Scale at 50%, and map2 is copied to map3 with the Scale at 50%, map3 will be scaled to
50% of the main map, and not 50% of map2:
1. If you do not want to keep the current scale of the map for the copy, click the
ellipsis button to open the Select Scale dialog box.
2. Selecting a scale different to the current scene scale lets you work on the map copy
at a different magnication/resolution.
3. If you enter an invalid scale factor, it will be changed to the closest valid scale as
displayed in the table below.
4. To change the current scale mode, select from the drop-down list. We recommend that you use the scaling mode consistently within a rule set, as the scaling results may differ depending on the scale mode. For example, if you enter 40, you work at the following scales, which are calculated differently:
Options dialog box setting  Scale of the scene copy or subset to be created
Units (m/pixel): 40 m/pixel
Magnification: 40x
Percent: 40%
Pixels: 40 (pixels)
This feature lets you rotate a map that you have copied, by a fixed value or with respect to a variable.
Resampling
Image Layers
Select an image layer to use in the new map. If no layer is selected, all will be used. Layers are copied if downsampling occurs and Smooth is selected in the Resampling field.
Camera View to Top Down
If set to Yes, point clouds are converted from the Camera View perspective to the Top Down perspective in the destination map. Raster layers are not transferred to the destination map.
Copy Thematic Layers
Select a thematic layer to use in the new map. If no layer is selected, all will be used. Thematic vector layers are always copied and converted to thematic raster layers. Thematic raster layers are copied if downsampling occurs and Smooth is selected in the Resampling field.
Copying thematic vector layers can be performance-intensive because vector layers are converted to raster layers.
Thematic Layers
You may specify a thematic layer to use; if no layer is selected, all layers will be used.
Copy Image Object Hierarchy
Choose whether the image object hierarchy is copied with the map or not:
If Yes is selected, the image object hierarchy of the source map will be copied to
the new map.
If No is selected, only selected image and thematic data will be copied to the new
map.
Preserve Current Object Type
When this option is set to No, created objects can be unconnected, so one object can have more than one polygon.
Visibility Flag
If the value is set to Yes (the default), all maps are available from the map drop-down box. If it is set to No, the created map can be accessed, but cannot be displayed.
Compatibility Mode
Plane only
Plane and z
Plane and time
All directions
Required. Use the dropdown list to select an existing map or map variable, or create a
new map variable and assign a value to it.
Region
Define a target region to which image objects are copied. Select or enter the name of an existing region. Alternatively, you can enter the co-ordinates of a region specified by its origin (xG, yG), which is the lower left corner, and its size [Rx, Ry, Rz, Rt]. The input pattern is (xG, yG, zG, tG), [Rx, Ry, Rz, Rt]. Alternatively, you can select a variable. To create a new variable, type a name for the new variable and click OK or press Enter to open the Create Variable dialog box for further settings.
Alternatively, you can create a region by entering numbers with this exact syntax: (origin x, origin y) - (extent x, extent y); for example (10,20) - (100,110).
Level
Required. Select the target image object level in the target map.
Class Filter
Select objects of classes on the target map (these can be overwritten). Click the ellipsis button to open the Edit Classification Filter dialog box. The default is none, which means objects of any class can be overwritten.
Threshold Condition
Select a threshold condition. Image objects matching the threshold will be overwritten. Click the ellipsis button to open the Select Single Feature dialog box. The default is none, which means objects of any class can be overwritten.
This feature lets you rotate a map that you have synchronized, by a fixed value or with respect to a variable. If you have rotated a copied map using the Copy Map algorithm, you can restore it with the Synchronize Map algorithm by using the negative value of the angle of rotation.
Preserve Current Object Type
If Yes is selected, the current object type is preserved for all affected objects
If No is selected, modified objects can become disconnected objects.
Synchronize Complete Hierarchy
When this option is set to Yes, all levels on the target map will be affected.
Compatibility Mode
Select a mode:
Value Description
Noninvasive: The map size in x is assumed to be the 2D slice size in x and y. This is used to compute the number of slices. Example: imagine a map with the size 256×1024. The noninvasive mode uses the map size x=256 to determine the slice size y=256. Thus, the map is handled as a 3D map consisting of 1024/256=4 slices of size 256×256 each.
2D extent: Depending on the dataset, you can enter the x and y sizes per single slice or frame; see below.
4D layout: Depending on the dataset, you can enter the number of slices and the number of frames; see below.

Depending on the dataset, enter the x and y sizes per single slice or frame. (2D Extent must be selected.)
Number of Slices
Enter the number of slices of the data set. (4D Layout must be selected.)
Number of Frames
Enter the number of frames of the data set. (4D Layout must be selected.)
Distance Between Slices
Enter the distance between slices; the distance you enter is relative to the xy resolution. The default is 0.5. For example:
Slice Distance 1 means z = x = y
Slice Distance 2 means z = 2x = 2y
Slice Distance 0.5 means z = 0.5x = 0.5y
Time Between Two Frames
Enter the co-ordinate of the first slice, which determines the co-ordinates of the other slices. The default is 0.
Start Time
Enter the time of the first frame, which determines the time of the other frames. The default is 0.
If appropriate, enter the magnification settings. This is most often used in microscope images, where a known magnification was used and the information was not embedded in the image format. You can also enter a variable in this field. Magnification can only be set for a scene that was not rescaled.
Scene Unit
Set the default unit for calculating feature values. Choose from pixels, kilometers,
hectometers, decameters, meters, decimeters, centimeters, millimeters, micrometers,
nanometers, angstroms, inches, feet, yards, statute miles and nautical miles.
Pixel Size
If you wish to link the size of your objects to a known scale (for instance in a geographical
image) enter the scene unit that corresponds to one pixel.
Insert a new map name in quotation marks ("New Map Name") or create a new variable.
Select the destination map to which the image layer will be copied.
Output - Name
Insert a name for the new layer in the destination map. If no name is defined, the same name as in the source layer will be used.
The Distance to parameter lets you select a class; the distance is then calculated to all objects belonging to this class. The 8-Neighborhood (p 11) calculation is used as it allows diagonal distance measurements. If no class is selected, the distance of each pixel to its nearest image object border is calculated.
Output Layer
Enter a layer name to be used for output. A temporary layer will be created if there is no entry in the field or if the entry does not exist. If an existing layer is selected, it will be deleted and replaced.
Output Layer Visible
Allows compatibility with previous software versions. In previous versions only the distance to neighboring objects was calculated; now the distance to all image objects belonging to the selected class is considered.
Select the default name for the temporary image layer or edit it.
Feature
Select a single feature that is used to compute the pixel values filled into the new temporary layer.
Value for Undefined
Output Layer
Enter a layer name to be used for output. A temporary layer will be created if there is no entry in the field or if the entry does not exist. If an existing layer is selected, it will be deleted and replaced.
Output Layer Visible
The Gauss Blur is a convolution operator used to remove noise and detail. The Custom
Kernel enables the user to construct a kernel with customized values.
Expression
Displays for Gauss Blur. Enter a value for the reduction factor of the standard deviation. A higher value results in more blur.
Custom Kernel
Displays only when Custom Kernel is selected. Click the ellipsis button on the right to open the Kernel dialog box (figure 14.1) and enter the numbers for the kernel.
The number of entries should equal the square of the kernel size entered in the 2D kernel size field. Use commas, spaces or lines to separate the values.
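A sketch of what such a kernel does (illustrative Python with SciPy; the nine values are our own example matching a 3x3 kernel size):

import numpy as np
from scipy import ndimage

layer = np.arange(25, dtype=float).reshape(5, 5)

# 3x3 custom kernel: nine entries, here a simple box blur (each value 1/9).
kernel = np.full((3, 3), 1.0 / 9.0)

filtered = ndimage.convolve(layer, kernel, mode="nearest")
print(filtered[2, 2])   # 12.0: the center pixel becomes the mean of its 3x3 window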
Enter an odd number only for the filter kernel size. The default value is 3.
Number of Slices
This is available if Type is set to Gauss Blur. Enter the number of slices to be considered as part of the kernel. If a region is specified in the domain, the algorithm will use the region values in x slices above and x slices below (x being the number of slices entered). If there is no region, the entire area of the slices above and below will be considered part of the kernel. If there are insufficient slices or regions, only those available will be considered.
Class Filter
Specifies the class filter for source pixels. Pixels outside selected class objects are assumed to have the value 0.
Define a region within the input image layer. Select or enter the name of an existing region. Alternatively, you can enter the co-ordinates of a region specified by its origin (xG, yG), which is the lower left corner, and its size [Rx, Ry, Rz, Rt]. The input pattern is: (xG, yG, zG, tG), [Rx, Ry, Rz, Rt]. Alternatively, you can select a variable. To create a new variable, type a name for the new variable and click OK or press Enter to open the Create Variable dialog box for further settings.
Output Layer
Enter a layer name to be used for output. A temporary layer will be created if there is no entry in the field or if the entry does not exist. If an existing layer is selected, it will be deleted and replaced.
Output Layer Visible
Select the data type of the output layer. Available options are:
As input layer
8-bit unsigned
16-bit unsigned
16-bit signed
32-bit unsigned
32-bit signed
32-bit float
Enter a layer name to be used for output. A temporary layer will be created if there is no entry in the field or if the entry does not exist. If an existing layer is selected, it will be deleted and replaced.
Output Layer Visible
Select the data type of the output layer. Available options are:
As input layer
8-bit unsigned
16-bit unsigned
16-bit signed
32-bit unsigned
32-bit signed
32-bit float
Enter a number to set the kernel size in one slice. The default value is 3.
Number of Slices
Enter the number of slices to be considered as part of the kernel. If a region is specified in the domain, the algorithm will use the region values in x slices above and x slices below (x being the number of slices entered). If there is no region, the entire area of the slices above and below will be considered part of the kernel. If there are insufficient slices or regions, only those available will be considered.
Class Filter
Specifies the class filter for source pixels. Pixels outside selected class objects are assumed to have the value 0.
Define a region within the input image layer. Select or enter the name of an existing region. Alternatively, you can enter the co-ordinates of a region specified by its origin (xG, yG), which is the lower left corner, and its size [Rx, Ry, Rz, Rt]. The input pattern is: (xG, yG, zG, tG), [Rx, Ry, Rz, Rt]. Alternatively, you can select a variable. To create a new variable, type a name for the new variable and click OK or press Enter to open the Create Variable dialog box for further settings.
Output Layer
Enter a name for the output layer or use the drop-down list to select a layer name to be
used for output. If left empty, a temporary layer will be created. If a temporary layer is
selected it will be deleted and replaced.
Select an output layer type from the drop-down list. Select As Input Layer to assign the
type of the input layer to the output layer.
Enter a number to set the kernel size in one slice. The default value is 3.
Number of Slices
Enter the number of slices to be considered as part of the kernel. If a region is specified in the domain, the algorithm will use the region values in x slices above and x slices below (x being the number of slices entered). If there is no region, the entire area of the slices above and below will be considered part of the kernel. If there are insufficient slices or regions, only those available will be considered.
Class Filter
Specifies the class filter for source pixels. Pixels outside selected class objects are assumed to have the value 0.
Input Region
Define a region within the input image layer. Select or enter the name of an existing region. Alternatively, you can enter the co-ordinates of a region specified by its origin (xG, yG), which is the lower left corner, and its size [Rx, Ry, Rz, Rt]. The input pattern is: (xG, yG, zG, tG), [Rx, Ry, Rz, Rt]. Alternatively, you can select a variable. To create a new variable, type a name for the new variable and click OK or press Enter to open the Create Variable dialog box for further settings.
Output Layer
Enter a layer name to be used for output. A temporary layer will be created if there is no entry in the field or if the entry does not exist. If an existing layer is selected, it will be deleted and replaced.
Output Layer Visible
Select the data type of the output layer. Available options are:
As input layer
8-bit unsigned
16-bit unsigned
16-bit signed
32-bit unsigned
32-bit signed
32-bit float
Enter the number of slices to be considered as part of the kernel. If a region is specified in the domain, the algorithm will use the region values in x slices above and x slices below (x being the number of slices entered). If there is no region, the entire area of the slices above and below will be considered part of the kernel. If there are insufficient slices or regions, only those available will be considered.
Class Filter
Specifies the class filter for source pixels. Pixels outside selected class objects are assumed to have the value 0.
Define a region within the input image layer. Select or enter the name of an existing region. Alternatively, you can enter the co-ordinates of a region specified by its origin (xG, yG), which is the lower left corner, and its size [Rx, Ry, Rz, Rt]. The input pattern is: (xG, yG, zG, tG), [Rx, Ry, Rz, Rt]. Alternatively, you can select a variable. To create a new variable, type a name for the new variable and click OK or press Enter to open the Create Variable dialog box for further settings.
Output Layer
Enter a layer name to be used for output. A temporary layer will be created if there is no entry in the field or if the entry does not exist. If an existing layer is selected, it will be deleted and replaced.
Output Layer Visible
Select the data type of the output layer. Available options are:
As input layer
8-bit unsigned
16-bit unsigned
16-bit signed
32-bit unsigned
32-bit signed
32-bit float
Choose the min/max filter mode: diff. brightest to center, diff. center to darkest or diff. brightest to darkest.
Enter a number to set the kernel size in one slice. The default value is 3.
Number of Slices
Enter the number of slices to be considered as part of the kernel. If a region is specified in the domain, the algorithm will use the region values in x slices above and x slices below (x being the number of slices entered). If there is no region, the entire area of the slices above and below will be considered part of the kernel. If there are insufficient slices or regions, only those available will be considered.
Class Filter
Specifies the class filter for source pixels. Pixels outside selected class objects are assumed to have the value 0.
Define a region within the input image layer. Select or enter the name of an existing region. Alternatively, you can enter the co-ordinates of a region specified by its origin (xG, yG), which is the lower left corner, and its size [Rx, Ry, Rz, Rt]. The input pattern is: (xG, yG, zG, tG), [Rx, Ry, Rz, Rt].
Alternatively, you can select a variable. To create a new variable, type a name for the new variable and click OK or press Enter to open the Create Variable dialog box for further settings.
Output Layer
Enter a name for the output layer or use the drop-down list to select a layer name to be
used for output. If left empty, a temporary layer will be created. If a temporary layer is
selected it will be deleted and replaced.
Output Layer Visible
Set the Sigma value. The Sigma value describes how far away a data point is from its mean, in standard deviations. A higher Sigma value results in a stronger edge detection; the default value is 5. The sigma value for a given window is the squared deviation from the window mean:
s^2 = (x - x̄)^2
If the number of pixels P within the moving window that satisfy the criterion in the formula below is sufficiently large (where W is the width, a user-defined constant), the average of these pixels is output. Otherwise, the average of the entire window is produced.
(1 - W) PCenter ≤ P ≤ (1 + W) PCenter
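A sketch of this selection rule (illustrative Python with simplified window handling, not the product code): average only the pixels inside the window that lie within the relative range W of the center value, and fall back to the full-window mean otherwise.

import numpy as np

def lee_sigma_pixel(window, w=0.2, min_pixels=3):
    """window: odd-sized 2D array; returns the filtered value of its center pixel."""
    center = window[window.shape[0] // 2, window.shape[1] // 2]
    lo, hi = (1.0 - w) * center, (1.0 + w) * center
    inside = window[(window >= lo) & (window <= hi)]
    # Average the in-range pixels if there are enough of them,
    # otherwise average the entire window.
    return inside.mean() if inside.size >= min_pixels else window.mean()

win = np.array([[10., 11., 90.],
                [ 9., 10., 10.],
                [95., 11., 10.]])
print(lee_sigma_pixel(win))   # the outliers (90, 95) are excluded from the average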
Edge Extraction Mode
Define a region within the input image layer. Select or enter the name of an existing region. Alternatively, you can enter the co-ordinates of a region specified by its origin (xG, yG), which is the lower left corner, and its size [Rx, Ry, Rz, Rt]. The input pattern is: (xG, yG, zG, tG), [Rx, Ry, Rz, Rt].
Alternatively, you can select a variable. To create a new variable, type a name for the new variable and click OK or press Enter to open the Create Variable dialog box for further settings.
Output Layer
Enter a name for the output layer or use the drop-down list to select a layer. If a temporary
layer is selected it will be deleted and replaced.
Lower Threshold
The lower threshold is applied after the higher threshold. During the first step, edges are detected and pixels with values lower than the Higher Threshold are removed from the detected edges. During the final step, non-edge pixels (those previously removed because their values were less than the higher threshold) with values higher than the lower threshold are marked as edge pixels again. After applying the algorithm the first time, you can check the results (edge pixel values) and find the value for the threshold. Usually values for this field are from 0 to 5; the default is 0.
Higher Threshold
After edges are detected, pixels with values lower than this threshold will not be marked as edge pixels. This allows removal of low intensity gradient edges from the results. After applying the algorithm once, users can check the results (values of edge pixels) and find the correct value for the threshold. Usually values for this field are from 0 to 5; the default is 0.
Gauss Convolution FWHM
Enter the width of the Gaussian filter in relation to the full width at half maximum of the Gaussian filter. This field determines the level of detail covered by the Gaussian filter. A higher value will produce a wider Gaussian filter, and less detail will remain for edge detection; therefore, only high intensity gradient edges will be detected by Canny's algorithm. The range of the field is 0.0001 to 15. The default value is 1.
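For orientation, a Gaussian's FWHM relates to its standard deviation by sigma = FWHM / (2 * sqrt(2 * ln 2)), roughly FWHM / 2.355. The sketch below (illustrative Python with scikit-image, which exposes sigma rather than FWHM) shows the two hysteresis thresholds at work on a synthetic edge:

import math
import numpy as np
from skimage import feature

# Convert an FWHM setting into the sigma that skimage's Canny expects.
fwhm = 1.0
sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))   # about 0.42 * fwhm

image = np.zeros((32, 32))
image[:, 16:] = 1.0          # one vertical step edge

edges = feature.canny(image, sigma=sigma, low_threshold=0.0, high_threshold=0.1)
print(edges.any())           # True: the step edge survives both thresholds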
Input Layer
Define a region within the input image layer. Select or enter the name of an existing region. Alternatively, you can enter the co-ordinates of a region specified by its origin (xG, yG), which is the lower left corner, and its size [Rx, Ry, Rz, Rt]. The input pattern is: (xG, yG, zG, tG), [Rx, Ry, Rz, Rt]. Alternatively, you can select a variable. To create a new variable, type a name for the new variable and click OK or press Enter to open the Create Variable dialog box for further settings.
Output Layer
Use the drop-down list to select a layer to use for output or enter a new name. Output is 32-bit float. If the name of an existing 32-bit float temporary layer is entered or selected, it will be used. If a temporary layer is selected, it will be deleted and replaced.
Sample Results
Figure: the original layer, and the result with Higher Threshold 0 and Gauss Convolution FWHM 0.2
This parameter defines the sharpness of the detected edges. The value range is 0 to 1; the larger the value, the sharper an edge is displayed in the resulting image. Smaller values will progressively blur the detected edges until they are unrecognizable.
Return Option
Enter a number to set the kernel size in one slice. The default value is 3.
Number of Slices
Enter the number of slices to be considered as part of the kernel. If a region is specified in the domain, the algorithm will use the region values in x slices above and x slices below (x being the number of slices entered). If there is no region, the entire area of the slices above and below will be considered part of the kernel. If there are insufficient slices or regions, only those available will be considered.
Class Filter
Specifies the class filter for source pixels. Pixels outside selected class objects are assumed to have the value 0.
Define a region within the input image layer. Select or enter the name of an existing region. Alternatively, you can enter the co-ordinates of a region specified by its origin (xG, yG), which is the lower left corner, and its size [Rx, Ry, Rz, Rt]. The input pattern is: (xG, yG, zG, tG), [Rx, Ry, Rz, Rt]. Alternatively, you can select a variable. To create a new variable, type a name for the new variable and click OK or press Enter to open the Create Variable dialog box for further settings.
Output Layer
Enter a name for the output layer or use the drop-down list to select a layer name to be
used for output. If left empty, a temporary layer will be created. If a temporary layer is
selected it will be deleted and replaced.
Output Layer Visible
Select the layer to which the filter will be applied. Gradient Unit and Unit of Pixel parameters apply to slope calculations only.
Algorithm
Gradient Unit
Available for slope. Select Percent or Degree from the drop-down list for the gradient
unit.
Unit of Pixel Values
Define a region within the input image layer. Select or enter the name of an existing region. Alternatively, you can enter the co-ordinates of a region specified by its origin (xG, yG), which is the lower left corner, and its size [Rx, Ry, Rz, Rt]. The input pattern is: (xG, yG, zG, tG), [Rx, Ry, Rz, Rt]. Alternatively, you can select a variable. To create a new variable, type a name for the new variable and click OK or press Enter to open the Create Variable dialog box for further settings.
Output Layer
Enter a name for the output layer or use the drop-down list to select a layer name to be
used for output. If left empty, a temporary layer will be created. If a temporary layer is
selected it will be deleted and replaced.
Output Layer Type
Select an output layer type from the drop-down list. Select As Input Layer to assign the
type of the input layer to the output layer.
1. Zevenbergen LW, Thorne CR (1987). Quantitative Analysis of Land Surface Topography. Earth Surface Processes and Landforms, 12(1):47-56
2. Horn BKP (1981). Hill Shading and the Reflectance Map. Proceedings of the IEEE, 69(1):14-47
Minimum Input Value
Enter the lowest value of the value range that will be replaced by the output value. The default is 0.
Maximum Input Value
Enter the highest value of the value range that will be replaced by the output value. The
default is 255.
Output Value
The value that will be written in the new calculated raster layer. May be a number or
an expression. For example, to add Layer 1 and Layer 2, enter Layer 1 + Layer 2. The
following operations can be used in the expression:
Constant (pi)
Formula Examples for Output Value
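For orientation, the following sketch (plain NumPy, with hypothetical layer names) shows what an expression such as Layer 1 + Layer 2 computes: the sum is evaluated pixel by pixel and written into the output raster.

import numpy as np

layers = {                       # hypothetical input rasters
    "Layer 1": np.array([[1.0, 2.0], [3.0, 4.0]]),
    "Layer 2": np.array([[10.0, 20.0], [30.0, 40.0]]),
}

# 'Layer 1 + Layer 2' evaluated per pixel:
output = layers["Layer 1"] + layers["Layer 2"]
print(output)   # [[11. 22.] [33. 44.]]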
Enter a name for the output layer or use the drop-down list to select a layer name to be
used for output. If left empty, a temporary layer will be created. If a temporary layer is
selected it will be deleted and replaced.
Output Region
This field is available if the output layer does not yet exist. Select a data type for the raster channel if it must be created:
8-bit unsigned
16-bit unsigned
16-bit signed
32-bit unsigned
32-bit signed
32-bit float
Line Direction
Enter the direction of the extracted line in degrees, between 0 and 179. The default value is 0.
Line Length
Enter the length of the extracted line. The default value is 12.
Line Width
Enter the width of the homogeneous border at the side of the extracted line. The default value is 4.
Max Similarity of Line to Border
Enter a value to specify the maximum similarity of lines to borders. The default value is 0.9.
Min Pixel Variance
Enter a value to specify the minimum pixel variance. Use -1 to use the variance of the input layer. The default value is 0.
Min Mean Difference
Enter a value for the minimum mean difference of the line pixels to the border pixels. If positive, bright lines are detected. Use 0 to detect both bright and dark lines.
Input Layer
Use the drop-down list to select the layer where lines are to be extracted.
Output Layer
Enter or select a layer where the maximal line signal strength will be written. If a temporary layer is selected it will be deleted and replaced.
Output Layer Visible
Elliptic difference:
Gauss reduction (y axis) is the reduction factor of the standard deviation along the (unrotated) y axis
Rotation angle (xy axis) is the y axis rotation angle in the xy plane
Enter an odd number only for the filter kernel size. The default value is 3.
Number of Slices
Enter the number of slices to be considered as part of the kernel. If a region is specified in the domain, the algorithm will use the region values in x slices above and x slices below (x being the number of slices entered). If there is no region, the entire area of the slices above and below will be considered part of the kernel. If there are insufficient slices or regions, only those available will be considered.
Class Filter
Specifies the class filter for source pixels. Pixels outside selected class objects are assumed to have the value 0.
Define a region within the input image layer. Select or enter the name of an existing region. Alternatively, you can enter the co-ordinates of a region specified by its origin (xG, yG), which is the lower left corner, and its size [Rx, Ry, Rz, Rt]. The input pattern is: (xG, yG, zG, tG), [Rx, Ry, Rz, Rt]. Alternatively, you can select a variable. To create a new variable, type a name for the new variable and click OK or press Enter to open the Create Variable dialog box for further settings.
Output Layer
Enter a layer name to be used for output. A temporary layer will be created if there is no entry in the field or if the entry does not exist. If an existing layer is selected, it will be deleted and replaced.
Select the data type of the output layer. Available options are:
As input layer
8-bit unsigned
16-bit unsigned
16-bit signed
32-bit unsigned
32-bit signed
32-bit float
Input Layer
Temp Channel Alias
Calculation Mode
Channels Are Temp
Select the image layer used for filtering. For some object-based filters, no input layer is needed.
Filtered Layer
Kernel Size in x
Kernel size used for the sliding window filter, which should always be an odd number. If the kernel size is 1, this filter axis is not used.
Kernel Size in y
Kernel size used for the sliding window filter, which should always be an odd number. If the kernel size is 1, this filter axis is not used.
Kernel Size in z
Kernel size used for the sliding window filter, which should always be an odd number. If the kernel size is 1, this filter axis is not used.
image layers (these can be created or copied using the Layer Arithmetics algorithm, p 154).
This value corresponds to the exponent p in the formula for the IDW distance weight.
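For orientation, the usual inverse distance weighting form (a standard formula, stated here for context, not quoted from this book): each known point i contributes with weight w_i = 1 / d_i^p, so the interpolated value is z = (sum of w_i * z_i) / (sum of w_i). A tiny sketch:

def idw(points, query, p=2.0):
    """points: list of ((y, x), value); query: (y, x). Larger p localizes the estimate."""
    num = den = 0.0
    for (y, x), z in points:
        d2 = (y - query[0]) ** 2 + (x - query[1]) ** 2
        if d2 == 0.0:
            return z                     # exact hit on a known point
        w = 1.0 / d2 ** (p / 2.0)        # w_i = 1 / d_i**p
        num += w * z
        den += w
    return num / den

print(idw([((0, 0), 10.0), ((0, 4), 20.0)], (0, 1)))   # 11.0, closer to the nearer point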
Name of the raster layer that will be created by the template matching algorithm. The values of this layer will lie between -1 and 1 and correspond to the correlation coefficient of the image and the (centered) template at that position.
Threshold
Insert a template match threshold in a range from -1 to 1, with 1 meaning a perfect match. A good indication of the value to use can be obtained in the Test Template tab of the template editor.
Thematic layer
Name of the thematic layer with template matches marked as points. This thematic layer has two thematic attributes:
TemplateID: corresponds to 0, 1, etc., depending on which template of a group achieved the best match.
TemplateName: name of the .tif file of the template that achieved the best match.
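The correlation coefficient behind these values can be sketched as follows (illustrative Python; real template matching slides this computation over every image position):

import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation of a patch with a centered template; range -1..1."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return (p * t).sum() / denom if denom else 0.0

template = np.array([[0., 1.], [1., 0.]])
print(ncc(template, template))        # 1.0  (perfect match)
print(ncc(1.0 - template, template))  # -1.0 (inverted patch)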
Converter Parameters
Point property parameters define the information to be used when generating new (raster) image layers. The following parameters can be set:
Point Property
Distance to camera: Calculates the distance values to the camera position. The
distance unit is according to the unit of the point cloud projection.
Since the data is resampled into a 2D grid with a xed spacing, several LiDAR points can be within one pixel. The calculation mode allows allows computation of
the resulting pixel value out of several LiDAR points. The following modes are available:
Result Mode
Because data is resampled into a 2D grid with fixed spacing, several points
can be within one pixel. The Returns parameter lets you define the points to use when
generating the raster image. The following settings are available.
Returns
<none>
Selected classes
All not selected classes
Point Filter - By RGB value
Red: Ignore points with the specified red value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Green: Ignore points with the specified green value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Blue: Ignore points with the specified blue value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Point Filter - By Distance
<none>
From Feature. . .
From Array Item. . .
Maximum allowed distance
You can specify the maximum allowed input distance value for the calculation. Enter the value manually or choose:
<none>
From Feature. . .
From Array Item. . .
Output Data
Output Layer
The output layer defines the name of the new raster layer.
Define the two point cloud layers you want to merge and set the naming of the output
layer.
Insert the name of the newly created point cloud layer; keep the default name Merge or
enter an individual name. Make sure to enclose the name in double quotes. If you select the same name
as one of the input layers, it replaces the original input layer.
Select the point cloud layer from which to create the new
Returns
Select which returns should be used for the calculation. The following properties are
available.
<none>
Selected classes
All not selected classes
Point Filter - By Image Layer Comparison
Point Attribute
<none>
Intensity: The point intensity values
Elevation: The point elevation values.
Select the image layer that will provide values to be compared with point
attributes in the .las file.
Image Layer
Upper Value
Lower Value
<none>
From Feature
From Array
You can choose to ignore elevations above the specified value.
Enter the value manually or choose:
Maximum allowed elevation
<none>
From Feature
From Array
Point Filter - By RGB value
Red: Ignore points with the specified red value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Green: Ignore points with the specified green value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Blue: Ignore points with the specified blue value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Output
Output Layer
Define the name of the new temporary point cloud layer. The default
name is temporary. If you select one of the image layers from the drop-down menu, the
respective layer will be overwritten.
Map (output map)
Select a map for the new temporary point cloud layer. The default setting <Use map from Domain> inserts the temporary point cloud into the Domain specified.
The returns parameter lets you define the points to use when exporting the .las
file.
<none>
Intensity: The point intensity values
Upper Value
Lower Value
<none>
From Feature. . .
From Array Item. . .
You can choose to ignore elevations above the specified value.
Enter the value manually or choose:
Maximum allowed elevation
<none>
From Feature. . .
From Array Item. . .
Point Filter - By Distance
<none>
From Feature. . .
From Array Item. . .
Maximum allowed distance
You can specify the maximum allowed input distance value for the calculation. Enter the value manually or choose:
<none>
From Feature. . .
From Array Item. . .
Output - LiDAR Class
Define the class with which the points are labeled, or use the option leave original
class(es) to keep the original class(es).
Choose whether to overwrite the original input file, or create a new one. The options are:
classify points in original LAS file
classify points in new LAS file (complete point cloud)
classify points in new LAS file (subset only)
Define the point cloud layer from which to export the .las
file.
Detail (0. . . 1)
Smoothness
number of pulses
performance with precalculation
performance without precalculation
choose value from feature
choose value from array
Point Filter
Returns
The returns parameter lets you define the points to use when exporting the .las
file.
All: All points are used
First: Only the first return points are used
Last: Only the last return points are used
Classification Filter
Filtering
Point Filter
Use property
Neighbors
LiDAR Class
Define the class with which the points in one cluster are labeled.
Select the name of the new temporary vector layer. The default name is temporary.
Output Map
Select a map for the new temporary thematic layer. The default setting <Use map from
Domain> inserts the temporary layer into the Domain specified.
Output vector type
Type of the vector layer to create (only 3D point is supported in this version).
Output layer visible
Vector content
Type of the source data for vector layer contents. The following selections are possible:
Create empty
Create from point cloud
When selecting Create from point cloud the following options are available:
Source point cloud data
Source point cloud
X coordinate
Select the statistical operation to compute the X coordinate from the point
cloud, with the statistical operation applied per image object.
Y coordinate
Select the statistical operation to compute the Y coordinate from the point
cloud, with the statistical operation applied per image object.
Z coordinate
Select the statistical operation to compute the Z coordinate from the point
cloud, with the statistical operation applied per image object.
<none>
Selected classes
All not selected classes
Point Filter - By RGB value
Red: Ignore points with the specified red value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Green: Ignore points with the specified green value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Blue: Ignore points with the specified blue value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Point Filter - By Distance
<none>
From Feature. . .
From Array Item. . .
Maximum allowed distance
You can specify the maximum allowed input distance value for the calculation. Enter the value manually or choose:
<none>
From Feature. . .
From Array Item. . .
Rasterization Settings - Point size
Specify the thematic layer attribute column that contains classification values.
Class Mode
Specify the behavior when the class specified in the thematic layer is not present in the
rule set.
Skip if class does not exist: class assignment is skipped if the class specified in
the thematic layer is not present in the rule set
Use default class: use a default class if the class specified in the thematic layer is
not present in the rule set
Create new class: a random, unique color will be assigned to the new class in the
Class Hierarchy window.
Default Class
Only available if Class Mode is Use Default Class. Specify the default class for this mode.
Choose attributes from the thematic layer for the algorithm. You can select any numeric
attribute from the attribute table of the selected thematic layer.
Variable
Select an existing variable or enter a new name to add a new one. If you have not already
created a variable, the Create Variable dialog box will open.
Feature
If the thematic layer is linked with a shapefile, the changes can be updated to the file.
Column name: Enter the column name to add. Consider that shape files (*.shp) do not
support more than 9 characters.
Column type: Choose column type (double, integer, string)
Default value: Specify default value
Rename Column
Select a name for the thematic layer to be created. The default name is converted_objects.
Shape type
Union
Intersection
Difference
Subtraction
Only available for the boolean operation subtraction, to select the vector layers to be
subtracted from the domain vector layers.
Output vector layer
Select the name of the output vector layer. Depending on the selected operation, the default names are vector_union, vector_intersection, vector_difference or vector_substract.
Overwrite output layer
The default setting for overwriting the output layer is No. If the output vector layer already
exists and you want to overwrite it, select Yes.
Insert the output vector layer name for the smoothed vectors. The default name is
vector_smoothed.
Approximation tolerance
Select the output vector layer name for the simplified vectors. The default name is
vector_simplified.
Stop criteria
Insert the Douglas-Peucker distance parameter in scene units. (Only available for Stop
criteria - Max distance.)
Percentage
Percentage of points which should be kept from the original line or polygon shape. (Only
available for Stop criteria - Percentage.)
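For orientation, the Max distance criterion corresponds to the classic Douglas-Peucker recursion. A minimal generic Python sketch (not eCognition's own code), with points given as (x, y) tuples and epsilon as the distance parameter in scene units:

import math

# Perpendicular distance from point p to the line through a and b.
def point_line_distance(p, a, b):
    if a == b:
        return math.dist(p, a)
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    return num / math.dist(a, b)

# Recursive Douglas-Peucker simplification of a polyline.
def douglas_peucker(points, epsilon):
    if len(points) < 3:
        return list(points)
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = point_line_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]  # all points close enough: keep endpoints
    left = douglas_peucker(points[:index + 1], epsilon)
    right = douglas_peucker(points[index:], epsilon)
    return left[:-1] + right  # drop the duplicated split point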
Select the output vector layer name for the generalized polygons. The default name is
vector_rectilinear.
Chessboard size (pixels)
Select the chessboard size in pixels. The algorithm applies a chessboard segmentation
within a bounding box of each polygon assigning each tile either to the polygon or to the
background depending on the merge ratio.
Merge threshold
This threshold defines whether a chessboard tile belongs to the polygon or not; it is applied to
the ratio of the number of polygon pixels to the number of outside pixels. The value is in a
range between 0 and 1.
Main direction
Polygon direction vector angle with the X axis in degrees (range −90 to 90). The resulting
polygon will be orthogonal and rotated at this angle. If the auto-detect option is selected,
the direction is determined by the largest sum of edges with the same angle.
Select the output vector layer name. The default name is vector_integrated.
Overwrite output layer
Snap distance in geo units. This is the largest allowed distance between 2 vectors.
Select the output vector layer name. The default name is vector_clean.
Select the output vector layer name. The default name is vector_dissolve.
Overwrite output layer
Select the attribute to use as a merge criterion. An empty value field results in dissolving
all polygons.
17 Workspace Automation
Algorithms
Workspace automation algorithms are used for working with subroutines of rule sets.
These algorithms enable you to automate and accelerate the processing of workspaces
with particularly large images. Using workspace automation algorithms you can create
multi-scale workflows, which integrate analysis of images at different scales, magnifications, or resolutions.
See Scale (p 127) for an explanation of the Select Scale dialog box.
Additional Thematic Layers
Edit the thematic layers you wish to load to a scene copy. This option is used to load
intermediate result information that has been generated within a previous subroutine and
exported to a geocoded thematic layer. Use semicolons to separate multiple thematic
layers, for example, ThematicLayer1.tif;ThematicLayer2.tif.
Specify the scale for copying a map using a variable, rather than defining a numerical
value.
Scale
See Scale (p 127) for an explanation of the Select Scale dialog box.
Border Size
Extends the bounding box of the image object by the entered border size in pixels. (Only
available when image object level is selected as Domain.)
Include Maps With Objects Linked Via
Class filter for the object link class that links objects on different maps. The subset will
contain all maps on which the current object has linked objects. (Only available when
image object level is selected as Domain.)
Exclude Other Image Objects
Define a no-data area outside of the current image object to exclude other image objects
from further processing. (Only available when image object level is selected as Domain.)
Selecting yes brings up two further parameters:
Customize Path: if set to yes, the Export Path parameter is activated
Specify the file export folder used for desktop processing. If the algorithm is run in
desktop mode, files will be stored at this location. In server processing mode, the file
location is defined in the export settings specified in the workspace.
Minimum and Maximum X, Y, Z & T Co-ordinates
Edit the co-ordinates of the subset. For the default co-ordinates orientation (0,0) in the
bottom left-hand corner, the different co-ordinates are defined as follows:
Alternatively, click the drop-down arrow button to select available variables. Entering a
letter will open the Create Variable dialog box.
Co-ordinates Orientation
You can change the corner of the subset that is used as the calculation base for the coordinates. The default is (0,0) in the lower-left corner.
Additional Thematic Layers
Edit the thematic layers you wish to load to a scene copy. This option is used to load
intermediate result information that has been generated within a previous subroutine and
exported to a geocoded thematic layer. Use semicolons to separate multiple thematic
layers, for example, ThematicLayer1.tif;ThematicLayer2.tif.
Edit the height of the tiles to be created. Minimum height is 100 pixels.
Tile Width
Edit the width of the tiles to be created. Minimum width is 100 pixels.
Select the type of scene to submit for analysis: top-level scenes, tiles, or subsets and
copies.
If you select top level scenes, they can only be submitted on the client, and not on a server.
This option is designed for use with actions that are used in the Analysis Builder window
and that will not be analyzed on the server.
State Filter
This field only displays for top-level scenes. The default is any state. Use the drop-down
list to select a processing state: created, edited, processed, failed, canceled or rejected.
Submit Recursively
Enter the prefix of the names of scene copies to be selected for submitting. A prefix is
defined as the complete scene name or the beginning of it. Enter the unique part of the
name to select only that scene, or the beginning of the name to select a group with similar
or sequential names. For example, if you have scene names 7a, 7b and 7c, you can select
them all by entering a 7, or select one by entering 7a, 7b or 7c.
Process Name
Select a parameter set to transfer variables to the following subroutines. Click the ellipsis
button to open the Select Parameter Set dialog box.
Percent of Tiles to Submit
If you do not want to submit all tiles for processing but only a certain percentage, you
can edit the percentage of tiles to be processed. If you change the default value of 100,
the tiles are selected randomly. If the calculated number of tiles to be submitted is not an
integer it is rounded up to the next integer.
If the value entered is less than or equal to 0, 1.0 will be used. If the value entered is
greater than 100, 100 will be used. Tiles that are not selected are automatically assigned
the status skipped.
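The clamping and rounding rules above can be summarized in a small Python sketch (a hypothetical helper, not part of the product):

import math
import random

# Clamp the percentage as described above, round the tile count up, and pick
# tiles at random.
def select_tiles(tiles, percent):
    if percent <= 0:
        percent = 1.0    # values <= 0 fall back to 1.0
    elif percent > 100:
        percent = 100.0  # values > 100 are capped at 100
    count = math.ceil(len(tiles) * percent / 100.0)
    return random.sample(tiles, count)  # unselected tiles would be marked skipped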
Select Yes to stitch the results of subscenes together and add them to the complete scene
within its original dimensions. Only the main map of a tile project can be stitched.
Overlap Handling
If Subsets and Copies are stitched, the overlapping must be managed. You can opt to
create Intersection image objects (default) or select Union to merge the overlapping image
objects.
Class for Overlap Conflict
Overlapping image objects may have different classifications. In that case, you can define
a class to be assigned to the image objects resulting from overlap handling.
Ignore Failed Tiles or Subsets
If set to Yes, tiles or subsets that have the status failed will be ignored by the stitching process.
Transparency value for temporary image layers
A value to be ignored when stitching temporary image layers. Enter the value manually
or choose:
From Feature. . .
From Array Item. . .
Select a parameter set to transfer variables to the following subroutines. Click the ellipsis
button to open the Select Parameter Set dialog box.
Select the type of scene copy to be deleted: tiles or subsets and copies.
Scene Name Prefix
Enter the prefix of the names of scene copies to be selected for deleting. A prefix is
defined as the complete scene name or the beginning of it. Enter the unique part of the
name to select only that scene, or the beginning of the name to select a group with similar
or sequential names. For example, if you have scene names 7a, 7b and 7c, you can select
them all by entering a 7, or select one by entering 7a, 7b or 7c.
This function lets you use the text of a variable containing a scene type, rather than the
scene type itself. Selecting yes activates the Scene Type Variable parameter.
Scene Type Variable
Select the type of scene to be submitted (this parameter only appears if Use Variable as
Scene Type is set to no):
Current scene
Subsets and copies
Subset and copies recursively
Tiles
Tiles recursively
Tiles from subset.
Enter the prefix of the names of the scenes to be selected for reading. A prefix is defined
as the complete scene name or the beginning of it. Enter the unique part of the name
to select only that scene, or the beginning of the name to select a group with similar or
sequential names. For example, if you have scene names 7a, 7b and 7c, you can select
them all by entering a 7, or select one individually by entering 7a, 7b or 7c.
Export Item
Enter the name of the export item statistics to be summarized (for example
ProjectStatistics or ObjectStatistics).
Output Type
All Columns: reads all available columns, creates a variable for each one, then puts
these variables into a specified feature list
Column
If single column is selected as the Output Type, specify which column should be used
by the algorithm for the statistical summary operation.
Feature List
The Feature List parameter appears when all columns is selected as the output type.
Enter the feature list variable to receive the columns.
Add Summary Prefix
The Add Summary Prefix option appears when the Output Type parameter is set to all
columns. It adds the summary type as a prefix to each variable's name.
Description
Count
Sum
Mean
Max
Calculate Summary
If yes is selected, a variable value is used to calculate a summary; if set to no, a fixed
value is used.
Result Variable
Enter a variable to store the resulting value. (This parameter is only visible if Single
Column is selected and Calculate Summary is set to yes.)
18 Interactive Operation
Algorithms
Interactive operation algorithms are used to provide interaction with the user of
actions in eCognition Architect.
Browse for an image file containing the image layers. Alternatively, you can edit the path.
Image Layer ID
Change the image layer ID within the file. Note that the ID is zero-based.
Image Layer Alias
Browse for a thematic file containing the thematic layers. Alternatively, you can edit the
path.
Attribute Table File
Browse for an attribute file containing thematic layer attributes. Alternatively, you can
edit the path.
Attribute ID Column Name
Edit the name of the column of the attribute table containing the thematic layer attributes
of interest.
Activate to select the bounding coordinates based on the respective geographical coordinate system.
The Use Brush parameter, when assigned the value Yes, activates a brush tool. Holding
down the left mouse button allows a user to manually classify objects by dragging the
mouse over them. If this action is performed while pressing the Shift key, objects are
unclassified.
Brush Size
Select the features to display the feature values of the image objects.
Click and hold the left mouse button as you drag the cursor across the image to
create a path with points in the image
To create points at closer intervals, drag the cursor more slowly or hold the Ctrl key
while dragging.
Release the mouse button to automatically close the polygon
Click along a path in the image to create points at each click. To close the polygon,
double-click or select Close Polygon in the context menu
To delete the last point before the polygon is complete, select Delete Last Point in
the context menu.
Select Full Editing or Classify Only. The Classify Only option enables the editing of
class and layer name parameters only.
Class
When this function is activated and the user draws a line around an image object with the
mouse, the line will automatically snap to the area of highest contrast.
Magnetic Snap Radius
Insert a value to specify a detection radius; the magnetic snap detects edges only within
the specified distance. A value of zero means no magnetic snap.
Thematic Layer Name
Enter the name of the layer where thematic objects are to be selected.
Selection Mode
Polygon Cut: in polygon mode, a closed polygon is drawn and the area inside it is
cut
Line Cut: in line mode, a line is drawn and rasterized by creating pixel-width
image objects along the line
Class
Select yes or no. If yes is selected, the drawn line will automatically snap to high-contrast areas when the user drags the mouse while holding down the left mouse button.
Magnetic Snap Radius
Insert a value to specify a detection radius; the magnetic snap detects edges only within
the specified distance. A value of zero means no magnetic snap. The default value is 30.
Class Filter
If selected, specifies the image object list variable that will receive the cut image objects.
Callback Process
Select yes or no. If yes is selected, the image object type will be preserved for cut
image objects. If no is chosen, cut image objects will be marked as disconnected (this
option is less processor intensive).
Ensure Connected for Objects Created by the Cut
Select yes or no. If yes is selected, the resulting image objects will be converted to
connected 2D. If no is chosen, resulting image objects will be marked as disconnected.
Save view settings: in this mode the view settings can be saved. They will be
stored together with the algorithm. Use the View Settings parameter to save the
current view.
Restore view settings: in this mode the view settings are restored to the state
represented by another process using this algorithm in the Save View Settings mode.
Use the Process Path parameter to specify the process that stores the view settings.
View Settings
This parameter is only available in the Save View Settings mode. Click the ellipsis in the
Value column (which displays Click to Capture View Settings) to store the current view
settings. In addition to main maps, the view settings for maps and window splits are also
saved.
Process Path
This parameter is only available in the Restore View Settings mode. Refer to another
process using this algorithm in Save View Settings mode to restore the view settings that
are stored in this algorithm. To refer to a process, right-click on it and select Go To to
open the Go To Process dialog box.
Select on which pane the map should be displayed. The following values are available:
Active
First
Second
Third
Fourth
Apply to all
Map Name
No Split
Split Vertically
Split Horizontally
4 Panes
Comparison View: for a 2D project, two vertical panes; for a 3D project, comparisons of XY, XZ or YZ views
View Type
Independent View
Side by Side View
Swipe View
Synchronize Views
Define how image views are laid out and what content is displayed for each image view.
Select yes to synchronize all the map views with each other.
First
Second
Third
Fourth
Map Name
XY, XZ, YZ or 3D
Select on which pane the view settings should be applied. The following options are
available:
Active
First
Second
Third
Fourth
Apply to all
Layer
Select the Layer to which the view settings should be applied. Depending on the project and
its available data, the following selections can be made:
No Change
Image Data
Samples
temporary Layer
Thematic Layer
Mode
No Change
Layer
Classification
Samples
From Feature. . .
From Array Item. . .
Image Data
Select which outline display settings should be applied. Available options are:
No Change
None
Opaque
Transparent
Class Transparency
Select the desired opacity (in %) by either entering a certain value or one of the
following options:
No Change
Value
From Feature. . .
From Array Item. . .
Zoom
Select the desired zoom factor (in %) by either entering a certain value or one of the
following options:
No Change
Value
Fit to Window
From Feature. . .
From Array Item. . .
Polygons
No Change
Off
Raster
Smoothed
Map Name
No Change
Map name
main
From Parent
No Change
Value
From Feature. . .
From Array Item. . .
Image Layer
Image Layer Mixing
No Change
Thematic Layer
Thematic Vector Objects
No Change
Off
Manual editing
View
No change
Set Thematic Layer Mixing
When selecting Set Thematic Layer Mixing, the following thematic layer visualization properties can be selected:
Layers
Show
Outline Color
Fill color and
Transparency
Thematic Layer Transparency: If you want to select one transparency value for all selected
thematic layers, insert a transparency value here in a range of 0 to 100. This parameter is
available for Thematic Layer Mixing > Set Thematic Layer Mixing.
It is possible to turn this option on and off in Tools > Options (for more details, consult the
user guide).
Select the image layer or thematic layer (depending on your selection in the Apply To
field) to show or hide.
Active Map
<Create New Variable>
Hide Map
Values for the Hide Map parameter are the same as those for Show Map.
Allow Empty Image View
Insert a variable that will store the result. If the user selects yes, the value is 1; if no is
selected, the value is 0.
Set this value to modified or unmodified. If modified is selected, the user will be prompted
to save; if unmodified is selected, this option will not appear.
Enter a name in quotes, or use the drop-down list to select a variable, feature or array
item.
Define the name of the help file. The HTML file must be created independently and
conform to the defined address.
Shapes File
Select on which pane the settings should be applied. The following options are available:
Active
First
Second
Third
Fourth
Apply to all
Equalization
None
Linear
Standard Deviation
Gamma Correction
Histogram
Manual
Equalization factor
Default Value
From Feature. . .
From Array Item. . .
If you select Default Value for the Equalization factor the following values are applied:
Linear = 1%
Standard Deviation = 3.00
Gamma Correction = 0.50
Manual image equalization
If you select Manual for Equalization, the following options are available:
Use variable for equalization
If yes is selected, the software will expect a scene variable of the string type as an
equalization parameter. If no is selected, equalization has to be specified explicitly.
Equalization
Linear
Linear Inverse
Gamma Correction (Positive)
Gamma Correction (Negative)
Gamma Correction (Positive) Inverse
Gamma Correction (Negative) Inverse
Range Type: There are two ways to define the range, Min/Max or Center/Width. The
value range will not exceed the limitations of the channel; for example, the value for an
8-bit channel will always be between 0 and 255.

min = center − width/2,  max = center + width/2
Min/Max: Depending on the selection of the Range Type, you can specify the minimum
and maximum values for the layer equalization range.
Center/Width: Depending on the selection of the Range Type, you can specify the center
and width values for the layer equalization range.
Interactive Range Editing Step
The value of the Interactive Range Editing Step represents the step size in pixels for the
interactive mouse action.
The name of the class to be created or modified. The default value <auto> will insert a
random name.
Default Value
From Feature. . .
From Array Item. . .
Superclass name
Class comment
For the visualization of the class color, select a value between [0. . . 255]. The default
value −1 selects a random red color value for this new class.
Default Value
From Feature. . .
From Array Item. . .
Class visualization green color
For the visualization of the class color, select a value between [0. . . 255]. The default
value −1 selects a random green color value for this new class.
Default Value
From Feature. . .
From Array Item. . .
Class visualization blue color
For the visualization of the class color, select a value between [0. . . 255]. The default
value −1 selects a random blue color value for this new class.
Default Value
From Feature. . .
From Array Item. . .
Scope
Define the file name and the path of the parameter set files to be loaded. You may use the
same settings as used for a related Save Parameter Set action.
You can use the suggested file name pattern {:Workspc.OutputRoot\}
parameter_sets\paramset.psf. It defines the folder parameter_sets located
in the workspace output root folder as displayed in the Workspace Properties dialog
box. The parameter set files are named paramset, followed by the file name ending as
described below and the file extension .psf.
Click the drop-down arrow to select text elements for editing the file name pattern.
File Name Ending
Define the file name ending to add to the File Name field. You may use the same settings
as used for a related Save Parameter Set action. Click the drop-down arrow to select an
available variable. Alternatively, you can insert text between the quotation marks.
Click the drop-down arrow to select a parameter set to save to file. Alternatively, you can
enter a given parameter set name.
File Name
See the explanation contained in Load Parameter Set on the preceding page.
File Name Ending
See the explanation contained in Load Parameter Set on the previous page.
See the explanation contained in Load Parameter Set on the preceding page.
File Name Ending
See the explanation contained in Load Parameter Set on the previous page.
20 Sample Operation
Algorithms
Use sample operation algorithms to handle samples for Nearest Neighbor classification
and to configure the Nearest Neighbor settings.
Select as many features as you like for the nearest neighbor feature space.
Function Slope
The Use Brush parameter, when assigned the value Yes, activates a brush tool. Holding
down the left mouse button allows a user to manually classify objects by dragging the
mouse over them. If this action is performed while pressing the Shift key, objects are
unclassified.
21 Export Algorithms
Export algorithms are used to export table data, vector data and images derived from the
image analysis results.
If set to yes, then multiple files per series will be exported, or additional columns will be
created for table exports.
Export Unclassified as Transparent
Select the export file type used for desktop processing. If the algorithm is run in desktop
mode, files will be stored in this format. In server processing mode, the file format is
defined in the export settings specified in the workspace.
If set to yes, then multiple files per series will be exported, or additional columns will be
created for table exports.
Enable Geo Information
Click the ellipsis button to capture the current view settings. Transparency settings may affect
the appearance of the exported view.
Scale
See Scale (p 127) for an explanation of the Select Scale dialog box.
Default File Format
Select the export file type used for desktop processing. If the algorithm is run in desktop
mode, files will be stored in this format. In server processing mode, the file format is
defined in the export settings specified in the workspace.
Export Multiple Slices As
Specify how the slices of a three-dimensional scene are exported. You can export the
slices into a single montage image, into multiple files or into a multi-page .tif file. To
export only a selection of slices, you can additionally change the settings under Slices
and Frames:
Select Current to export the current slice or slices of the current time frame.
Select Single to export a certain slice or slices of a certain time frame. Indicate the
slice or time frame using the slice and frame index respectively.
Select Range to export a range of slices or slices of several time frames. Indicate
the range using the slice and time frame index respectively.
The settings can be made independently. By default, both values are set to all.
Use Explicit Path: exports to the location defined in the Export Path field. The
default export path is {:Workspc.OutputRoot}\results\{:Item.Name}
\{:Project.Name}.v{:Project.Ver}.{:Ext}
Export Series
If set to yes, then multiple les per series will be exported, or additional columns will be
created for table exports.
Export Type
Select the export file type used for desktop processing. If the algorithm is run in desktop
mode, files will be stored in this format. In server processing mode, the file format is
defined in the export settings specified in the workspace.
Geo-Coding Shift X
Shift the geocoding lower-left corner in the X direction. Use the drop-down list to shift
half a pixel to the left or right.
Geo-Coding Shift Y
Shift the geocoding lower-left corner in the Y direction. Use the drop-down list to shift
half a pixel up or down.
If set to yes, then multiple files per series will be exported, or additional columns will be
created for table exports.
Thematic Layer
Select the export file type used for desktop processing. If the algorithm is run in desktop
mode, files will be stored in this format. In server processing mode, the file format is defined
in the export settings specified in the workspace.
Instead of exporting the results from multiple projects into multiple files, it is also possible
to export the results into multiple feature classes of one FileGDB.
If set to yes, then multiple files per series will be exported, or additional columns will be
created for table exports.
Domain Features
Select the export file type used for desktop processing. If the algorithm is run in desktop
mode, files will be stored in this format. In server processing mode, the file format is defined
in the export settings specified in the workspace.
File Name Suffix
File Name Suffix allows users to select features or variables or enter a string. The value
of this parameter is then added to the name of the exported file.
When you define a suffix, be aware that certain characters are invalid in Windows
filenames; invalid filename characters will result in Windows error code 123.
If set to yes, then multiple files per series will be exported, or additional columns will be
created for table exports.
Features
Select the export file type used for desktop processing. If the algorithm is run in desktop
mode, files will be stored in this format. In server processing mode, the file format is defined
in the export settings specified in the workspace.
If set to Yes, then multiple files per series will be exported, or additional columns will be
created for table exports.
Features
Select the export file type used for desktop processing. If the algorithm is run in desktop
mode, files will be stored in this format. In server processing mode, the file format is defined
in the export settings specified in the workspace.
File Name Suffix
File Name Suffix allows users to select features or variables or enter a string. The value
of this parameter is then added to the name of the exported file.
This feature allows you to create a separate export file based on one or more features.
If set to yes, then multiple files per series will be exported, or additional columns will be
created for table exports.
Features
Select the export file type used for desktop processing. If the algorithm is run in desktop
mode, files will be stored in this format. In server processing mode, the file format is defined
in the export settings specified in the workspace.
If set to yes, then multiple files per series will be exported, or additional columns will be
created for table exports.
Attribute Table
Shape Type
Export Type
Choose between geo coordinates (Yes) or pixels (No) for exported vertices in the shapefile.
Coordinates Orientation
Specifies how the given coordinates should be interpreted (available for 3D data).
Click the drop-down arrow to select Shapefile or GDB file format. Instead of exporting
the results from multiple projects into multiple files, it is also possible to export the results
into multiple feature classes of one FileGDB.
Name of Feature Class to Export
Click to open the Select Feature Class to Export dialog box and select a feature class.
This field displays options if you use Trimble Data Management in combination with the
eCognition Server.
If Static Export Item or Dynamic Export Item is selected and this parameter is set to
yes, you may enter a file export location in the Export Path field.
Export Series
If set to yes, then multiple files per series will be exported, or additional columns will
be created for table exports.
Default File Format
Select the export file type used for desktop processing. If the algorithm is run in desktop
mode, files will be stored in this format. In server processing mode, the file format is defined
in the export settings specified in the workspace.
Image File Name
Enter the file name for the exported image. If multiple files are exported, this name is
used as a prefix, to which numbers are added.
21.10.3 Settings
View Settings Source
Enter a value for the thickness of the pixels around the bounding box of the exported
image object (the default is 0).
Use Fixed Image Size
Dynamic Export Item: lets you enter or select a variable as an export item.
Select a variable from the drop-down box or enter a name in the Export Item
Variable Name field (entering a name launches the Create Variable dialog box,
where you can enter a value and variable type)
Use Explicit Path: exports to the location defined in the Export Path field. The
default export path is {:Workspc.OutputRoot}\results\{:Item.Name}
\{:Project.Name}.v{:Project.Ver}.{:Ext}
Export Series
If set to yes, then multiple files per series will be exported, or additional columns will be
created for table exports.
Scale
See Scale (p 127) for an explanation of the Select Scale dialog box.
Default File Format
Select the export file type used for desktop processing. If the algorithm is run in desktop
mode, files will be stored in this format. In server processing mode, the file format is defined
in the export settings specified in the workspace.
Background Fill Color
Select the channel value of the required background fill color. You can enter any integer
value that represents an actual gray value. If you enter a value that is invalid for the
current image file, it will automatically be changed to the closest valid one.
Export Mode
Select from the following formats if exporting in desktop mode (if you are using server
mode, the export settings defined in the workspace will define the file format):
Geo-Coding Shift X
Shift the geocoding lower-left corner in the X direction. Use the drop-down list to shift
half a pixel to the left or right.
Geo-Coding Shift Y
Shift the geocoding lower-left corner in the Y direction. Use the drop-down list to shift
half a pixel up or down.
Create: creates a new result preview layer from the domain level
Create from Subscenes: creates a new result preview layer from specified subscenes
Append: appends or creates a domain level into a result preview layer
Generate Overview: generates an overview of an existing result preview layer.
Target Map
22 Image Registration
Algorithms
Image registration algorithms are used to transform two or more sets of image data into
one system of co-ordinates.
Landmarks may be created manually or automatically with results similar to A2 and B2.
B1/A1 illustrates the result of the registration of B1 with respect to A1.
List of maps that should be registered to the map selected in the domain.
Transformation Type
This parameter is only available for affine transformations using the Automatic by pixel
brightness mode. Select the reference layer for pixel brightness optimization.
Iterations
This parameter is only available for affine transformations. Number of iterations when
optimizing the affine transformation parameters.
Save Transformation In Parameter Set
Name of the parameter set in which to save the affine transformation matrix in the case of a single map
registration.
Resample Type: Description
Linear
B-spline
Nearest Neighbor: This resampling method is the fastest. It may produce results of lower
quality than linear or B-spline.
23 About Features
23.1 About Features as a Source of Information
Image objects have spectral, shape, and hierarchical characteristics. These characteristic
attributes are called features in Trimble software. Features are used as a source of information to define the inclusion-or-exclusion parameters used to classify image objects.
There are two major types of features:
Object features are attributes of image objects, for example area
Global features are not connected to an individual image object, for example the
number of image objects of a certain class.
Position values can be converted from one co-ordinate system to another. The following
position conversions are available:
If the unit is a pixel, a position within the pixel co-ordinate system is identified
If the unit is a co-ordinate, a position within the user co-ordinate system is identified
The position conversion is applied for image object features such as Y center, Y max and
X center.
Conversion of Unit Values
Distance values such as length and area are initially calculated in pixels. They can be
converted to a distance unit. To convert a pixel value to a unit, the following information
is needed:
24 Vector Features
Vector features allow addressing vectors by their attributes. These features are based on
vector objects and not on image objects.
Figure 25.2. Creating an arithmetic feature in the Customized Features dialog box
Figure 25.3. Creating a relational feature in the Customized Features dialog box
8. To create the new feature, click Apply to create the feature without leaving the
dialog box, or OK to create the feature and close the dialog box
9. After creation, the new relational feature can be found in the Feature View window
under Class-Related Features > Customized.
NOTE: As with class-related features, the relations refer to the groups
hierarchy. This means if a relation refers to one class, it automatically
refers to all subclasses of this class in the groups hierarchy.
Relation: Description

Neighbors: Related image objects on the same level. If the distance of the image
objects is set to 0, then only the direct neighbors are considered. When the
distance is greater than 0, the relation of the objects is computed using
their centers of gravity. Only those neighbors whose center of gravity is
closer than the specified distance from the starting image object are
considered. The distance is calculated either in metric units or pixels. For
example, a direct neighbor might be ignored if its center of gravity is
further away than the specified distance.

Sub-objects: Image objects that exist under other image objects (superobjects) whose
position in the hierarchy is higher. The distance is calculated in levels.

Superobject

Sub-objects of superobject: Only the image objects that exist under a specific superobject are
considered in this case. The distance is calculated in levels.

Level: Specifies the level on which an image object will be compared to all other
image objects existing at this level. The distance is calculated in levels.

Relational function: Description

Mean: Calculates the mean value of selected features of an image object and its
neighbors. You can select a class to apply this feature, or no class if you
want to apply it to all image objects. Note that for averaging, the feature
values are weighted with the area of the image objects.

Standard deviation

Mean difference
Mean absolute difference

Ratio

Sum

Number

Min: Returns the minimum value of the feature values of an image object and
its neighbors of a selected class.

Max: Returns the maximum value of the feature values of an image object and
its neighbors of a selected class.

Mean difference to higher values

Mean difference to lower values

Portion of higher value area

Portion of lower value area

Portion of higher values: Calculates the feature value difference between an image object and its
neighbors of a selected class with higher feature values than the object
itself, divided by the difference of the image object and all its neighbors of
the selected class. Note that the features are weighted with the area of the
corresponding image objects.
Portion of lower values: Calculates the feature value difference between an image object and its
neighbors of a selected class with lower feature values than the object
itself, divided by the difference of the image object and all its neighbors of
the selected class. Note that the features are weighted with the area of the
corresponding image objects.

Mean absolute difference to neighbors
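To make the area weighting used by relational functions such as Mean concrete, here is a minimal Python sketch (a hypothetical data layout, not the eCognition API) that averages a feature over an object's neighbors, weighting each value by object area:

# Area-weighted mean of a feature over neighboring objects;
# neighbors is a list of (feature_value, area) pairs.
def weighted_neighbor_mean(neighbors):
    total_area = sum(area for _, area in neighbors)
    if total_area == 0:
        return 0.0
    return sum(value * area for value, area in neighbors) / total_area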
26.1 Is 3D
Object Features > Type > Is 3D
The Is 3D feature checks whether an image object has three dimensions. If the image object
type is 3D, the feature value is 1 (true); if not, it is 0 (false).
26.2 Is Connected
Object Features > Type > Is Connected
The Is Connected feature checks if the image object is connected. If the image object
type is connected in two-dimensional space, the feature value is 1 (true), otherwise it is 0
(false).
27.1 Mean
Object Features > Layer Values > Mean
Features in this group refer to the mean layer intensity value of an image object.
27.1.1 Brightness
Object Features > Layer Values > Mean > Brightness
Editable Parameter
To set which image layers providing the spectral information are used for calculation,
select Classification > Advanced Settings > Select Image Layers for Brightness from the
main menu. The Define Brightness dialog box opens.
Parameters
Expression

\bar{c}(v) = \frac{1}{w^B} \sum_{k=1}^{K_B} w_k \, \bar{c}_k(v)

Conditions

P_v is the set of pixels/voxels of an image object v, with P_v = \{(x,y,z,t) : (x,y,z,t) \in v\}

\#P_v is the total number of pixels/voxels contained in P_v

c_k(x,y,z,t) is the image layer intensity value at pixel/voxel (x,y,z,t)

c_k^{min} is the darkest possible intensity value of image layer k

c_k^{max} is the brightest possible intensity value of image layer k

\bar{c}_k is the mean intensity of image layer k.
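Using the notation above, brightness reduces to a weighted mean of the per-layer object means. A minimal numpy sketch with assumed inputs (a list of layer arrays, a boolean object mask and per-layer brightness weights; not the product API):

import numpy as np

# Brightness of one object: the weighted mean of its per-layer mean intensities.
# layers: list of 2D arrays, mask: boolean array selecting P_v, weights: w_k.
def brightness(layers, mask, weights):
    layer_means = np.array([layer[mask].mean() for layer in layers])
    w = np.asarray(weights, dtype=float)
    return float((w * layer_means).sum() / w.sum())  # w.sum() plays the role of w^B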
Expression

\bar{c}_k(v) = \bar{c}_k(P_v) = \frac{1}{\#P_v} \sum_{(x,y,z,t) \in P_v} c_k(x,y,z,t)

Feature Value Range

[c_k^{min}, c_k^{max}]

Expression

\frac{\max_{i,j \in K_B} \left| \bar{c}_i(v) - \bar{c}_j(v) \right|}{\bar{c}(v)}

Feature Value Range

\left[ 0, \frac{c_k^{max}}{K_B} \right]

Typically, the feature values are between 0 and 1.

Conditions
27.2 Quantile
Object Features > Layer Values > Pixel Based > Quantile
Returns the defined quantile of a defined layer.
Reference Book
11 June 2014
260
Calculation: first, the values are sorted from small to large. The ordered values are associated with
equally spaced sample fractions. The value at the defined quantile position is read out.
27.2.2 Parameters
The quantile is given by the element of the list Y with index equal to int(r)
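The sort-and-index procedure translates directly into code; a minimal generic sketch (numpy's np.quantile offers interpolating variants of the same idea):

# Quantile by sorting and index lookup; values must be non-empty and q in [0, 1].
def quantile(values, q):
    ordered = sorted(values)
    r = q * (len(ordered) - 1)  # associate ordered values with equal fractions
    return ordered[int(r)]      # element of the list with index int(r)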
27.3 Mode
Object Features > Layer Values > Pixel Based > Mode
Returns the pixel value that repeats most often per object. Example: an object has
the following pixel values: 1 2222 333 444444 55. Here 4 would be the mode, because it
appears most often.
It is possible that values appear equally often, for example: 11 2222 3333 4444 55. In
that case you can choose from the parameters whether you want to get back the minimum
(smallest tied value, here: 2), the maximum (largest tied value, here: 4) or the median
(here: 3).
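A minimal sketch of this tie-aware mode (generic Python, not the product implementation):

from collections import Counter
import statistics

# Most frequent pixel value per object; ties resolved by the chosen parameter.
def object_mode(pixels, tie_break="minimum"):
    counts = Counter(pixels)
    top = max(counts.values())
    tied = sorted(v for v, c in counts.items() if c == top)
    if tie_break == "minimum":
        return tied[0]
    if tie_break == "maximum":
        return tied[-1]
    return statistics.median(tied)  # "median" tie-break

# Examples from the text:
# object_mode([1, 2,2,2,2, 3,3,3, 4,4,4,4,4,4, 5,5]) returns 4
# object_mode([1,1, 2,2,2,2, 3,3,3,3, 4,4,4,4, 5,5], "median") returns 3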
27.4 Standard Deviation
Object Features > Layer Values > Standard Deviation

Expression

\sigma_k(v) = \sigma_k(P_v) = \sqrt{ \frac{1}{\#P_v} \left( \sum_{(x,y,z,t) \in P_v} c_k^2(x,y,z,t) - \frac{1}{\#P_v} \left( \sum_{(x,y,z,t) \in P_v} c_k(x,y,z,t) \right)^{2} \right) }

Feature Value Range

\left[ 0, \tfrac{1}{2} c_k^{range} \right]
27.5 Skewness
Object Features > Layer Values > Skewness
The Skewness feature describes the distribution of all the image layer intensity values of
all pixel/voxels that form an image object; this distribution is typically Gaussian. The
value is calculated by the asymmetry of the distribution of image layer intensity values in
an image object.
A normal distribution has a skewness of zero. A negative skewness value indicates that
an image object has more pixel/voxels with an image layer intensity value smaller than
the mean; a positive value indicates a value larger than the mean.
Expression

\gamma_k(v) = \gamma_k(P_v) = \frac{ \frac{1}{\#P_v} \sum_{(x,y,z,t) \in P_v} \left( c_k(x,y,z,t) - \bar{c}_k(v) \right)^3 }{ \left( \frac{1}{\#P_v} \sum_{(x,y,z,t) \in P_v} \left( c_k(x,y,z,t) - \bar{c}_k(v) \right)^2 \right)^{3/2} }
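Both moments follow directly from an object's pixel values; a short numpy sketch using the population (1/#P_v) convention of the formulas reconstructed above:

import numpy as np

# Population standard deviation and skewness of one object's pixel intensities.
def layer_std_and_skewness(values):
    x = np.asarray(values, dtype=float)
    m2 = ((x - x.mean()) ** 2).mean()  # second central moment
    m3 = ((x - x.mean()) ** 3).mean()  # third central moment
    std = np.sqrt(m2)
    skew = m3 / m2 ** 1.5 if m2 > 0 else 0.0
    return std, skew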
27.6.1 Ratio
Object Features > Layer Values > Pixel Based > Ratio
The amount that a given image layer contributes to the total brightness.
Editable Parameters
Image Layer
Parameters
Expression

\frac{ \bar{c}_k(v) }{ \sum_{i=1}^{K} \bar{c}_i(v) }

Feature Value Range

[0, 1]

Conditions
Image Layer
Parameters
Expression

\min_{(x,y) \in P_v} c_k(x,y)
Figure 27.2. Minimum pixel value of an image object consisting of three pixels
Conditions
None
Image Layer
Parameters
Expression

\max_{(x,y) \in P_v} c_k(x,y)
Figure 27.3. Maximum pixel value of an image object consisting of three pixels
Image Layer
Parameters
\bar{c}_k\left( P_v^{Inner} \right)
Image Layer
Parameters
\bar{c}_k\left( P_v^{Outer} \right)
Editable Parameter
Image Layer.
Parameters
\frac{1}{n} \sum_{e=(p,q) \in E(v)} \left( c_k(q) - c_k(p) \right), \quad \text{with } n = \#E(v)
Figure 27.6. Border Contrast algorithm: where a pixel has a single border with an image
object (1), this value is taken; where it has multiple borders (2), the difference is subtracted
Parameters
B_v(d) is the extended bounding box of an image object v with distance d, with B_v(d) = \{(x,y,z) : x_{min}(v) - d \le x \le x_{max}(v) + d,\; y_{min}(v) - d \le y \le y_{max}(v) + d,\; z_{min}(v) - d \le z \le z_{max}(v) + d\}

P_v is the set of pixels/voxels of an image object v

\bar{c}_k is the mean intensity of image layer k.

Expression

1000 \left( 1 - \frac{ \bar{c}_k\left( B_v(d) \setminus P_v \right) }{ 1 + \bar{c}_k(P_v) } \right)

Figure 27.7. The surrounding area of an image object v defined by its bounding box

Feature Value Range

[-1000, 1000]

Conditions
Figure 27.8. Pixels of the surrounding area are compared to the mean image layer intensity
Editable Parameters
P_{brighter} = \{ p \in B_v(d) : c_k(x,y,z) > \bar{c}_k(v) \} : the pixels within the surrounding volume of image object v defined by the bounding box B_v(d) that are brighter than the object mean

P_{darker} = \{ p \in B_v(d) : c_k(x,y,z) < \bar{c}_k(v) \} : the pixels within the surrounding volume of image object v defined by the bounding box B_v(d) that are darker than the object mean

B_v(d) is the extended bounding box of an image object v with distance d, with B_v(d) = \{(x,y,z) : x_{min}(v) - d \le x \le x_{max}(v) + d,\; y_{min}(v) - d \le y \le y_{max}(v) + d,\; z_{min}(v) - d \le z \le z_{max}(v) + d\}

c_k(x,y,z) is the image layer intensity value at pixel/voxel (x,y,z)

\bar{c}_k(v) is the mean intensity of image layer k of all pixels/voxels forming an image object v

P_v is the set of pixels/voxels of an image object v.

Expression

\bar{c}_k(P_{brighter}) - \bar{c}_k(P_{darker})

Feature Value Range

[0, 255]

Conditions
Expression

\sigma_k\left( B_v(d) \setminus P_v \right)

Feature Value Range

\left[ 0, \frac{c_k^{max}}{2} \right]

Conditions

Layer

Radius Mode

Number of Pixels: the radius is calculated from the number of pixels in the object, R = \sqrt{\#Pxl}
Expression
[R1 , R2 ]
Conditions
None
27.7 To Neighbors
Object Features > Layer Values > To Neighbors
Image Layer
Feature Distance: radius of the perimeter in pixels/voxels. Direct neighbors have a
value of 0.
Parameters

Expression

\Delta_k(v) = \frac{1}{w} \sum_{u \in N_v(d)} w_u \left( \bar{c}_k(v) - \bar{c}_k(u) \right)
Figure 27.9. Direct neighbors (x) and the neighborhood within perimeter d of image object v
Conditions
Image Layer
Feature Distance: the radius of the perimeter in pixels. Direct neighbors have a
value of 0.
Parameters
Expression

\Delta_k(v) = \frac{1}{w} \sum_{u \in N_v(d)} w_u \left| \bar{c}_k(v) - \bar{c}_k(u) \right|

Feature Value Range

\left[ 0, c_k^{range} \right]

Conditions
Image Layer
Parameters
Expression

\Delta_k^D(v) = \frac{1}{w} \sum_{u \in N_v^D} w_u \left( \bar{c}_k(v) - \bar{c}_k(u) \right)

Feature Value Range

\left[ -c_k^{range}, c_k^{range} \right]

Conditions

If w = 0, the formula is invalid and the feature value is 0. If N_v^D = \emptyset, the formula is
invalid.
Image Layer
Parameters
Expression

\Delta_k^B(v) = \frac{1}{w} \sum_{u \in N_v^B} w_u \left( \bar{c}_k(v) - \bar{c}_k(u) \right)

Feature Value Range

\left[ -c_k^{range}, c_k^{range} \right]
Image Layer
Parameters
N_v^B is the set of darker direct neighbors of v, with N_v^B = \{ u \in N_v : \bar{c}_k(u) < \bar{c}_k(v) \}

b_v is the image object border length

b(v,u) is the length of the common border between v and u.

Expression

\frac{ \sum_{u \in N_v^B} b(v,u) }{ b_v }

Feature Value Range

[0, 1]
27.8 To Superobject
Object Features > Layer Values > To Superobject
Image Layer

Image Object Level Distance: the upward distance of image object levels in the
image object hierarchy between the image object and the superobject.

Parameters

Expression

\bar{c}_k(v) - \bar{c}_k\left( U_v(d) \right)

Feature Value Range

\left[ -c_k^{range}, c_k^{range} \right]
Image Layer
Image Object Level Distance: the upward distance of image object levels in the
image object hierarchy between the image object and the superobject.
Parameters

Expression

\frac{ \bar{c}_k(v) }{ \bar{c}_k\left( U_v(d) \right) }

Feature Value Range

[0, +\infty]

Conditions
Image Layer
Image Object Level Distance: the upward distance of image object levels in the
image object hierarchy between the image object and the superobject.

Parameters

Expression

\sigma_k(v) - \sigma_k\left( U_v(d) \right)

Feature Value Range

\left[ -\tfrac{1}{2} c_k^{range}, \tfrac{1}{2} c_k^{range} \right]
Condition
Image Layer
Image Object Level Distance: the upward distance of image object levels in the
image object hierarchy between the image object and the superobject.

Parameters

Expression

\frac{ \sigma_k(v) }{ \sigma_k\left( U_v(d) \right) }

Feature Value Range

[0, +\infty]

Conditions
27.9 To Scene
Object Features > Layer Values > To Scene
Editable Parameters
Image Layer
Parameters
\bar{c}_k(v) is the mean intensity of image layer k of all pixels forming an image object v

\bar{c}_k is the mean intensity of image layer k

c_k^{range} = c_k^{max} - c_k^{min} is the data range of image layer k

Expression

\bar{c}_k(v) - \bar{c}_k

Feature Value Range

\left[ -c_k^{range}, c_k^{range} \right]
Image Layer
Parameters
\bar{c}_k(v) is the mean intensity of image layer k of all pixels forming an image object v

\bar{c}_k is the mean intensity of image layer k.

Expression

\frac{ \bar{c}_k(v) }{ \bar{c}_k }

Feature Value Range

[-\infty, +\infty]
Condition
Layer red, layer green, layer blue: for each parameter, assign a corresponding
image layer from the drop-down list. By default these are the first three image
layers of the scene.
Output: Select the type of HSI transformation feature to be created: hue (color),
saturation or intensity (brightness).
Parameters
H = \begin{cases} \text{undefined} & \text{if } \max = \min \\ \left( 60^\circ \cdot \frac{G - B}{\max - \min} \right) / 360^\circ & \text{if } \max = R \\ \left( 60^\circ \cdot \frac{B - R}{\max - \min} + 120^\circ \right) / 360^\circ & \text{if } \max = G \\ \left( 60^\circ \cdot \frac{R - G}{\max - \min} + 240^\circ \right) / 360^\circ & \text{if } \max = B \end{cases}

Feature Value Range

[0, 1]
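The case distinction translates directly into code; a minimal sketch (generic, not the product implementation) returning the normalized hue of one RGB triple. The modulo keeps the max = R branch positive when B > G:

def hue(r, g, b):
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        return None  # undefined for gray pixels
    if mx == r:
        h = (60.0 * (g - b) / (mx - mn)) % 360.0
    elif mx == g:
        h = 60.0 * (b - r) / (mx - mn) + 120.0
    else:
        h = 60.0 * (r - g) / (mx - mn) + 240.0
    return h / 360.0  # normalized to [0, 1]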
28.1 Extent
Object Features > Geometry > Extent
28.1.1 Area
Object Features > Geometry > Extent > Area
The number of pixels forming an image object. If unit information is available, the number of pixels can be converted into a measurement. In scenes that provide no unit information, the area of a single pixel is 1, and the area of an image object is simply the number of pixels that
form it. If the image data provides unit information, the area can be multiplied using the
appropriate factor.
Parameters
A_v = \#P_v \cdot u^2
Feature Value Range
$b_v = b_o + b_i$

Figure 28.1. Border length of an image object v, or between two objects v and u

Figure 28.2. Inner and outer border length of a torus image object

$[0, \infty]$
Expression
$b_v = \sum_{n=1}^{\#(\mathrm{slices})} b_v(\mathrm{slice}_n)$, where $b_v(\mathrm{slice}) = b_o + b_i$

Feature Value Range

$[0, \infty]$
$l_v = \sqrt{\#P_v \cdot \gamma_v}$
$[0, \infty]$

$[0, \infty]$
28.1.6 Length/Thickness
Object Features > Geometry > Extent > Length/Thickness
The length-to-thickness ratio of an image object.
Parameters
Length
Thickness
Feature Value Range
$[0, \infty]$

1. The ratio of length to width is identical to the ratio of the eigenvalues of the covariance matrix, with the larger eigenvalue being the numerator:

$\gamma_v^{EV} = \frac{\lambda_1(v)}{\lambda_2(v)}$
Reference Book
11 June 2014
286
2. The ratio of length to width can also be approximated using the bounding box:
$\gamma_v^{BB} = \frac{\left(k_v^{BB}\right)^2}{\#P_v}$
Both calculations are compared; the smaller of both results is returned as the feature
value.
Parameters
$k_v^{BB}$ is the length of the bounding box of $v$
$h_v^{BB}$ is the height of the bounding box of $v$
$a$ is the bounding box fill rate, $a = \frac{\#Pxl}{k \cdot h}$
$w$ is the image layer weight
Expression
$\gamma_v = \min\left(\gamma_v^{EV}, \gamma_v^{BB}\right)$
$[0, \infty]$

Length
Width
$[0, \infty]$
28.1.10 Thickness
Object Features > Geometry > Extent > Thickness
The thickness of an image object is the smallest of three eigenvalues of a rectangular 3D
space with the same volume as the image object and the same proportions of eigenvalues
as the image object. The thickness of an image object is smaller than or equal to the smallest dimension of the smallest rectangular 3D space enclosing the image object.
Feature Value Range
$[0, \infty]$
28.1.11 Volume
Object Features > Geometry > Extent > Volume
The number of voxels forming an image object, rescaled using the unit information for the x and y co-ordinates and the distance information between slices.
Parameters
Expression
$V_v = \#P_v \cdot u^2 \cdot u_{\mathrm{slices}}$

Feature Value Range

$[0, \infty]$
28.2 Shape
Object Features > Geometry > Shape
Parameters
Expression

$\frac{2\sqrt{\frac{1}{4}(\mathrm{Var}X + \mathrm{Var}Y)^2 + (\mathrm{Var}XY)^2 - \mathrm{Var}X \cdot \mathrm{Var}Y}}{\mathrm{Var}X + \mathrm{Var}Y}$

Feature Value Range

$[0, 1]$
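The expression can be evaluated directly from the second-order moments of an object's pixel coordinates. A minimal C++ sketch, assuming VarX, VarY and VarXY have already been computed (illustrative, not product code):

    #include <cmath>

    // Asymmetry from the coordinate variances and covariance of an object's pixels.
    double asymmetry(double varX, double varY, double varXY) {
        double s = varX + varY;
        // The radicand equals 0.25*(varX - varY)^2 + varXY^2, so it is never negative.
        double root = std::sqrt(0.25 * s * s + varXY * varXY - varX * varY);
        return 2.0 * root / s;   // in [0, 1]; 0 for a perfectly symmetric object
    }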
Parameters
min
max
[0, 1]
$\frac{b_v}{2(l_v + w_v)}$
$[1, \infty]$; 1 = ideal.

Expression

$\frac{2\lambda_1 \cdot 2\lambda_2 \cdot 2\lambda_3}{V_v}$
$[0, \infty]$; 1 = ideal.

Expression

$\frac{2\lambda_1 \cdot 2\lambda_2 \cdot 2\lambda_3}{V_v}$

Feature Value Range

$[0, \infty]$; 1 = ideal.
Expression
$\frac{\sqrt{\#P_v}}{1 + \sqrt{\mathrm{Var}X + \mathrm{Var}Y}}$

Feature Value Range
$V_v$ is the volume of image object $v$
$\sqrt[3]{V_v}$ is the edge length of the volume-fitted cuboid
$\sqrt{\mathrm{Var}(X) + \mathrm{Var}(Y) + \mathrm{Var}(Z)}$ is the radius of the fitted sphere

Expression

$\frac{\sqrt[3]{V_v}}{\sqrt{\mathrm{Var}(X) + \mathrm{Var}(Y) + \mathrm{Var}(Z)}}$
Feature Value Range
Parameters
$2 \cdot \frac{\#\{(x, y) \in P_v : \varepsilon_v(x, y) \le 1\}}{\#P_v} - 1$

$2 \cdot \frac{\#\{(x, y, z) \in P_v : \varepsilon_v(x, y, z) \le 1\}}{\#P_v} - 1$
$\frac{180^\circ}{\pi} \cdot \tan^{-1}\left(\mathrm{Var}XY,\ \lambda_1 - \mathrm{Var}Y\right) + 90^\circ$
Figure 28.9. The main direction is based on the direction of the larger eigenvector
[0, 180]
Figure 28.10. The line of best fit (blue) calculated from centers of gravity of image object
[0, 90]
Reference Book
11 June 2014
297
The Radius of Largest Enclosed Ellipse feature describes how similar an image object is
to an ellipse. The calculation uses an ellipse with the same area as the object and based
on the covariance matrix. This ellipse is scaled down until it is totally enclosed by the
image object. The ratio of the radius of this largest enclosed ellipse to the radius of the
original ellipse is returned as the feature value.
Parameters
Figure 28.11. Radius of the largest enclosed ellipse of an image object v, for a 2D image object

$[0, \infty]$
Figure 28.12. Radius of the largest enclosed ellipse of an image object v, for a 3D image object

$[0, \infty]$

Figure 28.13. Radius of the smallest enclosing ellipse of an image object v, for a 2D image object
Parameters
$[0, \infty]$

Figure 28.14. Radius of the smallest enclosing ellipse of an image object v, for a 3D image object

Parameters

$[0, \infty]$
The calculation is based on a rectangle with the same area as the image object. The
proportions of the rectangle are equal to the proportions of the length to width of the
image object. The area of the image object outside the rectangle is compared with the
area inside the rectangle.
Parameters
$\frac{\#\{(x, y) \in P_v : \rho_v(x, y) \le 1\}}{\#P_v}$
Expression
$\frac{\#\{(x, y, z) \in P_v : \rho_v(x, y, z) \le 1\}}{\#P_v}$

Feature Value Range

$[0, 1]$

Expression

$\varepsilon_v^{\max} - \varepsilon_v^{\min}$

$[0, \infty]$; 0 = ideal.
Expression

$\varepsilon_v^{\max} - \varepsilon_v^{\min}$

Feature Value Range

$[0, \infty]$; 0 = ideal.
Expression

$\frac{b_v}{4\sqrt{\#P_v}}$

Feature Value Range

$[1, \infty]$; 1 = ideal.
Expression

$\frac{b_v}{V_v}$

Feature Value Range

$[0, 1]$
28.3 To Super-Object
Object Features > Geometry > To Super-Object
Use the To Super-Object features to describe an image object by its shape and its relationship to one of its superobjects, where appropriate. Editing the feature distance determines which superobject is referred to. These features can also be of interest when working with thematic layers.
The area of an image object divided by the area of its superobject. If the feature value is
1, the image object is identical to its superobject. Use this feature to describe an image
object in terms of the amount of area it shares with its superobject.
Parameters
$\frac{\#P_v}{\#P_{U_v(d)}}$

Feature Value Range

$[0, 1]$

Conditions
$\frac{d_g(v, U_v(d))}{\max_{u \in S_{U_v(d)}(d)} d_g(u, U_v(d))}$
Conditions
$[0, 1]$

$N_u(v)$ are the neighbors of $v$ that exist within the superobject: $N_u(v) = \{u \in N_v : U_u(d) = U_v(d)\}$
$b_v$ is the image object border length

Expression

$\frac{1}{b_v} \sum_{u \in N_u(v)} b(v, u)$

Feature Value Range

$[0, 1]$
$d_g(v, U_v(d))$ is the distance of $v$ to the center of gravity of the superobject $U_v(d)$

Feature Value Range

$[0, s_x \cdot s_y]$
$d_e(v, U_v(d))$

Figure 28.20. Distance from the superobject's center to the center of a sub-object

Typically $[0, 5]$
Level Distance
Feature Value Range
[0, 1]
Level Distance
Feature Value Range
[0, 1]
Level Distance the upward distance of image object levels in the image object
hierarchy between the image object and the superobject.
Expression
$\Delta x = x_{CG}$ of the current image object $- x_{CG}$ of the superobject (where $x_{CG}$ is the center of gravity)
Feature Value Range
Level Distance: Upward distance of image object levels in the image object hierarchy between the image object and the superobject.
Expression
$\Delta y = y_{CG}$ of the current image object $- y_{CG}$ of the superobject (where $y_{CG}$ is the center of gravity)
Feature Value Range
Figure 28.21. Raster image object (black area) with its polygon object (red lines) after vectorization

Minimum Length
$\frac{1}{2} \sum_{i=0}^{n-1} a_i$
Figure 28.23. A polygon with an inner polygon that is not included in the feature value
Figure 28.24. A polygon with an inner polygon that is included in the feature value
$\mathrm{Average} = \frac{1}{n} \sum_{i=1}^{n} X_i$
Parameters
Area
Perimeter
Expression
$\frac{4 \cdot \mathrm{Area}}{\mathrm{Perimeter}^2}$
Feature Value Range
This feature enables you to identify the affected image objects and take measures to avoid this self-intersection. All image objects with a value of 1 will cause a polygon self-intersection when exported to a shapefile.
Figure 28.25. This type of image object leads to a self-intersection at the circled point
To avoid the self-intersection, the enclosed image object needs to be merged with the
enclosing image object.
TIP: Use the Image Object Fusion algorithm to remove polygon self-intersections. To do so, set the domain to all image objects with a value larger than 0 for the polygon self-intersection feature. In the algorithm parameters, set the Fitting Function Threshold of the Polygon Self-Intersection (Polygon) feature to zero and, in the Weighted Sum group, set Target Value Factor to 1. This merges all image objects with a value of 1 for the Polygon Self-Intersection (Polygon) feature, so that the resulting image objects no longer include self-intersections.
[0, 1]
Expression
$\sqrt{\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar X)^2}$
Branch order: The main line of the skeleton has the order 0.
Feature Value Range
Branch order: The main line of the skeleton has the order 0
Feature Value Range
Branch order: The main line of the skeleton has the order 0.
Feature Value Range
Branch order: The main line of the skeleton has the order 0.
Minimum length
Maximum length
Feature Value Range
Figure 28.26. The main line (green) connects the mid-points of triangles (black and blue)
Figure 28.28. Height of a triangle that is crossed by the main line
29.1 Distance
Object Features > Position > Distance
To set the line, right-click the Distance to Line feature, select Edit Feature and adapt the co-ordinates of the two points.

Feature Value Range

$[0, \infty]$
Expression
Figure 29.2. Examples of the distance between an image object and the nearest scene border

$[0, \max(s_x - 1, s_y - 1)]$
$\min_{(x, y) \in P_v} x$

Figure 29.3. X-distance between the image object and the left border

$[0, s_x - 1]$
$s_x - \max_{(x, y) \in P_v} x$

$[0, s_x - 1]$

$\min_{(x, y) \in P_v} y$
Figure 29.5. Y-distance between the image object and the bottom border
$[0, s_y - 1]$

$s_y - \max_{(x, y) \in P_v} y$

Figure 29.6. Y-distance between the image object and the top border of the scene

$[0, s_y - 1]$
29.2 Coordinate
Object Features > Position > Coordinate
Parameters

$x_v(\mathrm{map}) = \frac{1}{\#P_v} \sum_{(x, y) \in P_v} x$ is the x-center of the image object in the internal map

Expression

$t_v = \left\lfloor \frac{x_v(\mathrm{map})}{sx_{\mathrm{frame}}} \right\rfloor$

Feature Value Range
$t_v = \left\lfloor \frac{x_v(\mathrm{map})}{sx_{\mathrm{frame}}} \right\rfloor$

Feature Value Range

$t_{\min}(v) = \left\lfloor \frac{x_{\min}(v, \mathrm{map})}{sx_{\mathrm{frame}}} \right\rfloor$

Feature Value Range
29.2.5 X Center
Object Features > Position > Coordinate > X Center
X-position of the center of an image object. The calculation is based on the center of
gravity (geometric center) of the image object in the internal map.
Parameters
$x_v = x_v(\mathrm{map}) - \left\lfloor \frac{x_v(\mathrm{map})}{sx_{\mathrm{frame}}} \right\rfloor \cdot sx_{\mathrm{frame}}$
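In other words, the map coordinate is reduced modulo the frame width, so the value always refers to the current frame. A minimal C++ sketch of this reduction, with illustrative names (not the product API):

    #include <cmath>

    // Frame-local x-center: the x-position in the internal map reduced
    // modulo the frame width sx_frame.
    double xCenterInFrame(double x_map, double sx_frame) {
        return x_map - std::floor(x_map / sx_frame) * sx_frame;
    }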
29.2.6 X Max.
Object Features > Position > Coordinate > X Max
Maximum x -position of an image object derived from its bounding box. The calculation
is based on the maximum x -position of the image object in the internal map.
Parameters
$x_{\max}(v) = x_{\max}(v, \mathrm{map}) - \left\lfloor \frac{x_{\max}(v, \mathrm{map})}{sx_{\mathrm{frame}}} \right\rfloor \cdot sx_{\mathrm{frame}}$
Feature Value Range
Figure 29.7. Maximum value of the x-coordinate at the image object border
29.2.7 X Min.
Object Features > Position > Coordinate > X Min
Minimum x-position of an image object derived from its bounding box. The calculation
is based on the minimum x-position of the image object in the internal map.
Parameters
xmin (v,map)
sxframe
xmin (v) = xmin (v, map) oor
sxframe
Figure 29.8. Minimum value of the x-coordinate at the image object border
29.2.8 Y Center
Object Features > Position > Coordinate > Y Center
Y-position of the center of an image object. The calculation is based on the center of
gravity (geometric center) of the image object in the internal map.
Parameters
$y_v = y_v(\mathrm{map}) - \left\lfloor \frac{y_v(\mathrm{map})}{sy_{\mathrm{slice}}} \right\rfloor \cdot sy_{\mathrm{slice}}$
29.2.9 Y Max.
Object Features > Position > Coordinate > Y Max
Maximum y-position of an image object derived from its bounding box. The calculation
is based on the maximum y-position of the image object in the internal map.
Parameters
$y_{\max}(v) = y_{\max}(v, \mathrm{map}) - \left\lfloor \frac{y_{\max}(v, \mathrm{map})}{sy_{\mathrm{slice}}} \right\rfloor \cdot sy_{\mathrm{slice}}$
Figure 29.10. Maximum value of the y-coordinate at the image object border
29.2.10 Y Min.
Object Features > Position > Coordinate > Y Min
Minimum y-position of an image object derived from its bounding box. The calculation
is based on the minimum y-position of the image object in the internal map.
Parameters
$y_{\min}(v) = y_{\min}(v, \mathrm{map}) - \left\lfloor \frac{y_{\min}(v, \mathrm{map})}{sy_{\mathrm{slice}}} \right\rfloor \cdot sy_{\mathrm{slice}}$
Figure 29.11. Minimum value of the y-coordinate at the image object border
29.2.11 Z Center
Object Features > Position > Coordinate > Z Center
Z -position of the center of an image object. The calculation is based on the center of
gravity (geometric center) of the image object in the internal map.
Parameters
$z_v = \left\lfloor \frac{y_v(\mathrm{map})}{sy_{\mathrm{slice}}} \right\rfloor$
Feature Value Range
29.2.12 Z Max
Object Features > Position > Coordinate > Z Max
Maximum z -position of an image object derived from its bounding box. The calculation
is based on the maximum z -position of the image object in the internal map.
Parameters
ymax (v,map)
syslice
29.2.13 Z Min
Object Features > Position > Coordinate > Z Min
Minimum z -position of an image object derived from its bounding box. The calculation
is based on the minimum z -position of the image object in the internal map.
Parameters
$z_{\min}(v) = \left\lfloor \frac{y_{\min}(v, \mathrm{map})}{sy_{\mathrm{slice}}} \right\rfloor$
Feature Value Range
Expression

$\sqrt{\frac{1}{\#S_v(d)} \left( \sum_{u \in S_v(d)} \bar c_k(u)^2 - \frac{1}{\#S_v(d)} \sum_{u \in S_v(d)} \bar c_k(u) \sum_{u \in S_v(d)} \bar c_k(u) \right)}$

Expression

$\frac{1}{\#S_v(d)} \sum_{u \in S_v(d)} \bar\Delta_k(u)$
Conditions
Expression

$\frac{1}{\#S_v(d)} \sum_{u \in S_v(d)} \#P_u$
Parameters
Expression

$\sqrt{\frac{1}{\#S_v(d)} \left( \sum_{u \in S_v(d)} \#P_u^2 - \frac{1}{\#S_v(d)} \sum_{u \in S_v(d)} \#P_u \sum_{u \in S_v(d)} \#P_u \right)}$

Expression

$\frac{1}{\#S_v(d)} \sum_{u \in S_v(d)} a(u)$
v
Reference Book
11 June 2014
337
v
u
u
t
!
1
a2 (u) #Sv (d) a(u) a(u)
uS (d)
uS (d)
uS (d)
1
#Sv (d)
1
a(u)
#Sv (d) uS(d)
v
$\frac{1}{\#S_v(d)} \sum_{u \in S_v(d)} a(u)$
Figure 30.1. For the calculation, directions between 90° and 180° are inverted

Expression

$\frac{1}{\#S_v(d)} \sum_{u \in S_v(d)} a(u)$

Feature Value Range

$[0°, 180°]$
Condition
The gray level co-occurrence matrix (GLCM) is a tabulation of how often different combinations of pixel gray levels occur in a scene. A different co-occurrence matrix exists for each spatial relationship. To achieve directional invariance, the sum of all four directions (0°, 45°, 90°, 135°) is calculated before texture calculation. An angle of 0° represents the vertical direction, an angle of 90° the horizontal direction. In Trimble software, texture after Haralick is calculated for all pixels of an image object. To reduce border effects, pixels directly bordering the image object (surrounding pixels with a distance of 1) are additionally taken into account.

The normalized GLCM is symmetrical. The diagonal elements represent pixel pairs with no gray level difference. Cells that are one cell away from the diagonal represent pixel pairs with a difference of only one gray level. Similarly, values in cells that are two cells away from the diagonal show how many pixel pairs differ by two gray levels, and so forth. The more distant a cell is from the diagonal, the greater the difference between the pixels' gray levels. Summing up the values of these parallel diagonals gives the probability for each pixel to differ from its neighbor pixels by 0, 1, 2, 3 and so on.
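To make the tabulation concrete, the following C++ sketch builds a symmetric, normalized GLCM for a single offset (dx, dy); it is an illustration under assumed names, not eCognition code. Summing the count matrices for the offsets (0,1), (1,1), (1,0) and (1,-1) before normalizing corresponds to the direction-invariant variant described above.

    #include <vector>

    // Symmetric, normalized GLCM for one offset from an 8-bit image stored
    // row-major in img (width w, height h).
    std::vector<std::vector<double>> glcm(const std::vector<unsigned char>& img,
                                          int w, int h, int dx, int dy) {
        const int N = 256;
        std::vector<std::vector<double>> P(N, std::vector<double>(N, 0.0));
        double total = 0.0;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                int x2 = x + dx, y2 = y + dy;
                if (x2 < 0 || x2 >= w || y2 < 0 || y2 >= h) continue;
                int a = img[y * w + x], b = img[y2 * w + x2];
                P[a][b] += 1.0;           // count the pair in both directions,
                P[b][a] += 1.0;           // which makes the matrix symmetric
                total += 2.0;
            }
        if (total > 0.0)
            for (auto& row : P)
                for (double& p : row) p /= total;   // normalize counts to probabilities
        return P;
    }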
If 8-bit data is used directly, the results are most reliable. When using data of higher dynamic range than 8 bit, the mean and standard deviation of the values are calculated. Assuming a Gaussian distribution of the values, more than 95% of them lie within the interval

$\bar x - 3\sigma < x < \bar x + 3\sigma$

which is subdivided into 255 equal sub-intervals to obtain an 8-bit representation.
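A minimal C++ sketch of this rescaling, assuming the layer values are held in a simple array (illustrative, not product code):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Clips a high dynamic range layer to mean +/- 3 sigma and maps that
    // interval onto 255 equal sub-intervals (an 8-bit representation).
    std::vector<unsigned char> toEightBit(const std::vector<double>& layer) {
        if (layer.empty()) return {};
        double mean = 0.0;
        for (double v : layer) mean += v;
        mean /= layer.size();
        double var = 0.0;
        for (double v : layer) var += (v - mean) * (v - mean);
        double sigma = std::sqrt(var / layer.size());
        double lo = mean - 3.0 * sigma, hi = mean + 3.0 * sigma;
        std::vector<unsigned char> out(layer.size(), 0);
        if (hi == lo) return out;                     // constant layer: nothing to stretch
        for (std::size_t i = 0; i < layer.size(); ++i) {
            double v = std::clamp(layer[i], lo, hi);  // values outside +/- 3 sigma are clipped
            out[i] = static_cast<unsigned char>((v - lo) / (hi - lo) * 255.0);
        }
        return out;
    }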
The result of any texture after Haralick analysis depends on the direction of the analysis: All Directions, Direction 0°, Direction 45°, Direction 90° or Direction 135°. In addition, each feature is calculated based upon the gray values of one selectable layer. The calculation of any Haralick texture feature is very processor-intensive because of the calculation of the GLCM; as a result, performance may suffer.
The actual Haralick feature is calculated using the GLCM matrix and the algorithm of the particular feature. For example, the contrast feature is calculated using the formula:

    for (int i = 0; i < mtrx.Rows(); i++)
        for (int j = 0; j < mtrx.Cols(); j++)
            ctrst += mtrx[i][j] * ((i - j) * (i - j));
TIP: For each Haralick texture feature there is a performance-optimized version labeled quick 8/11. The performance optimization works only on data with a bit depth of 8 or 11 bits, hence the label quick 8/11. Use the performance-optimized version whenever you work with 8- or 11-bit data. For 16-bit data, use the conventional Haralick feature.
30.3.2 Parameters
30.3.3 Expression
Every GLCM is normalized according to the following operation:
$P_{i,j} = \frac{V_{i,j}}{\sum_{i,j=0}^{N-1} V_{i,j}}$
30.3.4 References
Haralick RM, Shanmugam K and Dinstein I (1973). Textural Features for Image Classification. IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-3, No. 6, pp. 610-621

Haralick RM (1979). Statistical and Structural Approaches to Texture. Proceedings of the IEEE, Vol. 67, No. 5, pp. 786-804

Conners RW and Harlow CA (1980). A Theoretical Comparison of Texture Algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-2, No. 3
Expression
$\sum_{i,j=0}^{N-1} \frac{P_{i,j}}{1 + (i - j)^2}$

Feature Value Range

$[0, 1]$
Expression
$\sum_{i,j=0}^{N-1} P_{i,j} (i - j)^2$

Feature Value Range

$[0, 65025]$
Expression
$\sum_{i,j=0}^{N-1} P_{i,j} |i - j|$

Feature Value Range

$[0, 255]$
Expression
$\sum_{i,j=0}^{N-1} P_{i,j} (-\ln P_{i,j})$

Feature Value Range

$[0, 10404]$
Expression
$\sum_{i,j=0}^{N-1} (P_{i,j})^2$

Feature Value Range

$[0, 1]$
Expression
$\mu_{i,j} = \sum_{i,j=0}^{N-1} \frac{P_{i,j}}{N^2}$

Feature Value Range

$[0, 255]$
Expression
$\sigma_{i,j}^2 = \sum_{i,j=0}^{N-1} P_{i,j} (i,j - \mu_{i,j})^2$

Standard Deviation

$\sigma_{i,j} = \sqrt{\sigma_{i,j}^2}$

Feature Value Range

$[0, 255]$
Expression
$\sum_{i,j=0}^{N-1} P_{i,j} \frac{(i - \mu_i)(j - \mu_j)}{\sqrt{(\sigma_i^2)(\sigma_j^2)}}$

Feature Value Range

$[0, 1]$
Parameters
$\sum_{k=0}^{N-1} V_k^2$

Feature Value Range

$[0, 1]$
Expression
$\sum_{k=0}^{N-1} V_k (-\ln V_k)$

Feature Value Range

$[0, 10404]$
Parameters
$\sum_{k=0}^{N-1} k \cdot V_k$

Feature Value Range

$[0, 255]$

Expression

$\sum_{k=0}^{N-1} V_k \cdot k^2$

Feature Value Range

$[0, 65025]$
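The gray-level difference vector V underlying these features is obtained by summing the parallel diagonals of the normalized GLCM, i.e. $V_k = \sum_{|i-j|=k} P_{i,j}$. A minimal C++ sketch computing the four GLDV features above from a normalized matrix P (illustrative names, not product code):

    #include <cmath>
    #include <cstdlib>
    #include <vector>

    struct GldvFeatures { double angular2ndMoment, entropy, mean, contrast; };

    GldvFeatures gldvFeatures(const std::vector<std::vector<double>>& P) {
        const int N = static_cast<int>(P.size());
        std::vector<double> V(N, 0.0);
        for (int i = 0; i < N; ++i)                // sum the parallel diagonals:
            for (int j = 0; j < N; ++j)            // V[k] collects all P[i][j]
                V[std::abs(i - j)] += P[i][j];     // with |i - j| == k
        GldvFeatures f{0.0, 0.0, 0.0, 0.0};
        for (int k = 0; k < N; ++k) {
            f.angular2ndMoment += V[k] * V[k];
            if (V[k] > 0.0) f.entropy += V[k] * (-std::log(V[k]));
            f.mean     += k * V[k];
            f.contrast += V[k] * static_cast<double>(k) * k;
        }
        return f;
    }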
Name
Value: Insert an initial value for the variable.
Choose whether the new variable is numeric (Double) or textual (String)
Shared: Select if you want to share the new variable among different rule sets
32.1 Level
Object Features > Hierarchy > Level
Returns the level object associated with an image object.
32.2.1 Parameters
Uv (d) are superobjects of an image object v at distance d
32.2.2 Expression
$\min\{d : U_v(d) = 0\}$
32.2.4 Conditions
This feature requires more than one image object level.
32.3.1 Parameters
d is the distance between neighbors
Sv (d) are sub-objects of an image object v at a distance d
32.3.2 Expression
$\min\{d : S_v(d) = 1\}$
32.4.1 Parameters
Nv (d) are neighbors of an image object v at a distance d
32.4.2 Expression
#Nv (d)
32.5.1 Parameters
Sv (d) are sub-objects of an image object v at a distance d
32.5.2 Expression
#Sv (d)
32.6.1 Parameters
d is the distance between neighbors
Uv (d) are superobjects of an image object v at a distance d
32.6.2 Expression
$\min\{d : U_v(d) = 1\}$
34.1 Intensity
The point intensity values.
34.1.1 Parameters
Image Layer (LAS File Source)
Feature Parameters
Result Mode
Average
Standard Deviation
Minimum
Maximum
Median
Mode
Sum
Quantile
Returns
All
First
Last
You can choose to ignore elevation below a specified value. Enter the value manually or choose:

Minimum allowed elevation

<none>
From Feature
From Array
You can choose to ignore elevation above a specified value. Enter the value manually or choose:
Maximum allowed elevation
<none>
From Feature
From Array
Point Filter - By Class
Filtering
<none>
Selected classes
All not selected classes
Point Filter - By RGB value
Ignore points with the specified red value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified green value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified blue value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Point Filter - By Distance
<none>
From Feature
From Array
You can specify the maximum allowed input distance value
for calculation. Enter the value manually or choose:
Maximum allowed distance
<none>
From Feature
From Array
34.2 X coordinate
The X coordinate value of each point.
34.2.1 Parameters
Image Layer (LAS File Source)
Feature Parameters
Result Mode
Average
Standard Deviation
Minimum
Maximum
Median
Mode
Sum
Quantile
Returns
All
First
Last
Point Filter - By Class
Filtering
<none>
Selected classes
All not selected classes
Point Filter - By RGB value

Ignore points with the specified red value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified green value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified blue value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Point Filter - By Distance
<none>
From Feature
From Array
You can specify the maximum allowed input distance value
for calculation. Enter the value manually or choose:
Maximum allowed distance
<none>
From Feature
From Array
34.3 Y coordinate
The Y coordinate value of each point.
34.3.1 Parameters
Image Layer (LAS File Source)
Feature Parameters
Result Mode
Average
Standard Deviation
Minimum
Maximum
Median
Mode
Sum
Quantile
Returns
All
First
Last
Point Filter - By Class
Filtering
<none>
Selected classes
All not selected classes
Point Filter - By RGB value
Ignore points with the specified red value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified green value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified blue value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
<none>
From Feature
From Array
You can specify the maximum allowed input distance value
for calculation. Enter the value manually or choose:
Maximum allowed distance
<none>
From Feature
From Array
34.4 Elevation
The point elevation values.
34.4.1 Parameters
Image Layer (LAS File Source)
Feature Parameters
Result Mode
Average
Standard Deviation
Minimum
Maximum
Median
Mode
Sum
Quantile
Returns
All
First
Last
Point Filter - By Class
Filtering
<none>
Selected classes
All not selected classes

Point Filter - By RGB value

Ignore points with the specified red value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified green value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified blue value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Point Filter - By Distance
<none>
From Feature
From Array
You can specify the maximum allowed input distance value
for calculation. Enter the value manually or choose:
Maximum allowed distance
<none>
From Feature
From Array
34.5.1 Parameters
Image Layer (LAS File Source)
Feature Parameters
Result Mode
Average
Standard Deviation
Minimum
Maximum
Median
Mode
Sum
Quantile
Returns
All
First
Last
Point Filter - By Class
Filtering
<none>
Selected classes
All not selected classes
Point Filter - By RGB value
Ignore points with the specified red value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified green value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified blue value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
<none>
From Feature
From Array
You can specify the maximum allowed input distance value
for calculation. Enter the value manually or choose:
Maximum allowed distance
<none>
From Feature
From Array
34.6.1 Parameters
Image Layer (LAS File Source)
Feature Parameters
Result Mode
Average
Standard Deviation
Minimum
Maximum
Median
Mode
Sum
Quantile
Returns
All
First
Last
Point Filter - By Class
Filtering
<none>
Selected classes
All not selected classes

Point Filter - By RGB value

Ignore points with the specified red value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified green value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified blue value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Point Filter - By Distance
<none>
From Feature
From Array
You can specify the maximum allowed input distance value
for calculation. Enter the value manually or choose:
Maximum allowed distance
<none>
From Feature
From Array
34.7 Red
The Red image channel value associated with this point.
34.7.1 Parameters
Image Layer (LAS File Source)
Feature Parameters
Result Mode
Average
Standard Deviation
Minimum
Maximum
Median
Mode
Sum
Quantile
Returns
All
First
Last
Point Filter - By Class
Filtering
<none>
Selected classes
All not selected classes
Point Filter - By RGB value
Ignore points with the specified red value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified green value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified blue value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
<none>
From Feature
From Array
You can specify the maximum allowed input distance value
for calculation. Enter the value manually or choose:
Maximum allowed distance
<none>
From Feature
From Array
34.8 Green
The Green image channel value associated with this point
34.8.1 Parameters
Image Layer (LAS File Source)
Feature Parameters
Result Mode
Average
Standard Deviation
Minimum
Maximum
Median
Mode
Sum
Quantile
Returns
All
First
Last
Point Filter - By Class
Filtering
<none>
Selected classes
All not selected classes

Point Filter - By RGB value

Ignore points with the specified red value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified green value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified blue value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Point Filter - By Distance
<none>
From Feature
From Array
You can specify the maximum allowed input distance value
for calculation. Enter the value manually or choose:
Maximum allowed distance
<none>
From Feature
From Array
34.9 Blue
The Blue image channel value associated with this point.
34.9.1 Parameters
Image Layer (LAS File Source)
Feature Parameters
Result Mode
Average
Standard Deviation
Minimum
Maximum
Median
Mode
Sum
Quantile
Returns
All
First
Last
Point Filter - By Class
Filtering
<none>
Selected classes
All not selected classes
Point Filter - By RGB value
Ignore points with the specified red value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified green value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified blue value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
<none>
From Feature
From Array
You can specify the maximum allowed input distance value
for calculation. Enter the value manually or choose:
Maximum allowed distance
<none>
From Feature
From Array
34.10.1 Parameters
Image Layer (LAS File Source)
Feature Parameters
Returns
All
First
Last
You can choose to ignore elevation below a specified value. Enter the value manually or choose:
Minimum allowed elevation
<none>
From Feature
From Array
You can choose to ignore elevation above a specified value. Enter the value manually or choose:
Maximum allowed elevation
<none>
From Feature
From Array
<none>
Selected classes
All not selected classes
Point Filter - By RGB value
Ignore points with the specified red value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified green value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified blue value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Point Filter - By Distance
<none>
From Feature
From Array
You can specify the maximum allowed input distance value
for calculation. Enter the value manually or choose:
Maximum allowed distance
<none>
From Feature
From Array
34.11 Class
Gives back information about the classification of the points. You can, for example, choose Mode as the Result Mode to get information on the dominant class.
34.11.1 Parameters
Image Layer (LAS File Source)
Feature Parameters
Result Mode
Average
Standard Deviation
Minimum
Maximum
Median
Mode
Sum
Quantile
Returns
All
First
Last
Point Filter - By Class
Filtering
<none>
Selected classes
All not selected classes
Point Filter - By RGB value
Ignore points with the specified red value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified green value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified blue value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Point Filter - By Distance
<none>
From Feature
From Array
You can specify the maximum allowed input distance value
for calculation. Enter the value manually or choose:
Maximum allowed distance
<none>
From Feature
From Array
34.12.1 Parameters
Image Layer (LAS File Source)
Feature Parameters
Result Mode
Average
Standard Deviation
Minimum
Maximum
Median
Mode
Sum
Quantile
Returns
All
First
Last
You can choose to ignore elevation below a specified value. Enter the value manually or choose:
Minimum allowed elevation
<none>
From Feature
From Array
You can choose to ignore elevation above a specified value. Enter the value manually or choose:
Maximum allowed elevation
<none>
From Feature
From Array
Point Filter - By Class
Filtering
<none>
Selected classes
All not selected classes
Point Filter - By RGB value
Ignore points with the specified red value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified green value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Ignore points with the specified blue value. Enter a value or select the mode for filtering classes:
<none>
Selected classes
All not selected classes
Point Filter - By Distance
<none>
From Feature
From Array
You can specify the maximum allowed input distance value
for calculation. Enter the value manually or choose:
Maximum allowed distance
<none>
From Feature
From Array
35 Class-Related Features
Class-related features depend on image object features and refer to the class assigned to
image objects situated at any location in the image object hierarchy.
This location is specified for superobjects and sub-objects by the level distance, which defines the vertical distance in the image object hierarchy. For neighbor image objects, the location is specified by the spatial distance, which defines the horizontal distance in the image object hierarchy. These feature distances can be edited.
Class-related features are global features because they are not related to individual image
objects. Class-Related Features are grouped as follows:
Relations to neighbor objects features are used to describe an image object by its
relationships to other image objects of a given class on the same image object level.
Relations to Sub-objects features describe an image object by its relationships to
other image objects of a given class on a lower image object level in the image
object hierarchy. You can use these features to evaluate sub-scale information because the resolution of image objects increases the lower you move in the image
object hierarchy.
Relations to Superobjects features describe an image object by its relations to other
image objects of a given class on a higher image object level in the image object
hierarchy. You can use these features to evaluate super-scale information because
the resolution of image objects decreases the higher you move in the image object
hierarchy.
Relations to Classification features are used to find out about the current or potential classification of an image object.
Customized class-related features refer to class-related features. They are available
after they have been created.
35.1.1 Existence of
Class-Related Features > Relations to Neighbor Objects > Existence Of
Existence of an image object assigned to a defined class in a certain perimeter (in pixels) around the image object concerned. If an image object of the defined classification is found within the perimeter, the feature value is 1 (= true); otherwise it is 0 (= false). The radius defining the perimeter can be determined by editing the feature distance.
Expression
$0$ if $N_v(d, m) = \emptyset$, $1$ if $N_v(d, m) \ne \emptyset$
Feature Value Range
[0, 1]
35.1.2 Number Of
Class-Related Features > Relations to Neighbor Objects > Number Of
The number of objects belonging to the selected class within a certain distance (in pixels) around the image object.
Parameters
#Nv (d, m)
Feature Value Range
$[0, \infty]$
35.1.3 Border To
Class-Related Features > Relations to Neighbor Objects > Border To
The absolute border of an image object shared with neighboring objects of a defined classification. If you use geo-referenced data, the feature value is the real border length to image objects of a defined class; otherwise it is the number of pixel edges shared with the adjacent image objects, since by default the pixel edge-length is 1.
Parameters
Expression
$\sum_{u \in N_v(d, m)} b(v, u)$

Figure 35.1. The absolute border between unclassified and classified image objects

$[0, \infty]$

Expression

$\frac{1}{b_v} \sum_{u \in N_v(d, m)} b(v, u)$

Feature Value Range

$[0, 1]$
Conditions
If the relative border is 0, the class m does not exist in the neighborhood. If the relative border is 1, the object v is completely surrounded by class m.
Class
Feature Distance: Radius (from the selected image object) of the area in pixels.
Parameters
$\frac{\sum_{u \in N_v(d, m)} \#P_u}{\sum_{u \in N_v(d)} \#P_u}$

Feature Value Range

$[0, 1]$
Conditions
If the relative area is 0, the class m does not exist in the neighborhood. If the relative area is 1, the object v is completely surrounded by class m.
35.1.6 Distance to
Class-Related Features > Relations to Neighbor Objects > Distance To
The distance (in pixels) between the center of the image object concerned and the center of the closest image object assigned to a defined class. The image objects on the line between the image objects' centers have to be of the defined class.
Parameters
$\min_{u \in V_i(m)} d(v, u)$

Feature Value Range

$[0, \infty]$
Parameters
Expression

$\bar\Delta(v, N_v(m))$

Feature Value Range

$[0, \infty]$
Classes: Select the candidate classes - only objects of specied classes will be used
in the overlap calculation.
X-/Y-/Z-/T-position shift: a specified shift is applied to the image object's pixels prior to calculating the overlap, e.g. a relative shift in the x direction.
Overlap calculation
The following overlap calculation methods are available:
1. Do not use overlap: Overlap calculation is omitted. The parameter returns 1 if
there is any overlap with candidate image objects and 0 if there is no overlap.
2. Relative to larger object [0..1]: The ratio of the overlap area to the area of the larger
object (between candidate and current object) is calculated.
3. Relative to smaller object [0..1]: The ratio of the overlap area to the area of the
smaller object (between candidate and current object) is calculated.
4. Relative to current object [0..1]: The ratio of the overlap area to the area of the
current object is calculated.
5. Relative to candidate object [0..1]: The ratio of the overlap area to the area of the
candidate object is calculated.
6. Absolute [in pixels]: The number of overlapping pixels is calculated.
Feature Value Range

1. $[0, 1]$
2. $[0..1]$
3. $[0..1]$
4. $[0..1]$
5. $[0..1]$
6. $[0, \infty]$
35.2.1 Existence Of
Class-Related Features > Relations to Sub Objects > Existence Of
The Existence Of feature checks whether there is at least one sub-object assigned to a defined class. If there is, the feature value is 1 (= true); otherwise the feature value is 0 (= false).
Parameters
$0$ if $S_v(d, m) = \emptyset$, $1$ if $S_v(d, m) \ne \emptyset$
Feature Value Range
[0, 1]
35.2.2 Number of
Class-Related Features > Relations to Sub Objects > Number Of
The number of sub-objects assigned to a defined class.
Parameters
#Sv (d, m)
$[0, \infty]$
35.2.3 Area of
Class-Related Features > Relations to Sub Objects > Area Of
The absolute area covered by sub-objects assigned to a defined class. If your data are geo-referenced, the feature value represents the real area.
Parameters
$d$ is the distance
$m$ is the class
$M$ is a sub-object of class $m$

Expression

$\sum_{M \in S_v(d, m)} \#P_M$

Feature Value Range

$[0, \infty]$
Class
Image object level Distance: Downward distance of image object levels in the
image object hierarchy between the image object and the sub-object.
Parameters
Expression
$\frac{\sum_{M \in S_v(d, m)} \#P_M}{\#P_v}$
Feature Value Range
[0, 1]
$D(x)$ is the mean spatial distance to the nearest neighbor of the sub-objects of class $x$
$N(x)$ is the number of sub-objects of class $x$
$A$ is the number of pixels of the superobject (area)
Obs_mean_dist is the observed mean distance of sub-objects to their spatial nearest neighbor, with $\mathrm{Obs\_mean\_dist} = \frac{\sum D(x)}{N(x)}$
Exp_mean_dist is the expected mean distance of sub-objects to their spatial nearest neighbor, with $\mathrm{Exp\_mean\_dist} = \frac{1}{2\sqrt{N(x)/A}}$

Expression

$\mathrm{CAI} = \frac{\mathrm{Obs\_Mean\_Distance}}{\mathrm{Exp\_Mean\_Distance}}$

Feature Value Range

$[0, 2.149]$

0: heavily clumped sub-objects
1: homogeneous spatial distribution of sub-objects
2.149: hexagonal distribution (edges of a honeycomb) of the sub-objects
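A minimal C++ sketch of the index calculation, assuming the observed mean nearest-neighbor distance, the number of sub-objects and the superobject area are already known (illustrative, not product code):

    #include <cmath>

    // Aggregation index: ratio of observed to expected mean
    // nearest-neighbor distance of the sub-objects.
    double aggregationIndex(double obsMeanDist, double n, double area) {
        double expMeanDist = 1.0 / (2.0 * std::sqrt(n / area));
        return obsMeanDist / expMeanDist;  // 0 = clumped, 1 = homogeneous, 2.149 = hexagonal
    }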
You can use these features to evaluate super-scale information, because the resolution of image objects decreases the higher you move in the image object hierarchy.
35.3.1 Existence of
Class-Related Features > Relations to Super Objects > Existence of
Checks whether the superobject is assigned to a defined class. If this is true, the feature value is 1; otherwise it is 0.
Parameters
$0$ if $U_v(d, m) = \emptyset$, $1$ if $U_v(d, m) \ne \emptyset$
Feature Value Range
[0, 1]
35.4.1 Membership to
Class-Related Features > Relations to Classification > Membership to
In some cases it is important to incorporate the membership value to different classes in one class. This feature allows explicit addressing of the membership values to different classes. If the membership value is below the assignment threshold, this value turns to 0.
Parameters
Expression
$\varphi(v, m)$
Feature Value Range
[0, 1]
35.4.2 Classified as
Class-Related Features > Relations to Classification > Classified As
The idea of this feature is to enable the user to refer to the classification of an image object without regard to the membership value. It can be used to freeze a classification.
Parameters
m(v)
Feature Value Range
[0, 1]
$\varphi(v, m)$
[0, 1]
Distance in class hierarchy specifies the number of hierarchical levels when navigating from class to superclass. Using a distance of 0, the class name is returned; a distance of 1 returns the superclass name, and so on. Distance in image object hierarchy specifies the number of hierarchical levels when navigating from object to superobject. Using a distance of 0, the class of the image object is used as the starting point for the navigation in the class hierarchy; a distance of 1 starts at the class of the superobject.
Process Example
In the Update Array algorithm, select Algorithm parameter > Feature > Class-Related features > Relations to classification > Assigned class. This results, for instance, in the following process: at Main Level: update array class array: add from feature Assigned class.

Project Example

Please refer to the eCognition community for two projects, Array Example 1 & 2, using this feature: http://community.ecognition.com/home/Arrays%20Example%20%231.zip/view and http://community.ecognition.com/home/Arrays%20Example%20%232.zip/view (to find the corresponding process, press Ctrl + F in eCognition or go to the menu Process > Find and Replace and insert in the field Name: assigned class).
Figure 36.1. Example of a linked objects domain of outgoing direction within a maximum
distance of 3
Std Dev: Compute the standard deviation of the selected feature for all linked
objects
Min: Returns the minimum value of the selected feature for all linked objects
Max: Returns the maximum value of the selected feature for all linked objects
Is Min: Returns 1 if the image object is the minimum for the selected feature
for all linked objects
Is Max: Returns 1 if the image object is the maximum for the selected feature
for all linked objects
Count: Counts the linked objects. Ignores the selected feature
Feature: Select the image object feature that will be used for the statistical operation.
Link Class Filter: Select the classes that contain the links to be counted
Candidate Condition: Select the condition the linked objects must fulfill to be included in the statistical computation
Link Direction: Filter on the link direction:
All: Use all links, independent of link direction.
In: Use only inbound links.
Out: Use only outbound links.
Max. Distance: The maximum number of links in a path that will be taken into
account. This parameter allows you to limit the counting to the local neighborhood
of the object. Use 0 for objects one link away, 1 for objects two links away and
999999 to count all linked objects
In order to display meaningful values, create a feature with default values, select the image object of interest using Ctrl + left-mouse click (the object will be outlined in dark green), then select a linked image object.

Link Direction specifies the permitted link directions that create the linked object domain.
37 Scene Features
Scene features return properties referring to the entire scene or map. They are global
because they are not related to individual image objects. Scene Features are grouped as
follows:
Scene variables are global variables that exist only once within a project. They are
independent of the current image object.
Class-related scene features provide information on all image objects of a given
class per map.
Scene-related features provide information on the scene.
Customized scene features refer to scene features. They are available after they
have been created.
Name
Value: Insert an initial value for the variable
Choose whether the new variable is numeric (Double) or textual (String)
Shared: Select if you want to share the new variable among different rule sets.
37.2 Class-Related
Scene Features > Class-Related
Class-related scene features provide information on all image objects of a given class per
map.
Class
Parameters
#V (m)
Feature Value Range
Class
Feature Value Range
Class
Parameters
Expression
$\sum_{v \in V(m)} \#P_v$

Feature Value Range

$[0, s_x \cdot s_y \cdot s_z \cdot s_t]$
Class
Image layer
Parameters
Expression
$\frac{1}{\#V(m)} \sum_{v \in V(m)} \bar c_k(v)$

Feature Value Range

$[0, 1]$
Editable Parameters
Class
Image layer
Parameters
Expression
$\sigma_k(V(m)) = \sqrt{\frac{1}{\#V(m)} \left( \sum_{v \in V(m)} \bar c_k(v)^2 - \frac{1}{\#V(m)} \left( \sum_{v \in V(m)} \bar c_k(v) \right)^2 \right)}$

Feature Value Range

$[0, 1]$
Class
Statistical Operation
Feature
Name: To define a new class variable, enter a name and click OK. The Create Class Variable dialog box opens for class variable settings.
37.3 Scene-Related
Scene Features > Scene-Related
Scene-related features provide information on the scene.
Level Name
Feature Value Range
[0, 1]
[0, 1]
Editable Parameter
[0, 1]
Map
Feature Value Range
[0, 1]
Image Layer
Expression

$\bar c_k$

Feature Value Range

$[c_k^{\min}, c_k^{\max}]$
Image Layer
Expression

$c_k^{\min}$

Image Layer

Expression

$c_k^{\max}$
Region
Feature Value Range
[0, 1]
37.3.10 Mode
Scene Features > Scene-Related > Mode
Gives back the object mean value of a defined layer that occurs most often in the scene. Example: a scene has the object mean values 1 2222 333 444444 55; here 4 is the mode, because it appears most often.

It is possible that several values appear equally often, for example 11 2222 3333 4444 55. In this case you can choose from the parameters whether to get back the minimum (smallest, here: 2), the maximum (largest, here: 4) or the median (here: 3) of the most frequent values.
Editable Parameters
Image Layer; Return result Choose one of the following calculations for Return result
Minimum
Maximum
Median
37.3.11 Quantile
Scene Features > Scene-Related > Quantile
Gives back the defined quantile of a defined layer for the scene. Calculation: first, the values are sorted from small to large and the ordered values are associated with equally spaced sample fractions. The value at the defined quantile position is then read out.
Editable Parameters
Select a layer from the Layer drop-down box from which to extract the quantile. Insert a percentage value into the Quantile field (or get the value from a feature or array item).
Parameters
The quantile is given by the element of the sorted list Y with index equal to int(r).
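A minimal C++ sketch of this read-out; the exact definition of the index r is not spelled out in the text, so the sample-fraction spacing used here is an assumption:

    #include <algorithm>
    #include <vector>

    // Quantile of a value list for a percentage q in [0, 100].
    double quantile(std::vector<double> y, double q) {
        std::sort(y.begin(), y.end());                 // order the values small to large
        double r = q / 100.0 * (y.size() - 1);         // assumed sample-fraction spacing
        return y[static_cast<std::size_t>(r)];         // element with index int(r)
    }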
[0, 1]
$s_t = \#(\mathrm{frames}) \cdot u_{\mathrm{frames}}$

Feature Value Range

$[1, \infty]$
Parameters
$s_x = \#(\mathrm{pixels})_x \cdot u$

Feature Value Range

$[1, \infty]$

$s_y = \#(\mathrm{pixels})_y \cdot u$

Feature Value Range

$[1, \infty]$
Expression
$s_z = \#(\mathrm{slices}) \cdot u_{\mathrm{slices}}$

Feature Value Range

$[1, \infty]$
#V
Feature Value Range
[0, 1]
Parameters
Expression
$s_x \cdot s_y \cdot s_z \cdot s_t$
Feature Value Range
37.3.36 Scene ID
Scene Features > Scene-Related > Scene ID
The identification number of the scene. It is generated automatically by eCognition Developer 9.0.
$u_{\mathrm{slices}}$
$u_{\mathrm{frames}}$
eCognition Developer 9.0 will replace the following variables with actual values:
Shortcut
Description
"{:Workspc.Name}"
"{:Workspc.Guid}"
"{:Workspc.Dir}"
"{:Project.Name}"
"{:Project.Guid}"
"{:Scene.Dir}"
"{:Scene.Name}"
"{:ImgLayer(n).Dir}"
"{:ActionLib.Dir}"
"{:Variable:abcde}"
"{:Application.Dir}"
37.7 UI Related
Scene Features > UI Related
37.7.1 Equalization
Scene Features > UI Related > Equalization
Equalization features return the level equalization values selected by users (which can be
changed via the Edit Image Layer Mixing dialog box, or by dragging the mouse across
the image with the right mouse button, if this option is enabled in Tools > Options).
Equalization Mode
Linear
Linear Inverse
Gamma Correction (Positive)
Gamma Correction (Negative)
Gamma Correction (Positive) Inverse
Gamma Correction (Negative) Inverse
The equalization range on either side of the center (Window Level Max minus Window Level Min)
Figure 38.1. Process Tree window displaying a prototype of a process hierarchy
For example, a process distance of 1 means that the parent process is located one hierarchical level above the current process. A process distance of 2 means that the parent
process is located two hierarchical levels above the current process. Figuratively, you
may say, a process distance of 2 denes the grandparent process.
38.1 Customized
Process-Related Features > Customized
Define a customized feature in relation to the difference between a given feature value of an image object and the feature value of its parent process object (PPO).
Editable Parameters
$f(v) - f(\rho)$
Feature Value Range
Expression
$\frac{f(v)}{f(\rho)}$
Feature Value Range
38.2.2 Parameters
$b(v, \rho)$ is the length of the common border between $v$ and the parent process object $\rho$
38.2.3 Expression
$b(v, \rho)$
38.4.1 Parameters
xv , yv
38.4.3 Expression
(xv , yv )
38.5.2 Parameters
bv is the image object border length
$b(v, \rho)$ is the length of the common border between $v$ and the parent process object $\rho$
38.5.3 Expression
$\frac{b(v, \rho)}{b_v}$
38.6.2 Parameters
v is the image object
$\rho$ is the parent process object (PPO)
Uv (d) is the superobject of an image object v at a distance d
38.6.3 Expression
$1$ if $U_v(d) = U_\rho(d)$; $0$ if $U_v(d) \ne U_\rho(d)$
38.7 Series ID
Process-Related Features > Series ID
The Execute Child as Series algorithm executes its child domains based on the Number of
Cycles parameter. For each execution, a unique identifier is generated and the parameter Series Name prefixes this identifier. Therefore, if the domain has four image objects and
the number of cycles is set to 1, the child processes will be executed four times (once for
each image object). On each cycle, the value attached to the series increases by one, for
example:
1st cycle = series1
2nd cycle = series2
Therefore, the feature acts as a loop counter inside the Execute Child as Series algorithm.
39 Region Features
Region features return properties referring to a given region. They are global because
they are not related to individual image objects. They are grouped as follows:
Region-related features provide information on a given region.
Layer-related region features evaluate the first and second statistical moments (mean, standard deviation) of a region's pixel values.
Class-related region features provide information on all image objects of a given
class per region.
39.1 Region-Related
Region Features > Region-Related
Region-related features provide information on a given region.
Region
Parameters
Expression
$R_x \cdot R_y \cdot R_z \cdot R_t$

Feature Value Range

$[0, \infty]$
39.1.2 T Extent
Region Features > Region Related > T Extent
Number of frames within a region.
Editable Parameters
Region
Expression
Rt
Feature Value Range
$[0, \infty]$
39.1.3 T Origin
Region Features > Region Related > T Origin
The t -co-ordinate of the origin of a region.
Editable Parameters
Region
Expression
tG
Feature Value Range
$[0, \infty]$
39.1.4 X Extent
Region Features > Region Related > X Extent
The extent of a region in x
Editable Parameters
Region
Expression
Rx
Feature Value Range
$[0, \infty]$
39.1.5 X Origin
Region Features > Region Related > X Origin
The x -co-ordinate of the origin of a region.
39.1.6 Y Extent
Region Features > Region Related > Y Extent
The pixel extent of a region in y .
Editable Parameters
Region
Expression
$R_y$
Feature Value Range
$[0, \infty]$
39.1.7 Y Origin
Region Features > Region Related > Y Origin
The y -co-ordinate of the origin of a region.
Editable Parameters
Region
Expression
yG
Feature Value Range
$[0, \infty]$
39.1.8 Z Extent
Region Features > Region Related > Z Extent
The number of slices within a region.
Editable Parameters
Region
Expression
Rz
Feature Value Range
$[0, \infty]$
39.1.9 Z Origin
Region Features > Region Related > Z Origin
The z -co-ordinate of the origin of a region.
Editable Parameter
Region
Expression
zG
Feature Value Range
$[0, \infty]$
39.2 Layer-Related
Region Features > Layer-Related
Layer-related region features evaluate the first and second statistical moments (mean, standard deviation) of a region's pixel values.
39.2.1 Mean
Region Features > Layer Related > Mean
Mean image layer value of a given image layer within a region.
Editable Parameters
Region
Image layer
Parameters
$\bar c_k(R)$

Feature Value Range

$[c_R^{\min}, c_R^{\max}]$
Region
Image layer
Parameters
Expression
$\sigma_k(R)$

Feature Value Range

$[c_R^{\min}, c_R^{\max}]$
39.3 Class-Related
Region Features > Class Related
Class-related region features provide information on all image objects of a given class
per region.
Region
Class
Image object level
Parameters
Expression
$\#P_i(R, m)$

Feature Value Range

$[0, R_x \cdot R_y \cdot R_z \cdot R_t]$
Region
Class
Image object Level
Parameters
Expression
$\frac{\#P_i(R, m)}{\#P_i(R)}$

Figure 39.1. Schematic display of the relative area of a class within a two-dimensional region (11 pixels: 11/25 = 0.44; 10 pixels: 10/25 = 0.4; 4 pixels: 4/25 = 0.16)

Feature Value Range

$[0, 1]$
40.1 Object-Related
Region Features > Image Registration Features > Object Related
40.2 Scene-Related
Region Features > Image Registration Features > Scene Related
424
41 Metadata
Metadata items can be used as features in rule set development. To do so, you have to provide external metadata in the feature tree. If you are not using data import procedures to convert external source metadata to internal metadata definitions, you can create individual features from a single metadata item. See also Object Features > Object Metadata.
metadata items. You can type the name of a metadata source that does not yet exist, to create a feature group in advance.
Name: Name of the metadata item as used in the source data.
Type: Select the type of the metadata item: string, double, or integer.
42 Feature Variables
Feature Variables have features as their values. Once a feature is assigned to a feature
variable, the feature variable can be used like that feature. It returns the same value as the
feature to which it points. It uses the unit of whatever feature is assigned as a variable.
It is possible to create a feature variable without a feature assigned, but the calculation
value would be invalid.
In a rule set, feature variables can be used like the corresponding feature.
Show/Hide Variable: Enter or select the name of the variable that defines whether the widget is visible. If you enter 0, the widget is hidden.

Enable/Disable Variable: Enter or select the name of the variable that defines whether the widget is enabled or disabled. A zero value defines the widget state as disabled.
Variable: Select or enter the variable that gets updated by this control
Process on Selection Change: The name of the process that will be executed when
the selection changes
Items: Add or edit the items in the drop-down list
Process on Change: The name of the process that will be executed when a user changes a value.
Description: Enter a description of your widget. The text will appear in the Description pane when the cursor hovers over the widget. You may have a text description or an image description, but not both.

Description Image: Allows you to display a TIFF image in the Description pane (transparency is supported). You may have a text description or an image description, but not both.

Show/Hide Variable: Enter or select the name of the variable that defines whether the widget is visible. If you enter 0, the widget is hidden.

Enable/Disable Variable: Enter or select the name of the variable that defines whether the widget is enabled or disabled. A zero value defines the widget state as disabled.
Description: Enter a description of your widget. The text will appear in the Description pane when the cursor hovers over the widget. You may have a text description or an image description, but not both.

Description Image: Allows you to display a TIFF image in the Description pane (transparency is supported). You may have a text description or an image description, but not both.

Show/Hide Variable: Enter or select the name of the variable that defines whether the widget is visible. If you enter 0, the widget is hidden.

Enable/Disable Variable: Enter or select the name of the variable that defines whether the widget is enabled or disabled. A zero value defines the widget state as disabled.

Process on Change: The name of the process that will be executed when a user changes a value.
Dependency Handling: The dependency effect of the selected class. Choose one of the following:

None
Required. This activates the parameter Dependency Error Message, which is displayed when a dependency conflict occurs. Use the tag #class within the error text to identify the class name.
Forbidden. This activates the parameter Dependency Error Message, which is displayed when a dependency conflict occurs. Use the tag #class within the error text to identify the class name.
Added
Removed

Description: Enter a description of your widget. The text will appear in the Description pane when the cursor hovers over the widget. You may have a text description or an image description, but not both.

Description Image: Allows you to display a TIFF image in the Description pane (transparency is supported). You may have a text description or an image description, but not both.

Show/Hide Variable: Enter or select the name of the variable that defines whether the widget is visible. If you enter 0, the widget is hidden.

Enable/Disable Variable: Enter or select the name of the variable that defines whether the widget is enabled or disabled. A zero value defines the widget state as disabled.
Description: Enter a description of your widget. The text will appear in the Description pane when the cursor hovers over the widget. You may have a text description or an image description, but not both
Description Image: Allows you to display a TIFF image in the Description pane
(transparency is supported). You may have a text description or an image description, but not both
Show/Hide Variable: Enter or select the name of the variable that defines whether
the widget is visible. If you enter 0, the widget is hidden
Enable/Disable Variable: Enter or select the name of the variable that defines
whether the widget is enabled or disabled. A zero value defines the widget state as
disabled
Tick Frequency: Enter a value to define how often tick marks appear next to the
slider
Jump Value: Enter a value to define the increments when the slider is moved
Ruleset: Navigate to the ruleset (.dcp file) containing the processes to be executed
Show Edit Classes Button: Select yes to display the Edit Classes button, which
allows users to change the class names and colors of widget classes
Image File for Edit Classes: Navigate to the path of the image file for the button
image
Show/Hide Variable: Enter or select the name of the variable that defines whether
the widget is visible. If you enter 0, the widget is hidden
Enable/Disable Variable: Enter or select the name of the variable that defines
whether the widget is enabled or disabled. A zero value defines the widget state as
disabled
Action Buttons:
Class: Assign a class to the button
Description: Enter a description to appear at the bottom of the description
pane. The text will also display as a tooltip
Description Image: Add a TIFF image to the description area (transparency
is supported). Uploading an image replaces the text in the description area.
You may have a text description or an image description, but not both
Process Path: Enter the process path to be executed by the action button
Process Path on Release: Enter the process path to be executed when the
button is released
Hot Key: Lets you define a single hot key to execute an action
44 General Reference
44.1 Use Variables as Features
The following variables can be used as features:
Scene variables
Object variables
Class variables
Feature variables
They are displayed in the feature tree of, for example, the Feature View window or the
Select Displayed Features dialog box.
Figure 44.1. The rendering process of an image displayed in the map view
Image layer equalization is part of the rendering process of the scene display within the
map view.
Image layer equalization maps the input data of an image layer, which may have different
intensity ranges, to the unified intensity range [0 … 255] of an 8-bit gray value image.
For 8-bit data no image layer equalization is necessary. All other data types have to be
converted into an 8-bit representation at this step of the rendering process.
This function is implemented as a mapping of the input range to the display range
[0 … 255]. Image layer equalization can be either linear or manual. By default, the input
data is mapped to the gray value range by a linear function.
Data Type                          Input Range              Mapping Function
8-bit                              [0 … 255]                cs = ck (no transformation)
16-bit unsigned; 32-bit unsigned   [0 … max2(ck)]           cs = 255 · ck / max2(ck)
16-bit signed; 32-bit signed       [min2(ck) … max2(ck)]    cs = 255 · (ck − min2(ck)) / (max2(ck) − min2(ck))
32-bit float                       [min10(ck) … max10(ck)]  cs = 255 · (ck − min10(ck)) / (max10(ck) − min10(ck))
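As a minimal numpy sketch (not eCognition's internal implementation), such a linear
stretch to 8 bits can be written as follows; treating the observed extremes as stand-ins
for min2/max2 or min10/max10 is an assumption for illustration:

    import numpy as np

    def equalize_layer(layer, c_min, c_max):
        # Map intensities from the input range [c_min, c_max] to the
        # 8-bit display range [0, 255], as in the table above.
        t = (layer.astype(np.float64) - c_min) / (c_max - c_min)
        return np.clip(255 * t, 0, 255).astype(np.uint8)

    # e.g. for a 16-bit signed layer (observed extremes as the range):
    # display = equalize_layer(layer, layer.min(), layer.max())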
Mapping                              Mapping Function
Linear                               cs = 255 · (ck − cmin) / (cmax − cmin)
Linear inverse                       cs = 255 − 255 · (ck − cmin) / (cmax − cmin)
Gamma correction (positive)          cs = 255 · ((ck − cmin) / (cmax − cmin))^2
Gamma correction (negative)          cs = 255 · ((ck − cmin) / (cmax − cmin))^0.5
Gamma correction (positive) inverse  cs = 255 − 255 · ((ck − cmin) / (cmax − cmin))^2
Gamma correction (negative) inverse  cs = 255 − 255 · ((ck − cmin) / (cmax − cmin))^0.5
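As an illustration (not eCognition's internal code), the six manual mappings can be
expressed as functions of the normalized intensity t = (ck − cmin) / (cmax − cmin). Only
the negative-inverse formula survives verbatim in the source table, so the exponent 2
for the positive variants is an assumption:

    # t is the normalized intensity (ck - cmin) / (cmax - cmin), in [0, 1]
    mappings = {
        "linear":                 lambda t: 255 * t,
        "linear inverse":         lambda t: 255 - 255 * t,
        "gamma positive":         lambda t: 255 * t ** 2.0,       # exponent assumed
        "gamma negative":         lambda t: 255 * t ** 0.5,
        "gamma positive inverse": lambda t: 255 - 255 * t ** 2.0,  # exponent assumed
        "gamma negative inverse": lambda t: 255 - 255 * t ** 0.5,
    }
    # e.g. mappings["linear"](0.5) -> 127.5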
Image Equalization
Image equalization is performed after all image layers are mixed into a raw RGB (red,
green, blue) image. If more than one image layer is assigned to one screen color (red,
green or blue) this approach leads to higher quality results. Where only one image layer
is assigned to each color, as is common, this approach is the same as applying equalization
to the individual raw layer gray value images.
Several modes of image equalization are available. All of them rely on image statistics,
which are computed on the basis of a 256 × 256 pixel thumbnail of the current raw RGB
image.
None
No equalization allows you to see the image data as it is, which can be helpful at the
beginning of rule set development, when looking for an approach. The output from the
image layer mixing is displayed without further modification.
Linear Equalization
Linear equalization maps each color (red, green and blue) from an input range
[cmin … cmax] to the available screen intensity range [0 … 255] by a linear mapping. The
input range can be modified by the percentage parameter p. It is computed such that
p per cent of the pixels fall outside the input range: for p = 0 the range of used color
values is stretched to [0 … 255]; for p > 0 the mapping ignores the darkest p/2 per cent
and the brightest p/2 per cent of the pixels. In many cases a small value of p leads to
better results, because the available color range can be better used for the relevant data
by ignoring the outliers.
cmin = max{c : #{(x, y) : ck(x, y) < c} ≤ (sx · sy) · p/2}
cmax = min{c : #{(x, y) : ck(x, y) > c} ≤ (sx · sy) · p/2}
[cmin … cmax] is the input range, and
cs = 255 · (ck − cmin) / (cmax − cmin)
is the mapping function.
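A compact numpy sketch of this percentile-based stretch; the function name and the
treatment of p as a fraction (0.01 = 1 per cent) are illustrative assumptions:

    import numpy as np

    def linear_equalize(layer, p=0.01):
        # Ignore the darkest p/2 and brightest p/2 fraction of pixels,
        # then stretch the remaining range [cmin, cmax] linearly to [0, 255].
        cmin, cmax = np.quantile(layer, [p / 2, 1 - p / 2])
        t = (layer.astype(np.float64) - cmin) / (cmax - cmin)
        return np.clip(255 * t, 0, 255).astype(np.uint8)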
Standard Deviation Equalization
Standard deviation equalization maps the input range to the available screen intensity
range [0 … 255] by a linear mapping. The input range [cmin … cmax] can be modified by
the width parameter n. It is computed such that its center is the mean value of the pixel
intensities, mean(ck), and its left and right borders lie n standard deviations σk to the
left and the right:
cmin = mean(ck) − n · σk
cmax = mean(ck) + n · σk
[cmin … cmax] is the input range, and
cs = 255 · (ck − cmin) / (cmax − cmin)
is the mapping function.
With its default parameter of 3.0, standard deviation equalization renders a display
similar to linear equalization. Use a parameter around 1.0 to exclude dark and bright
outliers.
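The same stretch in a short numpy sketch (the function name is an illustrative
assumption):

    import numpy as np

    def stddev_equalize(layer, n=3.0):
        # Input range is centered on the mean, n standard deviations wide
        # on each side; n = 3.0 is the documented default.
        cmin = layer.mean() - n * layer.std()
        cmax = layer.mean() + n * layer.std()
        t = (layer.astype(np.float64) - cmin) / (cmax - cmin)
        return np.clip(255 * t, 0, 255).astype(np.uint8)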
Gamma Correction Equalization
Gamma correction equalization is used to improve the contrast of dark or bright areas
by spreading the corresponding gray values. It maps the input range to the available
screen intensity range [0 … 255] by a polynomial mapping. The input range [cmin … cmax]
cannot be modified and is defined by the smallest and the largest existing pixel values:
cs = 255 · ((ck − cmin) / (cmax − cmin))^n
You can modify the exponent n of the mapping function by editing the equalization
parameter. Values of n less than 1 emphasize darker regions of the image; values larger
than 1 emphasize brighter areas. A value of n = 1 represents the linear case.
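A minimal sketch of the polynomial mapping, assuming the full observed data range as
input range (as described above):

    import numpy as np

    def gamma_equalize(layer, n=0.5):
        # Polynomial mapping over the full data range: n < 1 spreads dark
        # gray values, n > 1 spreads bright ones, n = 1 is linear.
        cmin, cmax = float(layer.min()), float(layer.max())
        t = (layer.astype(np.float64) - cmin) / (cmax - cmin)
        return np.clip(255 * t ** n, 0, 255).astype(np.uint8)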
Histogram Equalization
Histogram equalization is well suited for LANDSAT images but can lead to substantial
over-stretching of many normal images. It can be helpful when you want to display dark
areas with more contrast. Histogram equalization maps the input range to the available
screen intensity range [0 … 255] by a nonlinear function. Put simply, the mapping is
defined by the property that each color value of the output image represents the same
number of pixels. The exact algorithm is more complex and can be found in standard
image processing literature.
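One common way to realize this property is a mapping through the cumulative
distribution of the input; a minimal numpy sketch (an illustration, not the exact
algorithm used by the software):

    import numpy as np

    def histogram_equalize(layer, bins=256):
        # Build the cumulative distribution of the input and use it as the
        # mapping, so each output value represents roughly the same number
        # of pixels.
        hist, edges = np.histogram(layer, bins=bins)
        cdf = hist.cumsum().astype(np.float64)
        cdf = 255 * (cdf - cdf[0]) / (cdf[-1] - cdf[0])
        idx = np.digitize(layer, edges[1:-1])  # bin index per pixel, 0..bins-1
        return cdf[idx].astype(np.uint8)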
Acknowledgments
Portions of this product are based in part on third-party software components. Trimble
is required to include the following text with software distributions.
ITK Copyright
Copyright 1999–2003 Insight Software Consortium
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are
permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of
conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list
of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
Neither the name of the Insight Software Consortium nor the names of its contributors may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS AS IS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
python/tests/test_doctests.py
Copyright 2007, Sean C. Gillies, [email protected]
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are
permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of
conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list
of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
Neither the name of Sean C. Gillies nor the names of its contributors may be used
to endorse or promote products derived from this software without specific prior
written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS AS IS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
src/Verson.rc
Copyright 2005, Frank Warmerdam, [email protected]
All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom
the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or
substantial portions of the Software.
THE SOFTWARE IS PROVIDED AS IS, WITHOUT WARRANTY OF ANY
KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE
AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE
OR OTHER DEALINGS IN THE SOFTWARE.
src/gt_wkt_srs.cpp
Copyright 1999, Frank Warmerdam, [email protected]
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom
the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or
substantial portions of the Software.
THE SOFTWARE IS PROVIDED AS IS, WITHOUT WARRANTY OF ANY
KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE
AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE
OR OTHER DEALINGS IN THE SOFTWARE.