


COMPARISON OF FILTERING ALGORITHMS

George Sithole, George Vosselman


Department of Geodesy, Faculty of Civil Engineering and Geosciences
Delft University of Technology
The Netherlands
[email protected], [email protected]

Commission III, Working Group 3

KEY WORDS: LIDAR, DEM/DTM, classification, filtering

ABSTRACT
To determine the performance of filtering algorithms a study was conducted in which eight groups filtered data supplied to them. The
study aimed to determine the general performance of filters, the influence of point resolution on filtering and future research
directions. To meet the objectives the filtered data was compared against reference data that was generated manually. In general the
filters performed well in landscapes of low complexity. However, complex landscapes as can be found in city areas and
discontinuities in the bare earth still pose challenges. Comparison of filtering at lower resolutions confirms that amongst other factors
the method of filtering also has an impact on the success of filtering and hence on the choice of scanning resolution. It is suggested
that future research be directed at heuristic classification of point-clouds (based on external data), quality reporting, and improving
the efficiency of filter strategies.

1 INTRODUCTION

While commercial Airborne Laser Scanning (ALS) systems have come a long way, the choice of appropriate data processing techniques for particular applications is still being researched. Data processing, here, is understood as being either semi-automatic or automatic, and includes such tasks as “modelling of systematic errors”, “filtering”, “feature extraction” and “thinning”. Of these tasks, manual classification (filtering) and quality control pose the greatest challenges, consuming an estimated 60 to 80% of processing time (Flood, 2001), thus underlining the necessity for research in this area. Algorithms have been developed for semi-automatically or automatically extracting the bare earth from point-clouds obtained by airborne laser scanning and InSAR. While the mechanics of some of these algorithms have been published, those of others are not known because of proprietary restrictions. Some comparisons of known filtering algorithms and their difficulties have been mentioned in Huising and Gomes Pereira (1998), Haugerud and Harding (2001), and Tao and Hu (2001). However, an experimental comparison was not available. Because of this it was felt that an evaluation of filters was required to assess the strengths and weaknesses of the different approaches based on available control data. In line with the framework of ISPRS Commission III, Working Group III/3 "3D Reconstruction from Airborne Laser Scanner and InSAR Data" initiated a study to compare the performance of various automatic filters developed to date, with the aims of:

1. Determining the comparative performance of existing filters.

2. Determining the performance of filtering algorithms under varying point densities.

3. Determining problems in the filtering of point-clouds that still require further attention.

In line with these aims a web site was set up on which twelve sets of data were provided for testing. Individuals and groups wishing to participate in the study were requested to process all twelve data sets if possible. A total of 8 sets of results were received. The algorithms used by the participants come from a cross-section of the most common strategies (or variants thereof) for extracting the bare earth from ALS point-clouds.

The report is broken into three main parts. Section 2 discusses common characteristics of filtering algorithms. Sections 3 and 4 describe the data used and discuss the results of the comparisons. In section 5 the results of the study are discussed and conclusions are drawn with respect to the objectives set out.

2 FILTER CHARACTERISTICS

Filters are built from combinations of different elements. Therefore, to understand or predict the behavior and output of a filter, the way in which these elements are combined has to be understood. Seven elements have been identified:

2.1 Data Structure

The output of an ALS is a cloud of irregularly spaced 3D points. Some filters work with the raw point-cloud. However, to take advantage of image processing toolkits, some filtering algorithms resample the ALS-produced point-cloud into an image grid before filtering.

2.2 Test neighborhood and the number of points filtered at a time

Filters always operate on a local neighborhood. In the classification operation (bare earth or object) one or more points are classified at a time. In regard to the neighborhood this classification can be done in three possible ways.

Point-to-point - In these algorithms two points are compared at a time. The discriminant function is based on the positions of the two points. If the output of the discriminant function is above a certain threshold then one of the points is assumed to belong to an object. Only one point is classified at a time.

Point-to-points - In these algorithms the neighboring points of a point of interest are used to solve a discriminant function. Based on the output of the discriminant function the point of interest can then be classified. One point is classified at a time.

Points-to-points - In these algorithms several points are used to solve a discriminant function. Based on the discriminant function the points can then be classified. More than one point is classified in such a formulation.

2.3 Measure of Discontinuity

Most algorithms classify based on some measure of discontinuity. Some of the measures of discontinuity used are height difference, slope, shortest distance to TIN facets, and shortest distance to parameterized surfaces.
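
As a concrete, hypothetical illustration of how such a measure can drive a point-to-points discriminant (the function, neighborhood size and threshold below are ours, not taken from any of the compared filters):

```python
import numpy as np
from scipy.spatial import cKDTree

def discriminant(points, idx, tree, k=8):
    """Point-to-points discriminant based on height difference:
    the height of point `idx` above the median height of its
    k nearest neighbours in the horizontal plane."""
    _, nbrs = tree.query(points[idx, :2], k=k + 1)  # +1: query point is its own neighbour
    nbrs = [j for j in nbrs if j != idx]
    return points[idx, 2] - np.median(points[nbrs, 2])

# points: N x 3 array of x, y, z
# tree = cKDTree(points[:, :2])
# label = "object" if discriminant(points, i, tree) > 1.0 else "bare earth"
```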

2.4 Filter concept

Every filter makes an assumption about the structure of bare earth points in a local neighborhood. This forms the concept of the filter (figure 2.1).

Slope based - In these algorithms the slope or height difference between two points is measured. If the slope exceeds a certain threshold then the highest point is assumed to belong to an object (a minimal code sketch follows figure 2.1).

Block-minimum - Here the discriminant function is a horizontal plane with a corresponding buffer zone above it. The plane locates the buffer zone, and the buffer zone defines a region in 3D space where bare earth points are expected to reside.

Surface based - In this case the discriminant function is a parametric surface with a corresponding buffer zone above it. The surface locates the buffer zone, and as before the buffer zone defines a region in 3D space where ground points are expected to reside.

Clustering/Segmentation - The rationale behind such algorithms is that any points that cluster must belong to an object if their cluster is above its neighborhood. It is important to note that for such a concept to work the clusters/segments must delineate objects and not facets of objects.

[Figure 2.1 Filter concepts: four panels illustrating the slope based, block-minimum, surface based and clustering/segmentation concepts.]
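
A minimal sketch of the slope based concept above (in the spirit of the slope based filters compared here, e.g. Vosselman, 2000, but greatly simplified; the radius and slope threshold are illustrative assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def slope_filter(points, radius=5.0, max_slope=0.3):
    """Label a point as object if the slope from some lower neighbour
    up to it exceeds max_slope (rise over horizontal run)."""
    tree = cKDTree(points[:, :2])
    is_object = np.zeros(len(points), dtype=bool)
    for i, nbrs in enumerate(tree.query_ball_point(points[:, :2], r=radius)):
        for j in nbrs:
            run = np.hypot(points[i, 0] - points[j, 0],
                           points[i, 1] - points[j, 1])
            rise = points[i, 2] - points[j, 2]
            if run > 0 and rise > max_slope * run:
                is_object[i] = True  # too steep a rise from a lower neighbour
                break
    return is_object  # True: object, False: bare earth
```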

2.5 Single step vs. iterative

Some filter algorithms classify points in a single pass while others iterate and classify points in multiple passes. The advantage of a single step algorithm is computational speed. However, computational speed is traded for accuracy by iterating the solution, with the rationale that in each pass more information is gathered about the neighborhood of a point and thus a much more reliable classification can be obtained.

2.6 Replacement vs. Culling

In culling, a filtered point is removed from a point-cloud. Culling is typically found in algorithms that operate on irregularly spaced point-clouds. In replacement, a filtered point is returned to the point-cloud with a different height (usually interpolated from its neighborhood). Replacement is typically found in algorithms that operate on regularly spaced (rasterized) point-clouds.

2.7 Using first pulse and reflectance data

Some scanners record multiple pulse returns. This feature is advantageous in forested areas, where the first pulse is usually off vegetation and subsequent pulses are from surfaces below the vegetation canopy. In addition to multiple pulse measurements the intensity of the returned pulses is also measured. Different surfaces in the landscape will absorb/reflect pulses differently and therefore it may be possible to use this information in classifying points.

3 TEST DATA AND ALGORITHMS

As part of the second phase of the OEEPE project on laser scanning (OEEPE 2000) companies were invited to fly over the Vaihingen/Enz test field and Stuttgart city center. These areas were chosen because of their diverse feature content. The landscape was scanned with an Optech ALTM scanner, and the data was produced by FOTONOR AS. Both first and last pulse data were recorded. Eight test sites (four urban and four rural) were selected for the comparison of filtering algorithms. The urban sites were at a resolution of 1-1.5m and the rural sites at a resolution of 2-3.5m. This data was offered to participants for processing. Some characteristics of the test sites are listed below:

(i) Steep slopes, (ii) Mixture of vegetation and buildings on hillside, (iii) Large buildings, (iv) Irregularly shaped buildings, (v) Densely packed buildings with vegetation between them, (vi) Buildings with eccentric roofs, (vii) Open spaces with mixtures of low and high features, (viii) Railway station with trains (low density of terrain points), (ix) Bridge, (x) High bridge, (xi) Underpass, (xii) Ramps, (xiii) Road with embankments, (xiv) Road with bridge and small tunnel, (xv) Quarry (break-lines), (xvi) Vegetation on river bank, (xvii) Data gaps.

Two sites were also provided at three different resolutions (1-1.5m, 2-3.5m and 4-6m for the urban site; 2-3.5m, 4-5.5m and 7-10m for the rural site) to test filter performance at three different point-cloud resolutions. To obtain the first reduced resolution, the scan lines in the original point-clouds were detected and every second point in a scan line was dropped. Similarly, the second reduced point-cloud was produced from the first reduced point-cloud.
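
The reduction described above amounts to the following (a sketch that assumes the points of each detected scan line are already available in acquisition order; the scan-line detection itself is not shown):

```python
def reduce_resolution(scan_lines):
    """Drop every second point in each scan line, roughly halving
    the point density. `scan_lines` is a list of point lists, each
    in acquisition order."""
    return [line[::2] for line in scan_lines]

# Applying the function a second time yields the second reduced point-cloud.
```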

3.1 Reference data sets

The reference data was generated by manually filtering the eight data sets. In the manual filtering, knowledge of the landscape and some aerial imagery were available. All points in the reference data sets were labelled "Bare Earth" or "Object". From the eight data sets fifteen samples were abstracted. These fifteen samples were rechecked, and it is these samples that are used in the quantitative analysis. The fifteen samples are representative of different environments (but are more focused in respect of the expected difficulties).

3.2 Filtered data sets

Samples corresponding to the fifteen reference samples were also extracted from the filtered data provided by the participants. The filtered data sets contain only the bare earth points.

3.3 Participants

Eight individuals/groups submitted results for the test. A brief breakdown of the participants and a description of their filters are given in Table 3.1.

Table 3.1 Participants

M. Elmqvist - FOI (Swedish Defence Research Institute), Sweden: Active Contours - Elmqvist (2001)
G. Sohn - University College London (UCL): Regularization Method - Sohn (2002)
M. Roggero - Politecnico di Torino: Modified Slope based filter - Roggero (2001)
M. Brovelli - Politecnico di Milano: Spline interpolation - Brovelli (2002)
R. Wack, A. Wimmer - Joanneum Research Institute of Digital Image Processing: Hierarchical Modified Block Minimum - Wack (2002)
P. Axelsson - DIGPRO: Progressive TIN densification - Axelsson (1999, 2000)
G. Sithole, G. Vosselman - TU Delft: Modified Slope based filter - Vosselman (2000), Sithole (2001)
N. Pfeifer, C. Briese - TU Vienna: Hierarchic robust interpolation - Pfeifer et al. (1998), Briese et al. (2001)

4 RESULTS/COMPARISONS

4.1 Qualitative assessment

The fifteen samples were extracted with a view to examining and comparing how the different filters behave and to identifying difficulties in filtering. Based on an examination of the eight data sets and the fifteen sample sets, each of the filters was assessed for difficulties.

4.1.1 Filtering difficulties

The filtering difficulties identified from the qualitative comparison relate to outliers, object complexity, attached objects, vegetation, and discontinuities in the bare earth. Each is briefly discussed below.

4.1.1.1 Outliers

High outliers - These are points that normally do not belong to the landscape. They originate from hits off objects like birds, low flying aircraft, etc. Most filters handle such features easily, because they are so far elevated above neighboring points. They are included here for completeness only.

Low outliers - These are points that also normally do not belong to the landscape. They originate from multi-path errors and errors in the laser range finder. Most filters work on the assumption that the lowest points in a point-cloud must belong to the terrain. These points are naturally an exception to the rule. Many algorithms also work on the assumption that points neighboring a lower point must belong to an object. In practice this assumption usually holds. However, in cases where the lowest point is an outlier, the assumption fails completely, resulting in an erosion of points in the neighborhood of the low outlier.

4.1.1.2 Object complexity

Very large objects - Because many of the filtering algorithms are localized, large objects may not be filtered if the size of the objects exceeds that of the test neighborhood.

Very small objects (elongated objects, low point count) - Prominent examples of such objects are vehicles.

Very low objects (walls, cars, etc.) - The closer an object is to the bare earth, the more difficult it becomes for algorithms to differentiate between it and the bare earth. This problem is complicated even further by the need not to incorrectly filter off small but sharp variations in the terrain.

Complex shape/configuration - A major difficulty posed by urban environments is the variety and complexity of objects found in them. This complexity manifests itself in the shape, configuration, and lay of objects.

Disconnected terrain (courtyards, etc.) - In urban environments, it is common to find patches of bare earth enclosed by objects. The decision of whether an enclosed patch is bare earth is not always clear-cut.

4.1.1.3 Attached objects

Building on slopes - Such objects have roofs that are elevated above the bare earth on some sides and minimally or not at all on other sides. Because of this it becomes difficult to distinguish between such objects and the bare earth.

Bridges - Artificial structures spanning the gap (road, river, etc.) between bare earth surfaces.

Ramps - Natural/artificial structures spanning the gaps between bare earth surfaces, where one surface is lower than the other.

4.1.1.4 Vegetation

Vegetation on slopes - Vegetation points can be filtered based on the premise that they are significantly higher than their neighborhoods. This assumption naturally falls away in steep terrain, where terrain points may lie at the same height as vegetation points.

Low vegetation - Similar to the problem of low objects, except this time complicated by steep slopes.

4.1.1.5 Discontinuity

Preservation (steep slopes) - Generally objects are filtered because they are discontinuous to the terrain. Occasionally it also happens that the bare earth is only piecewise continuous. At discontinuities some filters will operate as they would on objects. Therefore, discontinuities in the bare earth are lost.

Sharp ridges - The preservation of ridges is a similar but more drastic problem of retaining convex slopes, as described by Huising and Gomes Pereira (1998).

4.1.2 Assessment

The qualitative assessment was based on a visual examination and comparison of the filtered data sets. The qualitative assessment of filters is summarized in Tables 4.1 and 4.2. The main problems faced by the filter algorithms are in the reliable filtering of complex scenes, filtering of buildings on slopes, filtering of disconnected terrain (courtyards), and discontinuity preservation.

Table 4.1 Meaning of Good, Fair and Poor (used in Table 4.2)

Rating  Item filter rating                           Influence rating
Good    Item filtered most of the time (> 90%)       No influence
Fair    Item not filtered a few times                Small influence on filtering of neighboring points
Poor    Item not filtered most of the time (< 50%)   Large influence on filtering of neighboring points

Table 4.2 Qualitative analysis

                         Elmqvist  Sohn  Roggero  Brovelli  Wack  Axelsson  Sithole  Pfeifer
Outliers
  High points               G       G      G        G        G      G         G        G
  High points influence     G       G      G        G        G      G         G        G
  Low points                G       F      F        G        G      F         F        G
  Low points influence      G       G      G        G        G      P         P        G
Object complexity
  Objects                   G       G      G        G        G      G         G        G
  Large objects             G       G      G        G        G      G         G        G
  Small objects             F       F      G        F        F      G         F        G
  Complex objects           F       F      F        F        F      F         F        F
  Low objects               P       P      G        G        G      F         F        F
  Disconnected terrain      F       F      F        F        F      F         F        F
Detached objects
  Building on slopes        G       F      F        F        F      G         F        G
  Bridges                   F       G/R    G/R      G/R      G/R    G/R       G/R      G/R
  Ramps                     P       P      P        P        P      F         P        P
Vegetation
  Vegetation                G       G      G        G        G      G         G        G
  Vegetation on slopes      G       G      F        F        F      F         F        G
  Low vegetation            G       F      F        F        G      F         F        G
Discontinuity
  Preservation              P       P      P        P        F      F         P        F
  Sharp ridges              P       P      P        P        F      P         P        P

G, Good; F, Fair; P, Poor; R, Removed

4.2 Quantitative assessment

The quantitative assessment was done by generating cross-matrices and visual representations of the cross-matrices (figure 4.1). The cross-matrices were then used to evaluate Type I and Type II errors, and the visual representations were used to determine the relationship of Type I and Type II errors to features in the landscape. Furthermore, the size of the error between the reference and filtered DEMs was computed and analyzed. The purpose of this was to determine the potential influence of the filtering algorithms on the resulting DEM, based on the predominant features in the data sets. It must be stressed that what is presented here covers the difficulties in filtering as observed in the data; in general all the filters worked quite well for most landscapes.

4.2.1 Type I vs Type II

All the filtering algorithms examined make a separation between Object and Bare Earth based on the assumption that certain structures are associated with the former and others with the latter. This assumption, while often valid, does sometimes fail. This failure is caused by the fact that filters are blind to the context of structures in relation to their neighborhoods. Because of this, a trade-off is involved between making Type I errors (rejecting Bare Earth points) and Type II errors (accepting Object points).
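
To make the trade-off measurable, a cross-matrix and the two error rates can be computed as in the following sketch (our own minimal illustration, with our variable names; labels: 0 = bare earth, 1 = object):

```python
import numpy as np

def cross_matrix(ref, filt):
    """Cross-matrix of reference vs. filtered labels (0 = bare earth,
    1 = object). Type I errors reject bare earth points; Type II
    errors accept object points."""
    ref, filt = np.asarray(ref), np.asarray(filt)
    a = int(np.sum((ref == 0) & (filt == 0)))  # correct bare earth
    b = int(np.sum((ref == 0) & (filt == 1)))  # Type I errors
    c = int(np.sum((ref == 1) & (filt == 0)))  # Type II errors
    d = int(np.sum((ref == 1) & (filt == 1)))  # correct object
    return {"matrix": [[a, b], [c, d]],
            "type_I": b / (a + b),             # share of bare earth rejected
            "type_II": c / (c + d),            # share of objects accepted
            "total": (b + c) / (a + b + c + d)}
```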

Typical numerical results, shown here for the Axelsson filter (121 points unused):

                  Filtered
                  BE        Obj       Total
  Ref  BE        21880       602     22482   68.99%
       Obj         588      9515     10103   31.01%
       Total     22468     10117     32585
                68.95%    31.05%
  ratio BE-Obj / Obj-BE: 1.02

[Shaded relief visual of the cross-matrix]

Figure 4.1 Sample data for quantitative comparison and assessment

The output from some participants' filters is gridded or altered in position from the original. Because of this, DEMs were generated from the participants' filtered data and the heights of the points in the reference data were compared against these DEMs. Using a predefined threshold (20 cm) and based on the height comparison, the points in the reference data were labelled as Correct Bare Earth, Type I error, Type II error or Correct Object. Therefore, the Type I and II errors have to be understood in the context of a height comparison of the reference data against the filtered DEMs. The computed errors ranged from 0-64%, 0-19% and 2-58% for Type I, Type II and total errors respectively. This shows that the tested filtering algorithms focus on minimizing Type II errors. It can be seen even more clearly from the graphical comparison that most filters focus on minimizing Type II errors, except the filters by Axelsson and Sohn. In other words, filter parameters are chosen to remove as many object points as possible, even if this is at the expense of removing valid terrain, suggesting that participants consider the cost of Type II errors to be much higher than that of Type I errors.
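
Under our reading of it, the labelling rule just described amounts to the following sketch (the 20 cm threshold is the one quoted above; the function and names are ours):

```python
def label_reference_point(ref_label, z_ref, z_dem, threshold=0.2):
    """Label a reference point by comparing its height with the DEM
    derived from a participant's filtered data (heights in metres).
    ref_label is 'BE' (bare earth) or 'OBJ' (object)."""
    supported = abs(z_ref - z_dem) <= threshold  # point agrees with the filtered DEM
    if ref_label == "BE":
        return "Correct Bare Earth" if supported else "Type I error"
    else:
        return "Type II error" if supported else "Correct Object"
```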

4.2.2 Steep Slopes

The Axelsson filter generated the least total error (total number of points misclassified) on steep slopes. One explanation for this could lie in the Axelsson filter's (or its parameterization's) bias towards Type II errors. In general there are fewer Object points than there are Bare Earth points, and if the bias is towards making Type II errors then the Type II misclassifications will be fewer than the Type I misclassifications. Nonetheless, filtering in steep terrain still remains a problem, especially at reduced resolutions.

4.2.3 Discontinuities

The two slope based filters have the most difficulty with discontinuities in the Bare Earth. This is borne out by the large number of Type I errors. However, when the height difference at discontinuities increases, the performance of the slope based filters remains the same. This is not the case with some of the other filters, where a discontinuity can also influence filtering in the neighborhood of the discontinuity. Another interesting aspect of filtering at discontinuities is where the Type I errors occur. Some filters only cause Type I errors at the top edge of discontinuities, whereas others cause errors at both the top and bottom edges. The potential for the latter case is relatively higher for surface based filters.

4.2.4 Bridges

As already mentioned, structure based filters are blind. Because of this, filters do not make a reasoned distinction between objects that stand clear of the Bare Earth and those that are attached to the Bare Earth (e.g., bridges). From the results it can be seen that the removal of bridges can be complete, partial, or not done at all. All the algorithms, with the exception of Axelsson's, seem to remove bridges consistently. A possible reason for this could be the method of point seeding used in that algorithm.

Another problem with the filtering of bridges relates to the decision made about where a bridge begins and ends. This problem is detected by Type II errors at the beginning and end of bridges (bridges in the test were treated as objects). This error, though, is generally not large. Similar to bridges are ramps. Ramps bear similarity to bridges in that they span gaps in the Bare Earth. However, they differ in that they do not allow movement below them. As such, ramps were treated as Bare Earth in the reference data. All algorithms filtered off the ramps.

4.2.5 Complex scenes

Shown in the scene (figure 4.2) is a plaza surrounded on three sides by a block of buildings. From the plaza it is possible to walk onto the road to the east and also to descend via stairs to the road below to the west. Further complicating matters, there is a sunken arcade in the center of the plaza. Defining what is and what is not Bare Earth in such a scenario is difficult. In this example the plaza and arcade were assumed to be Bare Earth, based on the rationale that it is possible to walk without obstruction from the plaza to the roads on the west and east. However, this assumption is very subjective.

Figure 4.2 Complex scene

4.2.6 Outliers

The number of outliers (both high and low) is relatively small and therefore their contribution to Type I and Type II errors is small. However, their influence on filtering in their neighborhoods can be considerable. The filters by Axelsson and Sithole produce such Type I errors. While single outliers cause problems for a few filters, numerous closely spaced outliers will cause problems for many filters. Moreover, the influence of numerous outliers on their neighborhoods can be significant, depending on the concept base of the filter.

4.2.7 Vegetation on slopes

Most of the filters do well in identifying vegetation on slopes. However, some of this is done at the cost of increased Type I errors in the underlying slope, and in the case of the Elmqvist and Brovelli filters quite significantly so.

4.2.8 Low Bare Earth point count

Because filters depend on detecting structures, especially those that detect Bare Earth, it is essential that there be enough sample Bare Earth points. Most of the filters do well in identifying Bare Earth points despite a low count of Bare Earth points.

4.3 Effect of resolution

As the resolution of the data is lowered, the bare earth and objects begin to lose definition. Therefore, this comparison aims to determine how filters cope when the definition of the bare earth and objects breaks down. To test this, the filtered data at the different resolutions (for sites 1 and 8) were compared with reference data of corresponding resolution. A cross-matrix was generated for each comparison. The results are shown in the charts in figure 4.3. For some of the participants there was no data at some of the resolutions. Overall, Type I and Type II errors increase with decreasing resolution. However, comparing sites 1 and 8 it can be seen that there are variations and exceptions. Four possible reasons are offered.

[Figure 4.3 Type I and Type II errors vs. resolution: four charts ("Site 1: Type I", "Site 1: Type II", "Site 8: Type I", "Site 8: Type II"), each plotting % error against reduction level (1 = Original, 2 = Reduction 1, 3 = Reduction 2) for the filters of Sohn, Axelsson, Pfeifer, Brovelli, Roggero, Wack and Sithole.]

Landscape characteristics - The Type I errors for site 1 are much larger than those for site 8. This is due to (i) more complex objects in site 1, and (ii) buildings and vegetation on steep slopes in site 1.

Filter concept vs. neighborhood size - In section 2.4 four different filter concepts were identified. The choice of neighborhood was also touched on in section 2.2. The combination of these factors is thought to be responsible for the variations in Type I errors. For site 1 both slope based filters (Roggero and Sithole) show decreasing Type I errors with decreasing resolution. As the resolution is decreased there are fewer points against which a point is tested (for a fixed neighborhood), hence in steep slopes a drop in Type I errors is to be expected. Naturally, Type II errors will also increase. For the surface based and block-minimum filters (Pfeifer and Wack) the neighborhood has to be expanded to achieve a minimum sampling of a surface. Because of this the surface fit becomes more general and an increase in Type I errors can be expected.

Filter parameter optimality - Filter parameters have to be tweaked to obtain optimal results at different resolutions. However, it is not always guaranteed that the most optimal parameters will be found at each resolution. The small decreases in Type I or Type II errors are believed to be due to this.

Edge effects - For filters that work on gridded data, artifacts can become pronounced along the edges of the data (or where there are gaps), especially at the lower resolutions. The large increase in Type I errors in site 8 (10m resolution) for the Wack filter is due to this.

5 DISCUSSION

The objectives of the study were to (1) determine the performance of filter algorithms, (2) determine how filtering is affected by point density, and (3) establish future research issues. These objectives are treated individually in the sections below.

5.1 Performance

What has been presented are some of the most striking difficulties in filtering as observed in the data. In general all the filters worked quite well in landscapes of low complexity (characterized by gently sloped terrain, small buildings, sparse vegetation, and a high proportion of bare earth points).

Main problems - The problems that pose the greatest challenges appear to be complex cityscapes (multi-tier buildings, courtyards, stairways, plazas, etc.) and discontinuities in the bare earth. It is expected that tailoring algorithms specifically for these areas may improve results, albeit by a small amount.

Strategy - Overall, surface based strategies appear to yield better results. This noted, it is the opinion of the authors that clustering and segmentation algorithms (or some hybrid based on these concepts) hold more promise.

Which error should be reduced? - A decision always has to be made between minimizing Type I and Type II errors. The question of which error to minimize depends on the cost of the error for the application that will use the filtered data. From a practical point of view it will also depend very much on the time and cost of repairing the errors manually, which is often done during quality control. Experience with manual filtering of the data showed that it is far easier to fix Type II errors than Type I errors. Generally Type II errors are conspicuous. In contrast, Type I errors result in gaps in the landscape, and deciding whether a gap has been caused by a Type I error or by the removal of objects is not easy. There is also a third alternative, and that is to minimize the total error. But reducing the total error is biased in favor of minimizing Type I errors, because very often in a landscape there are relatively more bare earth points than there are object points (for example, with 90% bare earth points, a 5% Type I error rate adds 4.5% to the total error, whereas a 5% Type II error rate adds only 0.5%).

5.2 Point density

More tests on decreasing resolution will need to be done, as the test sites chosen have proved inadequate for obtaining a conclusive picture of the effects of resolution on filtering. The complexity of the sites has meant that even at the highest resolutions the filters have difficulties, which then masks the performance of the filters at lower resolutions. Nonetheless, in choosing the scan resolution the filter concept used becomes critical, especially in landscapes with steep slopes. Additionally, a large Type I error does not necessarily mean the resulting DEM will be poor. Importantly, it depends on where the Type I and Type II errors occur in the landscape.

5.3 Research Issues

It is recognized that full automation is not possible; nonetheless the difficulties observed in complex urban landscapes and in bare earth characterized by discontinuities provide challenges that can potentially be overcome and thus improve the reliability of filtering.

Classification using context knowledge and external information - As already indicated, filtering of complex scenes is difficult, and obtaining significant improvement will require, firstly, algorithms that reason about the classification of points based on the context of their neighborhoods, as opposed to current algorithms that classify solely based on structures (i.e., slopes, surfaces, etc.). This assertion is supported by the fact that in the comparisons surface based filters performed better than point based filters, which examine less neighborhood context. Secondly, it will require the use of additional information such as imagery to support the classification process, because even with more context, reliable results may not be realized, since (a) the semantics of objects in a landscape change with the context (urban, rural, industrial, etc.) of the environment, (b) it is not possible to extract sufficient semantic information from the position of the points alone, (c) where there are insufficient or no bare earth points a classification of objects cannot be made, and (d) point-clouds will contain systematic errors (multi-path, etc.) and noise.

Strategy - Current filtering strategies only make two distinctions between features in a landscape: bare earth or object. But from the results it is evident that this distinction is inadequate. A hierarchical approach to filtering will potentially yield more controlled results. In the hierarchical approach, points that are identified as object are further classified to search out objects (bridges, etc.) that have strong associations with the bare earth.

Quality reporting, error flagging and self-diagnosis - Here, checking filtering quality has been possible because some reference data could be generated. The results have shown that filters are not foolproof and that performance can vary from one type of environment to another. Therefore, while testing a filter against reference data is a good way of gaining an appreciation of the filter's performance, it is not a guarantee that a filter will always perform as expected. If the type of environment being filtered is untested then unpredictable results can and should be expected. Therefore, it would be advantageous if filters could be designed to report on the anticipated quality of the filtering and/or flag where the filter may have encountered difficulties. There is also the matter of the perception of reliability. This particularly relates to filters that output interpolated data. The existence of data after filtering creates the perception that it is accurate (i.e., that it is the bare earth). However, the tests have shown that with interpolated data it is arguable what accurate means, since it depends very much on the threshold used in the comparison.

Effort vs. Result - At a certain point, depending on the concept or implementation used, better results will not be obtained by filtering based on positional information alone. Ascertaining when that limit has been reached is difficult, but it is important to be aware that it exists, especially when large volumes of data are processed. If the limit of each algorithm were known, then a multi-algorithm approach could be used to increase the efficiency of filtering. In this way the most efficient filter (in terms of computing effort and algorithm complexity) could be used for specific regions in a data set. There is also the aspect of parameter selection. For any landscape, filter parameters are chosen with the most difficult situations in mind. For some filters, choosing parameters for such situations translates into more processing time. For such filters it would be more efficient to automatically gauge the landscape characteristics in an area and use the most optimal filter parameters.

6 CONCLUSION

The results from eight algorithms were compared against reference data sets. For typically non-complex landscapes most of the algorithms did well. However, for complex landscapes performance varied, and surface based filters tended to do better.

The effect of lowered resolutions on the performance of filters was also tested. Comparison of the results at lower resolutions confirms that, amongst other factors, the method of filtering also has an impact on the success of filtering and hence on the choice of scanning resolution. However, more tests are required to form a clear impression of which filter characteristics have a significant impact on filtering at lower resolutions.

The filtering of complex urban landscapes still poses the greatest challenges. As has been suggested elsewhere, filtering using segmentation, an understanding of the context of the landscape being filtered, and data fusion might be ways in which this challenge could be overcome.

The full report of the test can be found at the following URL: http://www.geo.tudelft.nl/frs/isprs/filtertest/

7 ACKNOWLEDGEMENTS

This study would not have been possible without the help and co-operation of the participants, who took time from their schedules to filter the twelve data sets. The authors wish to extend their gratitude to P. Axelsson, C. Briese, M. Brovelli, M. Elmqvist, N. Pfeifer, M. Roggero, G. Sohn, R. Wack and A. Wimmer.

REFERENCES

Axelsson P., 1999: "Processing of laser scanner data - algorithms and applications". ISPRS Journal of Photogrammetry & Remote Sensing, Vol. 54, pp. 138-147.

Axelsson P., 2000: "DEM Generation from Laser Scanner Data Using Adaptive TIN Models". IAPRS, Vol. XXXIII, Part B4/1, pp. 110-117.

Briese C., Pfeifer N., 2001: "Airborne laser scanning and derivation of digital terrain models". Proceedings of the 5th Conference on Optical 3D Measurement Techniques, Vienna, Austria.

Brovelli M.A., Cannata M., Longoni U.M., 2002: "Managing and processing LIDAR data within GRASS". Proceedings of the GRASS Users Conference 2002, Trento, 11-13 September 2002 (in print in Transactions in GIS).

Elmqvist M., 2001: "Ground Estimation of Laser Radar Data using Active Shape Models". OEEPE Workshop on Airborne Laserscanning and Interferometric SAR for Detailed Digital Elevation Models, 1-3 March 2001, paper 5 (8 pages). Royal Institute of Technology, Department of Geodesy and Photogrammetry, Stockholm, Sweden.

Flood M., 2001: "LIDAR activities and research priorities in the commercial sector". IAPRS, Vol. XXXIV, WG IV/3, Annapolis, MD, 22-24 October 2001, pp. 678-684.

Haugerud R.A., Harding D.J., 2001: "Some algorithms for virtual deforestation (VDF) of LIDAR topographic survey data". IAPRS, Vol. XXXIV-3/W4, Annapolis, MD, 22-24 October 2001, pp. 211-218.

Huising E.J., Gomes Pereira L.M., 1998: "Errors and accuracy estimates of laser altimetry data acquired by various laser scanning systems for topographic applications". ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 53, No. 5, pp. 245-261.

OEEPE, 2000: Working Group on Laser Data Acquisition. ISPRS Congress 2000. http://www.geomatics.kth.se/~fotogram/OEEPE/ISPRS_Amsterdam_OEEPE_presentation.pdf

Pfeifer N., Kostli A., Kraus K., 1998: "Interpolation and filtering of laser scanner data - implementation and first results". IAPRS, Vol. XXXII, Part 3/1, Columbus, pp. 153-159.

Roggero M., 2001: "Airborne Laser Scanning: Clustering in raw data". IAPRS, Vol. XXXIV-3/W4, Annapolis, MD, 22-24 October 2001, pp. 227-232.

Sithole G., 2001: "Filtering of laser altimetry data using a slope adaptive filter". IAPRS, Vol. XXXIV-3/W4, Annapolis, MD, 22-24 October 2001, pp. 203-210.

Sohn G., Dowman I., 2002: "Terrain Surface Reconstruction by the Use of Tetrahedron Model With the MDL Criterion". IAPRS, Vol. XXXIV, Part 3A, ISPRS Commission III Symposium, 9-13 September 2002, Graz, Austria, pp. 336-344.

Tao C.V., Hu Y., 2001: "A review of post-processing algorithms for airborne LIDAR data". Proceedings of the ASPRS Conference, 23-27 April 2001, St. Louis, Missouri. CD-ROM, 14 pages.

Vosselman G., 2000: "Slope based filtering of laser altimetry data". IAPRS, Vol. XXXIII, Part B3, Amsterdam, The Netherlands, pp. 935-942.

Wack R., Wimmer A., 2002: "Digital Terrain Models From Airborne Laser Scanner Data - A Grid Based Approach". IAPRS, Vol. XXXIV, Part 3B, ISPRS Commission III Symposium, 9-13 September 2002, Graz, Austria, pp. 293-296.
