Automatic Lane Marking Extraction From Point Cloud Into Polygon Map Layer
To cite this article: David Prochazka, Jana Prochazkova & Jaromir Landa (2018): Automatic lane
marking extraction from point cloud into polygon map layer, European Journal of Remote Sensing,
DOI: 10.1080/22797254.2018.1535837
Automatic lane marking extraction from point cloud into polygon map layer
David Prochazka^a, Jana Prochazkova^b and Jaromir Landa^a

^a Department of Informatics, Mendel University in Brno, Brno, Czech Republic; ^b Institute of Mathematics, Faculty of Mechanical Engineering, Brno University of Technology, Brno, Czech Republic
CONTACT Jana Prochazkova [email protected] Brno University of Technology, Technicka 2, 616 69 Brno, Czech Republic
1 http://ec.europa.eu/transport/road_safety/index_en.htm.
© 2018 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial License (http://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
successive application of the alpha shape method and spanning tree. In the last step, we present a novel geometric method that is able to reconstruct an arbitrary shape of the lane marking in vector form. It can process the input k-ary spanning tree, where k = 1, 2, 3, 4.

In this paper, we cover the issue of automatic road line detection, identification of its kind, reconstruction of its correct shape and export to the ESRI ShapeFile map layer.

Literature overview

Our method, as well as the methods used in similar projects, is based on the processing of point clouds provided by aerial or ground vehicles equipped with LiDAR. Airborne laser scanning is frequently used for large areas. The airborne laser scanning data are used for digital elevation models (Sithole & Vosselman, 2005; Susaki, 2012) and applications such as monitoring of atmospheric aerosols in ecology (Badarinath, Kharol, & Sharma, 2009) and structural mapping in geology (Grebby, Cunningham, Naden, & Tansey, 2012). On the other hand, mobile laser scanning (MLS) is used for urban area mapping (Graham, 2010) because it usually provides better results due to a higher point cloud density.

Within the urban areas, a very common goal is the detection of different street objects (buildings, road surface markings, street lights etc.). These detected objects are then represented as 3D models (Lafarge & Mallet, 2011) or map layers that are used for spatial analyses (Landa & Ondrousek, 2016). The authors of Yang, Dong, et al. (2017) present a robust method for road facilities recognition based on the computation of multiple aggregation levels and the design of a series of contextual features that improve the recognition performance. Our article is focused particularly on the detection of the road surface markings; therefore, we present primarily papers focused on this problem.

The process of detection of common street objects from LiDAR data usually starts with the classification of the terrain points (Chen et al., 2009). The classification of terrain points is used either for the isolation of objects on the ground, or for the elimination of points that are not necessary for the computations. If the target of the detection is an object with a high reflectance property, then the points with high reflectance are isolated (Yang, Fang, Li, & Li, 2012).

Subsequent operations are the road marking position detection and the identification of its kind, for example a full or broken line (Chen et al., 2009; Yang et al., 2012). The last part of the process can be the reconstruction of the road marking shape. However, this part is usually not used in methods similar to the one presented in this paper. The process commonly ends with the identification of the marking type.

Terrain point classification

Terrain point classification methods can be divided into two groups. The first possibility is to detect the terrain points (e.g. the road points) directly; the other is to detect the adjacent road curbs. The second approach has an obvious limitation: it is not suitable for roads that are not surrounded by detectable borders.

A naive direct detection approach takes a given percentage of the lowest points in the point cloud and classifies them as the ground points (Babahajiani et al., 2014). The disadvantage of this method is again clearly visible: only road segments with a negligible elevation difference can be processed. Nonetheless, more complex methods are used in practically oriented projects. In the article by Belton and Bae (2010), road points are defined as the lowest horizontal points on a smooth surface. The approach described by Yang and Dong (2013) uses a shape-based segmentation method. The segments of the point cloud are then classified using Support Vector Machines. The algorithm can detect lines or planar and spherical patches; therefore, it can be easily used for ground point detection. Another innovative method presents the application of the Hough transformation on Millimetre Wave Radar data to obtain road edges (K.-Y. Guo, Hoare, Jasteh, Sheng, & Gashinova, 2015).

As mentioned, the other possibility how to detect a road is to extract its road curbs. The road curbs create a boundary of the road because of their higher elevation above the road surface. For example, the method of Yang, Fang, and Li (2013) describes road cross-sections with window operators that filter out non-ground points. The window operators work with three criteria: elevation jump, point density and slope change. Guan et al. (2014) also use the same criteria, but for a pre-processing of the raw point cloud. The cloud is partitioned into a set of horizontal segments (so-called profiles) according to the vehicle trajectory.

Road lane marking detection and identification from point clouds

Traffic sign detection, as well as road surface marking detection, works with the high reflectance intensity (higher retroreflective property) of the special sign paint. A transformation of the point cloud into 2D images is commonly used.

In the article by Chen et al. (2009), the authors filter the point cloud on the basis of the point reflectance. Consequently, they generate a 2D binary image. The value of each pixel is one if it corresponds to a surface marking and zero otherwise. The Hough transform is then applied to detect the lines on the road surface. Finally, they identify the road lanes using a bounding box and RANSAC.2

2 Random Sample Consensus.
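The point-cloud-to-binary-image step used in the works reviewed above can be sketched in a few lines. The cell size, reflectance threshold and tuple layout below are illustrative assumptions, not the cited authors' parameters:

```python
# Sketch: project high-reflectance points onto a 2D binary grid.
# Threshold and cell size are illustrative, not values from the cited papers.

def rasterize_markings(points, reflectance_threshold=200.0, cell_size=0.1):
    """points: iterable of (x, y, z, reflectance) tuples.
    Returns a dict mapping (col, row) grid cells to 1 (marking candidate)."""
    image = {}
    for x, y, _z, refl in points:
        if refl >= reflectance_threshold:          # keep retroreflective paint only
            cell = (int(x // cell_size), int(y // cell_size))
            image[cell] = 1                         # binary image: marking present
    return image

cloud = [
    (1.02, 2.31, 0.0, 250.0),   # lane paint (high reflectance)
    (1.05, 2.33, 0.0, 240.0),   # lane paint, falls into the same 10 cm cell
    (3.40, 1.10, 0.0, 40.0),    # asphalt (low reflectance) -> discarded
]
binary = rasterize_markings(cloud)
```

Line detectors such as the Hough transform then operate on this binary raster rather than on the raw 3D points.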
Another use of 2D images can be seen in Guan et al. (2014). The authors generate 2D georeferenced intensity images using an extended inverse-distance-weighted (IDW) approach. Further, they segment these images into road marking candidates with a point-density-dependent method.

Furthermore, the authors of Yang et al. (2012) use 2D images to detect road markings. They generate a georeferenced image of the point cloud. This image is filtered on the basis of point reflectance and height. The final step is the labelling of the road marking regions according to their shape and arrangement. The method incorporates related semantic knowledge (e.g. shape, pattern) of the road marking. A similar example of road marking detection using 2D images can be found in Thuy and Leon (2010).

Another possibility is to process the LiDAR intensity and range attributes as used in Kumar, Elhinney, Lexis, and McCarthy (2014). The described algorithm generates 2D intensity raster surfaces from the LiDAR data. According to the authors, the algorithm can detect 88% of road marking points. Naturally, the point cloud data can also be combined with common RGB images to detect lane markings. Examples can be found in Huang et al. (2013) and Li, Chen, Li, Shaw and Nuchter (2014).

Road lane marking shape reconstruction

The road lane marking shape (envelope) is frequently required during the lane identification process. One of the most frequently used representations is a common bounding box. However, such a representation has two major limitations: (a) it can be used solely for straight lanes; (b) it can be used only in situations where the lane markings do not touch or cross each other (e.g. not on road intersections). Another frequently used representation is the concave or convex hull (Moreira & Santos, 2007). The hull quality directly depends on the amount of noise and the complexity of the shape. A solution proposed by Schindler, Maier and Janda (2012) uses circular arc splines. Nevertheless, this approach is not suitable for line intersections (see Figure 1).

The road lane marking is firstly represented by a bounding box in Chen et al. (2009). Further, the RANSAC curve fitting algorithm published in Fischler and Bolles (1981) is applied to localize each lane marking accurately. Finally, a point is selected every 10 cm along a fitted curve. The final coordinate of the point is an average of the points within a 10 by 10 cm rectangular box surrounding the given selected point. The authors, however, do not reconstruct the representation to obtain the marking envelope shape; they are focused only on its detection.

The shape representations described above can have insufficient quality. Therefore, in this article we focus on a method that allows us to reliably detect and correctly reconstruct the lane marking shape.

Implementation

This section describes our approach towards lane marking detection, identification and reconstruction. Primarily, we focus on the identification of full and broken road lane markings. A key issue is to find a precise polygon representation of each detected marking and store this representation in a common polygon map layer.

The detection process is an application of our method proposed in Landa et al. (2013). The following phase is the classic segmentation using a standard, well-known method. We propose a new two-step identification phase that consists of the alpha shape and the spanning tree. The reconstruction then presents a novel geometric method how to compute the accurate shape of the lane markings. The entire process is briefly outlined in Figure 2.

Ground point classification and lane marking detection

Road marking detection methods are closely connected with the point reflectance, as described in the previous section. Our method is based on such a common approach where the points with a reflectance higher than a given threshold are chosen. The ground points are then classified in this set of points with high reflectance values. The ground point classification is performed solely on this set of points with high reflectance to minimize the computation needs. The reflectance value depends on the type of the scanning device and it is set experimentally.

For ground point detection, we use the algorithm proposed in Landa et al. (2013). The algorithm is
Figure 1. Examples of possible road lane marking combinations. Left: solid and dashed lane. Right: examples of road lane crossings.
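The RANSAC curve fitting used by Chen et al. (2009) follows the general scheme of Fischler and Bolles (1981). A minimal line-fitting sketch of that scheme (all parameters, point data and names are illustrative):

```python
# Minimal RANSAC line-fitting sketch in the spirit of Fischler & Bolles (1981).
# Iteration count, tolerance and the test data are illustrative assumptions.
import random

def ransac_line(points, iterations=200, tolerance=0.1, seed=1):
    """Fit y = a*x + b to noisy 2D points; return (a, b, inliers)."""
    rng = random.Random(seed)
    best = (0.0, 0.0, [])
    for _ in range(iterations):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                       # this sketch skips vertical sample pairs
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tolerance]
        if len(inliers) > len(best[2]):
            best = (a, b, inliers)
    return best

# Points on a lane line y = 0.5x + 1 plus two outliers.
pts = [(x / 10.0, 0.5 * (x / 10.0) + 1.0) for x in range(20)] + [(0.5, 5.0), (1.2, -3.0)]
a, b, inliers = ransac_line(pts)
```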
Figure 2. Overview of the proposed road lane marking detection, identification and reconstruction process. This process is able to create
a common polygon map layer from a given point cloud.
based on a dynamic bounding box principle. The input point cloud is divided into separate columns and in each column the lowest points are extracted. The extracted ground points are further segmented on the basis of the Euclidean distance. The points satisfying the empirically determined limit of the maximal distance between two points are considered as belonging to a single segment (Figure 3).

The result of the Euclidean-based segmentation also contains a substantial amount of noise segments, for example building parts, curbs etc. The elimination of these false-positive segments is based on multi-threshold criteria. We determined these conditions:

● the minimal number of points in the segment,
● the maximal number of points in the segment,
● the maximal size of the envelope rectangle in the x or y direction,
● the minimal percentage of planar points using the RANSAC algorithm (the usual value is 95%).

The above-mentioned criteria create the result visible in Figure 3.

Lane marking identification

The result of the ground point extraction and segmentation described in the previous section is a segmented point cloud where each segment represents a possible road marking. However, it is important to mention that in spite of the previous elimination of false positive results, it still contains some segments that do not represent road markings (e.g. pavement).

In this part, we describe the identification of a particular line type (full, broken). The identification process consists of four steps:

(1) The 3D point cloud is transformed into a 2D point cloud.
(2) The alpha shapes of the represented objects are computed.
(3) Their spanning trees are determined.
(4) The road lane marking segments are identified.

The first step is the point cloud transformation from a geospatial coordinate system to a local coordinate system. This step simplifies the computations. Subsequently, a concave hull is frequently computed as a shape representation in some works. Nonetheless, such a concave hull is ambiguous (see Figure 4); therefore, we compute the alpha shape, which is unambiguous. The alpha shape computation algorithm is described in Edelsbrunner, Kirkpatrick and Seidel (1983).

The alpha shape is a generalization of the convex hull. The general definition in Edelsbrunner, Kirkpatrick and Seidel (1983) says that the alpha-hull of a set S is the intersection of all closed discs with radius 1/α (α is a sufficiently small but otherwise arbitrary positive real number) that contain all the points of S. In our case, the Delaunay triangulation is used to construct a TIN (Triangulated Irregular Network)
Figure 3. Left: Input point cloud. Right: Result of segmentation based on multi-threshold criteria: segments identified as
possible lane markings (positive) are labelled in black, the other segments (negative) are labelled in red.
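The Euclidean segmentation and the multi-threshold filtering can be sketched as follows; the distance limit, thresholds and helper names are illustrative assumptions, and the RANSAC planarity criterion is omitted for brevity:

```python
# Sketch: naive Euclidean-distance clustering followed by multi-threshold
# filtering of segments. All threshold values are illustrative assumptions.
import math

def euclidean_segments(points, max_dist=0.3):
    """Group 2D points whose mutual distance chain stays below max_dist."""
    segments = []
    remaining = list(points)
    while remaining:
        seed = remaining.pop()
        segment, queue = [seed], [seed]
        while queue:
            cx, cy = queue.pop()
            near = [p for p in remaining if math.hypot(p[0] - cx, p[1] - cy) <= max_dist]
            for p in near:
                remaining.remove(p)
            segment.extend(near)
            queue.extend(near)
        segments.append(segment)
    return segments

def is_marking_candidate(segment, min_pts=3, max_pts=1000, max_extent=5.0):
    """Multi-threshold criteria: point count and envelope-rectangle size."""
    xs = [x for x, _ in segment]
    ys = [y for _, y in segment]
    return (min_pts <= len(segment) <= max_pts
            and max(xs) - min(xs) <= max_extent
            and max(ys) - min(ys) <= max_extent)

pts = [(0.0, 0.0), (0.2, 0.0), (0.4, 0.1),        # compact cluster -> candidate
       (10.0, 10.0), (10.1, 10.0)]                # too few points -> rejected
segs = euclidean_segments(pts)
candidates = [s for s in segs if is_marking_candidate(s)]
```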
Figure 6. Possible variants of spanning trees that can be constructed from the point cloud: k-ary spanning tree (from left to
right: k = 1,2,3,4).
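A spanning tree over a segment's 2D points can be built, for instance, with Prim's algorithm; the paper does not prescribe a particular construction here, so the sketch below is only illustrative. Each tree vertex is then handled according to its multiplicity, which gives the k-ary cases shown in Figure 6:

```python
# Sketch: Prim's minimum spanning tree over a small 2D point set.
# The point data are illustrative; the vertex multiplicities (degrees)
# determine which k-ary reconstruction case applies.
import math

def prim_mst(points):
    """Return MST edges (index pairs) of a small 2D point set."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree:
                    d = math.dist(points[i], points[j])
                    if best is None or d < best[0]:
                        best = (d, i, j)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

# A straight dash: the tree degenerates to a chain, so the endpoints have
# multiplicity 1 and the interior vertices multiplicity 2.
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
edges = prim_mst(pts)
degree = {}
for i, j in edges:
    degree[i] = degree.get(i, 0) + 1
    degree[j] = degree.get(j, 0) + 1
```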
1-Ary tree

Let $H_i = [h_{i1}, h_{i2}]$ be the vertex with multiplicity one (see Figure 10). Then the result points $T_1^i$, $T_2^i$ can be simply expressed as:

$$x = h_{i1} \pm \frac{d}{|u|}\, u_1, \qquad y = h_{i2} \pm \frac{d}{|u|}\, u_2 \tag{1}$$

2-Ary tree

Let $H_i$ be a vertex with multiplicity two and $H_{i-1}$, $H_{i+1}$ the vertices on the adjacent edges (see Figure 11). The computed point $T_1^i$ satisfies:

$$d = |T_1^i Q| = |T_1^i P|$$

and

$$T_1^i P \perp H_{i+1} H_i,\quad T_1^i Q \perp H_{i-1} H_i \;\Rightarrow\; \triangle H_i T_1^i P \cong \triangle H_i T_1^i Q$$

We can derive:

$$|H_i P| = |H_i Q| = \frac{d}{\tan \frac{|\angle H_{i-1} H_i H_{i+1}|}{2}} \tag{2}$$

Figure 8. The spanning tree of the graphs projected on the orthophoto. The green colour represents detected possible lane segments. The yellow colour represents the noise in data.
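Equation (2) can be checked numerically: for a right-angle corner, $\tan(\theta/2) = \tan 45° = 1$, so the foot points lie exactly at distance d from $H_i$. A small sketch with illustrative coordinates:

```python
# Numeric check of Equation (2): the foot points P, Q of a 2-ary vertex lie
# at distance d / tan(theta/2) from H_i along the adjacent edges, where theta
# is the angle H_{i-1} H_i H_{i+1}. The coordinates below are illustrative.
import math

def foot_distance(h_prev, h, h_next, d):
    """Distance |H_i P| = |H_i Q| from Equation (2)."""
    u = (h_prev[0] - h[0], h_prev[1] - h[1])
    v = (h_next[0] - h[0], h_next[1] - h[1])
    cos_t = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
    theta = math.acos(cos_t)
    return d / math.tan(theta / 2.0)

# Right-angle corner: theta = 90 deg, so |H_i P| = d / tan(45 deg) = d.
dist = foot_distance((0.0, 1.0), (0.0, 0.0), (1.0, 0.0), d=0.1)
```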
Figure 9. The process of lane type identification: (a) the first spanning tree is selected and connected to all others, (b) the spanning tree with the lowest angle and distance to the first one is found, (c) the two spanning trees from the previous step are connected and the process continues, (d) all related spanning trees are connected.
Hence, the coordinates of the points P, Q can be expressed using Equation (1). The result point $T_1^i$ can be written as the intersection of the two lines p, q perpendicular to the edges $H_{i-1} H_i$, $H_{i+1} H_i$ in the points $P = [p_1, p_2]$, $Q = [q_1, q_2]$:

$$p:\; n_{P1} x + n_{P2} y - p_1 n_{P1} - p_2 n_{P2} = 0 \tag{3}$$

$$q:\; n_{Q1} x + n_{Q2} y - q_1 n_{Q1} - q_2 n_{Q2} = 0 \tag{4}$$

where $n_P$, $n_Q$ are the normal vectors of the lines p, q. Furthermore, the symmetric point $T_2^i$ can be simply derived by central inversion with the centre $H_i$.

3,4-Ary tree

Suppose that the vertex $H_i$ is connected with the vertices $H_{i+1}$, $H_{i+2}$, $H_{i+3}$ (and $H_{i+4}$ for multiplicity four). First, the order of the edges has to be determined. Without loss of generality, assume that $H_i H_{i+1}$ is the
initial edge. The angle between the edges $H_i H_{i+1}$ and $H_i H_k$ can be computed from

$$\cos(\alpha) = \frac{(u, v)}{|u| \cdot |v|} \tag{5}$$

as

$$\cos(\alpha_k) = \frac{(H_i H_{i+1}, H_i H_k)}{|H_i H_{i+1}|\, |H_i H_k|}, \qquad k = i+2,\; i+3,\; \text{resp. } i+4. \tag{6}$$

Because of the properties of the cosine function, the edges can be ordered according to the size and the sign of the results of Equation (6). Three different situations can be distinguished (see Figure 12).

Thus, the problem is transformed into the evaluation of three (or four) couples of edges as in the case of a 2-ary tree (see Equations (2), (3), (4)). In the final step, the points are ordered and stored in a common file format. The result map layer presented above aerial imagery is shown in Figure 13.

Map layers' creation

Two map layers are created after the shape envelope reconstruction. The first layer (polyline type) contains all identified road lane markings, where each line provides information about its type (full or broken). This polyline file is suitable especially for analytical purposes. It provides information about the type and position of the markings; therefore, it can be used, for example, for road safety analysis. The second created layer contains polygons of all reconstructed lane markings. This polygon file is useful mainly for the mentioned inventory process. For instance, it allows to calculate an approximate amount of paint required for the road marking maintenance and other related tasks. We have chosen the ESRI ShapeFile for polygon and polyline storage; however, any polygon or polyline format can be used. Both files are created using the Shape C Library.3

Results

The evaluation is based on three different accuracy metrics: precision, recall and quality (Boyko & Funkhouser, 2011). The ground truth data for the metrics are obtained by visual control of images taken during the process of point cloud capturing. The metrics are defined as follows.

Precision:

$$p = \frac{TP}{TP + FP}$$

Recall:

$$r = \frac{TP}{TP + FN}$$

Quality:

$$q = \frac{TP}{TP + FN + FP}$$

where TP is the number of true positives, FP is the number of false positives and FN is the number of false negative results.

3 http://shapelib.maptools.org.
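The paper writes both layers as ESRI ShapeFiles through the Shape C Library; since any polyline or polygon format can be used, the sketch below emits the markings as WKT records instead, purely as an illustration (function names and coordinates are assumptions):

```python
# Sketch: encode detected markings as attributed polyline records.
# WKT is used here as a stand-in for the ShapeFile output described in the
# text; names and coordinates are illustrative.

def marking_to_wkt(points, kind):
    """Encode one detected marking as a (type, WKT LINESTRING) record."""
    coords = ", ".join(f"{x:.2f} {y:.2f}" for x, y in points)
    return kind, f"LINESTRING ({coords})"

layer = [
    marking_to_wkt([(0.0, 0.0), (4.5, 0.1)], "full"),
    marking_to_wkt([(5.0, 0.1), (6.5, 0.15)], "broken"),
]
```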
Figure 13. Left: individual lane marking polygons mapped on the orthophoto. Right: identified road lane markings mapped on
the orthophoto. Broken lane markings are labelled in red and full lane markings are labelled in green.
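The three metrics follow directly from the confusion counts. Using the broken-lane-marking counts reported in Table 1 (TP = 46, FP = 0, FN = 2) as an example:

```python
# The precision, recall and quality metrics defined in the Results section,
# evaluated on the broken-lane-marking counts from Table 1.

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def quality(tp, fp, fn):
    return tp / (tp + fn + fp)

p = precision(46, 0)    # no false positives -> 1.0 (the PPV of 100.0% in Table 1)
r = recall(46, 2)       # 46/48, i.e. the TPR of 95.83% in Table 1
q = quality(46, 0, 2)   # equals r here because FP = 0
```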
Source point cloud data

We tested our method on two sets of point clouds. Both sets were obtained through mobile mapping. The first set represents an urban area inside the city centre of Brno. It was obtained using the Riegl VMX 250 Mobile Laser Scanning System.4 The length of the captured road is approximately 0.5 km and it contains approximately 20 mil. points. Weather conditions were good and there were no cars occluding the road lanes during the testing.

The second set represents a country road near Semice. The length of the captured road is approximately 4.3 km and it contains approximately 62 mil. points. The set was captured using the Riegl VMX 450 Mobile Laser Scanning System.5

4 http://www.gb-geodezie.cz/wp-content/uploads/2016/01/DataSheet_Riegl_VMX-250.pdf.
5 http://www.riegl.com/uploads/tx_pxpriegldownloads/DataSheet_VMX-450_2015-03-19.pdf.
Table 1. Confusion matrix for broken lane markings from the first data set; positive predictive value (PPV), false omission rate (FOR), true positive rate (TPR), false positive rate (FPR), accuracy (ACC).

                          Predicted values
Actual values    Total    Positive    Negative
Positive         48       46          2           TPR = 95.83%    FNR = 4.17%
Negative         84       0           84          FPR = 0.0%      TNR = 100.0%
ACC = 98.48%     PPV = 100.0%     FOR = 2.32%
6 http://pointclouds.org.
7 http://opencv.org.
8 http://www.liblas.org.
9 http://shapelib.maptools.org.
Figure 15. (a) The result polygon layer representing road lane markings (first set of point clouds). (b) The result polyline layer representing different types of road lane markings (first set). (c) The result polygon layer representing road lane markings (second set). (d) The result polyline layer representing different types of road lane markings (second set). Broken lane markings are labelled in red, full lane markings are labelled in green. Shape envelopes are labelled in yellow.
Table 2. Confusion matrix for dashed lines from the second data set; positive predictive value (PPV), false omission rate (FOR), true positive rate (TPR), false positive rate (FPR), accuracy (ACC).

                          Predicted values
Actual values    Total    Positive    Negative
Positive         287      264         23          TPR = 92.0%     FNR = 8.0%
Negative         92       19          72          FPR = 20.88%    TNR = 79.12%
ACC = 88.9%      PPV = 93.29%     FOR = 24.21%
The least time-consuming part of the process is the road lane marking envelope reconstruction. In the case of the first set of point clouds, the most time-consuming part is the spanning tree computation. This is due to the higher complexity of the segments computed during the segmentation. On the other hand, in the second set, the most time-consuming part of the process is the identification. The main reason is the high number of simple spanning trees.
Our key advantage is the ability to reconstruct the precise shapes of the road lane markings. The reconstructed shapes can be used for the visualization as well as for analytic purposes (maintenance cost calculation). The results are provided in two map layers based on the ESRI ShapeFile standard. The first layer contains polylines with the kind of the road lane markings, the other contains the precise reconstructed shapes of the markings.

Acknowledgements

The first data set (Brno) used in this article was provided by GEODIS BRNO, s.r.o. The second data set (Central Bohemian Region) was provided by the consortium of companies: CleverMaps, Consultest, AdMaS and Geodrom.

Disclosure statement

No potential conflict of interest was reported by the authors.

ORCID

Jana Prochazkova http://orcid.org/0000-0002-7963-1942

References

Arias, P., Riveiro, B., Soilán, M., Díaz-Vilarino, L., & Martínez-Sánchez, J. (2015). Simple approaches to improve the automatic inventory of zebra crossing from MLS data. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XL-3/W3, 103–108. doi:10.5194/isprsarchives-XL-3-W3-103-2015
Babahajiani, P., Fan, L., & Gabbouj, M. (2014). Object recognition in 3D point cloud of urban street scene. In C.V. Jawahar & S. Shan (Eds.), Computer vision - ACCV 2014 workshops (pp. 177–190). Cham: Springer.
Badarinath, K., Kharol, S.K., & Sharma, A.R. (2009). Long-range transport of aerosols from agriculture crop residue burning in indogangetic plains - a study using LIDAR, ground measurements and satellite data. Journal of Atmospheric and Solar-Terrestrial Physics, 71(1), 112–120. doi:10.1016/j.jastp.2008.09.035
Belton, D., & Bae, K.H. (2010, June). Automating post-processing of terrestrial laser scanning point clouds for road feature surveys. In J.P. Mills, D.M. Barber, P.E. Miller, & I. Newton (Eds.), ISPRS 2010 Proceedings of the ISPRS Commission V Mid-Term Symposium on Close Range Image Measurement Techniques (pp. 74–79). Newcastle upon Tyne, UK: ISPRS.
Boyko, A., & Funkhouser, T. (2011). Extracting roads from dense point clouds in large scale urban environment. ISPRS Journal of Photogrammetry and Remote Sensing, 66(Suppl. 6), S2–S12. doi:10.1016/j.isprsjprs.2011.09.009
Chen, X., Stroila, M., Ruisheng, W., Kohlmeyer, B., Narayanan, A., & Jeff, B. (2009, November). Next generation map making: Geo-referenced ground-level LIDAR point cloud for automatic retro-reflective road feature extraction. In Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (pp. 488–491). Seattle, USA: ACM.
COM. (2010). Towards a European road safety area: Policy orientations on road safety 2011–2020 (Report No. 2010/0903). European Commission.
Douglas, D., & Peucker, T. (1973). Algorithm for the reduction of the number of points required to represent a digitized line or its caricature. Cartographica, 10(2), 112–122. doi:10.3138/FM57-6770-U75U-7727
Edelsbrunner, H., Kirkpatrick, D., & Seidel, R. (1983). On the shape of a set of points in the plane. IEEE Transactions on Information Theory, 29(4), 551–559. doi:10.1109/TIT.1983.1056714
Elhinney, C.M., Kumar, P., Cahalane, C., & McCarthy, T. (2010). Initial results from European road safety inspection (EuRSI) mobile mapping project. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 38(5), 440–445.
Fischler, M.A., & Bolles, R.C. (1981). Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6), 381–395. doi:10.1145/358669.358692
Gonzalez, R.C., & Woods, R.E. (2001). Digital image processing (2nd ed.). Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc.
Gonzalez-Jorge, H., Puente, I., Riveiro, B., Martinez-Sanchez, J., & Arias, P. (2013). Automatic segmentation of road overpasses and detection of mortar efflorescence using mobile LIDAR data. Optics & Laser Technology, 54, 353–361.
Graham, L. (2010). Mobile mapping systems overview. Photogrammetric Engineering and Remote Sensing, 76(3), 222–228.
Grebby, S., Cunningham, D., Naden, J., & Tansey, K. (2012). Application of airborne LIDAR data and airborne multispectral imagery to structural mapping of the upper section of the Troodos ophiolite, Cyprus. International Journal of Earth Sciences, 101(6), 1645–1660. doi:10.1007/s00531-011-0742-3
Guan, H., Li, J., Yu, Y., Wang, C., Chapman, M., & Bisheng, Y. (2014). Using mobile laser scanning data for automated extraction of road markings. ISPRS Journal of Photogrammetry and Remote Sensing, 87, 93–107. doi:10.1016/j.isprsjprs.2013.11.005
Guo, K.-Y., Hoare, E., Jasteh, D., Sheng, X.-Q., & Gashinova, M. (2015). Road edge recognition using the stripe Hough transform from millimetre-wave radar images. IEEE Transactions on Intelligent Transportation Systems, 16(2), 825–833.
Guo, Z., & Hall, R.W. (1989). Parallel thinning with two-subiteration algorithms. Communications of the ACM, 32(3), 359–373. doi:10.1145/62065.62074
Huang, H., Wu, S., Cohen-Or, D., Gong, M., Zhang, H., Li, G., & Chen, B. (2013). L1-medial skeleton of point cloud. ACM Transactions on Graphics, 32(4), 65:1–65:8. doi:10.1145/2461912.2461913
Kumar, P., Elhinney, C.M., Lexis, P., & McCarthy, T. (2014). Automated road markings extraction from mobile laser scanning data. International Journal of Applied Earth Observation and Geoinformation, 32, 125–137. doi:10.1016/j.jag.2014.03.023
Lafarge, F., & Mallet, C. (2011). Building large urban environments from unstructured point data. In 2011 IEEE International Conference on Computer Vision (pp. 1068–1075). Barcelona, Spain: IEEE.
Landa, J., & Ondrousek, V. (2016). Detection of pole-like objects from LIDAR data. In 19th International Conference Enterprise and Competitive Environment 2016 (pp. 226–235). Brno, Czech Republic: Mendel University.
Landa, J., Prochazka, D., & Stastny, J. (2013). Point cloud processing for smart systems. Acta Univ. Agric. Silvic. Mendel. Brun., 61(7), 2415–2421. doi:10.11118/actaun201361072415
Li, Q., Chen, L., Li, M., Shaw, S.-L., & Nuchter, A. (2014). A sensor-fusion drivable-region and lane-detection system for autonomous vehicle navigation in challenging road scenarios. IEEE Transactions on Vehicular Technology, 63(2), 540–555.
Moreira, A., & Santos, M. (2007, March). Concave hull: A k-nearest neighbours approach for the computation of the region occupied by a set of points. In GRAPP 2007: Proceedings of the International Conference on Computer Graphics Theory and Applications (pp. 61–68). Barcelona, Spain: INSTICC.
Pu, S., Rutzinger, M., Vosselman, G., & Elberink, S.O. (2011). Recognizing basic structures from mobile laser scanning data for road inventory studies. Journal of Photogrammetry and Remote Sensing, 66(6), S28–S39. doi:10.1016/j.isprsjprs.2011.08.006
Ramer, U. (1972). An iterative procedure for the polygonal approximation of plane curves. Computer Graphics and Image Processing, 1(3), 244–256. doi:10.1016/S0146-664X(72)80017-0
Schindler, A., Maier, G., & Janda, F. (2012, June). Generation of high precision digital maps using circular arc splines. In 2012 IEEE Intelligent Vehicles Symposium (IV) (pp. 246–251). Madrid, Spain: IEEE.
Sithole, G., & Vosselman, G. (2005). Filtering of airborne laser scanner data based on segmented point clouds. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 36(3), 66–71.
Susaki, J. (2012). Adaptive slope filtering of airborne LIDAR data in urban areas for digital terrain model (DTM) generation. Remote Sensing, 4(6), 1804–1819. doi:10.3390/rs4061804
Thuy, M., & Leon, F.P. (2010). Lane detection and tracking based on LIDAR data. Metrology and Measurement Systems, 17(3), 311–321. doi:10.2478/v10178-010-0027-3
Wu, S.T., & Marquez, M.R.G. (2003, October). A non-self-intersection Douglas-Peucker algorithm. In XVI Brazilian Symposium on Computer Graphics and Image Processing, SIBGRAPI 2003 (pp. 60–63). Sao Carlos, Brazil: IEEE.
Yang, B., & Dong, Z. (2013). A shape-based segmentation method for mobile laser scanning point clouds. ISPRS Journal of Photogrammetry and Remote Sensing, 81, 19–30.
Yang, B., Dong, Z., Liu, Y., Liang, F., & Wang, Y. (2017). Computing multiple aggregation levels and contextual features for road facilities recognition using mobile laser scanning data. ISPRS Journal of Photogrammetry and Remote Sensing, 126, 180–194. doi:10.1016/j.isprsjprs.2017.02.014
Yang, B., Fang, L., & Li, J. (2013). Semi-automated extraction and delineation of 3D roads of street scene from mobile laser scanning point clouds. Journal of Photogrammetry and Remote Sensing, 79(4), 80–93. doi:10.1016/j.isprsjprs.2013.01.016
Yang, B., Fang, L., Li, Q., & Li, J. (2012). Automated extraction of road markings from mobile LIDAR point clouds. Photogrammetric Engineering and Remote Sensing, 78(4), 331–338. doi:10.14358/PERS.78.4.331
Yang, B., Liu, Y., Dong, Z., Liang, F., Li, B., & Peng, X. (2017). 3D local feature BKD to extract road information from mobile laser scanning point clouds. ISPRS Journal of Photogrammetry and Remote Sensing, 130, 329–343. doi:10.1016/j.isprsjprs.2017.06.007