Fast Visibility Analysis in 3D Mass Modeling Environments and Approximated Visibility Analysis Concept Using Point Clouds Data
Manuscript received: 24 Apr. 2012; revised: 13 May 2012; accepted: 22 Jan. 2013; published: 15 Mar. 2013.

Keywords: 3D Visibility, Urban Environment, Spatial Analysis.
Abstract: This paper presents a unique solution to the visibility problem in 3D urban environments generated by procedural modeling. We introduce a visibility algorithm for a 3D urban environment consisting of mass modeling shapes. Mass modeling consists of a basic shape vocabulary with a box as the basic structure. Using boxes as simple mass model shapes, one can generate basic building blocks such as L, H, U and T shapes, creating a complex urban environment model for computing visible parts. Visibility analysis is based on an analytic solution for basic building structures such as a single box. The algorithm quickly generates the boundary of the visible surfaces of a single building and, consequently, its visible pyramid volume. Using simple geometric operations of projections and intersections between these visible pyramid volumes, hidden surfaces between buildings are rapidly computed. A real urban environment from Boston, MA, approximated with the 3D basic shape vocabulary model, demonstrates our approach.
This paper also includes a unique concept for automatic approximated visibility analysis from point cloud data using RANSAC and Kalman Filter methods. We extend the analytic visibility solution to cylinder and sphere objects, and present automatic detection and object prediction with visibility analysis from a point cloud data set.
1. Introduction
Over recent years, the visibility problem, i.e. identifying the parts of objects in the environment that are visible from a single viewpoint or multiple viewpoints, has become an interesting and challenging problem for 3D dense environments in a vast range of research fields and applications such as robotics, computer vision, graphics and GIS. This problem is directly connected to spatial analysis in the field of 3D models.
Most previous works approximate the visible parts to
find a fast solution in open terrains, and do not challenge or
suggest solutions for a dense urban environment. The exact
visibility methods are highly complex, and cannot be used
for fast applications due to the long computation time. Other
fast algorithms are based on the conservative Potentially
Visible Set (PVS) [2]. These methods are not always
completely accurate, as they may include hidden objects'
Oren Gal, Mapping and Geo-Information Engineering, Technion, Israel
Institute of Technology ([email protected])
Yerach Doytsher, Mapping and Geo-Information Engineering, Technion,
Israel Institute of Technology ([email protected])
parts as visible due to various simplifications and heuristics.
In this paper, we extend previous work [4] and introduce a fast and exact solution to the 3D visibility problem in complex urban environments generated by mass modeling shapes and a procedural modeling method. Our solution does not suffer from approximations, and can be carried out in near real time. We consider a 3D urban environment, which can be generated by grammar rules. The basic entities are basic mass modeling vocabulary, such as L, H and T profile shapes, that can be separated into simple boxes. Based on our previous work, we analyze the spatial relations for each profile and compute the visible and the hidden parts. Each box is a basic building modeled as a 3D cubic parameterization, which enables us to implement an analytic solution to the visibility problem without the expensive computational process of scanning all the objects' points.
The algorithm is demonstrated on a collection of basic mass modeling shapes of an urban environment, where each shape can be sub-divided into a number of boxes. Using an extension of our analytic solution for the visibility problem of a single box from a viewpoint, an efficient solution for a complex environment is demonstrated. We also compare the computation time of the presented method with that of the traditional "Line of Sight" (LOS) method.
In the final section of this paper, we present a unique concept for predicted and approximated visibility analysis at the vehicle's next attainable state, one time step ahead, based on local point cloud data, which is a partial data set. Our concept is based on a Kalman Filter for prediction and the RANSAC method for automatic object detection from point clouds. We present an extended analytic visibility boundary solution for cylinder and sphere objects.
2. Related Work and Background
Shape grammars, which are an inherent part of the procedural modeling method, have been used for several applications over recent years. The first and original formulation of shape grammar dealt with the arrangement and location of points and labeled lines. Therefore, this method was used in architecture, for the construction and analysis of architectural design [1].
Modeling a 3D urban environment can be done by dividing and simplifying the environment using a set of grammar rules consisting of a basic shape vocabulary of mass modeling [7]. In this way, one can simply create and analyze complex 3D urban environments using a computer implementation.
Automatic generation or modeling of complex 3D environments, such as the urban case, can be a very complicated task when fast computation is required. In our case, visibility computation in 3D environments is a very complicated task which can hardly be done in a very short time using traditional well-known visibility methods, due to the complexity of the environment, modeled with or without a procedural modeling method.
Over the years, visibility analyses have focused on open environments modeled with a Digital Elevation Model (DEM). Due to the computational effort of LOS computation, approximate visibility approaches which interpolate visibility values between points were introduced by [3]. DEM models have been extensively researched over the years with respect to visibility analysis [3].
In computer graphics, an object can be modeled from different views by managing the data in a graph, such as the aspect graph. Shadow techniques are also used for visibility boundary computation. All of these works are not applicable to a large scene, due to computational complexity.
The paper is structured as follows: We first state, in
section 3, the visibility problem in a 3D urban environment.
In section 4, we introduce an analytic solution for a
visibility problem and hidden surfaces between buildings
modeled as boxes. Visibility analysis of a basic shape
vocabulary based on analytic solution is introduced in
section 4.C. Next, selected examples are given in section 5.
A unique concept for predicted and approximated visibility analysis from point cloud data is presented in section 6, and conclusions are given in section 7.
A. Procedural Modeling
Procedural modeling consists of production rules that
iteratively create more and more details. In the context of
urban environments, grammar rules first generate crude
volumetric models of buildings, named as mass modeling,
which will be introduced in the next sub-section. Iterative
rules can also be applied to facade windows and doors.
Modeling processes of the environment also specify the
hierarchical structure.
Fig. 1 Generating Urban Environment Using CGA Shape Based on Mass
Modeling (source: [26])
Shape grammar, which is also called Computer
Generated Architecture (CGA) shape, produces buildings'
shells in urban environments with high geometric details. A
basic set of grammar rules was introduced by Wonka et
al. [7].
Procedural modeling enables us to quickly create different three-dimensional urban models using a combination of random numbers and stochastic rule selection with different heights and widths. An example model using these four rules is depicted in Figure 1.
B. Mass Modeling
Modeling urban environments can be a very complicated task. The simplest constructions use boxes as a basic structure. By using boxes as simple mass models, one can generate basic building blocks such as L, H, U and T shapes, as demonstrated in Figure 2.
An extended mass modeling of roofs and facades for building models was introduced by [5]. In this paper we introduce visibility analysis of the basic shape vocabulary of mass modeling using a box as the basic structure, described as visibility computation of a basic shape vocabulary.
Fig. 2 Basic Shape Vocabulary for Mass Modeling (source: [27])
3. Problem Statement

We consider the basic visibility problem in a 3D urban environment, consisting of 3D buildings modeled as a 3D cubic parameterization $\bigcup_{i=1}^{N} C_i(x,y,z)\big|_{h_{min}}^{h_{max}}$, and a viewpoint $V(x_0,y_0,z_0)$.

Given:
- A viewpoint $V(x_0,y_0,z_0)$ in 3D coordinates
- Parameterizations of N objects $\bigcup_{i=1}^{N} C_i(x,y,z)\big|_{h_{min}}^{h_{max}}$ describing a 3D urban environment model.

Computes:
- The set of all points in $\bigcup_{i=1}^{N} C_i(x,y,z)\big|_{h_{min}}^{h_{max}}$ that are visible from $V(x_0,y_0,z_0)$.

This problem is ostensibly solved by conventional geometric methods (namely, the LOS approach), but it demands a long computation time. We introduce a fast and efficient computational solution for a schematic structure of an urban environment that demonstrates our method.
International Journal of Advanced Computer Science, Vol. 3, No. 4, Pp. 154-163, Apr., 2013.
International Journal Publishers Group (IJPG)
4. Visibility Computation of a
Basic Shape Vocabulary
A. Visibility Computation of a Simple Box
In this section, we briefly introduce the visibility solution from a single point to a single 3D box object (an extended analysis can be found in [4]). This solution is based on an analytic expression, which significantly improves computation time by generating the visibility boundary of the object without the need to scan the entire object's points.
We define the visibility problem in a 3D environment for more complex objects as:

$C'_{z=const}(x,y) \times (C_{z=const}(x,y) - V(x_0,y_0,z_0)) = 0$   (Equ. 1)

where the 3D model parameterization is $C_{z=const}(x,y)$, and the viewpoint is given as $V(x_0,y_0,z_0)$. Solutions to equation (1) generate a visibility boundary from the viewpoint to an object, based on basic relations between the viewing directions from V to $C_{z=const}(x,y)$ using cross-product characters.
A three-dimensional urban environment consists mainly
of rectangular buildings, which can hardly be modeled as
continuous curves. Moreover, an analytic solution for a
single 3D model becomes more complicated due to the
higher dimension of the problem, and is not always possible.
Object parameterization is therefore a critical issue,
allowing us to find an analytic solution and, using that, to
rapidly generate the visibility boundary.
1) 3D Building Model: A three-dimensional building model should be, on the one hand, simple, enabling an analytic solution, and, on the other hand, as accurate as possible. We introduce a model that can be used for an analytic solution of the current problem. The basic building model can be described as:
$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} t \\ t^n \\ c \end{pmatrix}, \qquad -1 \le t \le 1, \quad n = 350, \quad c = c + 1$   (Equ. 2)

This mathematical model approximates building corners as continuous curves, thus replacing the standard model where corners are singular points from a mathematical viewpoint. An extension of this model, specifying equation order vs. approximation, can be found in [4].
2) Analytic Solution for a Single Building: In this part we demonstrate the analytic solution for a single 3D building model. As mentioned above, we should integrate the building model parameterization into the visibility statement. Integrating eq. (1) and (2) yields:
$C'_{z=const}(x,y) \times (C_{z=const}(x,y) - V(x_0,y_0,z_0)) = 0$

$(x^n - V_{y_0}) - n x^{n-1} (x - V_{x_0}) - 1 = 0$
$(x^n - V_{y_0}) - n x^{n-1} (x - V_{x_0}) + 1 = 0$
$n = 350, \quad -1 \le x \le 1$   (Equ. 3)
where the visibility boundary is the solution of these coupled equations. The visibility statement leads to two polynomial equations of order N, which appears to be a complex computational task. The real roots of these polynomial equations are the solution to the visibility boundary. This solution allows us to easily define the Visible Boundary Points.
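The real roots on [-1, 1] can be located numerically. As an illustration (not the paper's implementation), a generic sign-change scan with bisection works for any boundary equation supplied as a callable f(x); the dense sampling guards against the steep gradients of the n = 350 terms near x = +-1:

```python
def real_roots(f, lo=-1.0, hi=1.0, samples=2000, tol=1e-10):
    """Locate real roots of f on [lo, hi] by a sign-change scan
    followed by bisection on each bracketing interval."""
    xs = [lo + (hi - lo) * i / samples for i in range(samples + 1)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        fa, fb = f(a), f(b)
        if fa == 0.0:          # root exactly on a grid point
            roots.append(a)
            continue
        if fa * fb < 0.0:      # sign change: bisect the bracket
            while b - a > tol:
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    if f(hi) == 0.0:
        roots.append(hi)
    return roots
```

Each recovered root is a candidate abscissa of a boundary point; substituting it back into the building parameterization yields the boundary coordinates.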
Visible Boundary Points (VBP) - we define the VBP of object i as the set of boundary points $j = 1..N_{bound}$ of the visible surfaces of the object, from viewpoint $V(x_0,y_0,z_0)$:

$VBP_i^{j=1..N_{bound}}(x_0,y_0,z_0) = \begin{bmatrix} x_1, y_1, z_1 \\ x_2, y_2, z_2 \\ .. \\ x_{N_{bound}}, y_{N_{bound}}, z_{N_{bound}} \end{bmatrix}$   (Equ. 4)
Roof Visibility - The analytic solution in equation (3) does not treat the roof visibility of a building. We simply check whether the viewpoint height $V_{z_0}$ is lower or higher than the building height $h_{C_i}^{max}$, and use this to decide if the roof is visible or not:

$V_{z_0} \ge h_{C_i}^{max}$   (Equ. 5)

If the roof is visible, the roof surface boundary points are added to the VBP. Roof visibility is an integral part of VBP computation for each building.
Simple cases using the analytic solution from a visibility point to a building can be seen in [4]. The visibility point is marked in black, the visible parts are colored in red, and the invisible parts in blue. The visible volumes are computed immediately with very low computational effort, without scanning all the model's points, as is necessary in LOS-based methods for such a case, as can be seen in Figure 3.
Fig. 3 Visibility Volume computed with the Analytic Solution. Viewpoint is marked in black, visible parts colored in green, and invisible parts colored in purple. VBP marked with yellow circles.
B. Visibility Computation in Urban Environments
In the previous sections, we treated the single building case, without considering hidden surfaces between buildings, i.e. building surfaces occluded by other buildings, which directly affect the visibility volume solution. In this section, we briefly introduce our concept for dealing with these spatial relations between buildings, based on our ability to rapidly compute the visibility volume of a single building from a VBP set.
Hidden surfaces between buildings are simply computed
based on intersections of the visible volumes for each object.
The visible volumes are easily defined using VBP, and are
defined, in our case, as Visible Pyramids (VP). The invisible
components of the far building are computed by intersecting
the projection of the closer buildings' VP base to the far
building's VP base, as described in 4.2.2.
1) The Visible Pyramid (VP): we define $VP_i^{j=1..N_{surf}}(x_0,y_0,z_0)$ of object i as the 3D pyramid generated by connecting the VBP of a specific surface j to the viewpoint $V(x_0,y_0,z_0)$.

The maximum number of surfaces $N_{surf}$ for a single object (a single box) is three. The VP boundary, colored with green arrows, can be seen in Figure 4.
Fig. 4 A Visible Pyramid from a viewpoint (marked as a black point) to
VBP of a specific surface
The intersection of VPs allows us to efficiently
compute the hidden surfaces in urban environments, as can
be seen in the next sub-section.
2) Hidden Surfaces between Buildings: As mentioned earlier, invisible parts of the far buildings are computed by intersecting the projection of the closer buildings' VP with the far buildings' VP base. For simplicity, we demonstrate the method with two buildings seen from a viewpoint $V(x_0,y_0,z_0)$, one of which (denoted as the first one) hides, fully or partially, the other (the second one).

As can be seen in Figure 5, in this case we first compute the VBP of each building separately, $VBP_1^{1..4}$, $VBP_2^{1..4}$; based on these VBPs, we generate a VP for each building, $VP_1^1$, $VP_2^1$. After that, we project the $VP_1^1$ base onto the $VP_2^1$ base plane, if it exists. At this point, we intersect the projected surface with the $VP_2^1$ base plane and update $VBP_2^{1..4}$ and $VP_2^1$ (decreasing the intersected part). The intersected part is the invisible part of the second building from viewpoint $V(x_0,y_0,z_0)$, hidden by the first building, which is marked in white in Figure 5 (c).
Fig. 5 Generating VP - (a) $VP_1^1$ boundary colored in red arrows; (b) $VP_2^1$ boundary colored in blue lines; (c) the two buildings - $VP_1^1$ in red and $VP_2^1$ in blue, from the viewpoint.
We have demonstrated a simple case of an occluded
building. A general algorithm for a more complex scenario,
which contains the same actions between all the
combinations of VP between the objects, is detailed in [4].
Visibility Algorithm pseudo code and complexity analysis
are also detailed in [4].
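The projection-and-intersection step between VP bases can be sketched with standard convex polygon clipping, assuming both bases have already been projected onto a common 2D plane; Sutherland-Hodgman clipping is our stand-in here, not the routine of [4]:

```python
def clip_polygon(subject, clip):
    """Sutherland-Hodgman clipping: intersect a subject polygon with a
    convex clip polygon (both given as counter-clockwise (x, y) lists)."""
    def inside(p, a, b):
        # p is left of (or on) the directed edge a->b of the CCW clip polygon
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0.0

    def intersect(p, q, a, b):
        # intersection of segment p-q with the infinite line through a-b
        den = (p[0] - q[0]) * (a[1] - b[1]) - (p[1] - q[1]) * (a[0] - b[0])
        t = ((p[0] - a[0]) * (a[1] - b[1]) - (p[1] - a[1]) * (a[0] - b[0])) / den
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        inp, output = output, []
        if not inp:
            break
        s = inp[-1]
        for e in inp:
            if inside(e, a, b):
                if not inside(s, a, b):
                    output.append(intersect(s, e, a, b))
                output.append(e)
            elif inside(s, a, b):
                output.append(intersect(s, e, a, b))
            s = e
    return output
```

The clipped region corresponds to the projected hidden part of the far building's VP base; subtracting it is what updates that building's VBP.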
C. Visibility Analysis for Basic Shape Vocabulary
In this section we present an analysis of the visibility aspects of a basic shape vocabulary as part of the mass modeling of urban environments. Mass modeling shapes consist of boxes as a basic structure, in different configurations such as L, T, U and H. Based on the visibility analysis for a single box and the hidden surface removal between overlapping boxes introduced above, we demonstrate an accurate and fast visibility solution for mass modeling building profiles.
1) L Shape Visibility:
We demonstrate visibility analysis for an L shape, which can be split into two different boxes. The profile shape consists of boxes whose visible surfaces overlap, for some viewpoint locations.

Let the L shape be separated into two boxes A and B, with visible parts colored in green, invisible parts colored in purple, and the viewpoint V marked by a black dot, as can be seen in Figure 6 (a). As described in sub-section 4.A, we compute the VBP of each box, $VBP_A$, $VBP_B$. In the next phase, a visible pyramid is computed for each box, $VP_A^1$, $VP_B^1$. The projection of $VP_A^1$ onto the $VP_B^1$ base plane and the intersection between the pyramids can be seen in Figure 6 (c); in that case, the intersection is identical to the $VP_B^1$ base, which is hidden. A similar case, with the viewpoint located at another point relative to the L shape, can be seen in Figure 7.
Fig. 6 L Shape Visibility Analysis. (a) L Shape, consisting of two boxes A and B, and Viewpoint. (b) Box A and Viewpoint. (c) Box B and Viewpoint, with Hidden Surface Removal colored in black.
Fig. 7 L Shape Visibility Analysis. (a) L Shape, consisting of two boxes A and B, and Viewpoint. (b) Box A and Viewpoint. (c) Box B and Viewpoint, with Hidden Surface Removal colored in black.
The analysis of the next shapes has been done in the
same way. In the next sub-sections, we introduce the results
of these analyses.
2) T Shape Visibility:
In this case, we demonstrate visibility analysis for a T shape, which can be split into two separate boxes (similar to the L shape case), A (Figure 8(a)) and B (Figure 8(b)); the visible parts are colored in green and the invisible parts in purple, and the viewpoint V is marked by a black dot.
Fig. 8 T Shape Visibility Analysis. (a) T Shape, consisting of two boxes A and B, and Viewpoint. (b) Box A and Viewpoint. (c) Box B and Viewpoint. (d) Hidden Surface Removal colored in black and visible surface colored in green.
3) U Shape Visibility:
Fig. 9 U Shape Visibility Analysis - (a) Box A and Viewpoint with visible
part colored in green; (b) Box B and Viewpoint with visible part colored in
green; (c) Box C and Viewpoint with visible part colored in green; (d) (e)
Hidden Surface Removal colored in black and visible surface colored in
green; (f) U Shape with the visible and invisible parts.
4) H Shape Visibility:
Fig. 10 H Shape Visibility Analysis - (a) Box A and Viewpoint with visible part colored in green; (b) Box B with visible part colored in green; (c) Box C with visible part colored in green and Hidden Surface Removal colored in black; (d)-(e) H Shape with the visible and invisible parts.
5. Results
We have implemented the presented algorithm and tested it on urban environments modeled with mass modeling of a built-up environment consisting of the basic shape vocabulary. First, we analyzed the versatility of our algorithm on a synthetic test scene with different occluded elements (Figure 11), and then on real data - Huntington Ave, Boston, MA, USA (Figure 12). We used a 1.8GHz Intel Core CPU with Matlab. After that, we compared our algorithm to the basic LOS visibility computation, to prove accuracy and computational efficiency.
A. Test Scene:
Fig. 11 Scene number 1: Nine basic shape structures of buildings in an
Urban Environment, V(x,y,z)= (3,-5,2) - (a) Topside view; (b)-(d) Different
views demonstrating the visibility computation using our algorithm. CPU
time was 0.25 sec.
B. Computation Time and Comparison to LOS
The main contribution of this research is a fast and accurate visibility computation in urban environments consisting of a basic shape vocabulary. We compare our algorithm's computation time with that of the common LOS visibility computation, demonstrating the algorithm's computational efficiency.
1) Visibility Computation Using LOS:
The common LOS visibility methods require scanning all of the objects' points. For each point, we check whether there is a line connecting the viewpoint to that point which does not cross other objects. We used the LOS2 Matlab function, which computes the mutual visibility between two points on a DEM model. We converted our first test scene with one to ten structures to a DEM, ran the LOS2 function, and measured CPU time after model conversion. Each mass modeling shape in the DEM was modeled homogeneously by 50 points. The visible parts found using the LOS method were exactly the parts computed by our algorithm. As expected, the computation time of the LOS method was about 6600 times longer than that of our algorithm using the analytic solution (1650 sec vs. 0.25 sec) for scene no. 1, and about 7530 times longer for scene no. 2 (2184 sec vs. 0.29 sec).
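For reference, the brute-force LOS scan described above can be sketched as follows; this is our own illustration for axis-aligned box obstacles (a standard slab-test ray check), not the Matlab LOS2 routine:

```python
import numpy as np

def ray_hits_box(origin, target, box_min, box_max):
    """Slab test: does the segment origin->target pass through the
    axis-aligned box [box_min, box_max]?"""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(target, dtype=float) - o
    t0, t1 = 0.0, 1.0
    for i in range(3):
        if abs(d[i]) < 1e-12:                 # segment parallel to this slab
            if o[i] < box_min[i] or o[i] > box_max[i]:
                return False
            continue
        ta = (box_min[i] - o[i]) / d[i]
        tb = (box_max[i] - o[i]) / d[i]
        ta, tb = min(ta, tb), max(ta, tb)
        t0, t1 = max(t0, ta), min(t1, tb)
        if t0 > t1:
            return False
    return t0 < 1.0 and t1 > 0.0

def visible_points(viewpoint, samples, boxes, eps=1e-6):
    """Brute-force LOS: a sample point is visible when the sight line
    from the viewpoint reaches it without entering any occluding box."""
    v = np.asarray(viewpoint, dtype=float)
    out = []
    for p in samples:
        target = p + eps * (v - p)            # pull slightly off the surface
        if not any(ray_hits_box(v, target, bmin, bmax) for bmin, bmax in boxes):
            out.append(p)
    return out
```

Scanning every sampled point this way is what makes the LOS cost grow with model density, which is exactly the cost the analytic solution avoids.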
Fig. 12 (a) Scene number 2: Huntington Ave, Boston, MA, USA (Google Maps); (b) Topside view; (c)-(d) Different views, V(x,y,z) = (12,-4,5). CPU time was 0.29 sec.
Efficient LOS-based visibility methods for DEM models, such as Xdraw, have been introduced in order to generate approximate solutions [3]. However, the computation time of these methods is at least $O(n(n-1))$ and, above all, the solution is an approximate one. Complexity analysis for simple boxes is detailed in [4]. In the case of mass modeling shapes consisting of several boxes, the complexity analysis remains the same.
6. Conceptual Visibility Analysis
From Point Clouds Data
A. Overview
As mentioned, visibility analysis in complex urban scenes is commonly treated as an approximated feature due to computational complexity. Recently, urban scene modeling has become more and more exact using terrestrial/ground-based LiDAR, generating dense point cloud data used for modeling roads, signs, lamp posts, buildings, trees and cars. Automatic algorithms for detecting and extracting basic shapes have been studied extensively and remain a very active research field [8].
In this part, we present a unique concept for predicted and approximated visibility analysis at the vehicle's next attainable state, one time step ahead, based on local point cloud data, which is a partial data set.
We focus on three basic geometric shapes in urban scenes: planes, cylinders and spheres, which are very common and can be used for the majority of urban entities in modeling scenarios. Based on point cloud data generated from the current vehicle's position at state k-1, we extract these geometric shapes using efficient RANSAC algorithms [12] with a high detection success rate, tested on real point cloud data.

After extracting these basic geometric shapes from local point cloud data, our unified concept, and our main contribution, focuses on the ability to predict and approximate the urban scene model at the next viewpoint $V_k$, i.e. the attainable location of the vehicle at the next time step. Scene prediction is based on the geometric entities and a Kalman Filter (KF), which is commonly used in dynamic systems for target tracking [9,10]. We formulate the geometric shapes as state vectors in a dynamic system and predict the scene structure at the next time step, k.
Based on the predicted scene at the next time step, visibility analysis is carried out from the next viewpoint using our visibility model [4], which is of course an approximated one. As the vehicle reaches the next viewpoint $V_k$, point cloud data are measured, and the scene model and state vectors are updated, which is an essential procedure for reliable KF prediction.

Our concept is based on RANSAC and KF, which are both real-time algorithms that can be integrated into autonomous mapping vehicles, which have become very popular. This concept can be applied to robot trajectory planning for generating visible paths, by analyzing local point cloud data and predicting the most visible viewpoint at the next time step among several options.
In the next sub-section we introduce the main stages of our concept and algorithm. Basic geometric shapes and the RANSAC detection method from point cloud data are presented in sub-section 6.C. The discrete dynamic model state for basic geometric shapes and prediction using the KF technique, consisting of predict, measure and update stages, are presented in sub-section 6.D. Predicted visibility analysis is discussed in sub-section 6.E.
B. Concept's Stages
Our methodology can be mainly divided into three
sub-problems:
1) Extract basic geometric shapes from point clouds
data (using RANSAC algorithms)
2) Predict scene modeling in the next viewpoint
(using KF)
3) Approximated visibility analysis of a predicted
scene
These stages are carried out one after the other, where the last stage also includes an updated measurement of point cloud data, validating the KF for the next viewpoint analysis.
C. Shapes Extraction
1) Geometric Shapes:
The urban scene is a very complex one in terms of modeling applications using ground LiDAR, and the generated point cloud is very dense. Due to these inherent complications, feature extraction can be made very efficient by using basic geometric shapes. We define three kinds of geometric shapes, planes, cylinders and spheres, with a minimal number of parameters for efficient computation time.

Plane: center point (x,y,z) and unit direction vector from the center point.
Cylinder: center point (x,y,z), radius and unit direction vector of the cylinder axis.
Sphere: center point (x,y,z), radius and unit direction vector from the center point.
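These parameterizations map directly onto a minimal data structure; the field names below are our own illustration, not notation from the text:

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Plane:
    center: Vec3
    normal: Vec3      # unit direction vector from the center point

@dataclass
class Cylinder:
    center: Vec3
    radius: float
    axis: Vec3        # unit direction vector of the cylinder axis

@dataclass
class Sphere:
    center: Vec3
    radius: float
    direction: Vec3   # unit direction vector from the center point
```

Keeping the parameter count this small is what makes both the RANSAC scoring and the later state-vector formulation cheap.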
2) RANSAC:
The RANSAC [11] paradigm is a well-known one, extracting shapes from point clouds using minimal sets of shape primitives generated by random drawing from the point cloud set. A minimal set is defined as the smallest number of points required to uniquely define a given type of geometric primitive.

For each of the geometric shapes, points are tested to approximate the primitive of the shape (also known as the "score" of the shape). At the end of this iterative process, extracted shapes are generated from the current point cloud data.

Based on the RANSAC concept, the geometric shapes detailed above can be extracted from a given point cloud data set. In order to improve the extraction process and reduce the number of points needed to validate shape detection, we compute the approximated surface normal for each point and test the relevant shapes.
Given a point cloud $P = \{p_1 .. p_N\}$ with associated normals $\{n_1 .. n_N\}$, the output of the RANSAC algorithm is a set of primitive shapes $\{\sigma_1 .. \sigma_N\}$ and a set of remaining points $R = P \setminus \{p_{\sigma_1} .. p_{\sigma_N}\}$.

In this part we briefly introduce the main idea of plane, sphere and cylinder extraction from point cloud data. An extended study of RANSAC capabilities can be found in [12].
Plane: The minimal set for the plane case consists of only three points $\{p_1, p_2, p_3\}$, without considering the normals at the points. Final validation of the candidate plane is computed from the deviation of the plane's normal from $\{n_1, n_2, n_3\}$. The plane is extracted only if all deviations are less than the predefined angle $\alpha$.
Sphere: A sphere is fully defined by two points with corresponding normal vectors. The sphere center is defined as the midpoint of the shortest line segment between the two lines given by the points and their normals. The sphere counts as a detected shape if all three points are within a distance $\varepsilon$ of the sphere and their normals do not deviate by more than $\alpha$ degrees.
Cylinder: A cylinder is set by two points and their normals: the cylinder axis direction is the projected cross product of the normals, and the center point is calculated as the intersection of the parametric lines generated from the points and the points' normals. The cylinder is verified by applying the thresholds $\varepsilon$ and $\alpha$ to the distance and normal deviation of the samples.
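The draw-fit-score loop can be sketched for the plane case as follows; this is a minimal sketch with hypothetical parameter defaults, omitting the normal-deviation test that the efficient variant of [12] adds:

```python
import numpy as np

def ransac_plane(points, eps=0.05, iters=200, rng=None):
    """Minimal RANSAC plane extraction: draw a minimal set of 3 points,
    fit a candidate plane, and score it by the number of points lying
    within distance eps of it."""
    rng = rng or np.random.default_rng(0)
    best_model, best_inliers = None, None
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                      # degenerate (collinear) draw
            continue
        n = n / norm
        dist = np.abs((points - p1) @ n)      # point-to-plane distances
        inliers = dist < eps
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (p1, n), inliers
    return best_model, best_inliers
```

The returned model (an anchor point and a unit normal) matches the center-plus-direction parameterization defined above; cylinder and sphere extraction follow the same draw-fit-score pattern with their own minimal sets.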
D. Predicted Scene Kalman Filter
In this part, we present the global Kalman Filter approach for our discrete dynamic system at the estimated state, k, based on the geometric shape formulation defined in the previous sub-section.

Generally, the Kalman Filter can be described as a filter consisting of three major stages: predict, measure, and update the state vector. The state vector contains different state parameters, and provides an optimal solution for the whole dynamic system [9]. We model our system as a linear one, with the discrete dynamic model:

$x_k = F_{k,k-1} x_{k-1}$   (Equ. 6)

where x is the state vector, F is the transition matrix and k is the state.
The state parameters for all of the geometric shapes are defined by the shape center s and the unit direction vector d of the geometric shape, from the current time step and viewpoint to the predicted one.

At each current state k, the geometric shape center $s_k$ is estimated based on the previously updated shape center location $s_{k-1}$ and the previously updated unit direction vector $d_{k-1}$, multiplied by a small arbitrary scalar factor c:

$s_k = s_{k-1} + c\,d_{k-1}$   (Equ. 7)

The direction vector $d_k$ can be efficiently estimated by extracting the rotation matrix T between the last two states k, k-1. In the case of an inertial system fixed on the vehicle, the rotation matrix can be simply found from the last two states of the vehicle's translations:

$d_k = T d_{k-1}$   (Equ. 8)

The 3D rotation matrix T tracks the continuously extracted planes and surfaces to the next viewpoint $V_k$, and makes it possible to predict the scene model where one or more of the geometric shapes are cut from the current point cloud data at state k-1.
The discrete dynamic system can be written as:

$\begin{bmatrix} s_x \\ s_y \\ s_z \\ d_x \\ d_y \\ d_z \end{bmatrix}_k = \begin{bmatrix} 1 & 0 & 0 & c & 0 & 0 \\ 0 & 1 & 0 & 0 & c & 0 \\ 0 & 0 & 1 & 0 & 0 & c \\ 0 & 0 & 0 & T_{11} & T_{12} & T_{13} \\ 0 & 0 & 0 & T_{21} & T_{22} & T_{23} \\ 0 & 0 & 0 & T_{31} & T_{32} & T_{33} \end{bmatrix} \begin{bmatrix} s_x \\ s_y \\ s_z \\ d_x \\ d_y \\ d_z \end{bmatrix}_{k-1}$   (Equ. 9)
where the state vector x is a 6x1 vector and the transition
matrix $F_{k,k-1}$ is 6x6. The dynamic system can be
extended with additional state variables representing other
geometric shape parameters such as radius, length, etc.
We define this dynamic system as the basic one for generic
shapes that can be simply modeled with a center and a
direction vector; the sphere radius and the cylinder Z
boundaries are kept in an additional data structure of the
scene entities.
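The predict step of this dynamic system (Equ. 6-9) can be sketched as follows. This is a minimal illustration assuming NumPy; the function names are ours, and only the state propagation is shown, not the measurement and update stages of the full filter:

```python
import numpy as np

def build_transition(T, c):
    """Assemble the 6x6 transition matrix F of Equ. 9 from the
    3x3 inter-state rotation matrix T and the scalar step factor c."""
    F = np.zeros((6, 6))
    F[:3, :3] = np.eye(3)      # the center carries over from state k-1 ...
    F[:3, 3:] = c * np.eye(3)  # ... plus c times the previous direction (Equ. 7)
    F[3:, 3:] = T              # the direction is rotated into state k (Equ. 8)
    return F

def predict(x, T, c):
    """Predict step x_k = F x_{k-1} (Equ. 6) for the state vector
    x = [s_x, s_y, s_z, d_x, d_y, d_z]."""
    return build_transition(T, c) @ x
```

With T set to the identity (no rotation between states), the center simply advances by c along the previous direction while the direction itself is unchanged.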
E. Fast and Approximated Visibility Analysis
In this section, we present an analytic analysis of the
visibility boundaries of planes, cylinders and spheres for
the predicted scene presented in the previous sub-section,
which leads to an approximated visibility. For the plane
surface, a fast and efficient visibility analysis was already
presented in [4].
In this part, we extend the previous visibility analysis
concept [4] to include cylinders as continuous
parameterized curves $C_{cyl}(x,y,z)$. The cylinder
parameterization can be described as:
can be described as:
ln
sin( )
( , , ) cos( )
C d
r const
r
C x y z r
c
u
u
=
| |
|
=
|
|
\ .
_ max
0 2
1
0
peds
c c
c h
u t s s
= +
s s
(Equ.10)
We define the visibility problem in a 3D environment for
these more complex objects as:
$$C'(x,y)\big|_{z=\text{const}} \cdot \big(C(x,y)\big|_{z=\text{const}} - V(x_0,y_0,z_0)\big) = 0$$ (Equ. 11)
where the 3D model parameterization is $C(x,y)\big|_{z=\text{const}}$, and the
viewpoint is given as $V(x_0,y_0,z_0)$. Extending the 3D cubic
parameterization, we also consider the cylinder case.
Integrating Equ. (10) into (11) yields:
$$\begin{pmatrix} r\cos\theta \\ -r\sin\theta \\ 0 \end{pmatrix} \cdot
\begin{pmatrix} r\sin\theta - V_{x_0} \\ r\cos\theta - V_{y_0} \\ c - V_{z_0} \end{pmatrix} = 0$$ (Equ. 12)
International Journal of Advanced Computer Science, Vol. 3, No. 4, Pp. 154-163, Apr., 2013.
International Journal Publishers Group (IJPG)
$$\theta_{1,2} = \tan^{-1}\!\left(\frac{V_{x_0}}{V_{y_0}}\right),\ \ \tan^{-1}\!\left(\frac{V_{x_0}}{V_{y_0}}\right) + \pi$$ (Equ. 13)
As can be noted, these equations do not depend on the Z
axis, and the visibility boundary points are the same for
each x-y cylinder profile.
The visibility statement leads to a complex equation,
which does not appear to be a simple computational task.
The equation can be efficiently solved by finding where it
changes sign and crosses zero; we used the analytic
solution to speed up computation time and to avoid
numeric approximations. We generate the two values of
$\theta$, giving two silhouette points, in a very short
computation time. Based on this analytic solution to the
cylinder case, a fast and exact analytic solution can be
found for the visibility problem from a viewpoint.
We define the solution presented in Equ. (13), as x-y-z
coordinate values for the cylinder case, as the Cylinder
Boundary Points (CBP). The CBP are the set of visible
silhouette points of a 3D cylinder, as presented in Fig. 13:
$$CBP_{i=1..N_{bound}}(x_0,y_0,z_0) =
\begin{bmatrix} x_1 & y_1 & z_1 \\ \vdots & \vdots & \vdots \\ x_{N_{bound}} & y_{N_{bound}} & z_{N_{bound}} \end{bmatrix}$$ (Equ. 14)
Fig. 13 Cylinder Boundary Points (CBP) computed with the analytic solution,
marked as blue points; the viewpoint is marked in red: (a) 3D view (visible
boundaries marked with red arrows); (b) top-side view.
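The CBP construction above can be sketched as follows. This is a minimal illustration assuming an origin-centered cylinder and the two analytic roots of the silhouette condition derived from Equ. (12), tan(theta) = Vx/Vy; the function name and point ordering are ours:

```python
import math

def cylinder_boundary_points(r, h, V):
    """Cylinder Boundary Points (CBP): the two silhouette angles
    theta_1 = atan2(Vx, Vy) and theta_2 = theta_1 + pi of an
    origin-centered cylinder of radius r and height h, seen from
    viewpoint V = (Vx, Vy, Vz), evaluated on both Z boundaries."""
    t1 = math.atan2(V[0], V[1])  # first root of the silhouette condition
    t2 = t1 + math.pi            # second root, pi apart
    pts = []
    for theta in (t1, t2):
        for z in (0.0, h):       # the angles repeat on every x-y profile
            pts.append((r * math.sin(theta), r * math.cos(theta), z))
    return pts
```

Because the silhouette angles are independent of Z, the same two angles are reused on the bottom and top profiles to bound the visible surface.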
In the same way, the sphere parameterization can be described
as:
$$C_{Sphere}(x,y,z) = \begin{pmatrix} r\sin\phi\cos\theta \\ r\sin\phi\sin\theta \\ r\cos\phi \end{pmatrix},
\quad r = \text{const}, \quad 0 \le \phi < \pi, \quad 0 \le \theta < 2\pi$$ (Equ. 15)
We define the visibility problem in a 3D environment for
these objects as:
$$C'(x,y,z) \cdot \big(C(x,y,z) - V(x_0,y_0,z_0)\big) = 0$$ (Equ. 16)
where the 3D model parameterization is $C(x,y,z)$, and the
viewpoint is given as $V(x_0,y_0,z_0)$. Integrating Equ. (15) into
(16) yields:
$$r = \sin\phi\cos\theta\, x_0 + \sin\phi\sin\theta\, y_0 + \cos\phi\, z_0$$ (Equ. 17)
where r is set by the sphere parameter, and $V(x_0,y_0,z_0)$
changes with the visibility point along the Z axis. The
visibility boundary points for the sphere, together with the
analytic solutions for planes and cylinders, allow us to
compute fast and efficient visibility in a predicted scene
from local point cloud data, which is updated in the next
state.
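For the sphere, the tangency condition C.(C - V) = 0 underlying Equ. (16)-(17) defines a silhouette circle. The following minimal sketch, with our own helper names and an origin-centered sphere assumed, samples that circle from a viewpoint V:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(v):
    """Unit vector along v."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def sphere_silhouette(r, V, n=8):
    """Sample n points on the visible silhouette circle of an
    origin-centered sphere of radius r from viewpoint V: the points C
    satisfying C.(C - V) = 0, i.e. r^2 = C.V (tangency condition)."""
    d = math.sqrt(sum(v * v for v in V))
    if d <= r:
        return []                   # viewpoint inside or on the sphere
    k = r * r / d                   # distance from center to silhouette plane
    rho = math.sqrt(r * r - k * k)  # radius of the silhouette circle
    u = normalize(V)                # unit view direction
    # orthonormal basis (e1, e2) spanning the silhouette plane
    a = (1.0, 0.0, 0.0) if abs(u[0]) < 0.9 else (0.0, 1.0, 0.0)
    e1 = normalize(cross(u, a))
    e2 = cross(u, e1)
    pts = []
    for i in range(n):
        t = 2 * math.pi * i / n
        pts.append(tuple(k * u[j] + rho * (math.cos(t) * e1[j] + math.sin(t) * e2[j])
                         for j in range(3)))
    return pts
```

Each sampled point lies on the sphere and satisfies the tangency condition exactly, so the set approximates the continuous visibility boundary of the sphere from V.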
This extended visibility analysis concept, integrated with
well-known prediction filtering and extraction methods, can
be implemented in real applications with point cloud data.
7. Conclusion and Future Work
We have presented an efficient algorithm for visibility
computation in an urban environment consisting of the basic
shape vocabulary of mass modeling. Each shape in this
modeling can be sub-divided into several boxes, each standing
for a basic building structure. The basic building structure is
modeled with a mathematical approximation representing the
buildings' corners. Our algorithm is based on a fast
visibility boundary computation for a single object, and on
computing the hidden surfaces between buildings using
projected surfaces and intersections of the visible pyramids.
By using a simple and efficient visibility solution, the
concept is efficiently extended to complex shape profiles,
handling a complex urban environment scenario.
The computational running time of our algorithm has been
presented and compared to LOS visibility computation,
showing a significant improvement in performance time.
The main contribution of the presented method is that it
does not require special hardware, and is suitable for
on-line computation based on the algorithm's performance,
as presented above. The method generates an exact and fast
solution to the visibility problem in relatively complex
urban environments, modeled or generated by procedural
modeling consisting of a basic shape vocabulary that can be
used for real urban environments, as can be seen in scene
no. 2. Using these basic shapes, one can create buildings
with different shapes (including, for example, balconies).
Further research will focus on improving the algorithm's
computational performance using General-Purpose Graphics
Processing Units (GPGPU), and on extending the urban
environment modeling by taking into account Level of
Detail (LOD) and roof modeling.
The conceptual approximated visibility analysis from
point clouds was presented as an applicable concept. Future
work will focus on simulations and real point cloud data for
fast and approximated visibility analysis in urban scenes.
References
[1]. F. Downing and U. Flemming, "The Bungalows of Buffalo,"
(1981), Environment and Planning B, vol. 8, pp. 269-293.
[2]. F. Durand, "3D Visibility: Analytical Study and
Applications," (1999), PhD thesis, Universite Joseph Fourier,
Grenoble, France.
[3]. W.R. Franklin and C. Ray, "Higher isn't Necessarily Better:
Visibility Algorithms and Experiments," (1994), In T. C.
Waugh and R. G. Healey (Eds.), Advances in GIS Research:
Sixth International Symposium on Spatial Data Handling, pp.
751-770. Taylor & Francis, Edinburgh.
[4]. O. Gal and Y. Doytsher, "Fast and Accurate Visibility
Computation in a 3D Urban Environment," (2012), In
Proceedings of GEOProcessing, Valencia, Spain.
[5]. P. Müller, P. Wonka, S. Haegler, A. Ulmer, and L. Van Gool,
"Procedural Modeling of Buildings," (2006), In Proceedings
of ACM SIGGRAPH, pp. 614-623.
[6]. G. Schmitt, "Architectura et Machina," (1993), Vieweg & Sohn.
[7]. P. Wonka, M. Wimmer, F. Sillion, and W. Ribarsky, "Instant
Architecture," (2003), ACM Transactions on Graphics, vol.
22, no. 3, pp. 669-677.
[8]. G. Vosselman, B. Gorte, G. Sithole, and T. Rabbani.
"Recognizing structure in laser scanner point clouds". (2004),
The International Archives of the Photogrammetry Remote
Sensing and Spatial Information Sciences (IAPRS), vol. 36,
pp. 33-38.
[9]. R. Kalman, "A New Approach to Linear Filtering and
Prediction Problems," (1960), Transactions of the
ASME-Journal of Basic Engineering, vol. 82, no. 1, pp.
35-45.
[10]. J. Lee, M. Kim, and I. Kweon, "A Kalman Filter Based Visual
Tracking Algorithm for an Object Moving," (1995), In
IEEE/RSJ Intelligent Robots and Systems, pp. 342-347.
[11]. H. Boulaassal, T. Landes, P. Grussenmeyer, and F. Tarsha-
Kurdi. "Automatic segmentation of building facades using
terrestrial laser data". (2007), The International Archives of
the Photogrammetry Remote Sensing and Spatial
Information Sciences (IAPRS), vol. 36, no. 3.
[12]. R. Schnabel, R. Wahl, R. Klein, "Efficient RANSAC for
Point-Cloud Shape Detection", (2007), Computer Graphics
Forum, vol. 26, no.2, pp. 214-226.