Computing in Civil Engineering 2015
PROCEEDINGS OF THE 2015 INTERNATIONAL WORKSHOP ON
COMPUTING IN CIVIL ENGINEERING
SPONSORED BY
Computing and Information Technology Division
of the American Society of Civil Engineers
EDITED BY
William J. O'Brien, Ph.D., P.E.
Simone Ponticelli, Ph.D.
Any statements expressed in these materials are those of the individual authors and do not necessarily
represent the views of ASCE, which takes no responsibility for any statement made herein. No reference
made in this publication to any specific method, product, process, or service constitutes or implies an
endorsement, recommendation, or warranty thereof by ASCE. The materials are for general information
only and do not represent a standard of ASCE, nor are they intended as a reference in purchase
specifications, contracts, regulations, statutes, or any other legal document. ASCE makes no
representation or warranty of any kind, whether express or implied, concerning the accuracy,
completeness, suitability, or utility of any information, apparatus, product, or process discussed in this
publication, and assumes no liability therefor. The information contained in these materials should not be
used without first securing competent advice with respect to its suitability for any general or specific
application. Anyone utilizing such information assumes all liability arising from such use, including but
not limited to infringement of any patent or patents.
ASCE and American Society of Civil Engineers—Registered in U.S. Patent and Trademark Office.
Photocopies and permissions. Permission to photocopy or reproduce material from ASCE publications
can be requested by sending an e-mail to [email protected] or by locating a title in ASCE's Civil
Engineering Database (http://cedb.asce.org) or ASCE Library (http://ascelibrary.org) and using the
“Permissions” link.
Preface
The 2015 ASCE International Workshop on Computing in Civil Engineering (IWCCE) was held in Austin, Texas, June 21–23, 2015. The workshop was hosted by The University of Texas at
Austin under the auspices of the ASCE Technical Council on Computer and Information
Technology (TCCIT). The IWCCE has become an international platform for researchers to share
and debate topical issues across the various aspects of computing research in Civil Engineering.
This edition of the workshop aimed to continue this successful debate by attracting a strong, active audience through keynote sessions, dedicated topic tracks, and committee meetings.
These proceedings include 88 papers from 14 countries around the globe with topics in three
main categories: Data, Sensing, and Analysis; Visualization, Information Modeling, and
Simulation; and Education. From a total of 168 received abstracts, the final set of articles was
selected through a rigorous peer-review process, which involved the collection of at least two
blinded reviews per article. The review process was performed for both abstracts and full papers,
ensuring that only the best contributions were selected. Finally, authors had the chance to
incorporate reviewers’ comments into the final version. We are very pleased with the high quality of the selected articles, which represent contemporary, cutting-edge research on computing in civil engineering, and we wish to thank both authors and reviewers for their efforts.
Organizing this workshop has been possible only with the support of many. We are particularly
grateful to the Department of Civil, Architectural, and Environmental Engineering at The
University of Texas at Austin for their support and infrastructure. The TCCIT committees
supported the review process and provided extensive organizational assistance. Special thanks
are also extended to Simone Ponticelli and Kris Powledge for their hard work in assisting the
committee.
We hope that you enjoyed the technical sessions during the workshop and that you had a
memorable stay in Austin!
Acknowledgments
Special thanks are due to the members of the Local Organizing Committee for their continuous
and tireless support throughout the organization of the workshop:
Name Institution
Dr. Carlos Caldas The University of Texas at Austin
Dr. Fernanda Leite The University of Texas at Austin
Dr. William J. O’Brien The University of Texas at Austin
Dr. Simone Ponticelli The University of Texas at Austin
The editors would also like to thank the following reviewers for their dedication, which ensured the high quality of the paper selection process:
Name Institution
Mani Golparvar-Fard University of Illinois at Urbana-Champaign
Nora El-Gohary University of Illinois at Urbana-Champaign
Yong Cho Georgia Institute of Technology
Ivan Mutis University of Florida
Carol Menassa University of Michigan
SangHyun Lee University of Michigan
Fernanda Leite The University of Texas at Austin
Hazar Dib Purdue University
Changbum Ahn University of Nebraska-Lincoln
Eric Marks University of Alabama
Pingbo Tang Arizona State University
SangUk Han University of Alberta
Abbas Rashidi The University of Tennessee, Knoxville
Ken-Yu Lin University of Washington
Zhenhua Zhu Concordia University
Renate Fruchter Stanford University
JoonOh Seo University of Michigan
Ali Mostafavi Florida International University
Chao Wang Georgia Institute of Technology
Reza Akhavian University of Central Florida
Omar El-Anwar Cairo University
Frederic Bosche Heriot-Watt University
Contents
GIS-Based Decision Support System for Smart Project Location ..................... 131
A. Komeily and R. Srinivasan
Education
Integration of BIM and GIS: Highway Cut and Fill Earthwork Balancing ..... 468
Hyunjoo Kim, Zhenhua Chen, Chung-Suk Cho, Hyounseok Moon, Kibum Ju, and
Wonsik Choi
Using BIM for Last Planner System: Case Studies in Brazil ............................. 604
M. C. Garrido, R. Mendes Jr., S. Scheer, and T. F. Campestrini
Abstract
Keywords
INTRODUCTION
During the past decades, corrugated steel webs were introduced to replace the
stiffened steel plates of box girders for bridges. Generally, beams and girders with
corrugated webs are more economical and improve the aesthetics of the structure
(Sayed-Ahmed 2001). In 1982, the advantages of using trapezoidal corrugated steel webs along with external pre-stressing for box or I-girder composite systems in bridge construction were recognized by Campenon of France (Cheyrezy and Combault 1990). French research started in 1983 and led to the building of four bridges
between 1986 and 1994. In Japan, similar research led to the construction of three
bridges with corrugated webs between 1993 and 1998 (Naito and Hattori 1994;
Metwally and Loov 2003). One of the most critical issues in the design of composite girders is the reliability of the connection between the steel girder and the concrete slab. Currently, two types of connectors are widely used to realize compatible deformation of steel and concrete, i.e., the headed stud and the ‘Perfobond strip’ (PBL connector) (Kraus and Wurzer 1997; Saari et al. 2004; Shim et al. 2004; Lee et al. 2005). Both types can transfer shear force and prevent separation between the two parts of composite girders, i.e., they provide anti-shear and anti-uplift action.
With the development of computers, finite element programs have been widely applied in the analysis and design of composite bridges. At small scales, the mechanism of a composite girder can be simulated exactly by fine finite element analysis, but at large scales, such as long-span bridges, the finite element model usually must be simplified to save computing time (Mabsout et al. 1997; Sebastian and McConnel 2000). In this study, a full-scale finite element model of a composite box girder bridge with corrugated webs was constructed in the program Midas FEA. By also considering the time scale, this 4D full-scale finite element analysis was conducted to simulate exactly the authentic construction sequence, the shrinkage and creep of concrete, the effect of the pre-stressed strands, and the slip effect. Furthermore, to compare the exact performance of headed stud and PBL connectors, two models with different connectors and construction sequences were compared to approach an optimized design.
Project background
The analysis model was based on the construction project of Xin’an Bridge
across the Dongbao River in Shenzhen, China. It is a three-span (88 m + 156 m + 88 m) composite box girder bridge. In the design, the girders of this bridge were composite box girders using corrugated steel webs.
Fig.1 shows the design of Xin’an Bridge. The box girders consisted of concrete slabs and corrugated steel webs, and the height of the box girders varied along the whole bridge between 3.5 m and 8.3 m.
Model construction
The finite element model was constructed in the program Midas FEA. In this model, solid elements were adopted to simulate the concrete slab, and shell elements were used for the corrugated steel webs. The pre-stressed steel rebar was simulated by 2D link elements. The compressive strength of concrete was set as 60 MPa, and the yield strength of steel was set as 345 MPa. Fig.2 shows the model constructed in Midas FEA.
Two models, i.e., Model A and Model B, were constructed with different arrangements of connectors. In Model A, only PBL connectors were adopted. As the stiffness of the PBL connectors was high, slip between the concrete slab and the corrugated steel webs was not considered.
In Model B, the PBL connectors in some regions were replaced by studs. Fig.3 shows the replaced regions and the arrangement of studs. As the stiffness of the studs was lower than that of the PBL connectors, slip deformation between the concrete slab and the corrugated steel webs was considered. Spring elements were used to simulate the studs.
Fig.3 Arrangement of studs in Model B: (a) regions using studs; (b) studs on top flange; (c) studs on bottom flange.
4D FINITE ELEMENT ANALYSIS
In order to concentrate on the time influence and the differences between Model A and Model B, the same load effect was applied, and the material performance of concrete was also time-related. As space is limited, three representative stages of the finite element simulation are presented and the differences between the two models are compared.
Construction stage 8
Before construction stage 8, there was no difference between the two models, as the change of connectors started from construction stage 9. At stage 8, under self-weight, the performance is shown in Fig.5. The maximum vertical deformation was 8.1 mm and occurred at the middle of the bridge. The maximum tensile stress in the concrete was 1.36 MPa and the maximum compressive stress in the concrete was 22.5 MPa. The maximum von Mises stress of the corrugated steel webs was 172 MPa, occurring in the top flange of section 3, where the parameters of the concrete slab changed. These results indicate that the full-scale finite element model simulation was reasonable and acceptable.
Fig.5 Simulated results of stage 8: (a) vertical deformation; (b) stress of concrete; (c) von Mises stress of steel webs.
Fig.6 Simulated results of Model A at stage 16: (a) vertical deformation; (b) stress of concrete; (c) von Mises stress of steel webs.
Construction stage 16
After stage 16, construction of this bridge was complete. Fig.6 shows the results of Model A. Table 1 presents the key targets of Model A and Model B. Also, the von Mises stresses of 98.7% of the corrugated steel webs in Model A and 99.2% in
Model B were below 200 MPa. By comparison, Model B showed a smaller vertical deformation and a lower stress level. As the stiffness of a stud is smaller than that of a PBL connector, slip deformation was permitted in Model B and the stresses could be homogenized.
Use stage
Based on the 4D finite element analysis of the two full-scale models and the comparison of their results, the performance of the two types of connectors used in the whole bridge could be studied. The model applying PBL connectors had a larger stiffness than the one using studs, while the latter had a better stress condition.
CONCLUSION
ACKNOWLEDGEMENT
REFERENCES
INTRODUCTION
Techniques such as branch-and-bound methods (Brucker et al., 2003), mathematical programming (Winston et al., 2002), genetic algorithms (Senouci et al., 2004), ant colony optimization (Christodoulou, 2009), neural networks (Senouci et al., 2001), and particle swarm optimization (Zhang et al., 2006) have been used to solve Resource-Constrained Project Scheduling Problems (RCPSPs). This paper develops an Agent Based Model (ABM) for resource-constrained scheduling of construction projects. ABM is a promising alternative technique for solving RCPSPs (Knotts et al., 2000). It has several advantages, including flexibility in defining activities and resources. The ABM model includes two features, namely, handling of activity
resources. The ABM model includes two features, namely, handling of activity
interruptions and incorporating the impact of predecessor quality on successor
duration. These features were not addressed in previous RCPSP models. The ABM
model was validated using an illustrative example from the literature.
PROBLEM IMPLEMENTATION
Model Architecture
The model follows a modular architecture and object-oriented programming, which will allow other researchers to build on this work with minimal effort. An Agent Based Model is considered herein.
Definition of an Agent
Agents are autonomous objects that have the ability to satisfy internal goals. They may have a complex underlying functional architecture, such as the belief-desire-intention (BDI) architecture (Rao and Georgeff, 1992).
Types of Agents
The agents used in this model range from relatively simple agents to extremely complicated ones. The common agent types are reactive, adaptive, goal-oriented, learning, and intelligent (Sycara et al., 1996).
Model Components
Critical Path Module
It is important to note that the model computes Critical Path Method (CPM) values
with no resource constraints before the resource constrained simulation starts. These
values include Early Start, Early Finish, Late Start, Late Finish, and Total Float.
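As a concrete illustration of this pre-simulation step, the following is a minimal Python sketch, not the paper's implementation, of the standard forward/backward CPM pass; it assumes activities are supplied in topological order.

```python
# Minimal CPM sketch (illustrative only): a forward pass yields Early
# Start/Finish; a backward pass yields Late Start/Finish and Total Float.
from collections import defaultdict

def cpm(durations, predecessors):
    """durations: {activity: duration}, keys in topological order;
    predecessors: {activity: [predecessor activities]}."""
    successors = defaultdict(list)
    for act, preds in predecessors.items():
        for p in preds:
            successors[p].append(act)

    es, ef = {}, {}
    for act in durations:                      # forward pass
        es[act] = max((ef[p] for p in predecessors.get(act, [])), default=0)
        ef[act] = es[act] + durations[act]

    horizon = max(ef.values())
    ls, lf, tf = {}, {}, {}
    for act in reversed(list(durations)):      # backward pass
        lf[act] = min((ls[s] for s in successors[act]), default=horizon)
        ls[act] = lf[act] - durations[act]
        tf[act] = ls[act] - es[act]
    return es, ef, ls, lf, tf

# Activities 2 and 3 both follow activity 1; activity 3 carries float.
print(cpm({1: 3, 2: 5, 3: 2}, {2: [1], 3: [1]})[4])  # {1: 0, 2: 0, 3: 3}
```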
The Activity Agent
This agent is the main agent in the proposed Agent Based Model; activity agents are the main drivers of the simulation. Activities are not intelligent agents (i.e., they do not learn). They can be considered goal-oriented reactive agents. The activity agent’s goal is to be completed. This is done by starting, performing certain tasks for a given duration, and then concluding. In some cases, interruptible activities may be interrupted.
The life of an activity can be translated to a state chart for coding purposes. All
activities start in a “NotReady” state. Each activity then assesses whether its
predecessors are complete or not. If they are complete then the activity becomes
“Ready”. Once the resources for this activity are available, the activity can start and enter an “InProgress” state. If the activity is interrupted for any reason, it moves to an “Interrupted” state. Once the activity is finished, it transitions to a “Complete” state.
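The state chart lends itself directly to code. Below is a hedged Python sketch: the state names mirror the paper, while the class layout and transition details are illustrative assumptions (the Ready-to-InProgress transition is left to the resource dispatcher described next).

```python
from enum import Enum

class State(Enum):
    NOT_READY = "NotReady"
    READY = "Ready"
    IN_PROGRESS = "InProgress"
    INTERRUPTED = "Interrupted"
    COMPLETE = "Complete"

class Activity:
    def __init__(self, name, duration, predecessors=()):
        self.name, self.remaining = name, duration
        self.predecessors = list(predecessors)
        self.state = State.NOT_READY

    def tick(self):
        """One simulation step: promote NotReady -> Ready when all
        predecessors are Complete; count down work while InProgress.
        (Ready -> InProgress is decided by the resource dispatcher.)"""
        if self.state is State.NOT_READY and all(
                p.state is State.COMPLETE for p in self.predecessors):
            self.state = State.READY
        elif self.state is State.IN_PROGRESS:
            self.remaining -= 1
            if self.remaining <= 0:
                self.state = State.COMPLETE
```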
The activities that are in “Ready” state compete for the resources available through
preset priority rules. Three priority rules are currently coded into the model, namely, shortest remaining float, earliest early start, and latest late finish. The user chooses which priority rule to apply. The activity that is in “Ready” state and has priority will then check whether there are enough resources available for it. If there are, the activity will commence; otherwise, it will remain in a “Ready” state. Another important aspect is interruption. During the priority check, the
activities that can be interrupted may be interrupted in favor of more critical
activities. In this case, the state of the activity will be converted from “InProgress” to
“Interrupted”.
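A sketch of this priority check, continuing the previous sketch, might look as follows; the rule names and the activity fields (total_float, early_start, late_finish, resources_needed) are assumed identifiers, not the paper's code.

```python
# Among "Ready" activities, the chosen rule orders the queue, and an
# activity starts only if the resource pool can satisfy its demand;
# otherwise it stays "Ready".
RULES = {
    "shortest_float": lambda a: a.total_float,
    "earliest_early_start": lambda a: a.early_start,
    "latest_late_finish": lambda a: -a.late_finish,
}

def dispatch(ready_activities, available, rule="shortest_float"):
    """Start as many prioritized activities as the pool allows; return
    the resources left over."""
    for act in sorted(ready_activities, key=RULES[rule]):
        if act.resources_needed <= available:
            available -= act.resources_needed
            act.state = State.IN_PROGRESS
    return available
```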
In addition to states, the activity agent includes the following parameters.
Predecessors, Duration, and Resources: These are input parameters that must be
included in each activity.
Early Start, Early Finish, Late Start, Late Finish, Total Float: These parameters are
calculated in the CPM module before the simulation, then assigned to each activity.
These parameters are used in priority calculations.
Actual Start, Actual Finish: These are the start and finish times calculated through the
resource constrained ABM simulation.
Interruptibility: This is an input parameter that reflects whether the activity can be
interrupted or not. This parameter is usually assigned by the user.
Quality: This parameter reflects the finish quality of the activity. This parameter is
usually assigned by the user.
Dependency on Predecessor Quality: This is an input parameter that reflects whether the duration of this activity is affected by the finish quality of a given predecessor.
Duration Object
Duration was modeled as a separate object to allow for activity duration to be
manipulated as an aggregate. For instance, the user may decide to apply a certain
distribution that would calculate the duration given certain limits. This can be done easily when durations are treated as separate objects.
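For instance, a duration object might wrap a sampling distribution, as in this hedged sketch (the class shape and the triangular distribution are illustrative assumptions):

```python
# Duration-as-object: the sampling distribution can be swapped without
# touching the activity that owns it.
import random

class Duration:
    def __init__(self, low, mode, high):
        self.low, self.mode, self.high = low, mode, high

    def sample(self):
        # Triangular distribution between the user-given limits.
        return random.triangular(self.low, self.high, self.mode)

print(Duration(low=4, mode=5, high=8).sample())
```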
Resource Pool
The resource pool is the source of resources. The model handles only one type of resource. The resource pool is constrained to a user-predefined number of resources. Activities book these resources, and the remaining resources stay in the pool.
Concept Flow Chart
Figure 1 highlights the overall process being implemented in the code.
ILLUSTRATIVE EXAMPLE
An example project (after Moroto et al. 1994) was used to illustrate the different aspects of the proposed model. Table 1 summarizes the project activity durations, resources, and preceding activities.
Priority Rules
Three different priority rules were implemented in the developed program. The
priority is given to the activity that has the earliest start date, the latest late finish, or
the shortest float period, given that there are sufficient resources for the activity to
start. Three different runs were performed. Each run used one of the priority rules
(PR) as the BDI of the agents. The results are summarized in Table 2.
The difference in priority rules is apparent. However, this does not necessarily
suggest that using the earliest start priority rule will always yield the best results.
Each priority rule will produce a better solution in different situations. The three priority rules can be used as a preliminary step towards finding an optimum solution.
For the sake of simplicity, only finish-to-start relationships were considered herein.
(Columns: activity number; CPM early start and finish; late start and finish; actual start and finish under each of the three priority-rule runs; Y/N = critical activity.)
3 7 9 7 9 12 14 12 14 12 14 Y
4 9 23 9 23 14 28 14 28 14 28 Y
5 9 10 22 23 28 29 28 29 31 32 N
6 9 14 18 23 29 34 29 34 39 44 N
7 9 12 12 15 34 37 34 37 28 31 N
8 12 20 15 23 37 45 37 45 31 39 N
9 23 43 23 43 45 65 45 65 44 64 Y
10 43 46 43 46 65 68 65 68 64 67 Y
11 46 50 47 51 97 101 68 72 73 77 N
12 46 53 49 56 68 75 72 79 87 94 N
13 46 52 46 52 78 84 79 85 67 73 Y
14 50 75 51 76 101 126 97 122 94 119 N
15 50 54 72 76 126 130 122 126 150 154 N
16 53 56 61 64 75 78 134 137 127 130 N
17 53 61 56 64 78 86 79 87 119 127 N
18 52 64 52 64 86 98 85 97 73 85 Y
19 52 57 60 65 89 94 87 92 77 82 N
20 52 57 60 65 84 89 92 97 82 87 N
21 50 58 68 76 130 138 126 134 137 145 N
22 75 81 76 82 138 144 144 150 154 160 N
23 64 68 64 68 155 159 140 144 130 134 Y
24 57 60 65 68 94 97 137 140 134 137 N
25 81 86 82 87 144 149 155 160 160 165 N
26 68 73 68 73 159 164 144 149 145 150 Y
27 86 89 87 90 149 152 160 163 165 168 N
28 89 92 90 93 152 155 169 172 174 177 N
29 73 93 73 93 164 184 149 169 154 174 Y
30 73 78 99 104 164 169 150 155 168 173 N
31 93 104 93 104 184 195 172 183 177 188 Y
*Resource pool set at 8 resources
It is worth noting that these results are specific to the resource constraints mentioned
earlier. If the resource constraints are changed, the total duration will change and the
ranking of the priority rules may change as well. Figure 2 illustrates the current
resource profile of the above results.
Interruption
Another aspect of this model is the ability to allow activity interruption. The activity agents in the model allow the interruptibility option to be highly selective and flexible. For instance, some of the activities are allowed to be interrupted while
others are not. Table 3 illustrates an example where all the activities are allowed to be
interrupted.
Figure 2. Resource Profile: (a) Late Finish Priority; (b) Early Start Priority; (c) Total Float Priority.
Impact of Quality
As mentioned earlier, the impact of predecessor quality on successors can be implemented in this model. This impact is translated into extra time needed by the successor activity. As an illustration, different activities were chosen to have lower quality, and the impact on the entire project duration was observed. Figure 3 summarizes the result of this exercise. It can be observed in Figure 3 that as the percentage of predecessors of lower quality increases, the total project duration increases. The relation is expected to be non-linear and unique to a given set of activities.
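The paper does not publish the exact quality-to-duration mapping, so the sketch below uses an assumed linear penalty per low-quality dependency purely to illustrate where such a mapping would plug in.

```python
# Illustrative only: each low-quality predecessor the activity depends
# on inflates the base duration by an assumed 15%.
def adjusted_duration(base, predecessors, penalty=0.15):
    """predecessors: list of (quality_ok, depends_on_quality) flags."""
    affected = sum(1 for ok, depends in predecessors if depends and not ok)
    return base * (1 + penalty * affected)

print(adjusted_duration(10, [(False, True), (True, True)]))  # 11.5
```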
CONCLUSION
The agent based model developed herein was capable of calculating total project
durations using three different priority rules. The model made decisions on whether or not to interrupt activities, provided the user allowed interruption, to obtain a shorter schedule. The model was also able to incorporate the impact of poor predecessor finish quality on the successor’s duration. Agent based modeling proved to be applicable and useful in handling resource-constrained schedules. The model developed illustrated the flexibility that can be added to standard techniques that solve resource-constrained problems. Future work includes adding more attributes to activities and resources, such as trade, skill level, learning, and complexity of task. Future models are expected to link these attributes and predict individual activity durations stochastically.
Rao, A. S., and Georgeff, M. P. (1992). “An abstract architecture for rational agents.” KR, 92, 439-449.
Senouci, A. B., and Adeli, H. (2001). “Resource scheduling using neural dynamics model of Adeli and Park.” J. Constr. Eng. Manage., ASCE, 127(1), 28-34.
Senouci, A. B., and Eldin, N. N. (2004). “Use of genetic algorithms in resource scheduling of construction projects.” J. Constr. Eng. Manage., ASCE, 130(6), 869-877.
Sycara, K., Pannu, A., Williamson, M., Zeng, D., and Decker, K. (1996). “Distributed intelligent agents.” IEEE Expert, 11(6), 36-46.
Winston, W. L., and Venkataramanan, M. (2002). Introduction to Mathematical Programming, 4th Ed., Thomson-Brooks/Cole, Pacific Grove, Calif.
Zhang, H., Li, H., and Tam, C. (2006). “Permutation-based particle swarm optimization for resource-constrained project scheduling.” J. Comput. Civ. Eng., 20(2), 141-149.
Abstract
RESEARCH APPROACH
The projects. The projects for this case study are five different borrow pits in the Middle East. These pits were dug for use in different construction projects, e.g. highways and dams. The borrow pits are located in areas with nearly identical climate conditions and annual rainfall.
Machine selection. The choice of machine in this study was limited to two
manufacturers’ models: Caterpillar and Komatsu. Caterpillar is believed to control more than half of the U.S. construction equipment market and one third of the world market (Arditi et al. 1997). Caterpillar and Komatsu were the only brands of crawler-type dozers used in the borrow pits. The following models were used in the
projects:
- Caterpillar: D6N, D6T, D7R, D9T, D10T
- Komatsu: D155A-2, D155A-6, D275A, D375A
Table 1 shows the number and age of each model. There were 39 machines in total, with an average age of 7.3 working years. All of these machines worked with Semi-Universal (SU) blades.
Production data collection. The data for the nominal hourly production was derived
from machinery charts and performance handbooks of the two manufacturers:
Caterpillar Performance Handbook, and Komatsu Specifications & Application
Handbook.
The actual production data was derived from records of the different machines in the five borrow pits. Each model was considered individually at the project site. In the data-gathering phase, the hauling distance (average dozing distance) of each machine was considered as a parameter to achieve integrated results. The author recorded the hourly production of all machines at specified hauling distances. On the basis of the project conditions, 15, 30, and 75 meters were finally chosen as the three main hauling distances for all models.
Figure 1 shows the distribution of machine operators' working experience. The majority of operators had more than five years of working experience. It is noteworthy that the operators were assigned randomly to the different machines.
Working conditions, ground slope, and material conditions have great effects on dozer production (Peurifoy et al. 2011). In this study, several surveys of different site conditions were employed to individually assess each of these three independent parameters. Accordingly, there was a need to capture the various conditions under which the machines worked. Based on the project conditions, the project machinery managers' experience, and the machinery production literature (research works, machinery charts, and handbooks), the author finally decided to categorize each of these three independent parameters as follows:
- Working condition: 1) Good; 2) Medium; 3) Weak
- Ground slope: 1) Zero; 2) +15; 3) −15
- Type of materials: 1) Loose soil; 2) Soil containing rubble stones; 3) Blasted rocks
By applying the mentioned analysis method to each scenario of each machine, the actual data for the different scenarios was finally found. In this research, the author has tried to present the results in a practical form. To achieve this goal, the author first chose one of the scenarios as the base scenario. Then, by assessing the relationships between the base scenario and the other scenarios, correction factors were defined that convert the results of the base scenario to those of the other scenarios. It must be mentioned that manufacturers generally use this method to present the hourly
production of their machinery and equipment. Indeed, the author presents the results in the same format as the manufacturers' results.
The scenario in which machines work with loose soil, in good condition, and
on ground with zero slope was chosen as the base one. Figures 2 and 3 present the results of this scenario. Tables 2 and 3 and Figure 4 present the correction factors that lead to the results of the other scenarios. The correction factors are outputs of the one-way ANOVA analysis. Depending on the project site conditions, the author's correction factors must be applied to the actual hourly production data presented in Figures 2 and 3 to find the desired hourly production.
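Applying such correction factors amounts to multiplying the base-scenario production by one factor per site parameter, in the manner of manufacturer handbooks. The sketch below illustrates the mechanics only; every factor value in it is a placeholder, not one of the study's published results.

```python
# Placeholder factor tables (illustrative values, NOT the study's).
FACTORS = {
    "working_condition": {"good": 1.00, "medium": 0.85, "weak": 0.70},
    "ground_slope":      {"zero": 1.00, "+15": 0.80, "-15": 1.10},
    "material":          {"loose": 1.00, "rubble": 0.80, "blasted": 0.65},
}

def hourly_production(base_m3_per_h, condition, slope, material):
    """Base-scenario production (read from the study's Figures 2/3 for
    a model and hauling distance) scaled by one factor per parameter."""
    return (base_m3_per_h
            * FACTORS["working_condition"][condition]
            * FACTORS["ground_slope"][slope]
            * FACTORS["material"][material])

print(hourly_production(250.0, "medium", "+15", "rubble"))  # 136.0
```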
several years cannot work on a positive grade as well as a new machine can. On a negative grade, however, the machine weight is the main parameter that allows a machine to work well, and so the author's and the manufacturers' factors are roughly the same.
Furthermore, in order to find the importance of each parameter for the hourly production of a machine, factor analysis was employed on the different data. The results show that working condition (47%) is the main parameter, followed by type of materials (31%) and ground slope (22%).
The working condition was defined in this study as a parameter capturing the effects of machinery age, operators' skills, and weather conditions. The manufacturers' charts are based on ideal conditions in which the machine is new, the operator is fully skilled, and the weather is at its best. However, in real construction projects, these ideal conditions are rarely achieved. In the presented case study, there were no new machines (see Table 1) and the machine operators were not always the best ones (see Figure 1). These two sub-parameters of working condition lead to a considerable shortfall between nominal and actual hourly production.
It must be noted that the presented results are based on a case study. The main limitation is that the results cannot be generalized to other types of projects where machines work. Another limitation is the uncertainty in the collected data; in most case studies, some bias in data collection has been reported. Furthermore, assuming similar climate conditions and annual rainfall for the different borrow pits also limits the results. In addition, the machine operators were assigned randomly to the different machines. If there were a plan specifying which operator must work with which machine, it could also be determined how the machine operators affect machine production.
CONCLUSION
This paper provides a case study to assess the discrepancies between actual and nominal production for crawler-type dozers. 21 Caterpillar machines with an average working age of 10.1 years and 18 Komatsu machines with an average working age of 4.1 years were individually considered in five different borrow pits. For each machine, three different hauling distances were considered, and for each hauling distance the actual hourly production was obtained. The results show a considerable shortfall of actual machine hourly production relative to the nominal values for ideal site conditions. This high range of variation must be considered by project managers when planning machinery. The importance of the different parameters affecting machine production was also examined; the most important factor is the working condition of the project site. The author believes that the results of this study can be a useful reference for machinery managers in construction projects.
ACKNOWLEDGMENTS
REFERENCES
Arditi, D., Kale, S., and Tangkar, M. (1997). "Innovation in construction equipment and its flow into the construction industry." J. of Construction Engrg. and Management, ASCE, 123, 371-378.
Caterpillar Inc. (2012). Caterpillar Performance Handbook, 42nd Ed., Peoria, Illinois.
Caterpillar Inc. (2010). Caterpillar Performance Handbook, 40th Ed., Peoria, Illinois.
Edmonds, C. D., Tsay, B., and Lin, W. (1994). "Analyzing machine efficiency." The National Public Accountant, 39(12), 28-44.
Gransberg, D., Popescu, M., and Ryan, C. (2006). Construction Equipment Management for Engineers, Estimators and Owners, Taylor & Francis, Florida.
Komatsu Inc. (2009). Komatsu Specifications & Application Handbook, 30th Ed., Tokyo.
Nabizadeh Rafsanjani, H., Gholipour, Y., and Hosseini Ranjbar, H. (2009). "An assessment of nominal and actual hourly production of the construction equipment based on several earth-fill dam projects in Iran." J. of Open Civil Engrg., 3, 74-82.
Nabizadeh Rafsanjani, H. (2011). "An assessment of nominal and actual hourly production of crawler-type front shovel in construction projects." J. of Civil and Environmental Engrg., 11(6), 59-63.
Nabizadeh Rafsanjani, H., Shahrokhabadi, Sh., and Hadjahmadi, A. (2012). "The use of linear regression to estimate the actual hourly production of a wheel-type loader in construction projects." ICSDEC 2012: Developing the Frontier of Sustainable Design, Engineering, and Construction, ASCE Proceedings, 727-731.
Peurifoy, R. L., Schexnayder, C. J., Shapira, A., and Schmitt, R. (2011). Construction Planning, Equipment, and Methods, 8th Ed., McGraw-Hill, Boston.
Zou, J. (2006). HSV Color-Space Digital Image Processing for the Analysis of Construction Equipment Utilization and for the Maintenance of Digital Cities Image Inventory. MSc thesis, Dept. of Civil Engineering, University of Alberta, Edmonton, Alberta.
Abstract
Offshore pipelines are vital infrastructure systems for oil and gas
transportation. Statistics around the globe confirm that third-party threats, such as
vessel anchoring, fishing, and offshore construction, contribute the most to offshore
pipeline damages and are the number one cause of death, injury, and pollution. This
research studies satellite imagery and its application in automating vessel detection
for the purpose of offshore pipeline surveillance. Current methods of relying on high-
resolution satellite images lead to a high implementation cost and less efficient image
processing. This paper proposes a method of utilizing lower resolution satellite
images for vessel detection in offshore pipeline safety zones. It applies a combination of a cascade classifier and a color segmentation method, as well as a unique “color-coding” scheme, to achieve an accurate and efficient satellite image processing procedure. The
proposed method was tested on 150 Google Earth satellite images with an average
detection rate of 94% for large and medium vessels and an average false alarm rate of
19%.
BACKGROUND
Offshore pipelines are vital infrastructure systems that play an important role
in transporting gases and liquids over long distances across the ocean. Offshore
pipelines must be constantly and reliably operated and monitored to ensure maximum
operating efficiency and safety. Offshore pipelines generally transport perilous
pressurized products and operate in hostile ocean environments, subject to current dynamics and geo-hazards, as well as third-party threats. Leaks and bursts in such pipeline networks cause significant economic losses and service interruptions, and can also lead to an enormous negative impact on the public and the environment.
PROBLEM STATEMENT
Previous research on oil and gas pipeline monitoring has focused on pre-failure and leak detection techniques using sensing technologies, such as fiber optic,
acoustic, ultrasonic, and magnetic sensors, or in-line inspection methods (e.g. smart
pigging). However, they are reactive in nature and only confirm damages after the
fact. Traditional pipeline patrolling (e.g. spot check on vessel or aircraft) is costly and
tedious due to the spatial distribution of pipeline networks. Gatehouse (2010)
developed a third-party vessel tracking technique based on Automatic Identification
System (AIS). AIS is an automatic tracking system used on vessels for identifying
and locating vessels. Vessels equipped with AIS transponders communicate data electronically with other nearby vessels and AIS base stations. Its primary purpose is to avoid collisions in poor visibility situations, but it can also be used for proactively monitoring vessel activities in pipeline safety zones. When a vessel is approaching
or entering a pipeline safety zone, the operator can be notified and warned if the
monitoring system detects a violating behavior. The detection algorithm is based on
heuristic rules obtained from experts. However, two limitations were noted: (1) data
from vessels far away from shore are not available due to limited coverage of AIS
stations; and (2) vessels may not be equipped with tracking devices, thus tracking
data is unavailable.
Satellite sensing can complement AIS because of its global coverage and
the capability to identify vessels not equipped with AIS transponders. Over one
thousand satellites fly over our planet every day. They provide global coverage of
earth surface activities, including weather conditions, land movements, and traffic
(onshore and offshore). Remote sensors attached to satellites collect data by detecting
the energy that is reflected from earth including radio, microwave, infrared, visible
optical light and multispectral signals. Surface objects (e.g., vessels and fishing boats)
can be detected and classified by analyzing these sensory data. Furthermore, image
processing and computer vision techniques can be applied to automate the
identification of third-party entities. A sampling of their presence, frequency, and
traffic density can supplement AIS data for pipeline risk management.
The long-term goal of this research is to integrate AIS and satellite imaging
for pipeline safety zone traffic monitoring. The objective of this study is to develop a
cost-effective method to automatically identify vessels from optical satellite images
for the purpose of characterizing marine traffic in pipeline safety zones. Current
methods of relying on high-resolution satellite images lead to a high implementation
cost and less efficient image processing. This paper proposes a method of utilizing
lower resolution satellite images for vessel detection. Along with AIS, this method
will provide third-party activity statistics to support more accurate new pipeline route
design and prioritization of maintenance effort. The section below describes related
works. It is followed by the proposed methodology and its implementation and testing.
LITERATURE REVIEW
clouds, when vessels adjoin a large island, or when the gray scale of a neighboring area is very close to that of the vessel. To overcome this shortcoming, a cascade classifier is employed in our research. In addition, samples of vessels, including vessels partially covered by clouds, are included in the sampling procedure of this study.
Qi et al. (2009) studied an object-oriented image analysis method to detect
and classify vehicles from satellite images. Image objects identified through
segmentation are organized in a hierarchical image object network. Feature space is
then created by extracting features of these objects and later used for vehicle
detection, classification, and traffic flow information analysis. Ortho-rectified image data from the QuickBird satellite, with four spectral bands (red, green, blue, and near infrared) plus a panchromatic band, were employed for their study. The shadow regions of the vehicles (moving objects) account for about 10% of the classification errors.
shadow problems are resolved in our study by using training samples that include
shadows around vessels.
manually mark vessels as ground truth in each positive sample so that the classifier can be trained later to correctly identify vessel objects in new images. Negative samples refer to images that do not contain objects of interest, which helps to minimize false alarms. They contain backgrounds and noise typically associated with the presence of vessels (e.g. ocean surface and waves), or non-vessel objects similar in appearance to vessels, such as small islands, clouds, and oil platforms. Figures 2a and 2b show samples of a positive image and a negative image.
called boosting. Boosting provides the ability to train a highly accurate classifier by
taking a weighted average of the decisions made by preceding stages. Each stage of
the classifier analyzes a portion of an image defined by a sliding window and labels
the region as either positive or negative. The size of the window varies to detect
objects at different scales. During training, if any object is detected from a negative
sample, this is a false positive decision. This false positive is then used as negative
sample and each new stage of the cascade is trained to correct mistakes made by
preceding stages. For detection purposes, positive indicates that an object was found
or otherwise it is negative. When the label is negative, the classification of this region
is complete, and the classifier slides the window to the next region. If the label is
positive, the classifier passes the region to the next stage. The classifier confirms a
vessel found when the final stage classifies the region as positive.
During the training phase, several parameters must be determined in order to
achieve acceptable classifier accuracy, such as the number of stages and false alarm
rate. The greater the number of stages, the greater the amount of training data the
classifier requires. The false alarm rate is the fraction of negative training samples
incorrectly classified as positive. The lower this rate is, the higher the complexity of
each stage. These parameters can be fine-tuned experimentally according to a desired
level of accuracy. Once the classifier is satisfactorily trained, it can be used to process
new satellite images to detect vessels.
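The early-rejection behavior described above can be captured in a few lines. The following is a schematic Python illustration of the cascade idea, not the MATLAB classifier used in this study; the stage functions stand in for the trained boosted stages.

```python
# Each stage is a cheap binary test; a window is rejected at the first
# stage that says "no", and only windows passing every stage are
# reported as vessels.
def cascade_detect(image, windows, stages):
    detections = []
    for window in windows:
        region = image[window]          # crop defined by the sliding window
        for stage in stages:
            if not stage(region):       # early rejection
                break
        else:                           # every stage voted positive
            detections.append(window)
    return detections
```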
Offshore pipelines usually spread out over a long distance and the presence of
vessels in a relatively narrow pipeline safety zone (e.g. usually 200 meters along both sides of a pipeline) is rare. Based on this observation, we proposed a unique “color-
coding” scheme to significantly reduce image processing time by focusing on
pipeline safety zone and ignoring its surrounding area and thus noises (e.g. ocean
surface and onshore objects). This is achieved by adding color layers to an image to
segment the image according to the presence of pipelines. For example, as shown in
Figure 3, the green color coded region contains the pipelines while the red zone does
not. In particular, the dark green area (3c) represents the pipeline’s danger zone, and
the light green area (3b) refers to the vicinity of the danger zone. The red color zone
(3a) will be ignored by the classifier, while it focuses its effort on analyzing green
areas, resulting in much shorter processing time.
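In effect, the sliding-window search is confined to green-coded pixels. A minimal NumPy sketch of that masking is shown below; the zone codes, window size, and stride are illustrative assumptions.

```python
# Zone map painted over the image: 0 = red/no pipeline, 1 = vicinity of
# the danger zone, 2 = danger zone. Only windows overlapping green
# zones (codes > 0) are handed to the classifier.
import numpy as np

def windows_in_safety_zone(zone_map, win=64, stride=32):
    """Yield window slices whose area overlaps the green zones."""
    rows, cols = zone_map.shape
    for r in range(0, rows - win + 1, stride):
        for c in range(0, cols - win + 1, stride):
            if (zone_map[r:r + win, c:c + win] > 0).any():
                yield (slice(r, r + win), slice(c, c + win))

zone_map = np.zeros((512, 512), dtype=np.uint8)
zone_map[200:312, :] = 1        # vicinity of the danger zone
zone_map[240:272, :] = 2        # danger zone along the pipeline
print(sum(1 for _ in windows_in_safety_zone(zone_map)))
```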
Computer Vision System Toolbox (MathWorks 2014). For testing purposes, a total of 755 positive satellite image samples containing various types of vessels were collected. In addition, 35 negative image samples were also included in the training dataset.
These images were stored in Portable Network Graphics (PNG) format since it is a
raster-based graphic format that supports lossless data compression. These color
images were then converted into gray-scale images to increase the contrast for more
efficient image processing.
For labeling ground truth data in the positive training samples, a MATLAB application, TIL, was used; this application allows a user to interactively specify a rectangular region around a vessel as a Region of Interest (ROI). An ROI defines the location of a vessel, and these regions are later used as positive samples to train the classifier.
The training of the custom vessel cascade classifier was implemented using
the trainCascadeObjectDetector function in the Computer Vision System Toolbox
(MathWorks 2014). The training parameters were determined using the trial-and-error
approach. The feature descriptor selected was HOG and false alarm rate and the
number of cascade stages were set to 17.5% and 8 respectively.
The color-coding scheme was applied to keep the classifier working only within the regions of interest, i.e. the pipeline safety zone. Once the location of an offshore pipeline was determined, three sub-zones were defined: Danger Zone (DZ) in light green, Vicinity of the Danger Zone (VDZ) in dark green, and the no-pipeline zone in red. Our program is capable of detecting the color segmentations, ignoring the red color zone, and focusing on the green zones to detect vessels. When a vessel is found, the program labels the detected object with a yellow rectangle, as shown in Figure 4a.
The performance of the classifier can be measured by the rates of true
detection (i.e. true positive) and false alarm (i.e. false positive). A true positive occurs
when a positive sample is correctly classified. A false positive occurs when a negative
sample is mistakenly classified as positive. The proposed algorithm was tested on 150
Google Earth satellite images with an average detection rate of 94% and a false alarm
rate of 19.75%. Compared with similar past studies, Li et al. (2013) used commercial high-resolution satellite images and achieved an average detection rate of 83.2% and a false alarm rate of 33.5%, as shown in Figure 4b.
CONCLUSION
Despite its advantages, satellite imaging has several limitations. First, it has
low temporal resolution as indicated by the revisit time, which is a measure of the
time interval between two satellite visits to the same location on earth, ranging from a
few days to weeks. For meaningful pipeline surveillance, satellite- and AIS-based
sensing must be integrated. Second, optical image sensors lack the ability of data
capturing in all-weather conditions and during night time. Synthetic Aperture Radar
(SAR) imagery that is based on radio waves should be investigated in future research.
REFERENCES
Abstract
Registration of 3D point clouds is one possible way to compare the as-built and the as-
designed status of construction components. Building information models (BIM) contain detailed
information about the as-designed state, particularly 3D drawings of construction components.
On the other hand, using automated and accurate data acquisition methods such as laser scanning
provide reliable and robust information about the as-built status of construction components.
Registration therefore makes it possible to automatically compare the designed and built states in
order to appropriately plan forward and generate the corrective actions required. This paper
presents a new approach for reliably performing the registration with a required level of accuracy
and automation within a substantially improved timeframe. Rather than performing the
computationally intensive registration methods that may not work robustly for dense point
clouds, the proposed framework employs the geometric skeleton of the construction components,
which is far less dense and therefore computationally less costly for the processing steps required. The method is experimentally tested for components extruded along an axis, such as industrial assemblies (i.e. pipe spools and structural frames), for which a geometric skeleton
represents the components abstractly. The registration of 3D point clouds is performed in a
computationally less intensive manner, and the framework developed has the potential to be
employed for (near) real-time assembly control, quality control and status assessment processes.
Keywords: Laser scans; 3D imaging; Skeletonization; Point cloud registration; Quality control
INTRODUCTION
Problem Statement
Assembly of construction components heavily involves complicated geometries in various
phases such as fabrication, installation, and inspection. Due to the unavoidable errors that
predominantly occur during the fabrication phase or the continual changes that occur during
construction, engineers and construction managers need a tool to track the built status of the
construction components. Such a tool must provide a sufficient level of accuracy in order to be
reliably integrated with the construction processes involved in a timely manner. Discrepancies can then be detected promptly, and the required corrective actions can be planned and generated accordingly.
Despite the fact that it might be accurate, using conventional surveying approaches, such as tape measuring, for as-built dimension measurement is ineffective and inefficient because of the complicated geometry of the components involved, as discussed earlier, and the inefficient way data is collected. The industrial sector is a major portion of the construction industry according to statistics provided by the US Census Bureau (2014). Over $83 billion was spent on industrial power generation, which is approximately 10% of the total construction output in 2013.
Sophisticated pipe spools and steel structural frames are the dominant assemblies used in
industrial construction. On the other hand, promising approaches for tracking and reconstructing construction components, in general, and industrial assemblies, in particular, provide the as-built dimensions accurately and efficiently so that they can be used for tracking continuous change in projects.
photographs/videos, and range images provides the required level of accuracy for as-built
dimensional analysis. Although accurate as-built status is acquired using one of the promising
technologies, extraction of meaningful information (i.e. as-built dimensions for the components)
from the acquired as-built status is still being performed manually. The manual processing for
generating the as-built BIM is performed for further manufacturing designs and engineering
considerations such as inspection, maintenance, and planning for corrective actions where
discrepancies are detected. Manually performing such analyses for a real, complicated industrial
facility, such as a power plant, is time consuming, costly, and therefore causes delays in
construction projects.
Point cloud registration is an automated way of comparing the built status with the designed
drawings. However, due to the inherent challenges with the geometry and complications of the
erection process, point cloud registration requires precise preprocessing. Moreover, point cloud
registration output is a rigid transformation that does not necessarily lead to the correct match
between the as-designed and as-built status. It becomes computationally intensive for dense point
clouds which make its use limited in (near) real-time modeling systems. This paper presents an
automated framework in order to address the industrial need for improving automated inspection
processes and solve the challenges involved in the existing registration techniques. A skeleton-
based registration which makes use of the assemblies abstracted to a wireframe (skeleton) is
proposed. Representing the assemblies by their geometric skeletons is computationally more
efficient and reasonably accurate.
Background
As-built modeling in the construction industry is performed to extract as-built dimensions
from 3D images acquired by an appropriate method. As-built modeling has been employed for
various applications such as progress tracking (Golparvar-Fard et al. 2009; Turkan et al. 2012),
status assessment (Zhu and Brilakis 2010) and quality control (Nahangi et al. 2015). Automated
full construction site retrieval is neither possible nor computationally time effective. However,
detecting, localizing, and characterizing critical dimensions (“surveying goals”) by the
automated modelling of corresponding shapes and features has been recently attempted and
found to be effective. For example, surveying goals can be automatically extracted from laser
scans of concrete bridges in order to assess the built status of construction components (Tang and
Akinci 2012a; Tang and Akinci 2012b). On the other hand, building information can be
employed as a priori knowledge in order to expedite the comparison step involved. Point cloud
registration is an efficient and automated technique for comparing the as-designed (BIM) with
the as-built (scan) status. 3D point cloud registration is performed based on common features
that exist in different point clouds. Such features include geometric features (e.g. edge, corner),
and invariant mathematical features (e.g. SIFT, SURF). Finding the common features between
the 3D images is a key for point cloud registration. Point cloud registration has been employed in
construction for various purposes such as automated inspection and structural health monitoring.
For example, Bosché (2012) used a plane-based registration by automatically extracting planes and roughly matching the point clouds representing the two states for the purpose of automated progress tracking. That work employed the iterative closest point (ICP) method for the fine
registration step.
PROPOSED METHODOLOGY
The proposed methodology for efficient registration of point clouds is illustrated in Figure 1.
As shown in Figure 1, the proposed method consists of three primary steps: (1) preprocessing, which puts the inputs into an appropriate format and resolution; (2) skeletonization, which extracts the geometric skeleton of an assembly; and (3) registration, which employs ICP to calculate the rigid transformation that matches the skeletons resulting from the previous step. These steps are explained in the following sections.
Figure 1: Overview of the proposed method: the BIM and the as-built scan are skeletonized, the registration engine finds the transformation T, and T is then applied to the entire point cloud to yield the registered point clouds.
Skeletonization
Several research studies have recently attempted to extract the skeleton of a geometric shape
(Au et al. 2008; Cornea et al. 2007; Junjie Cao et al. 2010; Tagliasacchi et al. 2009; Zhou et al.
1998). Skeleton extraction is specifically advantageous for pipe spools and structural assemblies, for which the as-built status can be represented by an abstract wireframe (skeleton). Once the point clouds representing the designed and built states are generated and the required preprocessing is performed, the skeleton of each state is extracted in this step. The skeleton extraction
method employed in this study has two primary steps:
(1) Skeletal candidate estimation: based on a Voronoi diagram (Aurenhammer 1991; Okabe et al. 2009), the point cloud's skeletal candidates are initially estimated. A Voronoi diagram divides a subspace into a number of sub-regions equal to the number of points, such that any location within a region is closer to that region's generating point than to any other point. Once the Voronoi diagram is defined, its dual graph can be calculated accordingly; the resulting dual graph is called the Delaunay triangulation. Combining the Voronoi and Delaunay regions results in the identification of a disk around each point, known as an umbrella. Based on angle and ratio conditions on the resulting umbrella, defined in Dey and Zhao (2004), skeletal candidates are selected.
(2) Skeleton generation: based on a pruning algorithm which employs Laplacian smoothing
(Junjie Cao et al. 2010) of the skeleton candidates, the skeleton of the point cloud is generated.
Laplacian contraction (smoothing) is an iterative procedure that updates the skeletal candidates.
Given a skeletal dataset $P^t$ with $n$ candidate points, Laplacian smoothing is employed to update the skeletal dataset by solving a linear matrix equation as follows:

$$\begin{bmatrix} W_L^t L^t \\ W_H^t \end{bmatrix} P^{t+1} = \begin{bmatrix} 0 \\ W_H^t P^t \end{bmatrix}$$

in which $W_H$ and $W_L$ are the attraction and contraction weights, respectively, $L$ is the Laplacian matrix of the point set, and the superscript $t$ represents the iteration. In summary, the final output of this iterative procedure is the skeleton of the point cloud. The convergence rate of the method relies on the attraction and contraction parameters. A threshold value is also required to stop the iterative procedure at the accuracy required. Lee et al. (2013) used skeletons of spools for accurate and time-effective 3D reconstruction of pipe assemblies. For more details on the skeleton extraction technique used here, the reader is referred to Junjie Cao et al. (2010) and Lee et al. (2013).
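For illustration, one contraction iteration can be written as a single least-squares solve. The sketch below is a dense toy version under assumed choices (a k-nearest-neighbor graph Laplacian and unit weights); it is not the authors' implementation.

```python
# One Laplacian contraction step: solve [W_L L; W_H I] P' = [0; W_H P]
# in least squares, which pulls candidates toward the local centerline
# while the attraction term keeps them near their current positions.
import numpy as np

def knn_laplacian(P, k=6):
    """Unnormalized graph Laplacian over a k-nearest-neighbor graph."""
    n = len(P)
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    L = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d[i])[1:k + 1]:    # skip self at index 0
            L[i, j] = -1.0
    L[np.arange(n), np.arange(n)] = -L.sum(axis=1)
    return L

def contract_once(P, w_l=1.0, w_h=1.0, k=6):
    n = len(P)
    A = np.vstack([w_l * knn_laplacian(P, k), w_h * np.eye(n)])
    b = np.vstack([np.zeros_like(P), w_h * P])
    return np.linalg.lstsq(A, b, rcond=None)[0]  # each column solved jointly

P = np.random.rand(200, 3)
P_next = contract_once(P)   # skeletal candidates after one iteration
```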
Registration
Given a laser-scanned point cloud that represents the as-built status (denoted here $P_b$), and an originally designed drawing converted to a point cloud that represents the as-designed status ($P_d$), performing the skeletonization results in the abstracted states in the form of skeletons (wireframes). The skeleton of the as-built status is denoted by $S_b$ and that of the as-designed status by $S_d$. For finding the rigid transformation required to best match the two skeletons, an ICP-based registration is employed. A summary of the ICP algorithm is illustrated in Figure 2.
Figure 2: Proposed ICP-based registration engine and the flow between various
components
As shown in Figure 2, the registration is calculated by finding the best transformation that minimizes the error between the correspondences in the skeleton datasets. The calculated rigid transformation (T) is then applied to the original point clouds. In other words, the required mathematical manipulations are performed on the skeletons rather than on the original point clouds, which are computationally intensive to process. Details about the effective parameters and the subsystems involved in the registration step can be found in (Kim et al. 2013; Nahangi and Haas 2014; Rusinkiewicz and Levoy 2001).
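A minimal sketch of one such ICP loop on the skeleton points is shown below (nearest-neighbor correspondences plus a Kabsch/SVD transform estimate; the convergence handling and parameter choices follow common practice, not necessarily the cited implementations):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, max_iter=50, tol=1e-6):
    """Align `source` (skeletonized as-built) to `target` (skeletonized
    as-designed); return the accumulated 4x4 rigid transformation T."""
    T = np.eye(4)
    src = source.copy()
    prev_err = np.inf
    tree = cKDTree(target)
    for _ in range(max_iter):
        # 1. Correspondences: closest target point for each source point.
        dist, idx = tree.query(src)
        matched = target[idx]
        # 2. Best rigid transform for these correspondences (Kabsch/SVD).
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        # 3. Apply, accumulate, and check the registration error.
        src = src @ R.T + t
        step = np.eye(4); step[:3, :3] = R; step[:3, 3] = t
        T = step @ T
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return T  # apply T to the full-density point cloud afterwards
```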
VERIFICATION AND VALIDATION
For the purpose of verifying and validating the proposed methodology explained earlier, a case study is investigated. The as-built status of a pipe spool branch was acquired using a laser scanner. A 3D CAD drawing in point cloud format is the other required input to the efficient registration system proposed here. The studied pipe spool, along with the laser-scanned as-built status and the 3D drawing, is shown in Figure 3. The two point clouds are preprocessed and stored for further application of the required transformation. These point clouds are then imported into the skeletonization engine developed. The skeletonized designed and built states are then imported into the registration engine. The rigid transformation resulting from the registration step is applied to the original point clouds. The described procedure was programmed and implemented in an integrated platform using C++ and MATLAB.
Figure 3: The investigated spool branch. (a): As-designed status (3D drawing) of the branch; (b): 3D drawing converted to a point cloud, with the resulting skeleton; (c): laser-scanned as-built status; (d): the resulting as-built skeleton.
Typical results of the registration are shown in Figure 4. Results for the investigated spool branch are summarized in Table 1.
sampling impact on the extracted skeleton results is insignificant. The root mean square (RMS) represents the average error for the registration performance. The RMS value of the skeleton-based registration is also comparable to that of regular ICP registration. A post-ICP pass on the original point clouds, with a significantly smaller number of iterations, is expected to improve the RMS value from the newly transformed states.
CONCLUDING REMARKS
A skeleton-based registration of point clouds was presented in this paper. The point clouds representing the built and designed states are skeletonized using a Laplacian-based contraction. The skeletonized point clouds are significantly less dense, and the required manipulations are therefore computationally less intensive. For the registration step, ICP is applied on the skeletons. The rigid transformation resulting from the registration step is then applied to the original point clouds. To measure the performance of the proposed registration method, the described framework was programmed and a case study was performed on a pipe spool. The results show that the proposed registration method performs within a significantly faster timeframe (Table 1). Such an improvement in time effectiveness implies that the skeleton-based registration developed here can be used for (near) real-time investigation of industrial assemblies. The registration method is reasonably accurate as well, and it has the potential to be employed for further investigation of industrial assemblies. For example, the registration output can feed into a discrepancy detection system for localizing and quantifying the discrepancies incurred and planning the required corrective actions (Nahangi et al. 2015). It should be noted that the framework developed here is well suited for pipe spools and structural assemblies whereby skeletons meaningfully represent the assemblies (i.e., it is not well suited for unsymmetrical cross sections or point clouds with unbalanced density). One potential avenue for improving the developed framework is to apply a few ICP iterations on the resulting point clouds after applying the skeleton-based registration. A smaller RMS value is expected after performing such a post-processing step.
REFERENCES
Au, O. K., Tai, C., Chu, H., Cohen-Or, D., Lee, T. (2008). "Skeleton extraction by mesh
contraction." Proc., ACM Transactions on Graphics (TOG), ACM, 44.
Aurenhammer, F. (1991). "Voronoi Diagrams—a Survey of a Fundamental Geometric Data
Structure." ACM Computing Surveys (CSUR), 23(3), 345-405.
Bosché, F. (2012). "Plane-Based Registration of Construction Laser Scans with 3D/4D
Building Models." Advanced Engineering Informatics, 26(1), 90-102.
Cornea, N. D., Silver, D., Min, P. (2007). "Curve-Skeleton Properties, Applications, and
Algorithms." Visualization and Computer Graphics, IEEE Transactions On, 13(3), 530-548.
Dey, T. K., and Zhao, W. (2004). "Approximate Medial Axis as a Voronoi Subcomplex."
Comput. -Aided Des., 36(2), 195-202.
Golparvar-Fard, M., Pena-Mora, F., Savarese, S. (2009). "D4AR – A 4-Dimensional Augmented Reality Model for Automating Construction Progress Monitoring Data Collection, Processing and Communication." Journal of Information Technology in Construction (ITcon), 14, 129-153.
Abstract
This paper introduces a new proximity alarming technology for roadway work zones. Roadway work zones are dynamic in nature and offer workers limited work space, contributing to dangerous work environments for the construction workers who build and maintain the infrastructure. Hazardous proximity situations can arise especially when ground workers operate close to heavy construction equipment. Past research efforts have aimed at providing proximity sensing technologies for construction workers. These technologies, however, still have limitations that deter extensive deployment, including accuracy, cost, required hardware, and ease of use. This study focuses on creating and evaluating a feasible technology that overcomes the drawbacks found in other technologies. Using Bluetooth sensing technology, a proximity detection and alert system was created. Experimental results demonstrate the created system's ability to provide alerts to equipment operators and pedestrian workers at pre-calibrated distances in real time. The proximity detection and alert device demonstrated its capability to provide, with an appropriate alarm, an additional layer of hazard avoidance to pedestrian workers in roadway work zone environments.
INTRODUCTION
LITERATURE REVIEW
systems (Behzadan et al. 2008). Furthermore, the capabilities of Bluetooth have been used in wireless sensor networks for resource tracking at building construction sites (Shen et al. 2003). The typical maximum range of a Bluetooth-enabled device was recorded as 50 meters for location tracking purposes (Behzadan et al. 2008). Because Bluetooth has been successfully evaluated for other construction industry applications, this technology could potentially detect and alert workers during hazardous proximity situations.
Bluetooth sensing technology was selected to create a proximity detection and alert system for the roadway work zone environment. The system provides a wireless and rugged technology capable of functioning in the harsh outdoor roadway work zone environment. This Bluetooth technology is expected to (1) provide intensifying alerts, depending on the degree of danger, in real time for workers in hazardous situations, (2) allow for risk mitigation, (3) operate with minimal nuisance alerts, and (4) create an additional layer of protection for pedestrian workers.
The Bluetooth proximity detection and alert system comprises three components (Figure 1) that communicate in real time and provide alerts to workers in potentially hazardous situations.
Figure 1. Bluetooth proximity detection and alert system EPU (beacon) mounted
on a wheel loader (left), and PPU held by a test person (right).
FIELD TESTS
The EPU of the created proximity detection and alert system consists of several beacons mounted to each side (or large surface) of a piece of construction equipment (Figure 1). The experimental trials simulated operating functions of a roadway work zone. Two types of tests were conducted to simulate (1) a scenario with static equipment and a mobile pedestrian worker, and (2) a scenario with mobile equipment and a static pedestrian worker (Figure 3). Both RFID and magnetic field proximity detection and alert systems were subjected to the same experimental trials to establish a benchmark for comparison. All experimental trials were conducted outdoors in clear weather, with low wind speed and a temperature of approximately 32 degrees Celsius. A clear, flat, paved ground surface with no obstructions was used as the test bed for all trials.
The coverage area experimental trials were designed to simulate the interaction between a stationary piece of construction equipment and a mobile pedestrian worker (Figure 3). These trials assessed the reliability of the Bluetooth sensing system in detecting and providing an alert when the mobile pedestrian worker crossed into the pre-calibrated hazardous proximity zone. Two pieces of construction equipment, namely (1) a wheel loader and (2) a small dump truck, were used for the coverage area experiments. The PPU was placed in the pedestrian worker's right pocket for all trials. The experimental test bed was outlined as shown in Figure 3, and tests were run at eight different, equally spaced angles (45-degree offsets: 0°, 45°, ..., 270°, 315°), 20 times each. The distance at breach into the pre-calibrated hazardous proximity zone was measured for each of the tests as well as for each of the proximity sensing systems.
For the mobile equipment and static pedestrian worker test (Figure 3), 20 ground markers were positioned at 1.5-meter intervals along a straight line parallel to the wheel loader's travel path. The wheel loader approached the simulated pedestrian worker (a traffic cone) in a forward travel direction at a constant speed of 8 kilometers per hour and stopped, and the distance was measured once the EPU alert was activated. This procedure was repeated 20 times for each of the three proximity sensing systems. All three PPUs (RFID, magnetic field, and Bluetooth) were positioned at the static location on top of a traffic cone, at a vertical distance of approximately 1 meter from the ground surface.
Figure 3. Test bed for static equipment and mobile pedestrian worker (left),
mobile equipment and static pedestrian worker experimental test bed (right).
Figure 4 shows the average alert distance for the two static equipment and mobile worker simulations: one with the wheel loader (left) and the other with the truck (right). Although the Bluetooth sensing system offered relatively stable mean alert distances on average, there was an unexpected drop in the mean value at the 315° angle, indicating that the received signal strength at that particular angle was weaker than in other directions. Potential reasons for this discrepancy include: (1) the battery level may have degraded the performance of the beacon facing 315°; (2) the beacon itself may have had a poorly functioning signal transmitter; and (3) the signals from the beacon may have been affected by the surroundings, such as the boom or arm of the loader. Comparing the three proximity sensing systems, the magnetic sensing system performed the most reliably in this set of trials. Comparing the two simulation cases, the average alert distances increased in the truck simulation. A partial reason could be that a truck typically contains more flat surfaces, creating less obstruction between a transmitter and a receiver.
No nuisance alerts were recorded during the experimental trials. Of the 320 simulations conducted with the Bluetooth proximity system, a total of one false negative alert was recorded, representing less than 1% of the total test sample. The recall value for all approach angle trials was 1.0, except 0.95 for the 270° approach angle in the wheel loader simulation. Neither the RFID nor the magnetic field proximity detection and alert system experienced any nuisance alerts. The magnetic field proximity sensing system also recorded no false negative alerts; however, the RFID system failed to provide alerts 11 times throughout the trials.
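For reference, the quoted recall figures follow directly from the trial counts (recall = true positives / (true positives + false negatives)):

```python
# Recall at the one angle with a missed alert:
trials_per_angle = 20
recall_270 = (trials_per_angle - 1) / trials_per_angle  # one miss -> 0.95

# Overall Bluetooth false-negative share:
total_trials = 320          # 8 angles x 20 repetitions x 2 equipment types
false_negative_share = 1 / total_trials  # ~0.3%, i.e., "less than 1%"
```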
Figure 4. Average alert distance (coverage area) for static equipment and mobile
worker: wheel loader (left) and truck (right).
Figure 5 displays a box plot of the results from the mobile equipment and static pedestrian worker experiments; the top and bottom black lines are the maximum and minimum measured alert distances, the box represents the interquartile range, and the red line in the box represents the median value. Table 1 summarizes the statistical analysis results. None of the proximity sensing systems assessed (RFID, magnetic, and Bluetooth) experienced any nuisance alerts; however, the Bluetooth system recorded two false negative alerts and the RFID system recorded four. Although the magnetic field system successfully provided alerts for all 20 trials, it is important to note that its alert distances were much smaller than those of the other two systems. Another interesting observation was that, compared with the static equipment and mobile worker trials, the magnetic sensing device offered much smaller alert distances, which may not be sufficient for a worker to take proper collision-avoidance action.
Figure 5. Box plots for mobile wheel loader and static pedestrian worker trials.
During the experimental trials, the research team also collected and analyzed other metrics, including set-up time, calibration time, and required infrastructure (Table 2). The set-up duration and infrastructure required (including exterior power access and antenna mounting) for the magnetic field and RFID proximity detection and alert systems were greater than for the Bluetooth system, mainly because the Bluetooth system requires neither antenna mounting nor access to an external power source. The research team also found that the time to calibrate the proximity alert zone for the RFID and magnetic field systems was much longer than for the Bluetooth proximity sensing system, mainly owing to the application created for set-up and calibration of the Bluetooth system.
CONCLUSION
The goal of this research was to develop and evaluate the reliability and effectiveness of a Bluetooth proximity detection and alert system deployed in a roadway work zone environment. Two sets of experiments were designed to test the system's capability to provide real-time alerts to pedestrian workers and equipment operators during hazardous proximity situations. The tests simulated various interactions between ground workers and construction equipment. Commercially available RFID and magnetic field proximity detection and alert systems were subjected to the same experimental trials to serve as benchmarks for comparison.
The analyzed data demonstrated that the developed Bluetooth proximity sensing system can acceptably detect the presence of hazardous proximity situations within roadway work zones in real time. The performance of the Bluetooth proximity sensing system was satisfactory in both simulations. Compared to the RFID and magnetic field proximity detection and alert systems, the developed Bluetooth system required the least infrastructure and calibration time. The magnetic field proximity sensing system recorded the highest reliability and accuracy values when compared to the RFID and Bluetooth systems. The major disadvantage of the magnetic sensing system, however, was a large drop in alert distance when subjected to the experimental trials with mobile equipment. The research team also noted that the Bluetooth proximity sensing system continued to alert during obstructions, albeit with a greater level of signal attenuation; this property did not hold for the RFID proximity detection and alert system.
REFERENCES
Abstract
INTRODUCTION
most recent standard came into effect on September 15, 2014, and is expected to reduce the energy use of refrigerators by 25%, saving approximately 5.6 quads of energy over the next 30 years (Energy Efficiency and Renewable Energy Office, 2011). However, in the U.S., more than 60 million refrigerators currently in operation are over 10 years old, resulting in a total energy cost of $4.7 billion annually (Energy Star, 2014). Residents continue to use these less efficient refrigerators for economic reasons (replacement cost) as well as for lack of refrigerator energy efficiency evaluation tools.
The current refrigerator replacement strategy is to compare the energy performance of the current refrigerator with that of a new model, or to compare monthly utility bills among families of similar size. However, there is little information about a refrigerator other than its Energy Star label, and the only method of comparison is the monthly energy bill. Worse, these two information sources have their own limitations. On one hand, the Energy Star label shows the energy efficiency performance of a refrigerator, but this efficiency is determined at the equipment's peak, healthy performance and does not reflect performance under faulty conditions. Examples of faulty conditions include coolant leakage, freezer degradation, etc. On the other hand, monthly bills are impacted by human behaviors and do not represent the energy efficiency performance of the refrigerator itself. What is needed is a new method that can capture both the refrigerator's working condition and its energy efficiency performance. Such a method would help owners of large residential building portfolios and other users know their refrigerator's health and performance in a more informed manner and make timely, wise replacement decisions. To further enhance such a method, machine learning techniques can be employed to extract features related to health and energy performance.
In recent years, clustering, a main technique for partitioning data into groups, has been used to classify building electricity customers (Chicco, Napoli, & Piglione, 2006), predict future building energy demand (Duan, Xie, Guo, & Huang, 2011), and detect abnormal behaviors (Li, Bowers, & Schnier, 2010). Some researchers have proposed an appliance signature identification solution using k-means clustering to identify different appliances (Chui, Tsang, Chung, & Yeung, 2013). Meanwhile, others have used the operation patterns of household appliances to estimate their energy consumption (Li, Miao, & Shi, 2014). These research works have proven load signatures and clustering to be effective means of classifying and characterizing different household appliances; however, little research has focused on operational pattern classification for household appliances. In fact, an appliance exhibits different operational patterns as its operational status changes, which is significant if it is to be connected to Smart Grid technologies. This study fills this gap by introducing the k-means clustering technique to classify the operational cycles of a typical household refrigerator. The objective of the research work discussed in this paper is to aid the evaluation of refrigerators' energy performance by classifying and recognizing different operational patterns during undisturbed cycles.
Household refrigerators are cyclic appliances whose operations consist of repeating cycles. A typical refrigerator cycle includes both run time and idle time. Run time is when the refrigerator turns on to refrigerate or to defrost, while idle time is when the refrigerator is off and no electricity is consumed. These operation cycles are further divided into two major groups, disturbed cycles and undisturbed cycles, based on whether occupants have any impact on them, such as by opening the door or loading/unloading food. Since operation cycles are impacted by occupants' usage, which typically happens in the daytime, undisturbed cycles occur consecutively between late night and early morning, when there is little or no interference from occupants. Figure 1 shows a typical operation cycle occurring without occupant interference.
Compared with disturbed cycles, which contain the impact of human interference, undisturbed cycles are more representative of the energy efficiency of the refrigerator, which is determined by the construction of the doors, the insulation technologies, and the refrigeration techniques. In this paper, we focus on the pattern recognition of undisturbed cycles in order to extract the household refrigerator's operation characteristics. The rest of the paper is organized as follows. Section 2 presents the basic idea of the proposed clustering method. Section 3 describes the process of data collection and pre-processing. The hourly energy performance analysis and clustering results are shown in Section 4. In Section 5, we discuss the conclusion and future work.
Figure 1. A typical undisturbed refrigerator operation cycle: energy intensity (Watt*min) vs. time (min).
METHODOLOGY
terms of its centroid, which is defined as the mean value of the objects within the cluster. The difference between each object in the cluster and the cluster centroid is measured by the Euclidean distance, which is defined as follows:

$$d(x, c) = \sqrt{\sum_{i=1}^{m} (x_i - c_i)^2}$$

where $x$ is the object, $c$ is the cluster centroid, $m$ is the dimension of the data, $x_i$ is the ith attribute value of the object, and $c_i$ is the ith attribute value of the cluster centroid.
An error function is defined as:

$$E = \sum_{j=1}^{k} \sum_{x \in C_j} \lVert x - c_j \rVert^2$$

where $E$ is the sum of the squared error for all objects in the data set. The error function aims to make the resulting k clusters as compact and as separate as possible. In other words, the k-means algorithm can be treated as an optimization problem that minimizes the error function.
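As a sketch, this clustering step can be reproduced with any standard k-means implementation (scikit-learn is shown for illustration; the paper does not state which implementation was used). The rows below are hypothetical cycles described by the two attributes selected later in the Feature Extraction step:

```python
import numpy as np
from sklearn.cluster import KMeans

# One row per undisturbed cycle:
# [total energy consumption (Watt*min), maximum energy intensity (Watt)]
cycles = np.array([
    [3500.0, 165.0],   # hypothetical general operating cycle
    [6500.0, 617.0],   # hypothetical auto-defrost cycle
    [8400.0, 194.0],   # hypothetical auto-defrost affected cycle
    # ... one row per identified cycle
])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(cycles)
labels = km.labels_        # cluster assignment per cycle
sse = km.inertia_          # the error function E minimized above
centroids = km.cluster_centers_
```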
Data collection. Data were collected from a typical two-story residential house located in Lincoln, Nebraska. A nonintrusive monitoring system called eMonitor (PowerWise Systems, 2014) was used to record minute-by-minute electricity use data at the circuit level. To obtain sufficient data, the monitoring period ran from January 16, 2011, to April 15, 2011, a total of 83 calendar days. As the interval equals one minute, a total of 119,520 (i.e., 1,440 × 83) data points were collected for each circuit. The minute-by-minute energy consumption data of the refrigerator allow the determination of the paired turn-on and turn-off events of the refrigerator, from which each operating cycle can be identified. A total of 2,165 cycles were identified from the monitored data.
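A minimal sketch of this cycle identification, assuming an on/off power threshold (the threshold value and names are illustrative, not from the paper):

```python
import numpy as np

def find_cycles(power_w, on_threshold=5.0):
    """Split a minute-by-minute power trace (watts) into operating cycles.
    A cycle runs from one turn-on event to the next; readings below
    `on_threshold` are treated as idle time."""
    on = power_w > on_threshold
    # Turn-on events: transitions from off to on.
    turn_on = np.flatnonzero(~on[:-1] & on[1:]) + 1
    cycles = []
    for start, nxt in zip(turn_on, np.r_[turn_on[1:], len(power_w)]):
        seg = on[start:nxt]
        run_end = nxt if seg.all() else start + int(np.argmin(seg))
        cycles.append({
            "run_time_min": int(run_end - start),
            "idle_time_min": int(nxt - run_end),
            "total_energy": float(power_w[start:nxt].sum()),   # Watt*min
            "max_intensity": float(power_w[start:nxt].max()),  # Watt
        })
    return cycles
```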
Feature Extraction. Since clustering determines similarity and dissimilarity based on the attributes describing the objects, appropriate attributes that help distinguish objects of different groups should be selected and extracted from the initial database. In this study, many attributes of the operation cycle, such as run time, idle time, startup time, shutdown time, total electricity consumption, average electricity consumption intensity, maximum electricity consumption intensity, and minimum electricity consumption intensity, were considered for clustering. Since the refrigerator operation cycles are distinguished from the perspective of different working statuses (for example, auto defrost status, general operating status, and cooling status), attributes that can describe such differences should be selected. As shown in Figure 2, a comparison of electricity consumption flows between different working statuses was conducted in order to identify appropriate attributes.
The comparison indicates that the auto defrost cycle has a significantly higher electricity consumption intensity, around 650 Watt*min; its run time and idle time are 27 and 14 minutes, respectively. It can be concluded that an auto defrost cycle results in more energy consumption than a general operating cycle by increasing run time, decreasing idle time, and switching to a high intensity level. As a result, two attributes, namely the total electricity consumption, which reflects both the run time and the average electricity consumption intensity, and the maximum electricity consumption intensity, which describes the freezer working status, were selected as the attributes to describe the operation cycle.
Figure 2. Electricity consumption comparison between a general operating cycle and an auto defrost cycle: energy intensity (Watt*min) vs. time (min).
RESULTS
clustering results. It can be seen from Figure 4(b) that different operation cycles are successfully partitioned and the boundaries of the different clusters are clear.
Figure 3. Hourly average energy consumption: energy consumption (Watt*min) vs. hour of day (1–24).
Based on the clustering results, we summarize the characteristics of the different clusters in order to recognize them and evaluate the clustering performance. The results are shown in Table 1. Cycles in cluster 1 are referred to as general working cycles, as they occur most frequently (576 out of 629) and steadily. Cycles in cluster 2 are recognized as auto-defrost working cycles, since they have the greatest average maximum energy intensity value, which indicates the running of the defrost heater. Cycles in cluster 3 are identified as auto-defrost affected working cycles, as they occur right after auto-defrost cycles. In response to the auto-defrost cycles, these cycles use more time and consume more energy than general operating cycles to recover the refrigerator interior temperature.
Figure 4. Distribution for cycles starting between 1:00 a.m. and 6:59 a.m.
Figure 5. Clustering results for cycles starting between 1:00 a.m. and 6:59 a.m.
Table 1. Energy performance summary of different clusters.

Cluster No. | No. of cycles | Average maximum energy intensity (Watt) | Average total energy consumption (Watt*min)
1 | 576 | 165.92 | 3516.41
2 | 27 | 616.81 | 6558.74
3 | 26 | 194.12 | 8409.08
CONCLUSION
This study classified and recognized undisturbed refrigerator operation cycles based on two attributes, namely maximum energy intensity and total energy consumption. An unsupervised machine learning algorithm, k-means clustering, was used for the classification. Results show that the undisturbed refrigerator operation cycles can be distinctly classified by the proposed method into three groups: general working cycles, auto-defrost working cycles, and auto-defrost affected working cycles. The energy efficiency related indices of each group, such as maximum energy intensity, total energy consumption, run time, and idle time, can be used for refrigerator energy efficiency evaluation and fault detection. The recognition of different operation patterns is significant for establishing appliance load profiles, anticipating peak load, and intelligent control. Future work will focus on pattern recognition for disturbed refrigerator operation cycles in order to comprehend the household refrigerator's energy performance and health, and on subsequent integration with smart technologies.
ACKNOWLEDGEMENT
The authors would like to acknowledge Dr. Jonathan Shi for providing data
used in this study.
REFERENCES
Abstract
INTRODUCTION
geometric information (Resop & Hession, 2010). Efficient tolerance analysis can considerably reduce the total amount of waste during construction, improve productivity, and contribute to a leaner process (Milberg & Tommelein, 2005; Sacks et al., 2009). A lack of detailed tolerance analysis of prefabricated Mechanical, Electrical, and Plumbing (MEP) systems can cause field fit-up problems that lead to misalignments due to deviations from as-designed dimensions (Bosché, 2010).
In current practice, Geometric Dimensioning and Tolerancing (GD&T) provides a platform for communicating engineering tolerances (Henzold, 2006). It provides computational support to aid engineers in designing the geometry of a component within allowable tolerance variations. American Society of Mechanical Engineers (ASME) standard Y14.5-2009 governs present dimensioning and tolerance control, but studies show that these standards are inadequate (Zhang, 1997). Milberg and Tommelein inferred that the Architectural Engineering Construction (AEC) industry lacks appropriate tools for reliable tolerance analysis and for identifying the interactions of tolerances between different components (Milberg & Tommelein, 2005). This paper examines a methodology that generates detailed tolerance information based on dense 3D point clouds and as-designed models to support construction quality control.
Accelerated construction projects use prefabricated components to improve productivity and construction quality. However, improper dimensioning and tolerancing of prefabricated components often results in ineffective quality control and poses interferences in construction workflows (Bosché, 2010). Traditional surveying techniques fall short in capturing and visualizing the detailed geometric information of components that is essential for examining tolerance information (Jaselskis, Cackler, Andrle, & Gao, 2003). Kim, Cheng, Sohn, and Chang (2014) stated that, at present, a certified inspector using tapes and calipers executes the dimensional assessment of precast concrete elements manually for QA, and that such manual methods are unreliable, time-consuming, and costly. Efficient and effective data storage and management systems are important for reliable communication between the different trades involved in a construction project (Bosché, 2010; Kim et al., 2014).
Milberg and Tommelein formalized a tolerance mapping system using "tolerance networks" that analyzes tolerance accumulation across networks representing the interrelationships between connected components (Milberg & Tommelein, 2005). They illustrated a conceptual framework of a "proactive tolerance control system" that can identify and analyze correlated tolerance information during both product and process design. However, they focused on formulating a virtual model and lacked detailed geometric data from the field to validate the tolerance network models. Another issue is that their study focused on flat concrete walls and rectangular windows, whereas the curvilinear and interwoven geometries of MEP systems pose higher misalignment challenges. To overcome such limitations, this research uses three-dimensional (3D) imagery techniques to capture high-quality data for automated dimensional and quality assessment of prefabricated/precast components.
Modeling the as-is conditions of building structures under construction can be helpful for efficient progress tracking and dimensional quality control (Akinci & Boukamp, 2003). Bosche and Haas developed a methodology that can automatically extract 3D Computer Aided Design (CAD) objects from imagery data for reliable dimensional quality assessment of building components (Bosche & Haas, 2008). This approach is capable of detecting construction defects, misalignments, and geometric variations that affect the quality control of the construction process. Follow-up studies by these researchers used a robust Iterative Closest Point (ICP) fine-registration process on the real-time erection of an industrial building's steel structure for dimensional compliance control of the as-built geometries (Bosché, 2010). However, precise modeling of the as-built geometries of building systems from 3D imagery data requires a high level of detail and accuracy (Turkan, Bosché, Haas, & Haas, 2012).
After capturing detailed 3D geometric information, deriving the tolerance information of prefabricated components is tedious, requiring intense manual data processing to interpret the captured data. This paper identifies this data processing challenge and proposes an automated framework that identifies the deviations of as-built geometries from as-designed conditions. The authors developed an algorithm that can identify spatial changes of prefabricated components from their as-designed models. Using the detected deviations of individual components, the algorithm creates a tolerance network to understand how the prefabrication and installation errors of components influence each other. The generated tolerance network represents components as nodes and the connections (joints) between components as edges joining those nodes. Every node (vertex) contains the "local attributes" describing the prefabrication errors of the object, such as deviations in lengths, radii, etc., while the edge joining two vertices contains the "global attributes" describing the installation errors around the joint. More specifically, the global attributes associated with an edge include the relative orientation between the adjacent vertices (components) and the position of the edge (joint) with respect to the origin. Tolerance networks have the potential to aid engineers in identifying critical components that have higher impacts on error propagation and misalignments in field assemblies. These critical components act as the centers of the network, and their prefabrication/installation errors will cascade throughout the interconnected network. Hence, identifying such regions prior to the construction process helps maintain the stability of the construction workflow and significantly reduces rework and waste.
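A minimal sketch of this node/edge data structure, shown here with networkx (the paper does not specify an implementation; the segment names and length deviations are taken from Figure 4, while the connectivity and edge attribute values are hypothetical):

```python
import networkx as nx

G = nx.Graph()

# Nodes carry "local attributes": prefabrication errors of each component,
# e.g., the length deviations shown in Figure 4.
G.add_node("SEG 8", delta_length=-0.21)
G.add_node("SEG 9", delta_length=-0.09)
G.add_node("SEG 10", delta_length=0.05)

# Edges carry "global attributes": installation errors around a joint,
# i.e., relative orientation and joint position w.r.t. the origin.
G.add_edge("SEG 8", "SEG 9", joint="JOINT 4",       # hypothetical topology
           delta_orientation_deg=0.5,               # hypothetical value
           delta_position=(0.01, -0.02, 0.00))      # hypothetical (dx, dy, dz)
G.add_edge("SEG 9", "SEG 10", joint="JOINT 5",      # hypothetical topology
           delta_orientation_deg=0.2,
           delta_position=(0.00, 0.01, 0.00))

# One plausible way to rank components by error-propagation impact:
centrality = nx.degree_centrality(G)
critical = sorted(centrality, key=centrality.get, reverse=True)
```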
PROPOSED METHODOLOGY
and rotational errors at joints) of the sections of ducts. The algorithm then utilizes the
attributes to generate a tolerance network that could support tolerance analysis, and
provide a framework for visualization of the accumulation of tolerance issues across
the network. The authors validated the approach using data from a real building.
CASE STUDY
In order to validate the developed methodology, the authors used data collected from a real building site. The data include a set of laser scans and as-designed models of a mechanical room in the under-construction Agriculture and Bio-systems Engineering building at Iowa State University. The authors selected a part of the data (Figure 3(a), 3(b)) that includes seven connected prefabricated components composed of 14 straight cylindrical segments (Figure 3(c)).
Figure 3. (a) As-designed layout of the pipes; (b) as-built laser scan; (c) as-designed model showing the different cylindrical segments.
The change detection algorithm developed in (Kalasapudi et al., 2014) successfully matched the sections of pipes extracted from the as-built point cloud with the as-designed model. It automatically computes the local and global information for the segmented pipes from both the model and the data. Due to the unavailability of sufficiently high-quality data to extract the radius information, the authors used only length as a local attribute for now; both the orientation and the position of each connection, however, were analyzed as global attributes. The algorithm then calculates the change in these local (length) and global (orientation and position) attributes for generating the tolerance network. Table 1(a) shows the change in length (Δl) of each individual segment between the as-planned model and the as-is data. Table 1(b) shows the changes in orientation and in position (Δx, Δy, Δz) of the connections between individual segments
for both the model and the data. Using this information, the authors generate a tolerance network (Figure 4), which can guide engineers in analyzing the propagation of prefabrication and installation errors and their accumulation across the network.
Figure 4. Generated tolerance network (blue nodes are critical nodes and red marks a major path for error propagation).
quality data to support it. Adaptive imaging technologies will enable capturing complex geometries in less time (Song, Shen, & Tang, 2012). Future work will therefore include integrating adaptive imaging techniques with spatial change pattern analysis to increase the quality of tolerance analysis and realignment planning.
ACKNOWLEDGEMENTS
The authors would like to express their appreciation to Dr. Yelda Turkan of Iowa State University for providing the 3D BIM models and 3D laser scanning data.
REFERENCES
Akinci, B., & Boukamp, F. (2003). Representation and integration of as-built
information to IFC based product and process models for automated
assessment of as-built conditions. NIST SPECIAL PUBLICATION SP.
Bosché, F. (2010). Automated recognition of 3D CAD model objects in laser scans
and calculation of as-built dimensions for dimensional compliance control in
construction. Advanced Engineering Informatics, 24(1), 107–118.
doi:10.1016/j.aei.2009.08.006
Bosche, F., & Haas, C. (2008). Automated retrieval of project three-dimensional
CAD objects in range point clouds to support automated dimensional QA/QC.
Information Technologies in Construction, 13(October 2007), 71–85.
Henzold, G. (2006). Geometrical Dimensioning and Tolerancing for Design,
Manufacturing and Inspection. Geometrical Dimensioning and Tolerancing
for Design, Manufacturing and Inspection (pp. xi–xii). Elsevier.
doi:10.1016/B978-075066738-8/50000-X
Jaselskis, E., Cackler, E., Andrle, S., & Gao, Z. (2003). Pilot study on improving the
efficiency of transportation projects using laser scanning, (January).
Kalasapudi, V. S., Turkan, Y., & Tang, P. (2014). Toward Automated Spatial Change
Analysis of MEP Components using 3D Point Clouds and As - Designed BIM
Models. In 2014 Workshop on 3D Computer Vision in the Built Environment
(p. 8).
Kim, M.-K., Cheng, J. C. P., Sohn, H., & Chang, C.-C. (2014). A framework for
dimensional and surface quality assessment of precast concrete elements using
BIM and 3D laser scanning. Automation in Construction.
doi:10.1016/j.autcon.2014.07.010
Milberg, C., & Tommelein, I. (2005). Application of Tolerance Mapping in AEC
Systems. Proceedings Construction Research …, (April), 5–7.
Resop, J., & Hession, W. (2010). Terrestrial laser scanning for monitoring
streambank retreat: comparison with traditional surveying techniques. Journal
of Hydraulic Engineering, (October), 794–798.
Sacks, R., Koskela, L., Dave, B. A., & Owen, R. (2009). The Interaction of Lean and
Building Information Modeling in Construction, 1–29.
Song, M., Shen, Z., & Tang, P. (2012). Data quality oriented 3D laser scan planning. In Construction Research Congress 2014 (pp. 1–10).
Turkan, Y., Bosché, F., Haas, C., & Haas, R. (2012). Toward automated earned value
tracking using 3D imaging tools. Journal of Construction …, 139(4), 423–433.
doi:10.1061/(ASCE)CO.1943-7862.0000629.
Wong, R. W. M., Hao, J. J. L., & Ho, C. M. F. (2013). Prefabricated Building
Construction Systems Adopted in Hong Kong City University of Hong Kong.
Yang, S., Wang, C., & Chang, C. (2010). RANSAC Matching: Simultaneous Registration and Segmentation. In Robotics and Automation (ICRA), 2010 IEEE International Conference on (pp. 1905–1912).
Zhang, H.-C. (1997). Advanced Tolerancing Techniques (p. 587). John Wiley & Sons.
ABSTRACT
Approaches to increase the energy efficiency of commercial buildings face the unique challenge of reconciling the discordant spatial scales at which centralized building systems, organizations, and individual occupants operate and utilize a building. Despite the importance of understanding spatial variability within a building, methods to spatially analyze data from wireless sensor networks (WSN) and building management systems (BMS) are limited. In this paper, we introduce the application of novel and highly scalable techniques from the field of discrete signal processing on graphs (DSPG) to the challenging problem of understanding the spatial variability of individual and organizational energy efficiency within commercial buildings. We collect and process individual-level electricity consumption data for two floors of a commercial building in Denver, Colorado and demonstrate the merits of our approach on the empirical dataset. Preliminary results indicate that occupants in different organizations exhibit largely different spatial patterns of energy efficiency compared to those in a single organization.
Keywords: Energy efficiency, energy analysis, signal processing, spatial analysis
INTRODUCTION
Commercial buildings account for nearly 20% of energy usage in the United States (U.S. Department of Energy 2010). The proliferation of low-cost and ubiquitous wireless sensor networks (WSN) has enabled us to gather large amounts of data on how buildings, and more importantly how occupants within such buildings, consume energy. Recent research has utilized sensor-based data at different scales to understand how human dynamics relate to commercial buildings (Gulbinas and Taylor 2014; Azar and Menassa 2014) and to benchmark building energy efficiency (Woo and Gleason 2014). While whole-building analysis provides value, commercial building data must be analyzed at a more granular level to effectively account for the discordant spatial scales at which organizations and individuals operate within a building. Related work at the sub-building level has focused on dynamic frameworks to sync occupancy and thermal preferences with HVAC set points (Schoofs et al. 2011; Jazizadeh et al. 2013), to optimize the scheduling of meetings to enhance energy efficiency (Majumdar et al. 2012), and to develop analytics of plug-level electricity data to target areas for energy efficiency improvements (Ortiz et al. 2012). Therefore, a deeper exploration of spatial analysis of energy data is warranted. In this paper, we introduce a novel technique from the field of discrete signal processing on graphs to spatially analyze the energy efficiency of commercial building occupants. Understanding the spatial variability of energy efficient behavior among occupants can inform more effective operational strategies and other interventions.
PROBLEM STATEMENT
We consider the general problem of understanding the spatial variability of energy efficiency in a commercial building. This problem is unique and challenging due to the unstructured nature of spatial data (i.e., spatial data does not have a natural index and therefore does not fit or order cleanly into a vector or matrix). We propose representing occupants or spaces in a building as a graph $G = (V, A)$, where $N$ is the number of occupants or individual spaces within a building and $V = [v_0, \ldots, v_{N-1}]$ is the set of nodes corresponding to such individuals or spaces. We define $A$ as the weighted adjacency matrix of the graph, which describes the physical relationship (i.e., distance) between nodes. The graph $G$ is defined generally, but in this case we restrict the form to an undirected graph, as the relationship between nodes based on physical distance yields a symmetric matrix $A$ (i.e., $A_{ij} = A_{ji}$), and we construct an edge between nodes in the network. Using this graph as a basis, we define the input data as a graph signal:

$$s : v_n \mapsto s_n \tag{1}$$

We assume that each element $s_n$ is a real number, representing in this application the energy consumption or occupancy values associated with a node. Each signal can be represented as a vector as follows:

$$\mathbf{s} = [s_0, \ldots, s_{N-1}]^T \in \mathbb{R}^N \tag{2}$$
Figures 1 and 2 illustrate how this structure translates to a real commercial office building. Figure 1 is for a floor with multiple organizations, with each organization indicated by the color of its nodes. Figure 2 is for a floor with a single organization. Nodes represent where occupant workstations are located, and the adjacency matrix is constructed by calculating the physical distance between nodes. We note that this graph and signal structure are highly scalable and extensible to high-dimensional building data corresponding to thousands of occupants, well beyond what is depicted in Figures 1 and 2.
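A sketch of this construction is straightforward (the Gaussian distance weighting and cutoff below are assumptions; the paper does not state its exact weighting scheme):

```python
import numpy as np

def build_graph(coords, sigma=5.0, max_dist=15.0):
    """Symmetric weighted adjacency matrix A from workstation locations.
    coords: (N, 2) array of workstation positions in meters."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    A = np.exp(-dist**2 / (2 * sigma**2))   # closer nodes -> larger weight
    A[dist > max_dist] = 0.0                # only connect nearby nodes
    np.fill_diagonal(A, 0.0)                # no self-loops
    return A  # A[i, j] == A[j, i], so the graph is undirected

coords = np.random.rand(27, 2) * 30         # 27 occupants, as in the study
A = build_graph(coords)
s = np.random.rand(27)                       # graph signal: one value per node
```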
DSP ON GRAPHS
We propose the use of the DSPG framework introduced in (Sandryhaila and Moura 2013) to spatially analyze and understand energy efficiency within a commercial building. As with classic discrete signal processing (DSP), DSPG utilizes a Fourier transform to expand a signal into a Fourier basis in the graph spectral domain. Generally, in DSPG a graph Fourier transform corresponds to the Jordan decomposition of the graph adjacency matrix $A$, but in the case of a symmetric graph adjacency matrix $A \in \mathbb{R}^{N \times N}$, $A$ is diagonalizable and an eigendecomposition of $A$ is possible. For simplicity, we assume the eigenvalues of $A$ are distinct. The distinct eigenvalues $\lambda_0, \lambda_1, \ldots, \lambda_{N-1}$ of the adjacency matrix $A$ are the graph frequencies and form the spectrum of the graph. The eigenvector that corresponds to a frequency $\lambda_n$ is the frequency component corresponding to the nth frequency.
The eigendecomposition is given as follows:

$$A = Q \Lambda Q^{-1} \tag{3}$$

with the ith column of matrix $Q$ being the eigenvector $q_i$ corresponding to eigenvalue $\lambda_i$. The graph Fourier transform is given as follows:

$$\hat{\mathbf{s}} = F \mathbf{s} \tag{4}$$

where $F = Q^{-1}$ is the graph Fourier transform matrix. The values $\hat{s}_n$ of the signal's graph Fourier transform (4) characterize the frequency content of the signal $\mathbf{s}$.
In the graph spectral domain, the ordering of frequencies is often difficult to discern because the frequencies can be complex valued in instances where the adjacency matrix is not symmetric. For this reason, the DSPG framework introduces the concept of total variation on a graph ($TV_G$), which is based on the concepts of graph shift and total variation from classical DSP. The total variation of an eigenvector $\mathbf{v}$ of the matrix $A$ takes the form:

$$TV_G(\mathbf{v}) = \left\lVert \mathbf{v} - \frac{1}{|\lambda_{\max}|} A \mathbf{v} \right\rVert_1 \tag{5}$$
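Putting equations (3)–(5) together, the sketch below computes the graph Fourier transform of a signal and orders the frequency components by total variation (numpy only; a sketch of the cited DSPG operations, not the authors' code — the toy graph and signal values are made up):

```python
import numpy as np

def graph_fourier(A, s):
    """Graph Fourier transform on a graph with symmetric adjacency A:
    eigendecompose A (eq. 3) and project the signal s (eq. 4)."""
    lam, Q = np.linalg.eigh(A)   # symmetric A -> real spectrum, orthonormal Q
    F = Q.T                      # F = Q^{-1}; equals Q.T for orthonormal Q
    return lam, Q, F @ s

def total_variation(A, v):
    """TV_G(v) = || v - (1/|lambda_max|) A v ||_1 per eq. (5)."""
    lam_max = np.max(np.abs(np.linalg.eigvalsh(A)))
    return np.linalg.norm(v - (A @ v) / lam_max, ord=1)

# Toy example: 4 nodes on a path graph, signal = per-node values.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
s = np.array([0.9, 0.8, 0.3, 0.2])
lam, Q, s_hat = graph_fourier(A, s)
# Order frequency components from low to high by total variation:
order = np.argsort([total_variation(A, Q[:, i]) for i in range(len(lam))])
```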
DATA COLLECTION
We collected energy-use data for 27 occupants in a commercial building in Denver, Colorado using BizWatts (Gulbinas et al. 2014a), a socio-technical energy feedback system, from April 17th to July 9th, 2013 in order to test and demonstrate the merits of our approach. Individual plug-load monitors were installed at each individual's workstation and recorded time, location, real power (W), current (A), voltage (V), and power factor data every 5 minutes. The interval data was uploaded to a central server every 15 minutes. Because occupants in the building were not completely stationary, this data provides a typical location or "snapshot" of occupant energy usage in a space. However, it should be noted that the framework proposed in this paper is flexible and would allow more dynamic, non-stationary data to be analyzed. The building occupants were distributed over two floors (i.e., the fourth and fifth floors) and were full-time employees who typically occupied the building during regular working hours, from 9:00 a.m. to 5:00 p.m., Monday to Friday. The fourth floor consisted of occupants from multiple organizations, while the fifth floor consisted of occupants from a single organization. Typically connected appliances included computers, monitors, space heaters, and electronics chargers. Additional details regarding the sensor network architecture and data collection software can be found in (Gulbinas et al. 2014a).
After the data collection period finished, a processing algorithm was applied to quantify the energy efficiency of each building occupant. We define an individual building occupant's energy efficiency, EE, as the percentage of time spent in low energy-use states over a specified period and time range. EE values range from 0 to 1, with 1 representing the most energy efficient occupants. We compute an EE value for each occupant across the entire study period for two non-work hour ranges: morning (i.e., pre-work hours) and evening (i.e., post-work hours). We formally define EE with the following equation:

$$EE = \frac{\sum_{l=1}^{r} \left[ h_l \times p(LC_l) \times WD \right] + 24 \times p(LC_n) \times NWD}{\sum_{l=1}^{r} \left[ h_l \times WD \right] + 24 \times NWD} \tag{7}$$

where $r$ is the number of workday non-work hour ranges, $h_l$ is the number of hours in each non-work hour range $l$, and $p(LC_l)$ is the probability that the workday non-work hour range is assigned to a low energy cluster. Low energy clusters are clusters that possess a center with a mean energy value below a defined threshold; in this analysis, the threshold was set to 7 Wh per hour to reflect estimates of the average power consumed by connected appliances in the off state. $WD$ and $NWD$ are the numbers of workdays and non-workdays, and $p(LC_n)$ is the probability that a non-workday is assigned to low energy clusters.
A more detailed explanation of the energy efficiency metric, the algorithms used to
derive EE values and illustrative examples can be found in (Gulbinas et al. 2014b).
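As a sketch, equation (7) transcribes directly to code (the occupant inputs below are hypothetical):

```python
def energy_efficiency(h, p_low, p_low_nonwork, WD, NWD):
    """EE per equation (7).
    h: hours in each workday non-work range (e.g., morning, evening)
    p_low: probability each range is assigned to a low-energy cluster
    p_low_nonwork: probability a non-workday is in a low-energy cluster
    WD, NWD: numbers of workdays and non-workdays in the study period."""
    num = sum(hl * pl * WD for hl, pl in zip(h, p_low)) + 24 * p_low_nonwork * NWD
    den = sum(hl * WD for hl in h) + 24 * NWD
    return num / den

# Hypothetical occupant: 9 pre-work and 7 post-work non-work hours per workday.
ee = energy_efficiency(h=[9, 7], p_low=[0.9, 0.6],
                       p_low_nonwork=0.8, WD=60, NWD=24)  # ~0.78
```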
multiple organization floor having values approximately 10 dB lower than the single
organizational floor in both cases.
The above observations suggest that for both floors and time periods the signal
varies slowly across the graph, with occupants located close to each other having similar
energy efficiency values. In other words, occupants in close proximity to each other are
observed to have consistent energy consumption behavior. Comparing the two floors, the single organization can be seen to have a significantly higher PSLR value than the multiple organizations during both time periods. This suggests that energy efficiency (EE) values vary spatially more for occupants from multiple organizations than for those in a single organization. This observed result is also consistent with the notion that
occupants within a single organization share similar behaviors and culture (Hofstede 1980), and could be exploited to successfully diffuse energy efficient practices to other occupants within a commercial building. Because energy efficiency values are based on the percentage of time spent in low energy level clusters, spatial similarity within an organization could also indicate that energy efficient behavior (e.g., turning off one's computer before leaving work) is more consistent within a single organization than across multiple organizations. Characterizing both organizations and occupants through the proposed spatial analysis could prove valuable in the formation of proactive commercial building and organizational energy management strategies.
Overall, this initial work represents a key first step towards introducing methods
to spatially analyze electricity consumption data in commercial buildings. This paper
contributes a framework to analyze electricity consumption and other building occupant
data spatially. While the data set analyzed in this study was small, the framework
proposed is extensible and highly scalable to datasets of large commercial buildings
with several thousand occupants. Furthermore, we demonstrate the framework’s initial
applicability on real electricity consumption data collected from a test-bed building.
Our work contributes to the growing body of literature on analytics of commercial
building data (Schoofs et al. 2011; Majumdar et al. 2012; Ortiz et al. 2012; Jazizadeh
et al. 2013) by extending such work to incorporate spatial analytics.
ACKNOWLEDGMENTS
Researchers were partially or wholly supported by the Department of Energy Build-
ing Technologies Program, National Science Foundation under Grants No.1142379,
No.1461549 and the Air Force Office of Scientific Research under Grant No. FA95501210087.
We also thank the Center for Urban Science + Progress, New York University for its
support for Jain and Moura during the development of this paper.
REFERENCES
Abstract
INTRODUCTION
advances in remote sensing, such as digital aerial triangulation (DAT) and unmanned airborne systems (UAS), there is the potential for a new method for permanent deformation evaluation. Using hyperspatial resolution (less than 1-centimeter or 0.5-inch) multispectral digital aerial photography (HRM-DAP) acquired from a UAS as input, DAT was employed to generate hyperspatial resolution digital surface models (DSMs) of pavement surfaces. Specifically, the intent of this study is to examine whether pavement permanent deformation can be detected through analysis of the hyperspatial resolution DSMs generated from DAT, and if so, how well those estimates correlate with those from existing evaluation protocols.
BACKGROUND
system must be driven over each pavement segment to be assessed, which results in a time-consuming process because a single data image can only cover a small area, usually less than five square meters (McGhee 2004). It is also potentially dangerous to inspectors because these vehicles must be operated on roadways. It should be noted that, despite the improved efficiency and effectiveness of automated rutting data collection, most of the rutting data collected from the field remain incomplete, inaccurate, and inconsistent (Qiu 2013).
locations of the input images enable far more precise geo-registration than traditionally supported by low-cost code-based GPS systems and, therefore, minimize the geo-registration processing time.
DAT has been used to facilitate research in many fields, such as forestry, land cover change, coastal management, and agriculture (Okuda et al. 2003; Turner et al. 2012; Kim et al. 2014; Grenzdörffer 2014). However, it has not been applied to pavement surface 3D reconstruction to permit the assessment of rutting depth. We therefore explore the application of DAT to HRM-DAP to supplement or replace current rutting depth measurement methods. With UAS, it is possible to acquire sub-centimeter scale aerial photography. With DAT, it is possible to generate sub-centimeter scale DSMs for standardized evaluation of rutting distresses, potentially reducing the cost and duration of surveys while improving the comparability of results.
METHODOLOGY
Approximately 150 overlapping aerial images per site were used for DAT
after blurry and oblique images were excluded. Overlapping images were combined
into a single image mosaic and used to estimate terrain height through DAT. The commercial software Agisoft performs DAT with minimal human intervention and at relatively low cost. For each site, millions of matching points were identified to build a dense point cloud, and then a triangulated irregular network (TIN) mesh was generated based on the identified points. Once these processes were completed, the DSMs and orthophotos were exported as raster datasets in TIFF format. DSMs and orthophotos were generated at a spatial resolution of 3 millimeters. When compared to 6 independent GCPs collected by RTK, the overall horizontal accuracy (root-mean-square error [RMSE]) for all data collection sites is 0.004 meters, while the vertical accuracy is 0.006 meters. The number of image frames used and the accuracy information for each data collection site are reported in Table 1.
DSMs were used to reconstruct the 3D pavement surfaces, and only points matching the manual evaluation locations on the DSMs were used for comparison. When measured manually, rutting depth was measured with a wooden bar and a measuring tape. The minimum scale of the tape is 0.001 meters. The wooden bar is 1.2192 meters long and 0.02 meters wide. Figure 3 shows the DSMs with the actual measure points and wooden bars overlaid. Using the measure point as the center, two polygons with a size of 0.6096 meters by 0.02 meters were created to simulate the position of the wooden bar. The height information within the polygons was extracted to find the highest point in each of the two polygons to be used later for rutting depth calculation. Figure 4 illustrates the calculation process.
As shown in Figure 4, we consider Points A and B the two highest points of the rutting section, and the distance from Point C to Point D the rutting depth. Points A, B, and C will have the same height if the heights of Points A and B are equal. However, under most circumstances the heights of A and B are different. Therefore, a weighted average method was used to estimate the height of C:
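The estimate of C amounts to linear interpolation between A and B above the measure point. The short sketch below illustrates this calculation under that assumption; the function name, the coordinate convention (positions measured along the bar), and the sample values are hypothetical, not taken from the study.

```python
def estimate_rut_depth(z_a, x_a, z_b, x_b, z_d, x_d):
    """Estimate rutting depth from DSM heights.

    A and B are the highest DSM points in the two polygons simulating the
    ends of the wooden bar; D is the measure point at the bottom of the rut.
    C is the point on the virtual straightedge AB directly above D, estimated
    by a distance-weighted average (linear interpolation) of the heights of
    A and B. The rutting depth is the vertical distance from C down to D.
    """
    w = (x_d - x_a) / (x_b - x_a)        # relative position of D between A and B
    z_c = (1.0 - w) * z_a + w * z_b      # weighted-average height of C
    return z_c - z_d                     # rutting depth = distance CD

# Hypothetical example: bar ends 1.2192 m apart, rut bottom near the middle.
depth = estimate_rut_depth(z_a=101.532, x_a=0.0,
                           z_b=101.518, x_b=1.2192,
                           z_d=101.506, x_d=0.61)
print(f"Estimated rutting depth: {depth * 1000:.1f} mm")
```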
Rutting depth measured from the DSMs was compared to the manual results to examine the feasibility of using the DAT-based method to detect and assess rutting depth. Linear regression revealed that depths measured by the two methods fit closely to the regression line, but a paired t-test was not performed because the two groups of sample values are clearly linearly related, violating the test's assumption of no relationship between the groups (Carroll and Ruppert 1996). Rutting depths measured by the two methods were therefore compared with orthogonal regression analysis, since it does not assume independence between the variables. Orthogonal regression examines the linear relationship between two continuous variables and is often used to test whether two instruments or methods are measuring the same thing (Staiger and Stock 1997).
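As an illustration of this kind of comparison, the sketch below fits an orthogonal regression line by minimizing perpendicular distances, using the first principal direction of the centered data; the helper name and the paired depth values are hypothetical. A slope near 1 and an intercept near 0 would suggest the two methods measure the same quantity.

```python
import numpy as np

def orthogonal_regression(x, y):
    """Fit y = slope * x + intercept by orthogonal (total) least squares,
    minimizing perpendicular distances to the line; both variables are
    treated as subject to measurement error."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    pts = np.column_stack([x - x.mean(), y - y.mean()])
    # The first right-singular vector is the direction of the fitted line.
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    dx, dy = vt[0]
    slope = dy / dx
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Hypothetical paired rut depths (meters): manual vs. DSM-derived.
manual = [0.012, 0.018, 0.025, 0.031, 0.040]
dsm    = [0.013, 0.017, 0.026, 0.030, 0.042]
slope, intercept = orthogonal_regression(manual, dsm)
print(f"slope = {slope:.3f}, intercept = {intercept * 1000:.2f} mm")
```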
CONCLUSIONS
REFERENCES
Bogus, S. M., Song, J., Waggerman, R., and Lenke, L. (2010). “Rank correlation
method for evaluating manual pavement distress data variability.” Infrastructure
Systems, 16(1), 66 – 72.
Carroll, R. J., and Ruppert, D. (1996). “The use and misuse of orthogonal regression in linear errors-in-variables models.” The American Statistician, 50(1), 1 – 6.
Grenzdörffer, G. J. (2014). “Crop height determination with UAS point clouds.” The
International Archives of the Photogrammetry, Remote Sensing and Spatial
Information Science, Volume XL-1, ISPRS Technical Commission I Symposium,
Denver, Colorado.
Haas, R., Hudson, W.R., and Zaniewski, J. (1994). Modern pavement management,
Krieger, Malamar, Fla.
Kim, B. O., Yun, K., and Lee, C. (2014). “The use of elevation adjusted ground
control points for aerial triangulation in coastal areas.” KSCE Journal of Civil
Engineering, 18(6), 1825 – 1830.
McGennis, R.B., Anderson, R.M., Kennedy, T.W., Solaimanian, M. (1994).
Background of superpave asphalt mixture design and analysis, Report No.
FHWA-SA-95-003, Federal Highway Administration, Washington, D.C.
Okuda, T., Suzuki, M., Adachi, N., Quah, E.S., Hussein, N. A., Manokaran, N. (2003).
“Effect of selective logging on canopy and stand structure and tree species
composition in a lowland dipterocarp forest in peninsular Malaysia.”
Forest Ecology and Management, 175, 297 – 320.
Paterson, W.D.O. (1987). Road deterioration and maintenance effects: models for
planning and management, The Johns Hopkins University Press, Baltimore, MD.
Qiu, S. (2013). Measurement of pavement permanent deformation based on 1mm 3D
pavement surface model (Doctoral dissertation). Oklahoma State University.
Staiger, D., and Stock, J. H. (1997). “Instrumental variables regression with weak
instruments.” Econometrica, 65(3), 557 – 586.
Turner, D., Lucieer, A., and Watson, C. (2012). “An automated technique for
generating georectified mosaics from ultra-high resolution unmanned aerial
vehicle (UAV) imagery, based on structure from motion (SfM) point clouds.”
Remote Sensing, 4, 1392 – 1410.
Vaitkus, A., Čygas, D., and Kleizienė, R. (2014). “Research of asphalt pavement rutting in Vilnius city streets.” Proceedings of The 9th International Conference “Environmental Engineering”, Vilnius, Lithuania.
Wang, Z. Z. (1990). Principle of photogrammetry (with remote sensing). Publishing House of Surveying and Mapping, Beijing, China.
Wang, Y. (2007). Digital simulation test of asphalt mixtures using finite element
method and X-ray tomography images (Doctoral dissertation). Virginia Tech.
Yuan, X., Fu, J., Sun, H., and Toth, C. (2009). “The application of GPS precise point
positioning technology in aerial triangulation.” ISPRS Journal of
Photogrammetry and Remote Sensing, 64, 541 – 550.
Zomrawi, N., Hussien, M. A., and Mohamed, H. (2013). “Accuracy evaluation of
digital aerial triangulation.” Intl J. of Eng. and Tech., 2(10), 7 – 11.
Xiaoning Ren1; Zhenhua Zhu2; Chantale Germain3; Bryan Dean4; and Zhi Chen5
1 Department of Building, Civil, and Environmental Engineering, Concordia University, Montreal, QC, Canada H3G 1M8. E-mail: [email protected]
2 Department of Building, Civil, and Environmental Engineering, Concordia University, Montreal, QC, Canada H3G 1M8. E-mail: [email protected]
3 Direction Ingénierie de Production Hydro-Québec, 855 Ste-Catherine est, Montréal, QC, Canada H2L 4P5. E-mail: [email protected]
4 Dir. Administration et Contrôle, Hydro-Québec Équipement, 855 Ste-Catherine est, 8e étage, Montréal, QC, Canada H2L 4P5. E-mail: [email protected]
5 Department of Building, Civil, and Environmental Engineering, Concordia University, Montreal, Canada H3G 1M8. E-mail: [email protected]
Abstract
High definition construction cameras have been increasingly placed on
construction sites to record jobsite activities into time-lapse videos. These videos
have been used in several research studies to promote construction automation related
to productivity analysis, site safety monitoring, etc. However, most videos tested in
these studies were collected at daytime. It is not clear whether the time-lapse site
videos collected at night are still useful, considering construction work might be
performed both day and night for various reasons. The main objective of this paper is
to investigate the effectiveness of recording jobsite activities with the time-lapse
videos collected by construction cameras at night or other low ambient illuminant
conditions through a case study. The construction site of the Romaine Complex
project in Quebec is selected as the test bed. The cameras have been placed on the site
to record the jobsite activities from day to night. All the collected site videos are
classified based on their ambient illuminant conditions. Then, the videos under low
ambient illuminations were tested with the object recognition techniques that have
been commonly used in existing construction research studies. The recognition results
indicated that the videos collected at night or other low ambient illuminant conditions
are still useful and could be considered as an important source for site data sensing
and analysis.
INTRODUCTION
In recent years, object recognition technology utilizing high-resolution construction cameras has been increasingly recognized as an effective and efficient way to retrieve construction site information. The cameras can record jobsite activities. This activity information, when retrieved, is useful for monitoring construction performance (Leung et al. 2008), analyzing construction productivity (Gong and Caldas 2010), and maintaining work zone safety control (Yang et al. 2010).
However, to the authors' knowledge, most site videos tested for the retrieval of construction site information in existing studies were collected at daytime with good luminance. In real projects, construction work might be performed both day and night in order to catch up with the project schedule and shorten the construction duration. Therefore, it is not clear whether time-lapse site videos collected at night or under other low-illuminant conditions are still useful for project information retrieval, such as the recognition of project-related entities (e.g. construction equipment) on the construction site.
The objective of this paper is to investigate the effectiveness of construction site videos collected under low ambient illuminant conditions for construction site information retrieval. To achieve this objective, a high-resolution construction camera was mounted to record the jobsite activities of the Romaine Complex project in Quebec. The Romaine Complex project is building a 1,550-MW hydroelectric complex comprising four hydropower generating stations on the Rivière Romaine. The collected videos are first classified in accordance with their specific illuminant conditions. Videos with low illuminance are extracted into images at one-minute intervals. Then, the images are converted to grayscale and transformed into equalized-histogram images. Finally, existing object recognition techniques can be used to recognize project-related entities on construction sites based on the equalized-histogram images. The recognition results indicate that site videos with low luminance are applicable for equipment recognition, and the findings of the case study have the potential to enrich the sources for site data analysis.
BACKGROUND
High definition cameras are increasingly adopted to monitor construction performance on-site, considering that the cameras can capture a wealth of construction site information for both decision-makers and construction professionals. For example, in construction performance monitoring, cameras are instrumental in monitoring real-time construction progress, allowing for early detection of problems (e.g. construction deviations) and better planning for following tasks (Bohn and Teizer 2010). Everett et al. (1998) proposed an innovative time-lapse video application to document and observe construction projects. Navon (2006) conducted research on automated measurement of project performance using time-lapse videos for monitoring and control.
In resource tracking and recognition, Brilakis and Soibelman (2005) proposed a novel method for open site tracking and site communication with construction
cameras based on machine vision. Brilakis et al. (2008) presented a novel method for
open site tracking with construction cameras based on machine vision. Chi and
Caldas (2011) proposed a method for automated object detection for the moving
equipment and personnel using video cameras for heavy-equipment-intensive
construction sites. The on-site camera system was employed to fulfill the three-
dimensional tracking of construction resources (Park et al. 2012).
In safety control, Teizer and Vela (2009) tested and demonstrated the feasibility of tracking personnel on construction sites using snapshots captured by both statically placed and dynamically moving cameras. Yang et al. (2010) developed a machine-learning-based multiple-worker tracking scheme using video frames for onsite safety concerns. Vision-based motion capture techniques were introduced to detect unsafe actions in site videos (Han et al. 2012). Thus, the adoption of high-resolution construction cameras is not only beneficial to project decision-makers, namely for project scale and scope control, work zone safety monitoring, etc., but also instrumental to the promotion of automation in civil engineering. However, based on the above review, most existing studies used data collected in the daytime with good illuminant conditions. Hence, determining the feasibility of equipment recognition using videos collected under low luminance is the key motivation of this study.
The common limitation of current studies is that the videos used for object recognition were captured in the daytime with good illuminant conditions. On the other hand, researchers in the computing field have investigated image processing techniques for night vision images for decades. Waxman et al. (1997) presented a night vision colorization technique for processing multispectral night vision imagery. Teo (2003) studied the effect of the Contrast Limited Adaptive Histogram Equalization (CLAHE) process on night vision and thermal images. However, few researchers have extended image processing techniques for night vision images to the field of engineering. Consequently, applying image processing techniques to night vision images is another motivation of this study.
excavator-back, because the position of the excavator boom substantially affects the
recognition result.
IMAGE PROCESSING FRAMEWORK
In order to achieve the goals mentioned above, a novel image processing framework is proposed. The framework includes three main steps. Any input color image with low illuminant conditions is pre-processed through grayscale conversion and histogram equalization. Then, existing object recognition techniques can be used to recognize project-related entities on construction sites. The framework is illustrated in Figure 1. The construction equipment can be recognized based on its HOG features.
Specifically, the first step in the proposed framework is to convert the video frames into 8-bit grayscale. A grayscale image, in computer vision, is often the result of measuring the intensity of light at each pixel, with each pixel a shade of gray, normally from 0 (black) to 255 (white). In this study, the image processing is mainly performed on the basis of grayscale images.
The second step is to transform the grayscale images so that their histogram is equalized. The histogram equalization method is instrumental for analyzing and improving the visual quality of images, increasing the contrast of images with backgrounds and foregrounds that are both bright or both dark. Through the adjustment, intensities are better distributed on the histogram, which allows local areas with lower contrast to obtain a higher contrast.
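A minimal sketch of these two pre-processing steps, assuming OpenCV is available, could look as follows; the file names are placeholders, and the paper does not specify its implementation details.

```python
import cv2

# Read a low-illumination video frame (path is a placeholder).
frame = cv2.imread("night_frame.jpg")

# Step 1: convert the color frame (BGR in OpenCV) to 8-bit grayscale.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Step 2: equalize the grayscale histogram so that intensities spread
# over the full 0-255 range, raising contrast in dark local areas.
equalized = cv2.equalizeHist(gray)

cv2.imwrite("night_frame_equalized.jpg", equalized)
```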
The last step is to implement the object recognition algorithm based on the equalized-histogram images. In order to improve the recognition accuracy, the training samples of the excavator are divided into two sets, excavator-front and excavator-back, because the position of the excavator boom substantially affects the detection result. Among the test images, 480 contain the excavator-front, 178 the excavator-back, and 71 the tipper truck.
CASE STUDY
Project Background
To secure the required data, the construction camera was mounted on the test bed, the Romaine Complex project funded by Hydro-Québec, which is building four hydropower generating stations on the Rivière Romaine. Since there are two work shifts on the construction jobsite from day to late night, a construction camera system featuring a 12-megapixel SLR digital camera was used to record the jobsite activities all day long. The core high-resolution camera provides clearer images with greater coverage. Image quality is determined by the number of pixels, and more pixels improve the sharpness of the image, which is
instrumental to the recognition results. As shown in Figure 2, (a) and (b) show the setting-up of the construction camera, and the image sample from the standardized viewpoint is displayed in (c).
Figure 2: (a), (b) camera setting-up; (c) image sample from the standardized viewpoint
Figure 3: (a) original RGB image; (b) grayscale image; (c) histogram equalization image; (d) tipper truck recognition result.
Figure 4: (a) original RGB image; (b) grayscale image; (c) histogram equalization image; (d) front-excavator recognition result.
Figure 5: (a) original RGB image; (b) grayscale image; (c) histogram equalization image; (d) back-excavator recognition result.
The preliminary test results indicated that construction site videos collected at night or under other low ambient illuminant conditions are still useful with appropriate image processing techniques. In addition, a comparison experiment was conducted to indicate the effectiveness of the videos. More precisely, the identical recognition algorithm was used to recognize the same types of equipment, tipper trucks and excavators, in the original RGB images. Taking the recognition result for the tipper truck as an example, the result for the tipper truck in the images processed by the proposed method is 14 out of 71, whereas the recognition result for the tipper truck in the RGB images is 3 out of 71. The results in the processed images are nearly 5 times better than those in the raw images.
Table 1 shows the detailed comparison results for each equipment type. 137 out of 480 equalized-histogram images containing the excavator-front were recognized, whereas the recognition results in the RGB images were 21 out of 480. Clearly, the proposed image processing framework is effective for improving construction equipment recognition results, which in turn is instrumental for the retrieval of construction site information under low illuminance conditions.
Table 1: Comparison Experiment Results

Equipment type      In RGB images    In equalized-histogram images
Tipper truck        3/71             14/71
Excavator-back      0/178            34/178
Excavator-front     21/480           137/480
ACKNOWLEDGEMENT
This paper is based in part upon work supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the author(s) and do not necessarily reflect the views of NSERC.
REFERENCES
Brilakis, I., Cordova, F., and Clark, P. (2008). “Automated 3D vision tracking
for project control support.” Proc., Joint US-European Workshop on Intelligent
Computing in Engineering, 487–496.
Brilakis, I., Park, M.-W., and Jog, G. (2011). “Automated vision tracking of
project related entities.” Adv. Eng. Inf., 25(4), 713–724
Bohn, J. and Teizer, J. (2010). “Benefits and Barriers of Construction Project
Monitoring Using High-Resolution Automated Cameras.” J. Constr. Eng. Manage.,
136(6), 632–640.
Chi, S. and Caldas, C. (2011). “Automated object identification using optical video cameras on construction sites.” Computer-Aided Civil and Infrastructure Engineering, 26(5): 368–380.
Everett, J., Halkali, H., and Schlaff, T. (1998). ”Time-Lapse Video
Applications for Construction Project Management.” J. Constr. Eng. Manage., 124(3),
204–209.
Han, S., Lee, S., and Peña-Mora, F. (2012) Vision-Based Motion Detection
for Safety Behavior Analysis in Construction. Construction Research Congress 2012:
pp. 1032-1041.
Gong, J. and Caldas, C.H. (2010). “Computer Vision-Based Video
Interpretation Model for Automated Productivity Analysis of Construction
Operations.” Journal of Computing in Civil Engineering, 24(3): 252-263.
Gonzalez, R.C., Woods, R.E., Eddins, S.L. (2004). “Digital Image Processing Using MATLAB”, Prentice Hall, Upper Saddle River, New Jersey.
Navon, R. (2006). “Research in automated measurement of project
performance indicators.” Autom. Constr., 16(2), 176–188.
Park, M.W., Makhmalbaf, A., and Brilakis, I. (2011). “Comparative study of
vision tracking methods for tracking of construction site resources.” Autom. Constr.,
20(7), 905–915.
Rezazadeh Azar, E. and McCabe, B. (2012). ”Automated Visual Recognition
of Dump Trucks in Construction Videos.” J. Comput. Civ. Eng., 26(6), 769–781.
Song, J., Haas, C., and Caldas, C. H. (2006). “Tracking the location of
materials on construction job sites.” J. Constr. Eng. Manage., 132(9), 911–918.
Teo, C.K. (2003). “Digital Enhancement of Night Vision and Thermal Imagery.” Master’s thesis, Naval Postgraduate School, Monterey.
Teizer, J., Lao, D., and Sofer, M. (2007). "Rapid automated monitoring of
construction site activities using ultra-wideband." Conf. Proc. of the 24th Int.
Symposium on Automation and Robotics in Construction, IAARC.
Yang, J., Arif, O., Vela, P. A., Teizer, J., and Shi, Z. (2010). “Tracking
multiple workers on construction sites using video cameras.” Adv. Eng. Inform.,
24(4), 428–434.
Waxman, A.M., Gove, A.N., Fay, D.A., Racamato, J.P., Carrick, J.E., Seibert, M.C., and Savoye, E.D. (1997). “Color night vision: Opponent processing in the fusion of visible and IR imagery.” Neural Networks, 10(1), 1–6.
Abstract
INTRODUCTION
Thus, this research aimed at the development of two data fusion platforms with the capability not only to integrate power consumption data with BIM-related data, but also to address residents' privacy concerns regarding their electricity usage patterns. Research work includes: (1) development of a tool to automatically
transform a Revit file into Unity3D so that residents can see when and where the
power is consumed and potential energy saving suggestions in a more interactive
way; (2) development of a local platform, called local Real-Time Replay Platform
(RTRP), to call a BIM-based energy analysis tool, Ecotect, to create the energy use
baselines for residents for comparison; (3) development of a central platform, called
central RTRP, to receive the aggregated power consumption data from several local
RTRPs of different households so as to compile the neighborhood energy use
baselines for all residents to compare with. This paper presents the preliminary
research results, which cover the first work item, and discusses the future work plan
and challenges. Section 2 reviews related work in the literature. Section 3 describes
the implementation of RTRP. Section 4 discusses RTRP’s limitations, and Section 5
presents conclusions and future work.
RELATED WORK
Methods for the collection and analysis of power consumption data for residential energy saving have been proposed and examined in the literature. Mattern et al. (2010) classified these methods into two categories: (1) the Single Sensor Approach, where a sensor is placed in a circuit to measure its electricity usage, or in the main switch to measure the entire power demand of a household; advanced methods such as non-intrusive load monitoring (NILM) can then be used to disaggregate the total consumption data so as to provide more specific information about electricity consumption at the device level (Hart 1992), and Kolter and Johnson (2011) provided a public data set for this type of energy disaggregation research; and (2) the Multiple Sensor Approach, where a sensor is installed for every device. The latter is mostly more expensive, but in the form of smart power outlets such as the Belkin WeMo Insight Switch it can simplify the deployment work, and it was therefore utilized in this research. In sum, collection of real-time power consumption data is now feasible, but the second type of approach has difficulty in aggregating each device's data, while the first type has challenges pertaining to disaggregating data.
Behavioral interventions for residential energy saving have been investigated
in the literature as well. Allcott and Mullainathan (2010) reported the experiment
results conducted by Opower to prove the concept. However, other researchers such
as Alahmad et al. (2012) reported a statistically insignificant reduction in mean
electrical consumption in houses when compared to a randomly selected control
sample by using three different sensors with feedback functionality. Indeed, human behavior is very complex, and Sorrell et al. (2009) argued that occupants might change their energy usage characteristics by adopting bad consumption habits, the so-called rebound effect. In sum, the information presented to residents for energy saving may need careful design and comprehensive testing before actual deployment.
RESEARCH METHODOLOGY
The proposed RTRP utilizes Revit, the Unity3D engine, and many Wi-Fi
smart power outlets (e.g. D-Link Wireless Smart Plug or WeMo) to create a local
platform that is capable of replaying any series of events recorded by the sensors in a
building. Different from using surveillance cameras for infrastructure monitoring,
applying RTRP not only can provide richer, digitalized details for analysis, but also
can avoid privacy concerns should there be confidential areas within a building. As
shown in Figure 1, the following paragraphs describe the main components of RTRP.
Database: The sensor database handles problems such as the huge volume of sensor data. Three tables were created for sensor data: Sensor Attributes, Sensor Records, and Material Attributes. Sensor Attributes stores the attributes of sensors, such as the identification number and the highest wattage that a sensor can undertake. Sensor Records stores all the data sent back by a sensor at any time, such as timestamp, current position, wattage, and temperature. Material Attributes lists the behaviors of each infrastructure material type, such as the material type's burning, explosion, or breaking temperature. The open source database PostgreSQL with PostGIS was utilized for storing the sensor data.
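As an illustration, a minimal version of this storage layer could be set up from Python with psycopg2 as sketched below; the connection parameters and the table and column definitions are assumptions based on the descriptions above, not the exact RTRP schema.

```python
import psycopg2

# Connection parameters are placeholders.
conn = psycopg2.connect(dbname="rtrp", user="rtrp",
                        password="secret", host="localhost")
cur = conn.cursor()

# Minimal versions of the three tables described above; the column
# names are illustrative, not the schema actually used by RTRP.
cur.execute("""
    CREATE TABLE IF NOT EXISTS sensor_attributes (
        sensor_id    TEXT PRIMARY KEY,
        max_wattage  REAL
    );
    CREATE TABLE IF NOT EXISTS sensor_records (
        sensor_id    TEXT REFERENCES sensor_attributes(sensor_id),
        recorded_at  TIMESTAMP,
        wattage      REAL,
        temperature  REAL
    );
    CREATE TABLE IF NOT EXISTS material_attributes (
        material        TEXT PRIMARY KEY,
        burn_temp_c     REAL,
        explode_temp_c  REAL,
        break_temp_c    REAL
    );
""")

# Store one reading sent back by a smart power outlet.
cur.execute(
    "INSERT INTO sensor_records (sensor_id, recorded_at, wattage, temperature) "
    "VALUES (%s, now(), %s, %s)",
    ("outlet-07", 64.2, 23.5),
)
conn.commit()
cur.close()
conn.close()
```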
Web-based Management Interface: A user-friendly, Web-based Management Interface was created to enable users' interactions with the sensor database. With the interface, users do not need to log onto the database directly, which is not a straightforward task for those without database management experience. This also ensures database security, as users cannot manipulate the data tables directly.
In addition, the Web-based Management Interface was coded in PHP (Hypertext Preprocessor), an intuitive web programming language that supports most web functions and asynchronous JavaScript and XML (AJAX), not only avoiding full page reloads upon requests but also reducing page processing time. The compatibility of PHP with SQL is well established, and most database systems, such as Microsoft SQL Server, MySQL, and PostgreSQL, are compatible with PHP. To ensure database security and sensor data privacy, users of the Web-based Management Interface are required to log in.
BIM Material Standardization: The objects in Revit, such as rebar, walls, columns, and furniture, can all be customized and modified. From a designer's point of view, in addition to ensuring the consistency of object parameters, a designer can edit these objects more efficiently in Revit. However, the materials in Revit are
burn; non-flammable materials such as glass will break; electronic devices will explode. To simulate the status change of an object, a physical particle effect is applied for material burning and explosion, and a mesh cutting algorithm with random forces is applied to split an object for material breakage.
security, only authorized users are allowed to log onto the system. RTRP has a logon interface, and after logon users can see the function menu on the left, which includes Materials Management (material parameter setup), Sensor Management (sensor parameter setup), and Time Table Management (sensor data management). Materials Management is for defining all material parameters, which include a material's burning, explosion, and breakage temperatures. Sensor Management sets up the type and the tripped watts of each sensor inside the infrastructure. Time Table Management deals with the real-time sensor data, such as the relative position within the infrastructure, the temperature, or the humidity the sensor collected.
DISCUSSION
reflect the ultimate goal of achieving residential energy saving. First, the simulation of material burning and related smoke creation is critical to improving the realism aspect, as they capture the airflow near the surrounding objects. Such simulation requires a lot of computational time and cannot reflect the real-time infrastructure condition. For this reason, only the physics particle system was utilized to simulate material burning. Second, the availability of sensors that can withstand high temperature and still transmit data is limited. Currently the range of temperature sensors is from below -200°C to well over 2000°C. Considering that the highest temperature of a fire scene is typically around 1250°C, the only challenge is how to combine heat-resistant sensors with wireless data transmission capability. Third, setting up sensors inside an existing building might not always be feasible, but constructing a sensory building is very much doable nowadays. Finally, data yielded by the sensors might include some error, and the data accuracy is yet to be improved as the technology advances.
CONCLUSIONS
ACKNOWLEDGMENTS
The authors would like to thank the Ministry of Science and Technology in
Taiwan for financial support under grant numbers: MOST104-3113-E-008-002 &
MOST103-2221-E-008-054-MY3, as well as Prof. Ken-Yu Lin from University of
Washington at Seattle for her invaluable suggestions.
REFERENCES
Alahmad, M.A., Wheeler, P.G., Schwer, A., Eiden, J. and Brumbaugh, A. (2012). “A
Comparative Study of Three Feedback Devices for Residential Real-Time
Energy Monitoring.” IEEE Transactions on Industrial Electronics, 59(4),
2002-2013.
Allcott, H. and Mullainathan, S. (2010). “Behavior and Energy Policy.” Science, 327,
1204-1205.
Azar, E. and Menassa, C.C. (2012). “Agent-Based Modeling of Occupants and Their
Impact on Energy Use in Commercial Buildings.” Journal of Computing in
Civil Engineering, 26(4), 506-518.
Davito, B., Tai, H., and Uhlaner, R. (2010). “The smart grid and the promise of demand-side management.” McKinsey on Smart Grid, Summer 2010, <https://www.smartgrid.gov/document/smart_grid_and_promise_demand_side_management> (Dec. 1, 2014).
Hart, G.W. (1992). “Nonintrusive appliance load monitoring.” Proceedings of the
IEEE, 80(12), 1870.
Khan, A. and Hornbæk, K. (2011). “Big Data from the Built Environment.” Large’11,
ACM, Beijing, China.
Kolter, J.Z. and Johnson, M.J. (2011). “REDD: A Public Data Set for Energy
Disaggregation Research.” SustKDD 2011, ACM, San Diego, California,
USA.
Kumar, S., Hedrick, M., Wiacek, C., and Messner, J.I. (2011). “Developing an
experienced-based design review application for healthcare facilities using a
3d game engine.” Journal of Information Technology in Construction, 16, 85-
104.
Laskey, A. and Kavazovic, O. (2010). “Opower: Energy efficiency through
behavioral science and technology.” XRDS, 17(4), 47-51.
Mattern, F., Staake, T., and Weiss, M. (2010). “ICT for Green – How Computers Can
Help Us to Conserve Energy.” e-Energy’10, ACM, Passau, Germany.
Rüppel, U. and Schatz, K. (2011). “Designing a BIM-based serious game for fire
safety evacuation simulations.” Advanced Engineering Informatics, 25(4),
600-611.
Sorrell, S., Dimitropoulos, J., and Sommerville, M. (2009). “Empirical estimates of
the direct rebound effect: A review.” Energy Policy, 37(4), 1356–1371.
Taherian, S., Pias, M., Coulouris, G., and Crowcroft, J. (2010). “Profiling Energy Use
in Households and Office Spaces.” e-Energy’10, ACM, Passau, Germany.
U.S. Department of Energy (DOE). (2014). Energy Saver: Tips on Saving Money &
Energy at Home, U.S. Department of Energy.
Abstract
INTRODUCTION
WAVELET TRANSFORM FOR PAVEMENT DISTRESS DETECTION
There exist several forms of the wavelet transform. The most common are the Daubechies (Daubechies, 1990) and the Haar wavelet (Haar, 1910). The Haar wavelet is chosen in this work for pavement distress detection because it is highly suitable for parallel implementation. Invented by Alfred Haar, the Haar wavelet is performed by calculating the average difference and the average sum of a pair of input values. The sum is stored in the approximation subband, while the difference is stored in the detail subband. In the case of two-dimensional data, such as images, the procedure is first applied to the rows of the image and after that to the columns of the image, or vice-versa. The resulting procedure for one level of decomposition is shown in Figure 3.
Figure 1: Pavement crack image. Figure 2: 3-level wavelet transform of a crack image.
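A single decomposition level of this procedure can be sketched in a few lines of NumPy, as shown below; this is an illustrative CPU implementation of the row-then-column averaging described above, not the paper's parallel OpenCL kernel.

```python
import numpy as np

def haar_level(img):
    """One level of the 2-D Haar transform: average sums go to the
    approximation subband, average differences to the detail subbands.
    Applied first along the rows, then along the columns, so each
    2x2 quadruple of pixels produces one coefficient per subband."""
    img = img.astype(np.float32)
    # Rows: pairwise average sum (low) and average difference (high).
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Columns: repeat the pairing on both halves.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0   # approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0   # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0   # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

# Tiny smoke test on a random "image" with even dimensions.
ll, lh, hl, hh = haar_level(np.random.rand(256, 256))
print(ll.shape, hh.shape)   # (128, 128) (128, 128)
```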
correctly. However, STD incorrectly isolated 2.6% of the images that do not actually contain distress as distress images, while HAWCP did not isolate any image wrongly. Hence, HAWCP is used in the work presented in this paper.
methodology developed for multi-GPU processing for the case of two devices. Frame-level load balancing is applied by distributing the images between the two devices. First, two images are loaded sequentially into arrays by the CPU. After the pixel values are transferred to the GPU devices, the kernels are executed on both devices in parallel for all quadruples of pixels. The schema can be extended to more than two devices by adding more parallel kernels.
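As a rough CPU-side analog of this schema, the sketch below distributes frames between two worker processes using Python's multiprocessing; the actual implementation runs OpenCL kernels on two GPU devices, and the simplified per-frame function here only stands in for those kernels.

```python
from multiprocessing import Pool

import numpy as np

def haar_approx(img):
    """One Haar approximation subband (average sums over 2x2 blocks);
    a stand-in for the full per-frame wavelet kernel."""
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

if __name__ == "__main__":
    frames = [np.random.rand(1024, 1024) for _ in range(8)]
    # Two worker processes mirror the two devices; the frames are
    # distributed between them, one frame per task.
    with Pool(processes=2) as pool:
        results = pool.map(haar_approx, frames, chunksize=1)
    print(len(results), "frames processed")
```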
PERFORMANCE TESTS
Such wavelet transform execution time allows for real-time processing of videos with
a frame rate of up to 170 frames per second.
The speed-up of the OpenCL implementation compared to the sequential C++ implementation is shown below.
[Figure: Execution time in milliseconds of the sequential C++, OpenCL CPU, and Nvidia GPU implementations for image sizes from 256x256 to 2048x2048]
[Figure: Speed-up over sequential C++ (sequential C++ time divided by Intel GPU, Nvidia GPU, and multi-GPU time, respectively) for image sizes from 256x256 to 2048x2048]
REFERENCES
Daubechies, I. (1990). The Wavelet Transform, Time-Frequency Localization and Signal Analysis.
IEEE Transactions on Information Theory, vol. 36, no. 5, pp. 961 – 1005
Levitz, D. (2014). Across the U.S., the Worst Pothole Season in Recent Memory. Online available at: http://www.citylab.com/work/2014/03/across-us-worst-pothole-season-recent-memory/8735/ Accessed 03.12.2014
Haar, A. (1910). Zur Theorie der Orthogonalen Funktionensysteme. Mathematische Annalen, vol. 69, pp. 331 - 371
Intel Corporation (2013). Intel® SDK for OpenCL* Applications 2013 R2 Optimization Guide.
Document Number: 326542-003US
Khronos OpenCL Working Group (2013). The OpenCL Specification, Version: 2.0. Document
Revision 19
National Cooperative Highway Research Program (NCHRP) (2004). Automated Pavement Distress Collection Techniques: A Synthesis of Highway Practice. NCHRP Synthesis 334, Transportation Research Board, Washington, USA
Deutscher Städte- und Gemeindebund (DStGB) (2014). PKW-Maut für alle Straßen richtiger Ansatz – Beteiligung der Kommunen an den Einnahmen unverzichtbar. Online available at: http://www.dstgb.de/dstgb/Home/Pressemeldungen Accessed 03.12.2014
Sharma, B. and Vydyanathan, N. (2010). Parallel Discrete Wavelet Transform using the Open
Computing Language: a performance and portability study. 2010 IEEE Int. Symp. Parallel and
Distributed Processing, Workshops and Ph.D. Forum (IPDPSW), pp.1 - 8
Stürmer, M. et al. (2012). Fast wavelet transform utilizing a multicore-aware framework. PARA'10
Proceedings of the 10th international conference on Applied Parallel and Scientific Computing, vol. 2,
pp. 313-323, Springer-Verlag Berlin, Heidelberg
Catanzaro, B. (2010). OpenCL™ Optimization Case Study: Simple Reductions. Online available at: http://developer.amd.com/resources/documentation-articles/articles-whitepapers/opencl-optimization-case-study-simple-reductions/ Accessed 03.12.2014
Zhou, J. et al. (2006). Wavelet-based pavement distress detection and evaluation. Optical Engineering,
vol. 45, no. 2
Abstract
Learning curve analysis has been used for decades in the construction industry to study the effect of experience on productivity, especially in repetitive jobs. Accurate and reliable estimates of time and cost are among the benefits of performing a learning curve analysis. While learning curves exist at all levels of a project, detailed analysis of workers' learning curves for short-term activities (that span only a couple of minutes) requires meticulous manual observation. This demand for manual effort overshadows the economic benefits of such analyses. Also, manual observations are time-consuming, subjective, and prone to human error. This research outlines how data for automating such learning curve analyses can be gathered from a construction site. For this purpose, real-time location data using Global Positioning System (GPS) technology is collected from the workers as they perform their regular activities. Data acquired from GPS technology is used with occupancy grid analysis to calculate the amount of time spent by workers in specific areas, which is used to demonstrate the spatio-temporal analysis of learning curves. A linear construction activity is presented as a case study. Results include automatic generation and visualization of learning curves for workers. The proposed method enables minute analysis of learning curves at the activity level, which can be directly associated with the project level by following the work breakdown structure. The method can be used in the construction field for improving estimation, scheduling, and training by project managers.
INTRODUCTION
One of the important aspects of the construction management field is estimation. Estimation can be for time, budget, or labor effort. It has been observed that resources used for repetitive construction activities exhibit a learning behavior, and the time taken to perform the same activity decreases over time. The reasons have been attributed to increased worker familiarization, improved equipment and crew coordination, improved job organization, better engineering support, better day-to-day management and supervision, development of more efficient techniques and methods, development of a more efficient material supply system, and stabilized design leading to fewer modifications and rework (Thomas
BACKGROUND
Log linear, the plateau, the Stanford-B, the DeJong, and the S-model are five widely known types of learning curves (Yelle, 1979). Among these, the log linear curve can be divided into three parts: the first part shows little improvement in productivity from previous work experience, the second part shows a sharp drop in the time taken with the help of gained experience, and the final part is a horizontal line indicating that no additional improvement is observed (Couto & Teixeira, 2005). The standard form of this learning curve is calculated as

y = a·x^b

where y is the number of direct labor hours required for the xth unit of production, a is the number of labor hours needed for the first unit of production, x is the cumulative number of units produced, and b is the learning rate (Schilling et al., 2003). In practice the value of the learning rate varies; it can even be negative, affecting productivity negatively (Chen and Taylor, 2014). Techniques like improved Line-of-Balance are also used in some cases for attaining goals under strict deadlines (Zhang et al., 2014).
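Fitting this model to observed cycle times reduces to a linear fit in log space, since log y = log a + b log x. The short sketch below illustrates the fit; the cycle-time values are hypothetical.

```python
import numpy as np

# Hypothetical labor hours for successive repetitions of an activity.
units = np.arange(1, 9)                                        # x: unit number
hours = np.array([5.0, 4.2, 3.8, 3.6, 3.4, 3.3, 3.2, 3.15])    # y: hours

# y = a * x**b is linear in log space: log y = log a + b * log x.
b, log_a = np.polyfit(np.log(units), np.log(hours), 1)
a = np.exp(log_a)
print(f"a = {a:.2f} hours for the first unit, learning rate b = {b:.3f}")

# Predicted time for the 20th repetition.
print(f"predicted y(20) = {a * 20 ** b:.2f} hours")
```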
The main concern in all kinds of learning curves’ formation is collection
and implementation of data pertaining to the curve. Information associated with
activities should be collected accurately, and the forecasted results should be validated against real-world cases. Current methods of data collection for learning curves rely on manually recorded observations and are prone to error. The manual method also consumes enormous time and resources. To avoid this, Ultra-Wideband (UWB) technology has been used to collect real-time location data from construction resources (workers, equipment, and material) (Cheng et al., 2011). Additionally, Radio-Frequency Identification (RFID) technology has been used to track material transportation and work crews and to control labor productivity, a major driver of excessive cost resulting from time waste. Data
required for site monitoring and field mobility analyses were gathered using
passive RFID tags installed on construction resources (Costin et al, 2012). Global
Positioning System (GPS) (Pradhananga and Teizer 2012) and vision-based
techniques (Park et al., 2011) are other technologies tested and proven in
construction environment for resource tracking.
facility had indoor and outdoor practice facilities amounting to over 80,000 square feet. During the data collection, ten workers tagged with GPS tags were actively involved in installing metal stripes on the roof of the facility. The workers had to stop a couple of times a day to haul material. Other than that, the workers spread in a linear pattern from east to west along a stripe on the roof and, on successful installation of the stripes, the entire crew moved together from north to south. Figure 1 shows the distribution of workers and the direction of their progress. One GPS tag was placed on each worker, and data was recorded at a rate of 1 Hz for their entire working shift, which lasted from 7:00 AM in the morning to 6:00 PM in the evening.
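The occupancy grid analysis mentioned earlier can be illustrated with a short sketch: with fixes arriving at 1 Hz, each fix contributes one second to the grid cell containing it. The function, grid cell size, and coordinate values below are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def occupancy_seconds(xy, cell=1.0):
    """Accumulate time spent per grid cell from 1 Hz GPS fixes.

    xy: (N, 2) array of planar coordinates in meters (assumed nonnegative),
    one fix per second, so each fix adds one second to its cell.
    """
    xy = np.asarray(xy, float)
    cols = np.floor(xy[:, 0] / cell).astype(int)
    rows = np.floor(xy[:, 1] / cell).astype(int)
    grid = np.zeros((rows.max() + 1, cols.max() + 1))
    np.add.at(grid, (rows, cols), 1.0)   # seconds accumulated per cell
    return grid

# Hypothetical track: a worker lingering near (2.5 m, 1.2 m).
track = np.array([[2.5, 1.2], [2.6, 1.3], [2.4, 1.1], [3.6, 1.2]])
print(occupancy_seconds(track, cell=1.0))
```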
PRELIMINARY RESULTS
Figure 4 shows the learning curve for a worker. Three spikes in time were observed while the workers hauled material to the roof. Four distinct observations were made from the figure. Each observation is discussed through a trend line in Figure 4. The nature and properties of the trend lines are not discussed in this paper, but the slopes of the lines are of interest. Figure 4 represents the result from a small sample set with a limited number of data points. The aim of this study is an exploratory analysis of the potential for generating learning behavior. Extensive data collection and rigorous analysis will be needed to validate the concept statistically. The preliminary results only investigate the positivity and negativity of the trend and not the validity of the trend itself.
workers took breaks. So, although a negative trend is seen, it is not certain that it is because of the learning effect.
Trend lines 3 and 4: Figure 4 shows the normal working pattern of the workers. The normal pattern was disturbed by material hauling. Trend lines 3 and 4 show the trends while the workers performed their regularly scheduled tasks. Both trend lines have negative slopes, indicating learning behaviors. It is also interesting to observe that the frequency at the end of trend line 3 is higher than at the start of trend line 4. This reinforces the concept of the learning and forgetting curve. It can be assumed that the harmony of the worker did not continue after a distraction due to material hauling. As a result, progress was retarded at the start of the next regular session. The worker, however, seems to pick up speed and overcome the effect of the disturbance quickly.
CONCLUSION
This paper presents a real case study of roof construction examined to show the learning effect in repetitive jobs. The jobsite was divided into grids, and the total amount of time that each worker spent inside the grid cells was determined. Since the roof construction job comprised two kinds of work, hauling material to the roof and installing material, the response obtained for one worker was investigated in multiple parts. This paper shows the learning effect in short-term activities, in which improved equipment and crew coordination and worker familiarization bring about a decrease in time consumption. Further studies are needed for predictive estimation of the time and cost needs of future activities based on real case data. Owing to learning curve studies, time, budget, and manpower resources can be scheduled for short-term activities. The method can be used by estimators and managers to improve resource allocation and attain an accurate understanding of productivity.
ACKNOWLEDGEMENT
The authors would like to acknowledge the support of Barton-Malow
Company during data collection. Any opinions, findings, or conclusions included
in this paper are those of the authors and do not necessarily reflect the views of
Barton-Malow.
REFERENCES
Anzai, Y. and Simon, H. A. (1979). The theory of learning by doing. Psychological Review, 86(2), 124-140.
Chen, J. and Taylor, J. E. (2014). Modeling Retention Loss in Project Networks: Untangling the Combined Effect of Skill Decay and Turnover on Learning. Management in Engineering, 1-10.
Cheng, T., Venugopal, M., Teizer, J. and Vela, P. A. (2011). Performance evaluation of ultra wideband technology for construction resource location
Abstract
INTRODUCTION
Buildings account for more than 40% of CO2 emissions in the United States
(U.S.EIA 2014) and recent research has indicated that CO2 emissions of commercial
buildings are expected to increase faster than all other types of buildings (except
industrial buildings) at an annual rate of 0.8% (U.S.EIA 2014). Several strategies at
both the utility and building-scale are being developed to improve energy efficiency
and ultimately reduce the rate of energy-use associated with buildings. One approach
that has seen broad adoption is the ability to benchmark building energy-use
performance relative to a large set of other similar buildings. Through this approach,
public tools like the EPA Portfolio Manager (EnergyStar 2014) enable buildings to
identify potential opportunities and set goals for energy efficiency improvements.
However, even as energy-use data standards become increasingly granular on a
temporal scale (Green Button Data, 2014), they are usually limited to whole-building
spatial granularity. Therefore, energy efficiency improvement identification
BACKGROUND
METHODOLOGY
In the local energy-use classification, each occupant's energy-use data is used independently to create an energy-use codebook. We cluster a wide range of potential codebook sizes for each individual to assure that we have captured every possible behavior combination. Assuming that we have L = 78 occupants and each individual has a potential of having between Klocal = {2,3,…,50} different behaviors, we generated L × 50 energy-use profiles using the K-means clustering algorithm. Therefore, each individual has 50 energy-use profiles Klocal = {2,3,…,50} generated from their energy consumption data. The K-means clustering input is structured as a Q × P matrix where Q is equal to the length of the study (i.e. 87 days) and P is equal to 25 data points (i.e. 24 hours) per day. The results of each clustering report the error, the cluster centers, and the daily energy-use pattern-cluster center associations.
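A sketch of this per-occupant clustering, assuming scikit-learn's KMeans and random data standing in for the smart-meter readings, could look as follows; the 87 × 24 profile matrix is a simplification of the input structure described above.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical stand-in for one occupant's data: 87 days of daily profiles.
daily_profiles = rng.random((87, 24))

# Build the occupant's local codebooks for candidate sizes K = 2..50;
# inertia (within-cluster error) supports comparing codebook sizes.
codebooks = {}
for k in range(2, 51):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(daily_profiles)
    codebooks[k] = {
        "centers": km.cluster_centers_,   # the K energy-use patterns
        "labels": km.labels_,             # day-to-pattern association
        "error": km.inertia_,
    }

print(codebooks[12]["centers"].shape)     # (12, 24)
```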
Previous research has implemented a global approach to classify the users' energy consumption patterns. The main reason behind global clustering is to provide
The results obtained from the clustering algorithms enable the analysis of occupants' energy-use codebook sizes and their associated errors. We conducted a side-by-side comparison between independently generated local codebooks (i.e. locally generated codebooks) and the local codebooks extracted from the global codebooks to investigate the effect of various codebook sizes on the coefficient of variance calculated with respect to the ground truth data (i.e. energy data captured by smart meters). Figure 3 depicts two boxplots to visualize the trend of errors for both algorithms. The error associated with locally generated codebooks is at its minimum when the number of patterns equals 12. However, the variance of the error decreases by increasing the size of the codebooks. One implication of this is that the larger the size of the codebook, the higher the consistency of occupants' energy estimation. On the other hand, globally generated codebooks also show a decreasing trend in error with increasing codebook size, and the overall error of clustering is relatively higher than that of independent local codebooks, with a higher standard deviation. However, the variance of the error does not decrease significantly by increasing the size of global codebooks.
Figure 3. Local codebook error (i.e. coefficient of variance (CV)) versus the size of the codebook: (a) independently generated codebooks; (b) local codebooks extracted from the global dictionary
To analyze the size of globally extracted local codebooks with respect to their global codebook size, we plotted a color-coded surface graph to visualize the trend. Figure 4 shows a cone-shaped graph with a higher density of same-size globally generated codebooks when the size of the global codebook is low. At larger global codebook sizes, a wider range of local codebooks is uniformly covered. Furthermore, the graph shows a non-linear trend where the size of globally extracted local codebooks is asymptotic to 35 as the global codebook size goes up to 100. This trend has important implications regarding global codebook saturation, where more global cluster centers will not improve the accuracy of the behavior modeling and instead decrease the unity and robustness of the global codebook. Furthermore, the trend can provide information regarding the predictability of occupants whose
local codebook size stays low even when the global codebook size increases
substantially.
In order to further investigate the trend of error variation in both clustering algorithms, we compared the independently generated local codebooks to the same-size globally generated local codebooks for each person and fitted a normal probability distribution function to measure the variation of the errors for independent local codebooks, globally extracted local codebooks, and the difference between the two. Figure 5 illustrates the CV value trends for each local codebook. When comparing Figure 5 (a) and (b), it is noticeable that the range of globally made local codebooks is higher than that of independent local codebooks. Also, the means of the distributions in (b) are higher than in (a). When looking at the longitudinal distribution trends in (a) and (b), both variances decrease as the size of the codebooks increases; however, the local codebooks' probability density functions demonstrate a higher density and a lower CV mean, which makes them more desirable compared to the globally extracted local codebooks. Furthermore, the results in Figure 5 (c) imply that the variance of differences between (a) and (b) will increase as the global codebook expands, suggesting reduced reliability in occupants' energy-use behavior modeling.
Figure 4. Number of occupants with the same globally extracted local codebook size at each specific global dictionary size
algorithm where independent local codebooks could be compared among each other
without significantly increasing error. Also, there is room for further progress in
determining the effect of data resolution on local and global energy-use clustering.
CONCLUSION
In this paper, we build on previous work (Gulbinas et al. 2015) and compare
the relative effectiveness of various approaches to simplified representations, or
codebooks, of building occupant energy-use behavior. Specifically, we compared the
degree of error associated with the application of a codebook that is constructed from
an aggregated pool of building occupants versus one that is an aggregation of
codebooks constructed for each individual occupant. We introduced the
methodologies behind the construction of the codebooks and their comparative
analysis. Due to high intra-class variability of occupants’ energy-use behavior, the
study showed that generating local codebooks using individuals' energy data is a more effective approach to energy-use profiling and facilitates the development of automated energy efficiency programs in buildings. There is great potential in targeted energy efficiency programs using occupants' energy-use behavior models, where each occupant receives the relevant information with respect to the observed
inefficiencies in their behavior. Thus, the optimization of energy-use profiles is an
important step to enable the underlying algorithms of such EE programs to perform
reliably.
ACKNOWLEDGMENTS
REFERENCES
Albert, A., et al. (2011). Segmenting consumers using smart meter data. Proceedings
of the Third ACM Workshop on Embedded Sensing Systems for Energy-Efficiency
in Buildings, ACM.
Kwac, J., et al. (2014). "Household energy consumption segmentation using hourly
data." Smart Grid, IEEE Transactions on 5(1): 420-430.
Mutanen, A., et al. (2011). "Customer classification and load profiling method for
distribution systems." Power Delivery, IEEE Transactions on 26(3): 1755-1763.
Smith, E. M., et al. (2004). System and method for energy management, Google Patents.
Abstract
INTRODUCTION
RESEARCH METHOD
ease of availability, open source and/or low cost. The developed system was successfully tested on a construction site and evaluated through focus groups with industry experts. This work focuses on the challenges that are encountered in order to extend the existing prototype into a full-scale system with multiple sensors and consequent issues related to data management.
COSMOS OVERVIEW
CoSMoS was designed with four basic elements to achieve the goal of visualizing real-time sensor data of confined spaces (Riaz et al., 2014). These elements were: BIM software; BIM database; sensing motes; and Application Programming Interfaces (API) with a Revit Add-In. The prototype system focused on the following:
monitoring temperature and oxygen levels using wireless sensing motes;
aggregating the sensing mote values to a centralized WSN gateway mote;
saving the aggregated sensor values to an external database;
extracting database sensor values for visualizing a BIM model with color codes;
generating notifications if sensor values exceed the defined threshold limits; and
an Android based mobile application for H&S managers for remote monitoring.
COSMOS DATA MANAGEMENT
Data Storage
Data Visualization
CoSMoS is invoked as a Revit Add-In from the Revit Architecture GUI, as shown in Figure 3. A self-updating GUI is displayed with sensor values of temperature and oxygen. BIM plays a significant role in this process since the sensor data of the confined space is populated with relevant parametric data from the BIM software.
Figure 3: Invoked external application from Revit GUI
Establishing a data link that uniquely ties sensed data to a specific room is all that is required to pair collected data to the space that it describes. In the case of CoSMoS, this data link is accomplished by assigning the room element's automatically created UniqueId from Autodesk Revit to each associated sensor. If the sensor values cross over the defined temperature/oxygen level thresholds, the system will highlight the location on the BIM model with red color and will generate sound alarms and smartphone based notifications so that timely actions can be taken to avoid hazardous situations.
CHALLENGES
There exist many sensor data related challenges when integrating BIM with sensor technology for safety management. Some of these are discussed below:
Once large volumes of sensor data are collected from multiple motes, managing and retrieving relevant data from a database efficiently becomes a challenge. The developed prototype system uses SQL Server, a traditional data store based on the relational database management system (RDBMS) model. Recent research highlights that the sharp growth of data makes traditional RDBMSs inadequate for handling the huge volume and heterogeneity of sensor data. However, there is no consensus in the literature on which type of database performs best for real-time sensor applications. For example, to achieve reliability and availability of data, distributed file systems (Vasenev, 2014) and NoSQL databases (Ramesh, 2014) are the suggested choices.
A database survey shows that database performance results vary for scan and insert operations on sensor values. Veen et al. (2012) compared the performance of PostgreSQL (an SQL database) with Cassandra and MongoDB (both NoSQL databases). Their results showed that Cassandra was the best choice for large critical sensor applications, MongoDB performed best for small- to medium-sized non-critical sensor applications where insert performance is significant, and PostgreSQL performed best for read-intensive applications. Hence, for CoSMoS, which is more insert intensive, Cassandra and MongoDB (NoSQL) require further investigation. Furthermore, Pungila et al. (2009) measured sensor data management performance of SQL databases against Hypertable (a distributed data storage system), IBM Informix (Time-Series), and Oracle Berkeley DB. It was shown that all three performed better than SQL databases, with IBM Informix (Time-Series) showing lower insert performance but overall better performance than Hypertable. This work also showed that the choice of database tool depends on the system architecture and its ability to: communicate at very high data exchange rates; insert at high data rates; and hold extremely large amounts of data. Presently, experiments are being designed for CoSMoS to compare the performance of various database tools (SQL Server, Cassandra, MongoDB, IBM Informix, Hypertable). Performance measures include: data exchange rates for read and write cycles; insert performance, including single insert versus bulk insert; and physical versus virtual performance, due to the nature of the CoSMoS data framework. Virtualization has opened a new dimension in sensor data management, and provisioning of virtualized cloud based storage can significantly decrease physical infrastructure while improving the logical presentation of sensor data. Another area to explore in the database layer is enabling multiple asynchronous event triggers. Asynchronous event triggers are used to perform multiple actions that are independent from each other in terms of their execution when a defined condition is reached (Celko, 2014). For example, in the case of CoSMoS such actions can be a visual alert on a BIM model along with a sound when a certain sensor threshold or unusual reading is observed in a sensor data stream.
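A minimal sketch of this idea, using Python's asyncio in place of database-level triggers (function names are hypothetical): when the defined condition fires, the visual and audible actions run independently of one another.

    import asyncio

    async def highlight_bim_model(room_id):
        # Placeholder for pushing a red highlight to the BIM view.
        print(f"BIM alert for room {room_id}")

    async def sound_alarm(room_id):
        # Placeholder for an audible alarm.
        print(f"Alarm sounding for room {room_id}")

    async def on_threshold_exceeded(room_id):
        # Fire both actions concurrently; neither waits on the other,
        # mirroring independent asynchronous event triggers.
        await asyncio.gather(highlight_bim_model(room_id), sound_alarm(room_id))

    asyncio.run(on_threshold_exceeded("room-3f1a"))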
Due to the availability of a wide variety of sensors for environmental monitoring, the datasets collected from motes vary with respect to consistency and redundancy, resulting in the storage of meaningless data. Moreover, analytical procedures have stringent requirements on data quality. Therefore, acquired sensor data needs to be pre-processed when collected from different types of sensing motes, so as to enable effective data analysis for safety management. Pre-processing includes data cleaning and the elimination of redundant data, which will not only improve data quality but also reduce storage requirements. The developed prototype system discarded incomplete sensor data packets transmitted by the motes and averaged multiple sensor readings in order to provide BIM based visualizations through an interactive user interface. However, for a deeper understanding of sensor data, incomplete data packets should be incorporated in the data analysis. Processed and organized sensor data will not only help safety managers monitor environmental conditions but will also help them identify failure cause–effect patterns in order to prepare preventive safety plans in the future.
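A minimal sketch of such a pre-processing step, assuming dictionary-shaped packets with hypothetical field names: incomplete packets are discarded and the remaining readings are averaged, as in the prototype.

    import statistics

    def preprocess(packets):
        # Drop incomplete packets, then average the readings per field.
        complete = [p for p in packets
                    if p.get("temperature_c") is not None
                    and p.get("oxygen_pct") is not None]
        if not complete:
            return None
        return {
            "temperature_c": statistics.mean(p["temperature_c"] for p in complete),
            "oxygen_pct": statistics.mean(p["oxygen_pct"] for p in complete),
            "n_used": len(complete),
            "n_dropped": len(packets) - len(complete),
        }

    packets = [{"temperature_c": 31.2, "oxygen_pct": 20.8},
               {"temperature_c": 31.4, "oxygen_pct": None},   # incomplete, discarded
               {"temperature_c": 31.6, "oxygen_pct": 20.6}]
    print(preprocess(packets))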
As CoSMoS is expanded to a full-scale application, vast amounts of real-time data from numerous sensors are likely to become available. Using knowledge discovery and data mining techniques in databases, patterns and predictions can be revealed. As a result, the accuracy of the sensor data acquired for visualization through a BIM model to aid decision support is a huge challenge. Environmental monitoring wireless sensing motes have serious resource constraints (Madden et al., 2002; Zhao et al., 2002). In particular, they have limited communication bandwidth (1-100 Kbps), storage, transmission data rate, processing capabilities, and battery life, which may allow motes to operate for only a certain number of hours. For example, the wireless sensing mote used for this research has only an 8 MHz processor, 10 KB of programming memory, and a 250 Kbps data rate. This calls for special network management algorithms for sensor data streams that can explicitly incorporate these resource constraints, e.g., putting an idle mote into sleep mode as an energy-efficient mechanism. Moreover, long distance transmission by sensing motes is not energy efficient, as a mote's energy consumption is a linear function of the transmission distance. One method to prolong network lifetime while preserving network connectivity is to set up a small number of costly but more powerful relay motes whose main function is to communicate with other sensing or relay motes. One of the main challenges lies in the selection of relay motes in the network in order to ensure the effectiveness of communication between sensing motes and relay motes. This network management will result in more reliable data.
Another serious challenge of real-time sensor data streams is the understanding of incomplete values in the acquired data. Many factors can contribute to the loss of sensor values, such as sensor network topology changes or packet loss. These issues can arise due to communication failures or poor links, fading of signal strength due to weak mote batteries, and packet collisions between multiple transmitters. Zhao et al. (2002) have shown the seriousness of this problem experimentally.
Finally, another prevalent challenge in sensor data acquisition processes is imprecision, either due to delayed database updates or noisy sensor readings. In the former case, the massiveness of sensor readings and the limited battery life and wireless bandwidth may not allow for instantaneous and continuous updates, and therefore the database state may lag the state of the real world. The latter case, however, is due to inaccuracies of sensor measurements such as noise from external sources, inaccuracies in the measurement technique, and synchronization errors. The cost of imprecise sensor data can be very significant when immediate decision making by the H&S manager or an actuator activation is required.
CONCLUSION
The CoSMoS application investigates the integration of BIM with wireless sensors to monitor environmental conditions of confined spaces. The designed application is
REFERENCES
Attar, R., Hailemariam, E., Breslav, S., Khan, A., and Kurtenbach, G. (2011). Sensor-enabled
cubicles for occupant-centric capture of building performance data. ASHRAE Annual
Conference (pp. 1-8).
Bureau of Labor Statistics (2013). Fatal occupational injuries by industry, All U.S., 2013
[Online]. Available: http://stats.bls.gov/iif/oshwc/cfoi/cftb0277.pdf (accessed on Dec
2, 2014)
Cahill, B., Menzel, K., and Flynn, D. (2012). BIM as a centre piece for optimised building operation. [Online] Available: http://zuse.ucc.ie/~brian/publications/2012/BIM_centre_piece_OBO_ECPPM2012.pdf (accessed Dec 21, 2013)
Celko, J. (2013). Streaming databases and complex events. Complete Guide to NoSQL, 1st ed., Morgan Kaufmann, Boston, pp. 63-79.
Cook, D., and Das, S. (2004). Smart environments: Technology, protocols and applications:
John Wiley & Sons.
Guven, G., Ergen, E., Erberik, M., Kurc, O. and Birgönül, M. (2012) Providing guidance for
evacuation during an emergency based on a real-time damage and vulnerability
assessment of facilities. J. Comput. Civ. Eng., pp. 586–593
Guinard, A., McGibney, A., and Pesch, D. (2009). A wireless sensor network design tool to
support building energy management. 1st ACM Workshop on Embedded Sensing
Systems for Energy-Efficiency in Buildings (pp. 25-30).
Katranuschkov, P., Weise, M., Windisch, R., Fuchs, S., and Scherer, R. J. (2010). BIM-based
generation of multi-model views. [Online] Available:
http://www.hesmos.eu/plaintext/downloads/paper_114_final.pdf (accessed Mar 3,
2014)
Madden, S., Franklin, M. J. and Hellerstein, J. M. (2002) TAG: a Tiny AGgregation Service
for Ad-Hoc Sensor Networks. In Proceedings of 5th Annual Symposium on operating
Systems Design and Implementation (OSDI), December 2002.
Ozturk, Z., Arayici, Y., and Coates, S. P. (2012). Post occupancy evaluation (POE) in
residential buildings utilizing BIM and sensing devices: Salford energy house
example. [Online] Available: http://usir.salford.ac.uk/20697/1/105_Ozturk.pdf
(accessed Mar 13, 2014)
Pungila, C., Fortis, T., and Aritoni, O. (2009). Benchmarking database systems for the requirements of sensor readings. IETE Technical Review, 26(5), 342-349.
Piza, H.I., Ramos, F.F., and Zuniga, F. (2005). Virtual sensors for dynamic virtual
environments. 1st IEEE Int. Workshop on Computational Advances in Multi-Sensor
Adaptive Processing (pp. 177-180): IEEE.
Ramesh, D. and Kumar, C. (2014) A scalable generic transaction model scenario for
distributed NoSQL databases, Journal of Systems and Software. [online] Available:
http://www.sciencedirect.com/science/article/pii/S0164121214002684
Riaz, Z., Arsalan, M., Kiani, A. and Azhar, S. (2014) CoSMoS: Confined Spaces Monitoring
System, a BIM and Sensing Technology integration for construction Health and
Safety, Automation in Construction. Volume 45, pp. 96 – 106.
Setayeshgar, S., Hammad, A., Vahdatikhaki, F., and Zhang, C. (2013). Real time safety risk
analysis of construction projects using BIM and RTLS. [Online] Available:
http://www.iaarc.org/publications/fulltext/isarc2013Paper224.pdf (accessed Oct 21,
2013)
Shiau, Y. C., and Chang, C. T. (2012). Establishment of fire control management system in BIM environment. [Online] Available: http://onlinepresent.org/proceedings/vol5_2012/11.pdf (accessed Mar 10, 2014)
Vanlande, R., Nicolle, C., and Cruz, C. (2008). IFC and building lifecycle management.
Automation in Construction, 18 (1), 70-78.
Vasenev, A., Hartmann, T. and Dorée, A. (2014) A distributed data collection and
management framework for tracking construction operations, Advanced Engineering
Informatics, Volume 28, Issue 2, Pages 127-137
Veen, J., Waaij, B. and Meijer, R. (2012) Sensor Data Storage Performance: SQL or NoSQL,
Physical or Virtual. IEEE Fifth International Conference on Cloud Computing,
Honolulu, HI, 24-29 June 2012, pp 431 – 438.
Zhao, J., Govindan, R. and Estrin, D. (2002) Computing aggregates for monitoring WSN. 1st
IEEE International Workshop on Sensor Network Protocols and Applications.
Abstract
INTRODUCTION
design approach helps, to some extent, in considering the complex relationship between a building's construction and operation and its impact on the environment, building rating systems cannot be used for exhaustive building sustainability assessment. This is because building rating systems are unable to meaningfully capture the interaction between buildings, urban spaces, facilities, and infrastructures. Additionally, they fail to consider a project's contextual circumstances in their analysis. Although existing building rating systems should not be underestimated, their deficiencies emerged as an encouragement for reconsidering the spatial boundary of sustainability assessment and taking the assessment to a higher level where project context could be meaningfully considered (Conte and Monno 2012; Haapio and Viitaniemi 2008).
Neighborhoods are the smallest spatial unit at which sustainability issues can be meaningfully addressed based on the four pillars of sustainability: environmental, social, economic, and institutional (Alborg 1994; Berardi 2013). The importance of neighborhoods resulted in the development of different Neighborhood Sustainability Assessment (NSA) tools; the most prominent ones are spin-offs from building-level sustainability tools (Sharifi and Murayama 2013), such as LEED-Neighborhood Development (LEED-ND), BREEAM-Communities, CASBEE-UD, DGNB-NSQ, and Pearl Community in the UAE. Although it is difficult to categorize sustainability issues definitively, as they frequently impact all dimensions of sustainability, NSA tools have structured their assessments based on categories. By assigning categories, NSA tools seek to provide some clarity about the intention of each issue. Among these categories, there are similarities as well as significant differences (Sharifi and Murayama 2014). One of the similarities among most of the tools is the inclusion of location-related criteria in different categories. The project site location and how it integrates with its neighborhood, based on the context of the neighborhood, can have a significant impact on a range of environmental factors: energy consumption, energy spent on residents' transportation, local ecosystems, and the use of existing infrastructures (WBDG 2014). The emphasis on location is significantly stronger in tools originating from North America, such as LEED, as urban sprawl has imposed higher environmental and economic burdens. Figure 1 lists the percentage of points assigned to project location in the NSA tools discussed in this paper.
Among others, the requirements of location-based criteria are often complex, involve several data sources, and take considerable time to evaluate. Since project site location has a significant weight in the assessment, it is important to analyze projects in a cost effective and accurate manner, especially during the project feasibility study phase. As sustainability gains popularity, automation in the decision-making process can play a key role in effective project assessments. For this reason, a GIS-based decision support system is introduced in this paper. GIS has been applied in different areas of urban planning since the 1960s (Yeh 1999) with a continuously increasing trend. This increasing popularity can be attributed to the strong capture, management, manipulation, analysis, modeling, and display capabilities of GIS as well as abundant sources of GIS urban information. This paper discusses a Geographic Information System (GIS) based decision support system that aids
owners, developers, and other stakeholders in analyzing the project site location and its integration with the neighborhood in a sustainable manner.
Figure 1. The percentage of points assigned to criteria which are directly affected by project location and its urban context in five different NSA tools.
GIS-BASED DECISION SUPPORT SYSTEM
The GIS-based decision support system provides a means for swiftly and accurately identifying site parcels based on the criteria and getting immediate insight into how the selected site meets the requirements of the sustainability goals. This GIS-based decision support system for smart project location comprises three modules, namely (a) the database module, (b) the analysis module, and (c) the interface module (Figure 2). The database module deals with data cleaning, integration, selection, and transformation. The analysis module performs the necessary analyses, and the interface module is responsible for interacting with the user by getting inputs and presenting the output.
Figure 2. Data flow structure.
Database module. For identifying smart project locations, this system uses several data layers. These were identified from publicly available databases maintained by State and/or county websites. The relationship between these data layers and the criteria was established prior to analysis. Figure 3 shows how these data layers are related to the defined criteria. After all data sources are adequately retrieved, analysis is performed. As previously noted, LEED-ND places significant emphasis on the location of the project through a variety of criteria. For these reasons, LEED-ND's defined values were used as the minimum required performance values for each criterion. However, the user can define different requirements for analysis.
Figure 3. Data layers used in analysis and their relationship with criteria.
Analysis module. The analysis module was developed using ArcPy, a Python site package that exposes GIS functions, particularly those available in the ArcGIS tool (ArcGIS 2012). Since each project must be analyzed based on the criteria established, a top-down design was utilized, which essentially involved decomposing the complex problem and solving it through different functions. The structure comprises five auxiliary functions, nine analysis functions, and one manager function. The auxiliary functions, using user input data, prepare and create the data necessary for performing analysis in the main functions, and are executed in conjunction with the analysis functions. These functions are: 1) transit schedule integration, 2) surface status calculator, 3) development status calculator, 4) intersection density calculator, and 5) parcel data extractor. The analysis functions are responsible for analyzing the site based on the given criteria. The analysis functions are: 1) transportation function, 2) accessibility function, analyzing the site's accessibility to neighborhood assets, 3) water and waste water service function, analyzing the project in regards to connectivity to water and waste water systems, 4) agricultural land function, 5) sensitive habitats function, 6) wetlands function, 7) water bodies function, 8) floodplains function, and 9) neighborhood connectivity function, analyzing the intersection density in the project vicinity. Each function returns a "yes" or "no" value and saves the analysis results, in the form of GIS shapefiles, on the local hard drive. The manager function reads the results of the nine analysis functions and returns the final result of the analysis to the user interface module, as sketched below.
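The sketch below illustrates this structure in Python under stated assumptions: each analysis function is reduced to a stub returning "yes"/"no", and the manager is assumed to pass a site only when every criterion returns "yes" (the paper does not spell out the aggregation rule). In the actual tool, these functions wrap ArcPy geoprocessing calls and write shapefiles of intermediate results.

    # Hypothetical stubs standing in for the nine ArcPy-based analysis functions.
    def transportation(site): return "yes"
    def accessibility(site): return "yes"
    def wetlands(site): return "no"

    ANALYSES = [transportation, accessibility, wetlands]  # ... nine in total

    def manager(site):
        # Read every analysis result and return the final verdict:
        # assumed here to be "yes" only if all criteria return "yes".
        results = {fn.__name__: fn(site) for fn in ANALYSES}
        final = "yes" if all(v == "yes" for v in results.values()) else "no"
        return final, results

    print(manager({"address": "11th St & P St, Washington, DC"}))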
User interface module. The interface module is dedicated to communicating with the user and getting user input on the project site location data. As an important goal of this tool is to require minimal user input, users are prompted for only two inputs, i.e., the type of analysis criteria (user defined or LEED-ND, in this case) and the project address.
CASE STUDY
For this paper, a project site located at the corner of 11th St. and P St. in Washington, D.C. is considered for analysis. The total area of this parcel is 117 square meters, and it is currently recorded as residential. The goal is to check whether the project site meets the sustainability criteria discussed in the previous sections.
GIS data. Table 2 lists the GIS databases used for this case study. These data sources were downloaded from the District of Columbia Open Data website.
Analysis and results. The goal of this case study is to analyze the sustainability of the project location selected by the user. This process is further discussed in the following steps.
Step 1: User input. The introductory page prompts the user to choose whether to perform the analysis using the pre-defined values of LEED-ND or their own user-defined values. Following this, the user is prompted to enter the parcel address (Figure 4).
Step 2: Geocoding. Using geocoding, the address is translated into a point on the map and the corresponding parcel is identified. However, since geocoding can involve some inaccuracies in certain cases, the parcel's information is returned to the user for confirmation. The developed area is based on the ratio of impervious surface on the selected site to the total area of the site. Here, the impervious surfaces consist of buildings, swimming pools, and asphalt pavements. The percentage of development is calculated based on LEED-ND's definition. The analysis initiates after the user's confirmation.
Step 3: Analysis. In keeping with the scope of this paper, the transportation module is selected for further discussion. For analyzing connectivity to transportation infrastructure, a network analysis was performed. The network analysis is designed to use the street polyline shapefile as the network, the parcel as the origin, and the bus/subway stations as the destinations. The cutoff ranges for bus and subway are assigned as ¼ mile and ½ mile, respectively, to correspond with LEED-ND requirements; the sketch below illustrates this cutoff logic.
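The following sketch uses networkx rather than the ArcGIS Network Analyst actually employed; the toy street graph, the stop names, and the assumption that either bus or subway access satisfies the criterion are all illustrative.

    import networkx as nx

    # Illustrative street network; nodes are intersections, edge weights are miles.
    G = nx.Graph()
    G.add_weighted_edges_from([("parcel", "a", 0.10), ("a", "bus_stop_1", 0.12),
                               ("a", "b", 0.20), ("b", "subway_1", 0.30)],
                              weight="length")

    # LEED-ND style cutoffs: 1/4 mile to a bus stop, 1/2 mile to a subway station.
    dist = nx.single_source_dijkstra_path_length(G, "parcel", weight="length")
    bus_ok = any(n.startswith("bus") and d <= 0.25 for n, d in dist.items())
    subway_ok = any(n.startswith("subway") and d <= 0.50 for n, d in dist.items())
    print("transit criterion:", "yes" if bus_ok or subway_ok else "no")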
Step 4: Results. The output of each module is either "yes" or "no." While "yes" indicates that the selected site meets the defined criteria for that module, "no" indicates that the project has failed to reach the minimum requirements for at least one criterion. After the analysis is complete, the core function returns the final "yes" or "no" value, completing the analysis. In both cases, a complete document is provided for the user, explaining the performance of the project against each criterion. Figure 5 shows a sample of the result of the transportation function, which shows the distance from the site location to bus stops for the studied project.
Figure 6. The Network Analyst identified the eligible transit stops (in this case, bus stops). The central point is the project site, and the lines represent the routes to the eligible stops.
DISCUSSION
Conducting a typical project analysis manually may take several days to weeks, cost thousands of dollars, and require a great effort in data collection. To alleviate this problem, this paper proposes a GIS-based decision support system for analyzing smart project locations. Using a case study project neighborhood located in Washington, DC, this paper discusses the execution of the decision support system. The proposed system is based on minimal user input and can greatly save time and money for developers, especially during the feasibility study phase of a project. Additionally, this system can be used for larger-scale decision processes by identifying all eligible sustainable locations in a municipality or a region and producing a database of sustainable locations. This database can be used for smarter policymaking (for example, incentivizing development in these locations) at the regional level.
REFERENCES
Thermal Comfort and Occupant Satisfaction of a Mosque in a Hot and Humid Climate
1
Department of Civil Engineering, Ege University, 35100 Bornova, Izmir, Turkey.
E-mail: [email protected]; [email protected]; [email protected]
Abstract
INTRODUCTION
Thermal comfort is a key factor that affects the comfort, health, and performance of occupants (Mendes et al., 2013). It is influenced by a range of environmental and individual factors, both objective and subjective, including air temperature, the temperature of the surrounding surfaces, air movement, relative humidity, and the rate of air exchange (Ormandy and Ezratty, 2012). Conventional thermal comfort theories are generally used to make decisions, whereas recent research in the field of thermal comfort clearly shows that important effects are not incorporated (Peeters et al., 2009). As the conventional theories of thermal comfort were established based on steady state laboratory experiments, they might not represent the real situation in specific types of buildings, such as mosques and churches, which have intermittent operation schedules. A recent study on indoor environmental conditions in mosques indicates that thermal comfort cannot be correlated with the ISO 7730 and ASHRAE 55-2004 standards (Al-Ajmi, 2010). Moreover, occupants can typically adjust their clothing and activity in response to thermal stress in their environment; however, this adjustment is limited in mosques due to predefined clothing ensembles and activities.
The main objective of this study is to investigate thermal comfort conditions and
satisfaction of occupants in a naturally ventilated historic mosque in a hot-humid climate. The
following sections of the paper describe the experimental design and test site. Then, findings
and conclusions are presented.
EXPERIMENTAL DESIGN
In order to obtain quantitative data on the prevailing actual conditions, the following
data collection methods were used: (1) a physical measurement of certain parameters that
influence the thermal comfort conditions and (2) a questionnaire as the subjective
measurement.
Measurements were taken every minute at a height of 1.1 m from the ground level as
advised in the prescriptions of the ASHRAE Standard 55-2010 (ASHRAE, 2010). Indoor air
temperature (Ta), relative humidity (RH) and air velocity were measured via the TESTO
Thermo-Anemometer Model 435-2. All equipment was calibrated before each experiment to
ensure reliability and accuracy in the readings recorded during the field studies. The main
characteristics of the measurement system employed in this work are shown in Table 1.
Data was collected for an hour at 5-minute intervals. These parameters were then used to calculate the PMV and PPD indices in accordance with Fanger's model. The PMV and PPD indices based on Fanger's model are widely used to understand occupant perception and satisfaction in buildings. Fanger (1970) defined the PMV as the index that predicts, or represents, the mean thermal sensation vote on a standard scale for a large group of persons for any given combination of the thermal environmental variables, activity, and clothing levels. The index provides a score that corresponds to the seven-point ASHRAE thermal sensation scale, which is presented in Table 2. The PMV should be kept at zero with a tolerance
of ±0.5 scale units in order to ensure a comfortable indoor environment according to the international standards. The PPD is an index that predicts the percentage of thermally dissatisfied people and is expressed as a function of the PMV by Fanger. The functional relationship between the PMV and PPD indices is illustrated in Figure 1. As can be seen from the figure, 5% of the occupants are still dissatisfied at the PMV neutral (0).
Table 2. Seven-point thermal sensation scale
Scale      -3     -2     -1              0        +1             +2     +3
Sensation  Cold   Cool   Slightly cool   Neutral  Slightly warm  Warm   Hot
Figure 1. The relationship between the PMV and PPD indices
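The curve in Figure 1 is Fanger's standard PPD-PMV function (also given in ISO 7730); a short Python rendering of it, not part of the original paper, is:

    import math

    def ppd(pmv):
        # Fanger's relation between PMV and PPD.
        return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

    print(round(ppd(0.0), 1))    # 5.0 -> 5% remain dissatisfied at neutral
    print(round(ppd(0.5), 1))    # ~10% at the comfort-zone boundary
    print(round(ppd(1.04), 1))   # ~28%, matching the August result reported below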
The PMV model has been validated by the majority of studies as an accurate predictor of occupant perception in different climatic conditions (Fanger, 2002). Fanger's PMV model is based on theoretical analysis of human heat exchange by steady state laboratory experiments in Northern Europe and America (Humphreys and Nicol, 2002). The mean radiant temperature (Tr) used in the PMV calculations was estimated as a function of the measured indoor air temperature, using the regression model shown in Equation (1) proposed by Nagano (2004) with a determination factor of 0.99.
Tr = 0.99 × Ta − 0.01, R² = 0.99 (1)
The operative temperature (To) was determined from the measured indoor air temperature (Ta) and the mean radiant temperature (Tr), as seen in Equation (2).
To = A × Ta + (1 − A) × Tr (2)
A = 0.5 for w < 0.2 m/s
A = 0.6 for 0.2 < w < 0.6 m/s
A = 0.7 for 0.6 < w < 1 m/s
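A small Python sketch of Equations (1) and (2), applied to the mean August conditions reported later in the paper (32.8 °C air temperature, 0.80 m/s air speed):

    def mean_radiant_temp(ta):
        # Equation (1): regression by Nagano (2004), R^2 = 0.99.
        return 0.99 * ta - 0.01

    def operative_temp(ta, w):
        # Equation (2): weighted mean of air and mean radiant temperature,
        # with the weight A selected from the measured air speed w (m/s).
        tr = mean_radiant_temp(ta)
        a = 0.5 if w < 0.2 else (0.6 if w < 0.6 else 0.7)
        return a * ta + (1 - a) * tr

    print(round(operative_temp(32.8, 0.80), 2))  # -> 32.70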
Subjective measurements
The subjective study involved collecting data via questionnaires, which were prepared in accordance with the ASHRAE 55-2010 standard. The questionnaire was developed to gauge how participants feel towards their thermal environment; participants were asked to rate their thermal sensation on the ASHRAE seven-point scale. The questionnaires were distributed after each physical measurement, and the subjects were required to make only one choice from the scale for each question. Responses to the questionnaire were used to gather the quantified thermal sensation of the occupants (from -3 to +3) and, thus, to calculate the Actual Mean Vote (AMV) of the participants. The AMV and PMV indices were then compared to understand the difference between perceived and actual indoor thermal conditions. Moreover, participants' clothing insulation values (clo) were determined via the questionnaire. A total of 95 responses were collected, and all of them were included in the analysis.
Physical and subjective measurements were carried out in a historic mosque constructed in 1907. The mosque is naturally ventilated and has recently undergone retrofitting, and, thus, it is expected to maintain acceptable thermal comfort conditions. In order to analyze the effectiveness of the retrofitting in terms of thermal comfort, this mosque was selected as the test site. The mosque is located in Izmir (38°N, 27°E), on the Aegean Sea coast. The climate is hot and humid during the cooling season, whereas the heating season is mild with high rainfall levels. The study was conducted during August and September, which are characterized by mean maximum temperatures of 33°C and 29°C, respectively. Average monthly temperature and relative humidity values are shown in Figure 2.
Figure 2. Average monthly temperature and relative humidity values for Izmir, January through December.
A total of two field tests were conducted in the main worship hall, which has an area of 178 m² and a mean height of 21.5 m. The floor plan and the measurement point are presented in Figure 3.
FINDINGS
Forty-five and fifty occupants participated in the study during August and September, respectively. The participants were all males, with the majority (28%) in the age range of 36-45. The percentages of participants in the age ranges of 18-25, 26-35, 46-55, 56-65, and 65 and over were 25%, 17%, 14%, 8%, and 7%, respectively.
All participants performed light activities for 15 minutes, per prayer requirements, and were seated for the rest of the monitoring period. Accordingly, the metabolic rate was determined as 1.2 met per ASHRAE 55-2010. The total clothing insulation value was calculated using a checklist for each participant. Clothing ensembles were similar to each other, consisting of underwear, a long-sleeve shirt, thin trousers, and socks. Accordingly, the total clothing insulation value was calculated as 0.53 clo.
Figure 4 shows the thermal sensation of participants at the end of the prayer in August and September. In August, 7% and 2% of respondents stated that they felt warm (+2) and hot (+3), respectively. Only 11% indicated that they felt neutral (0). Although the indoor air temperature (with a mean value of 32.8 °C) was high, 36% of the participants indicated that they felt slightly cool (-1), whilst 29% felt cool (-2). This might be due to the fact that the participants opened all windows to let air movement in, and, thus, the indoor air velocity was higher in August (with a mean value of 0.80 m/s) than in September (with a mean value of 0.15 m/s). Moreover, it was observed that occupants preferred to sit in front of the windows during data collection in August. The actual mean vote (AMV) of the participants was found to be -0.58, which corresponds to slightly cool. In September, most of the participants indicated that they felt either slightly cool (44%) or neutral (30%); 16% of respondents stated that they felt cool (-2). None of the participants felt hot or cold. The AMV of the participants was found to be -0.62, which corresponds to slightly cool.
Figure 4. Distribution of thermal sensation votes
Analysis of participants' thermal preference
Figure 5. Distribution of thermal preference votes
In August, the PMV was calculated as 1.04 (slightly warm), which corresponds to a PPD equal to 28%. In September, the PMV was calculated as 0.74 (slightly warm), which corresponds to a PPD equal to 16%. According to the ASHRAE 55-2010 standard, thermal acceptability of indoor environments is considered to be within PMV limits of -0.5 to +0.5, which corresponds to a PPD of about 10% (roughly 90% of occupants satisfied). The PPD and PMV indices show that thermal comfort conditions were not within the limits defined in the standard and, thus, were not acceptable. Figure 6 illustrates the acceptable ranges of operative temperature and relative humidity for the mosque in August and September. The shaded area shows the comfort zone boundaries where the PMV is between -0.5 and +0.5 according to the ASHRAE 55 standard, and the red circles symbolize the environmental conditions according to the measurements. As can be seen, the comfort zones for August and September differ with respect to operative temperatures. Operative temperatures can be considered to be 31.0 ±2 °C and 27.0 ±2 °C for August and September, respectively.
CONCLUSIONS
The main objective of this study was to investigate the thermal comfort conditions and occupant thermal comfort sensations in a naturally ventilated historic mosque in a hot and humid climate. Indoor environmental conditions, including indoor air temperature, relative humidity, and air velocity, were monitored and collected via data loggers during two Friday prayers in August and September 2014. A total of 95 occupants were surveyed to understand their thermal sensations and preferences. The PMV and PPD indices were calculated and compared to the perceived satisfaction of participants, which was presented as the AMV.
REFERENCES
ABSTRACT
Falls are the single most dangerous type of safety accident in the construction industry, representing 33% of all construction fatalities. Numerous unrecognized near-miss falls lie behind every major fall accident. The detection of near-miss fall occurrences therefore helps to identify fall-prone workers/tasks and invisible jobsite hazards, and can thereby prevent fall accidents. This paper presents and evaluates the feasibility of a threshold-based approach for detecting the near-miss falls of construction iron-workers. Kinematic data of subjects are collected through an IMU sensor attached to the subjects' sacrum while the subjects walk on a steel beam structure. Fall-related features—sum vector magnitude (SVM) and normalized signal magnitude area (SMA)—are used to detect near-miss falls. Threshold values of these features are defined to achieve the best accuracy in near-miss fall detection based upon the experiment data. According to the selected threshold values, iron-workers' near-miss falls were detected. The results of this research demonstrate the opportunity of utilizing SVM and SMA to document workers' near-miss fall incidents in real time.
INTRODUCTION
In the construction industry, fall accidents are one of the leading causes of occupational fatalities, representing 33% of all fatalities in construction. In order to reduce the number of falls, numerous studies have been conducted focusing on the causes of fall accidents for related occupations in construction. Among construction trades, iron-workers are exposed to the highest lifetime risk of fatal injuries (CPWR 2013). According to Beavers et al. (2009), from 2000 to 2005 almost 75% of iron-worker fatalities were caused by falls. Iron-workers often work at great heights above the ground and use heavy steel materials and equipment for steel erection. In addition, the working surface of iron-work is narrow due to the small widths of steel beams as compared to other trades. The selection of fall prevention measures for iron-workers (e.g., safety harnesses and personal fall arrest systems) is thus limited due to the open edges of their workplace. In this context, additional and adequate fall protection measures are necessary in order to reduce the number and risk of fall accidents.
One promising accident prevention technique is to detect a leading indicator that generally occurs before the accident. According to Bird and Germain (1966), numerous near-misses lie behind a major accident. In general, a near-miss is considered an event in which no damage or loss occurs at the time. However, near-misses can develop into accidents under slightly different conditions. Thus, near-miss detection approaches have been applied in the chemical process, airline, and nuclear-related industries to reduce the likelihood of accidents occurring (Phimister et al. 2003). Similar to this previous research, the likelihood of a fall accident's occurrence may be reduced through the detection of near-miss falls. In this context, the detection of near-miss fall occurrences helps to identify fall-prone workers/tasks and invisible jobsite hazards, and can thereby prevent fall accidents.
Regarding iron-workers' near-miss fall detection, our research group used Inertial Measurement Units (IMUs) and a machine learning algorithm (i.e., a one-class Support Vector Machine) to detect the near-miss falls of iron-workers (Yang et al. 2014). This implemented algorithm necessitates a post-processing stage and a high computational cost for near-miss fall detection. However, the real-time detection of near-miss falls requires the development of online algorithms that allow workers' near-miss falls to be detected on-board without the need to send raw data for off-line analysis. In this context, this research investigated the feasibility of a threshold-based approach to detecting the near-miss falls of iron workers, which can be implemented as an online algorithm usable on a wearable sensor unit with limited on-board processing capabilities.
RESEARCH BACKGROUND
The loss of balance (LOB) is one of the major proximal causes of fall accidents for iron-workers, along with a "lack of fall protection" and "fall protection not secured" (Beavers et al. 2009). LOB itself is also highly related to fall accidents, which generally occur when a worker loses balance. In this context, many studies focus on detecting fall accidents by monitoring a subject's kinematic data using body-attached sensors such as accelerometers, gyroscopes, or both (Kangas et al. 2008; Lai et al. 2011). Accelerometers are implemented to collect the acceleration of a subject's body movement along three axes, and gyroscopes can measure the orientation of the body at three different angles. In the biomedical research area, the detection of fall accidents during Activities of Daily Living (ADL) is widely studied to protect elderly and disabled individuals from unrecognized fall accidents. Most of this research focuses on detecting actual fall accidents rather than near-miss falls. Only a few studies have sought the detection of near-miss falls—called "near falls" or "fall portents" in other research (Dzeng et al. 2014; Weiss et al. 2010).
In one near-miss fall detection study, Weiss et al. (2010) used a body-attached tri-axial accelerometer to detect near-falls during treadmill walking and tested two features—Sum Vector Magnitude (SVM) and normalized Signal Magnitude Area (SMA)—as well as other derived features, to garner useful information for detecting near-falls. This research utilized threshold-based classifiers, which designate near-falls when values exceed a certain threshold. Dzeng et al. (2014) applied a smartphone-embedded accelerometer and gyroscope to detect falls and fall-portents during construction tiling work; their study also used SVM as a fall-detection feature. While these studies demonstrated the feasibility of threshold-based detection of near-miss falls using SVM and SMA, their feasibility in the constrained working environment of iron workers (e.g., the narrow width of I-beams) needs to be further investigated.
SVM = √(Ax² + Ay² + Az²)
where Ax, Ay, and Az are the accelerations (m/s²) along the mediolateral, anterior-posterior, and vertical axes. In previous research, the SVM measured the intensity of movement and was widely used in fall detection (Bersch et al. 2014); the SMA metric was applied to distinguish static and dynamic activities (Pannurat et al. 2014). Thus, this research applied the SMA and SVM metrics as features for detecting iron-workers' near-miss falls while walking on a steel beam structure. The threshold values of the SMA and SVM metrics were studied using labeled experiment data. Then, near-miss falls were classified from walking using the SMA and SVM threshold values. In this research, all of the computations were conducted using a custom-made MATLAB (R2014a, MathWorks) program.
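For illustration, the windowed feature computation can be sketched in Python (the authors used MATLAB); the SMA definition used here—the window mean of |Ax| + |Ay| + |Az|—is a common normalization and an assumption, as the paper does not reproduce its formula.

    import numpy as np

    def svm(ax, ay, az):
        # Sum vector magnitude per sample: sqrt(Ax^2 + Ay^2 + Az^2).
        return np.sqrt(ax**2 + ay**2 + az**2)

    def sma(ax, ay, az):
        # Normalized signal magnitude area over a window (assumed definition).
        return np.mean(np.abs(ax) + np.abs(ay) + np.abs(az))

    # One 0.5 s window at 32 Hz -> 16 samples of normalized acceleration (m/s^2).
    rng = np.random.default_rng(1)
    ax, ay, az = rng.normal(0, 1, (3, 16))
    print(svm(ax, ay, az).max(), sma(ax, ay, az))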
DATA COLLECTION
Steel Frame
All experiment processes were documented through video (29.97 fps) and used as reference data for near-miss fall labeling. Before further processing, due to the non-integer IMU sampling frequency (51.2 Hz), the frame rate of the video and the frequency of the IMU data were changed to 18.73 fps and 32 Hz, respectively, to prevent a loss of data in processing. Labeling a near-miss fall is one of the most important steps in near-miss fall detection research; in this research, the experiment organizer labeled near-miss falls based upon the recorded video data for each 0.5 s data window. A near-miss fall label corresponded to moments when experiment subjects met one of the following conditions: 1) they could not maintain walking speed due to losing balance, with body sway or arm swing, or 2) they showed obvious body sway or swing motion regardless of walking speed. During the experiment, actual falls, near-miss falls, and walking and turning movements were also documented.
Each data window (i.e., 0.5 s) included a 50% overlap to increase the robustness of the dataset for near-miss detection. Based on the labeling process, 165 near-miss falls were identified among the 1,925 total data samples. Before computing the SVM and SMA from the accelerometer data, this research normalized the raw acceleration data, which has an offset due to the gravity of the Earth and the orientation of the accelerometer. Instead of using a complex orientation computation method, this research used Yuwono's (2012) method and subtracted the initial values of the acceleration data (t = 0) from every acceleration sample. Also, in order to reduce the noise of the IMU data, this research applied a 3rd-order Butterworth IIR low-pass filter with a cut-off frequency of 4 Hz to the acceleration data, because most of the signal energy of human activity is located below 3 Hz. This research then computed the SMA, SVM, and mean angular velocity in the vertical plane, as illustrated in Figure 2.
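A Python sketch of this normalization and filtering step, assuming a (samples × axes) acceleration array at the resampled 32 Hz rate:

    import numpy as np
    from scipy.signal import butter, filtfilt

    FS = 32.0  # resampled IMU rate (Hz)

    def preprocess_acc(acc):
        # Subtract the initial sample (gravity/orientation offset, per Yuwono 2012)
        # and apply a 3rd-order Butterworth low-pass filter with a 4 Hz cut-off.
        acc = np.asarray(acc, dtype=float)
        acc = acc - acc[0]                      # remove the t = 0 offset per axis
        b, a = butter(3, 4.0, btype="low", fs=FS)
        return filtfilt(b, a, acc, axis=0)

    raw = np.cumsum(np.random.default_rng(2).normal(0, 0.1, (64, 3)), axis=0)
    print(preprocess_acc(raw).shape)  # -> (64, 3)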
Since turning motions are included in this dataset, the mean angular velocity in the vertical plane was computed as a feature to identify turning motions. Turning motions of subjects during their walking tasks on a rectangular-shaped structure created higher angular velocity in the vertical plane (in the yaw direction) as compared to the other classes (Figure 2, left). The rule to detect turning motions can therefore be defined simply from the mean angular velocity of the vertical plane. This threshold value of angular velocity (TAV, 29.4 degrees/sec) was determined in a way that minimizes the number of misclassified near-miss falls and walking data; a sketch of the resulting rule-based classifier follows.
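In the sketch below, TAV comes from the paper, while the single SMA/SVM thresholds and the OR-combination of the two features are illustrative assumptions (the paper tunes thresholds per feature and reports per-subject values).

    # TAV from the paper; T_SMA and T_SVM are assumed round figures near the
    # per-subject values reported below (SMA 1.95-1.99, SVM 1.31-1.52).
    TAV, T_SMA, T_SVM = 29.4, 1.97, 1.40

    def classify_window(mean_yaw_rate, sma_value, svm_peak):
        # Rule-based labeling of one 0.5 s window (a sketch of the approach).
        if abs(mean_yaw_rate) > TAV:
            return "turning"
        if sma_value > T_SMA or svm_peak > T_SVM:
            return "near-miss fall"
        return "walking"

    print(classify_window(5.0, 2.1, 1.2))  # -> "near-miss fall"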
Figure 2. Angular velocity (Left), SMA (Middle), and SVM (Right) values for
different movements
Figure 3. ROC curves of near-miss fall detection using SMA (left) and SVM
(right)
Also, this study found that the variability of the SMA and SVM threshold values between subjects is not great. When finding the threshold value for each subject's dataset, similar threshold values appeared for both SMA (subject 1: 1.95 and subject 2: 1.99) and SVM (subject 1: 1.31 and subject 2: 1.52). This agreement indicates that the threshold values to detect near-miss falls can be uniformly defined across different subjects. However, additional tests, including subjects with diverse physiological characteristics, are necessary to confirm the generality of the threshold values. In addition, this study found that turning motions could be one of the causes of near-miss falls in the experiments; however, the suggested rule-based approach may not be capable of detecting near-misses during such turning motions. Moreover, an appropriate definition of near-miss falls is important to increase the predictor's accuracy. During the experiment, a few ambiguous motions (e.g., changes of speed, subjects' individual differences in reaction when losing balance) occurred. This ambiguity impacted the labeling process and decreased the overall accuracy of near-miss fall detection. This situation may indicate the need to break down near-miss falls into several different types (e.g., body sway, slowing walking speed).
REFERENCES
Beavers, J., Moore, J., and Schriver, W. (2009). “Steel Erection Fatalities in the
Construction Industry.” Journal of Construction Engineering and Management,
135(3), 227–234.
Bersch, S. D., Azzi, D., Khusainov, R., Achumba, I. E., and Ries, J. (2014). “Sensor
Data Acquisition and Processing Parameters for Human Activity Classification.”
Sensors (Basel, Switzerland), 14(3), 4239–4270.
Bird, F. E., and Germain, G. L. (1966). Damage Control. New York: American
Management Assoc.
CPWR. (2013). The Construction Chart Book. The Center for Construction Research
and Training.
Dzeng, R. J., Fang, Y. C., and Chen, I. C. (2014). “A feasibility study of using
smartphone built-in accelerometers to detect fall portents.” Automation in
Construction, 38, 74–86.
Kangas, M., Konttila, A., Lindgren, P., Winblad, I., and Jämsä, T. (2008). “Comparison
of low-complexity fall detection algorithms for body attached accelerometers.”
Gait & Posture, 28(2), 285–291.
Lai, C. F., Chang, S. Y., Chao, H. C., and Huang, Y. M. (2011). “Detection of Cognitive
Injured Body Region Using Multiple Triaxial Accelerometers for Elderly
Falling.” IEEE Sensors Journal, 11(3), 763–770.
Pannurat, N., Thiemjarus, S., and Nantajeewarawat, E. (2014). “Automatic Fall
Monitoring: A Review.” Sensors, 14(7), 12900–12936.
Phimister, J. R., Oktem, U., Kleindorfer, P. R., and Kunreuther, H. (2003). “Near-Miss
Incident Management in the Chemical Process Industry.” Risk Analysis, 23(3),
445–459.
Weiss, A., Shimkin, I., Giladi, N., and Hausdorff, J. M. (2010). “Automated detection
of near falls: algorithm development and preliminary results.” BMC Research
Notes, 3(1), 62.
Yang, K., Aria, S., Stentz, T. L., and Ahn, C. R. (2014). “Automated Detection of Near-
miss Fall Incidents in Iron Workers Using Inertial Measurement Units.”
Construction Research Congress 2014, American Society of Civil Engineers,
935–944.
Yuwono, M., Moulton, B. D., Su, S. W., Celler, B. G., and Nguyen, H. T. (2012).
“Unsupervised machine-learning method for improving the performance of
ambulatory fall-detection systems.” BioMedical Engineering OnLine, 11(1), 9.
Abstract
INTRODUCTION
Over the past few years, the availability of inexpensive point-and-shoot, time-lapse, and smartphone cameras has significantly increased the number of images that are taken on construction sites on a daily basis (Bae et al. 2014; Han and Golparvar-Fard 2014a). The application of these large collections of site images together with 4D Building Information Models (BIM) for monitoring the state of work-in-progress (WIP) creates an unprecedented opportunity for developing workflows that can systematically record, analyze, and communicate progress deviations. To do so, construction firms assign field engineers to filter, annotate, organize, and present data for project coordination purposes. However, the cost and complexity of collecting, analyzing, and reporting performance deviations result in sparse and infrequent monitoring practices, and thus some of the gains in efficiency are consumed by the monitoring cost.
To address the challenges associated with the analysis of these collections of site images, many researchers have focused on devising methods that register and compare 4D BIM with the site images. While these methods have shown promising results, their accuracy has remained largely a function of the quality, completeness, and frequency of the input collections of images. In most cases, current collections do not support the desired frequency or completeness for as-built modeling and automated progress monitoring. The specific challenges are:
RELATED WORK
Methods that register site images with BIM and 3D point cloud models. To register BIM with site images, several methods have been proposed, which can be categorized based on the technique used for image acquisition: unordered vs. time-lapse/fixed viewpoints. (Golparvar-Fard et al. 2009; Kim and Kano 2008; Rebolj et al. 2008; Zhang et al. 2009) propose different pose estimation methods for registering BIM over time-lapse images. Using unordered collections of site photos, (Golparvar-Fard et al. 2011) present image-based 3D reconstruction procedures based on a pipeline of Structure-from-Motion (SfM) and Multi-View Stereo algorithms to generate as-built 3D point clouds. These point cloud models are then manually registered with BIM by solving for the similarity transformation. (Karsch et al. 2014) leverage BIM and present a constraint-based procedure to improve image-based 3D reconstruction. Their experimental results show that the accuracy and density of image-based 3D reconstruction and back-projection of BIM on unordered and un-calibrated site images can be improved compared to the state of the art.
Methods that analyze the 3D geometry of the as-built scene or the appearance information contained in the images. (Golparvar-Fard et al. 2012) leverage integrated scenes of dense image-based 3D point clouds and BIM, and infer the state of WIP based on expected-vs.-actual occupancy in the scene. However, occupancy-based methods are not capable of differentiating the different stages of operations involved in the construction of an element. Instead of relying on geometry, (Han and Golparvar-Fard 2014a;b) present appearance-based material classification methods for monitoring operation-level construction progress. Their method also leverages 4D BIM and 3D point clouds generated from site images. Here, through reasoning about occlusions, each BIM element is back-projected on all images. From these back-projections, 2D image patches are sampled per element and are classified into different material types. By reasoning about the observed frequency of different material types, their method is capable of tracking operation-level progress beyond the activities shown in a typical work breakdown structure of a schedule.
Studies that leverage UAVs for as-built modeling purposes. To facilitate the process of data collection, (Siebert and Teizer 2014; Zollmann et al. 2014) propose leveraging UAVs for collecting progress images. These methods primarily rely on GPS for navigation; thus their application for autonomous navigation on building sites in dense urban areas and interior spaces is still limited.
Although the analysis of site images with BIM is automated to some degree, the data collection has remained a time-consuming process for the most part and does not necessarily follow any specific strategy. While BIM plays an important role in guiding the data collection and analytics, its application for model-directed collection of site images is still unexplored.
This paper proposes a framework for the acquisition of site images using UAVs, which has the potential to guarantee the accuracy and completeness of as-built 3D modeling and to provide information at the level of detail necessary for monitoring WIP. The approach relies on a detailed BIM that can serve as a basis for project control, 3D as-built modeling, model-based reasoning, and communication of progress deviations (Figure 1). The following provides the specific steps that can streamline the process of data acquisition and analytics for complete assessment of work-in-progress:
Accounting for occlusions during back-projection of BIM into site images.
By identifying and removing static and dynamic occlusions, the resulting images
from back-projection of BIM can be more accurate for appearance-based detection. To
do so, the method of (Karsch et al. 2014) could be used to make use of the BIM and
the sparse set of 3D points computed during the SfM procedure (Sec 5). By projecting
points in the BIM, it is predicted whether or not each point is in front of the model.
Because the back-projected point clouds are typically sparse, the binary occlusion
predictions can be flooded by superpixels and finally smoothened with a cross-
bilateral filter.
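As a concrete illustration of this binary occlusion test, the following is a minimal sketch (not the authors' implementation): each sparse SfM point is projected into an image with its camera matrix and depth-tested against a depth map rendered from the BIM for the same camera. The names occlusion_mask, bim_depth, and eps are illustrative assumptions.

    import numpy as np

    def occlusion_mask(points_3d, P, bim_depth, eps=0.05):
        """Label each sparse SfM point: True if it lies in front of the BIM
        surface at its pixel (i.e., the BIM is occluded there).

        points_3d : (N, 3) SfM point coordinates in the BIM-registered frame
        P         : (3, 4) camera projection matrix recovered by SfM
        bim_depth : (H, W) depth map rendered from the BIM for this camera
        eps       : depth tolerance in model units
        """
        n = points_3d.shape[0]
        homog = np.hstack([points_3d, np.ones((n, 1))])
        proj = (P @ homog.T).T              # homogeneous pixel coordinates
        depth = proj[:, 2]                  # assumed positive for points in front
        px = (proj[:, :2] / depth[:, None]).astype(int)
        h, w = bim_depth.shape
        mask = {}
        for (u, v), z in zip(px, depth):
            if z > 0 and 0 <= v < h and 0 <= u < w:
                mask[(u, v)] = z < bim_depth[v, u] - eps
        return mask

The sparse per-pixel predictions returned here are exactly what the superpixel flooding and smoothing steps described above would densify.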
Figure 4: Identifying and removing occlusions by analyzing point cloud vs. BIM.
Figure 5: Extraction of image patches (left) and material classification (right).
Figure 6: (a) The viewpoint of the camera can significantly affect the object
recognition result, while a viewpoint normal to the desired plane gives a better
result; (b) 4D BIM used to identify the waypoints in UAV path planning.
Discussion on ongoing case studies and the role of BIM (geometry,
appearance, and interdependency). To leverage the concepts introduced above, the
authors are currently conducting two pilot projects: on a building project in
Champaign, IL (see Figure 7 for an example as-built point cloud generated on this
project) and also on a stadium project in Sacramento, CA.
Figure 7: Top row: dense 3D point cloud model of construction; bottom row:
superimposing 4D BIM on an aerial photo taken during construction.
CONCLUSION
ACKNOWLEDGEMENT
This material is in part based upon work supported by the National Science
Foundation under Grant CMMI-1427111. Any opinions, findings, and conclusions or
recommendations expressed in this material are those of the authors and do not
necessarily reflect the views of the National Science Foundation.
REFERENCES
Bae, H., Golparvar-Fard, M., and White, J. (2014). “Image-Based Localization and
Content Authoring in Structure-from-Motion Point Cloud Models for Real-Time
Field Reporting Applications.” Journal of Computing in Civil Eng, 637–644.
Golparvar-Fard, M., Peña-Mora, F., Arboleda, C. A., and Lee, S. (2009).
“Visualization of Construction Progress Monitoring with 4D Simulation Model
Overlaid on Time-Lapsed Photographs.” Journal of Computing in Civil Eng.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2011). “Integrated Sequential
As-Built and As-Planned Representation with Tools in Support of Decision-
Making Tasks in the AEC/FM Industry.” Journal of Construction Engineering
and Management.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2012). “Automated Progress
Monitoring Using Unordered Daily Construction Photographs and IFC-Based
Building Information Models.” Journal of Computing in Civil Eng, 147–165.
Han, K., and Golparvar-Fard, M. (2014a). “Appearance-based Material Classification
for Monitoring of Operation-Level Construction Progress Using 4D BIM and
Site Photologs.” Automation in Construction, Elsevier.
Han, K., and Golparvar-Fard, M. (2014b). “Automated Monitoring of Operation-level
Construction Progress Using 4D BIM and Daily Site Photologs.” Construction
Research Congress 2014, D. Castro-Lacouture, J. Irizarry, and B. Ashuri, eds.,
American Society of Civil Engineers, 1033–1042.
Karsch, K., Golparvar-Fard, M., and Forsyth, D. (2014). “ConstructAide: analyzing
and visualizing construction sites through photographs and building models.”
ACM Transactions on Graphics (TOG), ACM, 33(6), 176.
Kim, H., and Kano, N. (2008). “Comparison of construction photograph and VR
image in construction progress.” Automation in Construction, 17(2), 137–143.
Rebolj, D., Babič, N. Č., Magdič, A., Podbreznik, P., and Pšunder, M. (2008).
“Automated construction activity monitoring system.” Advanced Engineering
Informatics, Elsevier, 22(4), 493–503.
Siebert, S., and Teizer, J. (2014). “Mobile 3D mapping for surveying earthwork
projects using an Unmanned Aerial Vehicle (UAV) system.” Automation in
Construction, Elsevier, 41, 1–14.
Snavely, N., Garg, R., Seitz, S. M., and Szeliski, R. (2008). “Finding Paths through
the World’s Photos.” SIGGRAPH.
Wu, C. (2013). “Towards linear-time incremental structure from motion.”
Proceedings - 2013 Int Conference on 3D Vision, 3DV 2013, 127–134.
Zhang, X., Bakis, N., Lukins, T. C., Ibrahim, Y. M., Wu, S., Kagioglou, M., Aouad,
G., Kaka, A. P., and Trucco, E. (2009). “Automating progress measurement of
construction projects.” Automation in Construction, Elsevier, 18(3), 294–301.
Zollmann, S., Hoppe, C., Kluckner, S., Poglitsch, C., Bischof, H., and Reitmayr, G.
(2014). “Augmented Reality for Construction Site Monitoring and
Documentation.” Proceedings of the IEEE, IEEE, 102(2), 137–154.
INTRODUCTION
The National Environmental Policy Act (NEPA) requires any federally funded
or involved transportation projects to go through an environmental review process to
evaluate their impact on the environment. The environmental review process not only
affects transportation decision making by taking environmental concerns into account,
but also affects the project development process in terms of time and cost. According
usually consider very few different types of relations between concepts, and (3)
usually allow for low information specificity levels (Castells et al. 2007).
In recent years, a number of research works have been conducted on
improving IR in the construction domain. Soibelman et al. (2008) combined the
vector space model with document classification information to retrieve documents
related to a project model object, and developed a domain-specific thesaurus to
improve the retrieval of construction product information from the internet. Demian
and Balastoukas (2012) investigated the effects of granularity and context when
measuring relevance and visualizing results for retrieving building design and
construction content, and found that users performed better and were more satisfied
when the search results were displayed with their context information in terms of the
related discipline, building components, and sub-component objects. Fan et al.
(2015) implemented three machine learning algorithms to enhance retrieval results
through user feedback, and utilized a project-specific term dictionary and dependency
grammar information to facilitate feature selection.
Although a number of studies on IR have been conducted in the construction
domain, there still exist many challenges in developing IR systems that can efficiently
retrieve relevant information for decision making: (1) most of the existing IR systems
in the construction domain are built on keyword-based IR models, such as the vector
space model, and lack a formal representation of context; (2) most of the IR efforts in
the construction domain have focused on retrieving project-based information and lack
support for retrieving project-independent information, such as regulations; and (3)
most of the current IR systems in the construction domain have not been compared
with other state-of-the-art IR systems in terms of retrieval performance.
Step 3: Data Preprocessing. To prepare the raw text data for the
implementation of SA algorithms, the Bag of Words (BOW) model (Manning et al.
2009) was used to represent each document. In this model, a document is represented
as an unordered set of words, each with a corresponding weight that represents the
discriminating power of the word. To represent the documents in the BOW model,
the following three commonly-used data preprocessing techniques were applied: (1)
Tokenization: Tokenization is the process of breaking the text into elements (called
tokens) such as words, phrases, symbols, or other meaningful elements. In this work,
a single word was regarded as a common token, and a list of special tokens
consisting of terminology in the TEPR domain was also developed. Examples of the
special tokens include “Categorical Exclusion”, “Environmental Assessment”, and
“Environmental Impact Statement”, which refer to the three different environmental
review actions required by federal law; (2) Stopword Removal: Stopwords are words
that have high frequency but low discriminating power, which indicates that they
have little value in helping select documents that match a user need; (3)
Lemmatization: Lemmatization is the process of removing inflectional endings and
returning the base or dictionary form of a word, known as the lemma. For example,
after lemmatization, the words “mitigates”, “mitigated”, and “mitigating” would all
be transformed into their lemma “mitigate”.
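As a rough sketch of this preprocessing pipeline (assuming NLTK and its punkt, stopwords, and wordnet resources; the SPECIAL token list and the blanket verb-POS lemmatization are simplifying assumptions, not the authors' exact configuration):

    import nltk
    from nltk.corpus import stopwords
    from nltk.stem import WordNetLemmatizer
    from nltk.tokenize import MWETokenizer

    # Illustrative list of TEPR-domain multi-word special tokens
    SPECIAL = [("Categorical", "Exclusion"),
               ("Environmental", "Assessment"),
               ("Environmental", "Impact", "Statement")]
    mwe = MWETokenizer(SPECIAL, separator=" ")
    stop = set(stopwords.words("english"))
    lemmatizer = WordNetLemmatizer()

    def preprocess(text):
        tokens = mwe.tokenize(nltk.word_tokenize(text))        # (1) tokenization
        tokens = [t for t in tokens if t.lower() not in stop]  # (2) stopword removal
        return [lemmatizer.lemmatize(t.lower(), pos="v")       # (3) lemmatization
                for t in tokens]

    # preprocess("The agency mitigates impacts") -> ["agency", "mitigate", "impact"]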
noise introduced as a result of the concept expansion, and (2) expand concept terms
with domain-specific context terms. Third, to determine the relevance of an
annotation, the Term Frequency-Inverse Document Frequency (TF-IDF) weights of
the concept terms were calculated, and the relevance of the annotation was then
determined by the TF-IDF weights and the relevance between the expanded concept
terms and the original concept terms.
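For reference, a minimal sketch of the TF-IDF weighting step (a raw-frequency TF and a smoothed IDF are assumed here; the paper does not specify its exact variant):

    import math
    from collections import Counter

    def tf_idf(term, doc_tokens, corpus):
        """TF-IDF weight of a concept term in one document.
        doc_tokens: token list of the document; corpus: list of token lists."""
        tf = Counter(doc_tokens)[term] / len(doc_tokens)
        df = sum(1 for doc in corpus if term in doc)
        idf = math.log(len(corpus) / (1 + df))   # smoothed document frequency
        return tf * idf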
distance between the two concepts, while the other methods utilize other path features
(the location of the MIS of the two concepts, and the number of descendants of the two
concepts) that are not effective for SA, as indicated by the results.
ACKNOWLEDGEMENT
REFERENCES
Abstract
INTRODUCTION
construction projects against various regulations could save time and cost, and reduce
human errors (Zhong et al. 2012). To facilitate ACC, a computer-interpretable, user-
understandable, and unambiguous representation is needed for construction
regulations (Garrett and Palmer 2014). To address this need, a semantic logic-based
information representation (IRep) and compliance reasoning (CR) schema for
representing regulatory information and design information was proposed in prior
work (Zhang and El-Gohary 2014b). A challenge is then how to extract design
information from building information models (BIMs) and transform the extracted
design information into this semantic logic-based representation. This extraction and
transformation method needs to support reliable information transfer between the
BIMs and this semantic logic-based representation in an automated way. This paper
aims to address this challenge by proposing a new BIM information extraction (IE)
method. The proposed method utilizes the Java Standard Data Access Interface
(JSDAI) and semantic natural language processing (NLP) techniques. The remaining
sections of this paper present the details of the proposed method and its preliminary
testing on extracting information from a Duplex Apartment BIM model.
BACKGROUND
PROPOSED METHOD
Input: IFC-Based BIMs. The proposed method aims to process BIMs in IFC
format (i.e., files with the extension “.ifc”, referred to as IFC files hereafter) based on
the IFC schema. The IFC schema is the main data model schema for describing data
in the building and construction industry, and is registered with ISO as ISO 16739.
The IFC format is neutral and platform independent (BuildingSMART 2014). BIMs
in IFC format use the “STEP physical file” format defined in ISO 10303-21
(BuildingSMART 2014). In a “STEP physical file”, each line is assigned a line
number, and each line represents an entity. An entity in IFC format represents either
a concept or a relation. For example, “IFCBUILDINGSTOREY” is an entity
representing the concept “building storey”, and “IFCRELVOIDSELEMENT” is an
entity representing the relation “voids element” that defines the relation between an
“opening element” and the “void” made by the “opening element”.
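As an illustration of this file format, the hypothetical helper below splits one single-line STEP instance into its line number, entity name, and raw attribute string; real IFC files also contain multi-line instances and nested attribute lists, which this sketch ignores.

    import re

    STEP_LINE = re.compile(r"#(\d+)=([A-Z0-9]+)\((.*)\);")

    def parse_step_line(line):
        """Return (line number, entity name, raw attributes) or None."""
        m = STEP_LINE.match(line.strip())
        if m is None:
            return None
        num, name, attrs = m.groups()
        return int(num), name, attrs

    # parse_step_line("#36605=IFCLOCALPLACEMENT(#38,#36604);")
    # -> (36605, "IFCLOCALPLACEMENT", "#38,#36604")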
Output: Logic Facts. The proposed method outputs text files carrying
processed information represented as logic facts, following the previously proposed
IRep and CR schema (presented in Zhang and El-Gohary 2014a). In the proposed
IRep and CR schema, two types of facts are used: concept facts and relation facts. A
concept fact defines a constant as an instance of a certain concept. For example,
“door(door6652)” defines the constant “door6652” as an instance of the “door”
concept. A relation fact defines a relationship between an instance of a concept and
an instance of another concept or a value. For example, has(project34, site38274)
defines the association relation between an instance of project “project34” and an
instance of site “site38274.” The number in an instance is the line number of the
instance in its source IFC file. The use of these line numbers serves three purposes:
(1) identifying instances, (2) distinguishing instances, and (3) establishing links
between the logic facts and lines in their IFC source file. Both concept facts and
relation facts are represented as predicates. A predicate is the building block of a
logic clause. A predicate consists of a predicate symbol and one or more arguments
(i.e., constants or variables) in parentheses following the predicate symbol [e.g., the
predicate “door(door6652)” has one predicate symbol “door” and one argument
“door6652”, where “door6652” is a constant].
(Figure: flowchart of the automated IE algorithm. Start: initialize variables and
compile the IFC schema to use. For each entity, subroutine S1 checks whether the
entity is an aggregate type: if yes, it iterates through each sub-entity SE in the
aggregate and processes each SE recursively using S1; if no, it looks up the entity in
the compiled IFC schema to get the names of its attributes A1, A2, …, An. End.)
In the semantic look up subtask, each extracted entity name, attribute name,
and attribute value (i.e., if the attribute value is of entity type or enumeration type) is
looked up in the used IFC schema version. The matched name or enumeration type
value in the IFC schema is then used to convert the extracted name/value into
underscore-connected terms. For example, in Figure 3, an entity
“OWNERHISTORY” is looked up in the IFC schema to find the matched entity
name “IfcOwnerHistory”. Then the term boundary information in “IfcOwnerHistory”
(i.e., represented by capitalization) is used to convert the extracted
“OWNERHISTORY” to “owner_history.” This is needed because the output of this
ITr step is used to instantiate logic rules based on the semantic logic-based representation.
To enable that instantiation, the semantic information of each term in an entity or
attribute name needs to be semantically matched with terms in concepts and relations
from regulatory requirements (represented in logic rules). In the entities/attributes
transformation subtask, a rule-based NLP approach is selected (Zhang and El-Gohary
2014a). Three main NLP-based transformation rules are used: (1) an entity is
transformed into a concept fact (i.e., a predicate) by using the name of the entity as
the name of the predicate and using the name of the entity concatenated with the line
number as the argument (i.e., an entity constant) of the predicate. For example, in
Figure 3, the beam entity is transformed into a concept fact “beam(beam36686),”
with the name of the entity “beam” being the predicate name and the concatenation of
the entity name and the line number “beam36686” as the predicate argument; (2) an
attribute of an entity is transformed into a relation fact (i.e., a predicate) by using the
name of the attribute preceded by “has_” as the name of the predicate, using the
corresponding entity constant as the first argument of the predicate, and using the
value of the attribute as the second argument of the predicate (if the value is not a
reference to another entity). For example, in Figure 3, the attribute “global_id” for the
beam entity is transformed into a relation fact “has_global_id(beam36686,
2OrWItJ6zAwBNp0OUxK_l8);” and (3) if the value of an attribute is a reference to
another entity, then the referred entity constant is used as the second argument of the
predicate. For example, in Figure 3, the attribute “owner_history” for the beam entity
is transformed into a relation fact with the referred entity constant owner_history33 as
the second argument: “has_owner_history(beam36686,owner_history33).”
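A compact sketch of these conventions is given below; the to_terms and to_logic_facts helpers, and the dictionary-shaped attrs input, are illustrative assumptions rather than the authors' JSDAI-based implementation.

    import re

    def to_terms(schema_name):
        """Use the capitalization in a matched schema name (e.g. 'IfcOwnerHistory')
        to produce underscore-connected terms ('owner_history')."""
        words = re.findall(r"[A-Z][a-z0-9]*", schema_name)
        return "_".join(w.lower() for w in words if w.lower() != "ifc")

    def to_logic_facts(name, line_no, attrs):
        """Rules (1)-(3): entity -> concept fact; attribute -> relation fact,
        with a referenced entity becoming the referred entity constant."""
        const = f"{name}{line_no}"
        facts = [f"{name}({const})."]                      # rule (1)
        for attr, value in attrs.items():
            if isinstance(value, tuple):                   # rule (3): entity reference
                value = f"{value[0]}{value[1]}"
            facts.append(f"has_{attr}({const},{value}).")  # rule (2)
        return facts

    # to_terms("IfcOwnerHistory") -> "owner_history"
    # to_logic_facts("beam", 36686,
    #                {"global_id": "2orwitj6zawbnp0ouxk_l8",
    #                 "owner_history": ("owner_history", 33)})
    # -> ["beam(beam36686).",
    #     "has_global_id(beam36686,2orwitj6zawbnp0ouxk_l8).",
    #     "has_owner_history(beam36686,owner_history33)."]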
Figure 3: Example transformation from IFC lines to logic facts (stages I–IV).

I. Lines in IFC-based BIMs:
#33=IFCOWNERHISTORY(#32,#2,$,.NOCHANGE.,$,$,$,0);
#36605=IFCLOCALPLACEMENT(#38,#36604);
#36685=IFCPRODUCTDEFINITIONSHAPE($,$,(#36602,#36684));
#36686=IFCBEAM('2OrWItJ6zAwBNp0OUxK_l8',#33,'M_W-Wide Flange:W310X60:W310X60:207325',$,'M_W-Wide Flange:W310X60:208816',#36605,#36685,'207325');

II. Tuples as output of IE:
(OWNERHISTORY, 33, [owninguser, owningapplication, state, changeaction, lastmodifieddate, lastmodifyinguser, lastmodifyingapplication, creationdate], [#32,#2,none,NOCHANGE,none,none,none,0]);
(LOCALPLACEMENT, 36605, [placementrelto, relativeplacement], [#38,#36604]);
(PRODUCTDEFINITIONSHAPE, 36685, [name, description, representations], [none,none,(#36602,#36684)]);
(BEAM, 36686, [globalid, ownerhistory, name, description, objecttype, objectplacement, representation, tag], ['2OrWItJ6zAwBNp0OUxK_l8',#33,'M_W-Wide Flange:W310X60:W310X60:207325',$,'M_W-Wide Flange:W310X60:208816',#36605,#36685,'207325']);

III. Tuples after semantic look up:
(owner_history, 33, [owning_user, owning_application, state, change_action, last_modified_date, last_modifying_user, last_modifying_application, creation_date], [#32,#2,none,NOCHANGE,none,none,none,0]);
(local_placement, 36605, [placement_rel_to, relative_placement], [#38,#36604]);
(product_definition_shape, 36685, [name, description, representations], [none,none,(#36602,#36684)]);
(beam, 36686, [global_id, owner_history, name, description, object_type, object_placement, representation, tag], ['2OrWItJ6zAwBNp0OUxK_l8',#33,'M_W-Wide Flange:W310X60:W310X60:207325',$,'M_W-Wide Flange:W310X60:208816',#36605,#36685,'207325']);

IV. Logic facts as output of ITr:
owner_history(owner_history33).
local_placement(local_placement36605).
product_definition_shape(product_definition_shape36685).
beam(beam36686).
has_global_id(beam36686,2orwitj6zawbnp0ouxk_l8).
has_owner_history(beam36686,owner_history33).
has_name(beam36686,m_w-wide_flange:w310x60:w310x60:207325).
has_object_type(beam36686,m_w-wide_flange:w310x60:208816).
has_object_placement(beam36686,local_placement36605).
has_representation(beam36686,product_definition_shape36685).
has_tag(beam36686,207325).
attributes in these 100 lines and generating the target logic facts for them. The
proposed automated IE and ITr algorithms were applied to the
“Duplex_A_20110907.ifc” file, and the output results were compared with the
manually developed gold standard. The experimental results are summarized in Table
1. For the 100 concept facts and 328 relation facts corresponding to the 100 lines of
data, 100% precision was achieved. In addition, it took only 15.02 seconds to process
all 38,898 lines of data. Yet, two limitations of the proposed method are identified:
(1) some output logic facts are not interpretation-friendly; for example, the universal
prefix for relation facts (i.e., “has_”) does not semantically fit in cases like the
following predicate P1; and (2) the relations represented in the IFC file are not
perfectly aligned with the authors’ proposed semantic logic-based representation. For
example, an explicit relation entity in IFC is typically represented by two predicates
(e.g., P2 and P3), whereas in the semantic logic-based representation it is represented
by only one predicate (e.g., P4).
P1: has_for_layer_set(material_layer_set_usage21369,material_layer_set21320).
P2: has_relating_space(rel_space_boundary38711,space514).
P3: has_related_building_element(rel_space_boundary38711,covering23992).
P4: has_space_boundary(space514, covering23992).
This paper presents a new BIM IE method for automatically extracting design
information from IFC-based BIMs and transforming the extracted information into
logic facts. The proposed method is intended to support automated reasoning for
automated compliance checking. Beyond this application, the proposed method could
be further used to assist other analysis and reasoning applications that use IFC-based
BIMs, because it provides semantic logic-represented information that is both human-
interpretable and computer-processable. The method utilizes JSDAI and semantic
NLP techniques. It can process BIMs based on different versions of the IFC schema.
The method was tested on processing information in the Duplex Apartment Project
from the buildingSMART alliance of the National Institute of Building Sciences.
Compared to a manually developed gold standard, the experimental results showed
100% precision. In addition, the processing of 38,898 lines of data took only 15.02
seconds. In future work, the authors will further refine the proposed method to
make its output logic facts better aligned with regulatory requirements represented as
logic rules.
ACKNOWLEDGEMENT
REFERENCES
ISO. (2004). “ISO 10303-11:2004 - Part 11: Description methods: The EXPRESS
language reference manual.”
<http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csn
umber=38047> (Dec. 05, 2014).
BuildingSMART. (2014). “Industry Foundation Classes (IFC) data model.”
<http://www.buildingsmart-tech.org/specifications/ifc-overview> (Dec. 06,
2014).
Cherpas, C. (1992). “Natural language processing, pragmatics, and verbal behavior.”
Anal. Verbal Behav., 10(1992), 135–147.
El-Gohary, N. M., and El-Diraby, T. E. (2010). “Domain ontology for processes in
infrastructure and construction.” J. Constr. Eng. Manage., 136(7), 730–744.
Garrett, Jr., J.H., and Palmer, M.E. (2014). “Delivering the infrastructure for digital
building regulations.” J. Comput. in Civ. Eng., 28, 167-169.
Marquez, L. (2000). “Machine learning and natural language processing.” Proc.,
“Aprendizaje automatico aplicado al procesamiento del lenguaje natural”.
East, E.W. (2015). “Common building information model files and tools.”
<http://www.nibs.org/?page=bsa_commonbimfiles> (Mar. 03, 2015).
Zhang, J., and El-Gohary, N.M. (2014a). “Automated information transformation for
automated regulatory compliance checking in construction.” J. Comput. in Civ.
Eng., accepted.
Zhang, J., and El-Gohary, N.M. (2014b). “Logic-based automated reasoning for
construction regulatory compliance checking.” J. Comput. in Civ. Eng.,
submitted.
Zhang, J., and El-Gohary, N.M. (2013). “Semantic NLP-based information extraction
from construction regulatory documents for automated compliance checking.”
J. Comput. in Civ. Eng., published ahead of print.
Zhong, B. T., Ding, L. Y., Luo, H. B., Zhou, Y., Hu, Y. Z., and Hu, H. M. (2012).
“Ontology-based semantic modeling of regulation constraint for automated
construction quality compliance checking.” Autom. Constr., 28(2012), 58–70.
Zhou, N. (2012). “B-Prolog user’s manual (version 7.8): Prolog, agent, and constraint
programming.” Afany Software.
<http://www.probp.com/manual/manual.html> (Dec. 28, 2013).
Zhou, P., and El-Gohary, N. (2014). “Ontology-based multi-label text classification
for enhanced information retrieval for supporting automated environmental
compliance checking.” Computing in Civil and Building Engineering (2014),
ASCE, Reston, VA, 2238-2245.
Abstract
Given the critical role of specialty contractors in the construction industry, as the
parties who perform manual construction tasks and are therefore exposed to hazardous
attributes more than any others, finding innovative ways to enhance the safety
performance of these types of contractors is promising. To address this challenge, the
objective of this study is to utilize a recently developed attribute-based risk assessment
method and create a safety risk database, which is then used to develop a safety-risk
assessment tool (a mobile application) that can be operated by designers, project
managers, and particularly specialty contractors to identify and assess the risk of
construction activities. The study focuses on electrical work, as this sector is one of the
most hazardous trades in the construction industry (e.g., contact with electricity is the
fourth leading cause of fatalities in the industry). To build upon a reliable dataset, we
obtained 323 accident reports from the OSHA IMIS database to create attribute-based
safety risk models. The attributes of construction objects and work tasks that cause
injuries for electrical contractors were identified through a content analysis of the
injury reports, and the relative risks associated with each attribute were quantified by
recording the outcomes of the injuries. To predict the potential outcomes of injuries, a
principal component analysis (PCA) and a generalized linear model (GLM) were
conducted on the attribute-based database, and the predictive power of the developed
models was calculated using a ranked probability skill score (RPSS). Ultimately, a
mobile application was generated to disseminate the findings.
INTRODUCTION
The construction industry presents unique challenges for occupational health
research and prevention because it involves a large number of relatively small
employers, multiemployer work sites, many hazardous exposures, and a highly mobile
workforce. This diversity complicates worker safety, especially within arenas that
frequently interact with electricity. Contact with electric current is a major cause of
injury and death among construction workers (Janicak, 2008). In 2012, the Census of
Fatal Occupational Injuries (CFOI) data produced by the Bureau of Labor Statistics
(BLS) indicated that contact with electric current was the fourth leading cause of
work-related deaths, after falls, transportation incidents, and contact with objects and
equipment (BLS, 2012). Therefore, finding innovative ways to identify, assess, and
mitigate electrocution hazards in early stages of a project would save lives and prevent
injury.
For a safe completion of a project, safety hazards should be predicted before
starting activities. This enables safety managers to employ proactive measures to avoid
or reduce the hazards. The objective of safety predictive models is to establish a
relationship between safety performance (dependent variable) and some measurable
factors (independent variables) that may contribute to predict safety-related outcomes
(see Figure 1). There are several methods to measure safety performance (i.e., the
dependent variable) such as accident statistics, accident control charts, attitude scales,
severity sustained by the workers, safe behavior, and identifiable hazards (Brauer 1994,
Gillen et al. 2002; Cooper and Phillips 2004, Esmaeili and Hallowell 2012). A variety
of factors have been used to measure the predictor variables such as safety attitudes,
practices and characteristics of construction firms, safety program elements employed,
and construction trades and activities. Safety predictive models vary according to the
nature of these different types of predictive variables.
An extensive literature review of previous studies indicated that there are three
limitations of the existing safety predictive models (Tam and Fung 1998; Gillen et al.
2002; Chen and Yang 2004; Fang et al. 2006; Rozenfeld et al. 2010): (1) most of the
previous models are based on subjective data obtained from field personnel; (2) these
models focus on unsafe behavior and ignore the importance of unsafe physical
conditions; and (3) the proposed models cannot be integrated into preconstruction
safety activities.
To address these limitations, the objective of our study is to develop safety
predictive models to forecast the probability of different injury outcomes and integrate
the results into a mobile risk assessment tool. This study departs from the current body
of knowledge by developing a novel mathematical model to predict hazardous
situations in the early stages of a project. It is expected that the employed approach and the
resulting models will significantly improve proactive safety management. Specifically,
the predictive models can help practitioners to consider safety during design, choose
alternative means and methods of construction, identify high-risk periods of a project, and
select injury prevention practices more strategically.
RESEARCH DESIGN
The objectives of this research are to identify construction safety risk attributes
specific to electrical work, develop regression models to predict the potential severity of
accidents, and create a mobile application that performs the risk-assessment and
provides mitigation plans. The specific research protocol for each objective and the
contribution to be made is discussed in detail below. For clarification, the different steps
conducted in the study are summarized in Figure 2.
Predictive models and risk assessment techniques that are based on empirical
data provide higher validity for users. Therefore, we decided to use the Integrated
Management Information Systems (IMIS) database of accidents, because this database
is publicly available and includes a wide range of incidents described by OSHA
compliance officers. We limited the scope of the study to industry group 1731
(electrical work) in the OSHA IMIS database and collected 325 accident reports,
reported from January 2009 to October 2012.
Identifying Attributes
The corresponding research objective is to identify and classify the attributes of
construction objects and work tasks that cause injuries through a content analysis of
accident reports. Content analysis is empirically grounded in the scientific method
and helps researchers gain insights into specific issues and quantify the frequency
and distribution of content in textual data (Krippendorff, 2004). Because safety-risk
attributes are latent in accident reports, and identifying them requires recognizing
patterns in those reports, we conducted a content analysis of the accident reports
obtained from the IMIS database.
a very flexible approach for exploring the relationships among a variety of variables
(discrete, categorical, continuous and positive, extreme value) compared to
traditional regression (McCullagh and Nelder, 1989). Model parameters in the GLM
were determined in an iterative process called iteratively reweighted least squares
(IWLS). This algorithm is provided by default through R’s standard GLM libraries,
such as “MASS,” “VGAM,” and “nnet.” To avoid over-fitting the data and to find a
“best model” that contains the right number of variables, we adopted a stepwise
regression approach that minimizes the Akaike Information Criterion (AIC), instead
of a likelihood function, to evaluate goodness of fit in the stepwise search. This
helped us create predictive models that reproduce the variance of the observations
with the fewest parameters (Wilks, 1995).
It is necessary to measure the predictive power of the models. In this study, the
performance of the model is measured against the observed data through a ranked
probability skill score (RPSS), which indicates the degree to which the model predicts
the observed data. This method has been used in various climatological contexts to
compare a model’s skill in predicting categorical rainfall and streamflow quantities
(Regonda et al., 2006). A detailed description of the RPSS method is provided by
Wilks (1995).
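One common form of the ranked probability score and its skill score is sketched below for reference; the choice of reference forecast (e.g., climatology) follows Wilks (1995), and the function names are illustrative:

    import numpy as np

    def rps(forecast_probs, observed_category):
        """Ranked probability score over ordered categories (lower is better)."""
        cum_f = np.cumsum(np.asarray(forecast_probs, dtype=float))
        cum_o = np.zeros_like(cum_f)
        cum_o[observed_category:] = 1.0   # cumulative observation indicator
        return np.sum((cum_f - cum_o) ** 2)

    def rpss(mean_model_rps, mean_reference_rps):
        """1 is a perfect model, 0 matches the reference, < 0 is worse."""
        return 1.0 - mean_model_rps / mean_reference_rps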
After selecting the PCs, GLMs with a logit link function were fit to the selected
PCs to predict the probability of not severe, mild, and severe injuries. In addition, to
find the parameter set that minimized the model AIC, a stepwise regression approach
was employed; the number of variables used in the model was therefore reduced once
more. Then, the logit link function was used to transform the responses, x, into the
linear predictor. The results of the stepwise generalized linear models are
summarized in Table 3. This table presents the values of the parameters that can be
entered into simultaneous equations to predict the probability of various types of
injuries based on the existence of certain attributes.
Table 3. Overall results of stepwise generalized linear models
Predictor Estimate Std. Error
Intercept-1 0.669 0.209
Intercept-2 0.810 0.201
PC1-1 -2.720 0.493
PC1-2 -1.725 0.426
PC3-1 -3.537 0.757
PC3-2 -3.997 0.718
* All parameters are significant to p< 0.1.
By estimating the parameters (β) in Table 3, the link functions (g) can be calculated,
and by back-transforming the link functions with the inverse logit, the probabilities of
not severe, mild, and severe injuries are obtained. As one can see, there are two values
for each parameter in Table 3; the first values are the parameters of the first link
function g1(x), and the second values those of the second link function g2(x):

g1(x) = 0.669 − 2.720·PC1 − 3.537·PC3 (Equation 1)

g2(x) = 0.810 − 1.725·PC1 − 3.997·PC3 (Equation 2)

P(not severe) = e^g1 / (1 + e^g1 + e^g2); P(mild) = e^g2 / (1 + e^g1 + e^g2); P(severe) = 1 / (1 + e^g1 + e^g2) (Equation 3)
While the mathematics behind the models is complicated, the findings can
easily be used in practice. To find the probability of severe, mild, and not severe
injuries for an activity, one should select the potential attributes that workers will be
exposed to while conducting the activity. Then, by solving the above equations, the
probability of not severe, mild, and severe accidents can be calculated.
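A minimal sketch of this calculation, assuming the multinomial back-transform reconstructed in Equations 1-3 and the Table 3 coefficients (the category ordering and functional form are reconstructions, not the authors' published code):

    import math

    # Coefficients from Table 3: (Intercept, PC1, PC3) for each link function
    G1 = (0.669, -2.720, -3.537)
    G2 = (0.810, -1.725, -3.997)

    def injury_probabilities(pc1, pc3):
        """Return (P(not severe), P(mild), P(severe)) for one activity's PC scores."""
        g1 = G1[0] + G1[1] * pc1 + G1[2] * pc3   # Equation 1
        g2 = G2[0] + G2[1] * pc1 + G2[2] * pc3   # Equation 2
        z = 1.0 + math.exp(g1) + math.exp(g2)
        return math.exp(g1) / z, math.exp(g2) / z, 1.0 / z   # Equation 3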
The RPSS of the model is calculated as 0.185, which represents strong model
performance. One should note that the expected value of the RPSS is less than zero
(Mason 2004), which means that any value greater than zero indicates performance
superior to the reference forecast. A diagnostic test was also conducted, and the
residual plots (actual Y less predicted Y, versus predicted Y) show a random
distribution, confirming that the normality assumption is valid. There was no need to
analyze multicollinearity among the variables, because the PCs are orthogonal and
PCA removes any multicollinearity.
PRACTICAL APPLICATIONS
There are several practical implications of the developed risk database and
predictive models; for example, they can be integrated into building information
modeling (BIM) software. One of the major barriers to wide implementation of BIM-
based safety management programs is the lack of valid safety risk data. The results of
this study address this challenge by presenting a limited number of attributes that can
be used to measure safety risk and predict the severity of possible injuries. Several
safety-related feedback reports can be created by a BIM-based safety management
program using the proposed attribute-based safety risk management. After assigning
the attributes to tasks and objects, hazard analysis can be conducted in the planning
phase of the project. The hazards identified in this study can also be used by
designers to modify their designs to improve safety. If the hazards cannot be
prevented during design, more attention should be paid to mitigating them during the
construction phase. In addition, a project manager can compare alternative means
and methods to see which ones pose more hazards to workers. Furthermore, a
supervisor can identify hazardous activities or situations to highlight during job
hazard analysis or toolbox meetings. The database can also be used to develop safety
risk profiles, which provide an opportunity for managers to allocate safety resources
in accordance with risk fluctuation in the project schedule. As mentioned, to facilitate
dissemination of the results, a safety risk assessment mobile application was
developed. Screenshots of the tool are provided in Figure 3. In future steps of this
study, the mobile application will be validated by safety managers.
CONCLUSIONS
Predicting hazardous situations and the probability of injuries before the start of a
project is an essential step towards adopting preconstruction safety activities and
mitigating risks. Current safety predictive models are not based on objective data
and can be implemented only in the construction phase of a project. To address this
limitation and facilitate the adoption of preconstruction safety activities, predictive
models were developed that forecast the severity of possible injuries using
fundamental safety attributes. The models have several advantages over previous
studies. First, they are based on a reliable empirical database developed by conducting
content analysis on a large number of accident reports obtained from the OSHA IMIS
database. For the first time, there was a dataset of sufficient size and quality to apply
statistical techniques and create mathematical models. Second, the predictive model
shows which attributes are more critical in causing accidents; therefore, project
managers can mitigate the risk of
injuries by focusing on a limited number of attributes. Third, the results of the study
and the developed predictive models can be integrated into building information
modeling to enhance preconstruction safety practices. Ultimately, it is expected that
the predictive models could drastically change the way potential injuries are
considered during planning, project financing, and safety controls.
REFERENCES
Adobe® Captivate® 6. (2013). Adobe Systems, 345 Park Avenue, San Jose, CA 95110-
2704. Available from: http://www.adobe.com/products/captivate/.
Brauer, R. L. (1994). Risk management and assessment. Safety and Health for
Engineers, Van Nostrand Reinhold, New York, 572-543.
Bureau of Labor Statistics (2012). Census of Fatal Occupational Injuries (CFOI).
http://www.bls.gov/iif/oshwc/cfoi/cftb0268.pdf.
Chen, J. R., & Yang, Y. T. (2004). A predictive risk index for safety performance in
process industries. Journal of Loss Prevention in the Process Industries, Vol.
17, pp. 233–242.
Cooper, M. D., & Phillips, R. A. (2004). Exploratory analysis of the safety climate and
safety behavior relationship. Journal of Safety Research, Vol. 35, pp. 497–512.
Esmaeili, B. (2012). Identifying and quantifying construction safety risks at the
attribute level. Unpublished thesis (PhD), University of Colorado at Boulder.
Esmaeili, B., Hallowell, M. R., (2011). Using network analysis to model fall hazards on
construction projects. Safety and Health in Construction, CIB W099, August 24-
26, 2011, Washington DC.
Esmaeili, B., Hallowell, M. R., (2012). Attribute safety risk model for measuring safety
risk of struck-by accidents. In The 2012 Construction Research Congress
(CRC), May 21-23, West Lafayette.
Fang, D.P., Chen, Y., Louisa, W., (2006). Safety climate in construction industry: a case
study in Hong Kong. Journal of Construction Engineering and Management.
Vol. 132(6), pp. 573–584.
Gillen, M., Baltz, D., Gassel, M., Kirch, L., & Vaccaro, D. (2002). Perceived safety
climate, job demands, and coworker support among union and nonunion injured
construction workers. Journal of Safety Research, Vol. 33, pp. 33–51.
Hinze, J., Huang, X., and Terry, L. (2005). The nature of struck-by accidents. Journal of
Construction Engineering and Management., Vol. 131 (2), pp. 262-268.
Janicak, C. A. (2008). “Occupational fatalities due to electrocutions in the construction
industry.” Journal of Safety Research, 39: 617–621.
Joliffe, I. T., (2002). Principal Component Analysis. (2nd edition), Springer-Verlag.
Krippendorff, K. (2004). Content analysis: An introduction to its methodology.
Thousand Oaks, CA: Sage.
Mason, S. J. (2004). On Using Climatology as a Reference Strategy in the Brier and
Ranked Probability Skill Scores. Monthly Weather Review, Vol. 132, 1891-
1895.
McCullagh, P. and J.A. Nelder (1989). Generalized linear models. Chapman and Hall,
London.
Regonda, S., Rajagopalan B., Clark M. (2006). “A new method to produce categorical
streamflow forecasts.” Water Resources Research, 42.
Rozenfeld, O., Sacks, R., Rosenfeld, Y., Baum, H., (2010). Construction job safety
analysis. Safety Science, Vol. 48, pp. 491–498.
Tam, C.M., Fung, I.W.H., (1998). Effectiveness of safety management strategies on
safety performance in Hong Kong. Construction Management and Economics.
Vol. 16(1), pp. 49–55.
Wight, P.C., Blacklow, B. and Tepordei, G.M. (1995). An epidemiologic assessment of
serious injuries and fatalities among highway and street construction workers.
Department of Transportation Grant No. DTFH61-93-X-00024.
Wilks D. S. (1995). Statistical methods in the atmospheric sciences. Academic Press.
1 Graduate Student, Department of Civil and Environmental Engineering,
University of Illinois at Urbana-Champaign, 205 North Mathews Avenue, Urbana,
IL 61801. E-mail: [email protected]
2 Assistant Professor, Department of Civil and Environmental Engineering,
University of Illinois at Urbana-Champaign, 205 North Mathews Avenue, Urbana,
IL 61801. E-mail: [email protected]
Abstract
INTRODUCTION
BACKGROUND
template or schema (Manning and Schutze 1999). There are two common
approaches to IE (Moreno et al. 2013; Moens 2006): (1) a rule-based approach,
which requires manually defining a series of patterns based on syntactic features
and/or semantic features to guide the extraction; and (2) a machine learning (ML)
approach, which applies algorithms such as Support Vector Machines, Hidden
Markov Models, and Conditional Random Fields to automatically learn those
patterns from training data. Although ML methods can save the human effort of
pattern recognition and rule development, rule-based methods are commonly used
because of their expected higher performance (Moens 2006).
Ontology-based IE (OBIE) is a subfield of IE that aims to use an ontology
to assist in extracting semantic information that is specific to a domain
(Wimalasuriya and Dou 2010). A domain ontology represents domain knowledge
in terms of concepts, relationships, and axioms (El-Gohary and El-Diraby 2010).
Compared to general IE, which depends only on the syntactic information of the
text, OBIE further relies on semantic domain-specific information to extract
information of domain interest. OBIE has been well explored across different
domains, such as the business (Saggion et al. 2007), legal (Moens 2006), and
biology (Moreno et al. 2013) domains. However, OBIE efforts are still limited
in the construction domain: existing efforts rely either on mostly syntactic features
(Al Qady and Kandil 2010) or on simple document structure features (HTML tags)
(Abuzir and Abuzir 2002). In comparison, this research integrates both syntactic
features (e.g., POS tags) and semantic features (e.g., concepts from an ontology)
to assist the extraction process.
identify the labels used in classification, and a sub-ontology was built for each
topic (label) in the hierarchy. After preprocessing the documents using
tokenization, stemming, and stopword removal, each document was assigned
zero, one, or multiple labels by measuring the semantic similarity between the
document and each sub-ontology using a deep learning technique. Documents
assigned zero labels were filtered out. For further details on the classification
methodology, readers are referred to Zhou and El-Gohary (2014).
Step 2: Preprocessing
Preprocessing aims to prepare the raw classified (from Step 1) text for the
following analysis and processing steps. Preprocessing involves three primary
NLP techniques (Manning and Schutze 1999): (1) Tokenization: Tokenization
splits the raw text into tokens (e.g., words, numbers, punctuation, symbols,
whitespace); (2) Sentence splitting: This task splits text into sentences by
detecting sentence boundary indicators like question mark, exclamation mark, and
period; and (3) Morphological analysis: This task collapses different derivational
(e.g., affixes like “ly”, “ion”) and inflectional forms (e.g., plural, progressive) of a
word to its base form. For example, “realizations”, “realizing”, “realistically”, and
“unreality” are all mapped to “real”.
After preprocessing the text, syntactic features and semantic features were
selected for further extraction rule development (Step 5).
Syntactic features were selected using POS tagging and a set of gazetteers.
POS tagging assigns a tag to each word based on the syntactic word class (e.g.,
noun, verb, adjective, etc.) (Moens 2006). For example, tag “NN” was assigned to
each singular noun in a sentence. A gazetteer refers to a list of words that share a
common category (e.g., countries of world) (Wimalasuriya and Dou 2010). For
example, in this paper, a gazetteer measurement list was built to collect all
words/symbols that act as measurement units (e.g., “square feet”, “cfm/sf2”), and
these words/symbols were assigned a tag named “unit”. Each tag was used as
a syntactic feature.
Semantic features were selected using an ontology. In this paper, an
ontology was built to capture the concepts of building energy conservation, which
is a subdomain of the environmental domain. The methodology of building an
ontology as described in El-Gohary and El-Diraby (2010) was followed. Each
concept from the ontology was used as a semantic feature.
After identifying the target SIEs, extraction rules were manually developed
to extract the instances of the target SIEs. The left side of an extraction rule models
the pattern of the text in terms of syntactic features (e.g., POS tags) and/or
semantic features (i.e., concepts), while the right side defines the information that
should be extracted when this pattern is matched. In developing the rules, the
dependency information among the SIEs assisted in defining the patterns. For
example, the following rule was developed to extract instances of
“Comparative Relation” (a target SIE): (JJR IN):cr + QuantityValue →
cr.ComparativeRelation. “JJR” and “IN” are POS tags for comparative adjective
and preposition. “JJR IN” is a pattern that matches information like “less than”.
When “JJR IN” is followed by “QuantityValue”, which is the dependency
information, the information matching “JJR IN” is likely an instance of
“Comparative Relation”. Therefore, a pointer “cr” was set to the pattern “JJR IN”,
and the information (which the pointer refers to) matching this pattern was extracted
as an instance of “ComparativeRelation”. Similarly, Rule 7 (sj + VBZ +
ComparativeRelation → sj.Subject) was used to extract “supply air system” as an
instance of “Subject”, where “supply air system” is a concept defined in the
ontology.
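The following Python sketch imitates this rule with NLTK POS tags; the quantity_values argument stands in for instances already tagged as “QuantityValue”, and the plain token scan is a simplification of the original rule engine.

    import nltk

    def comparative_relations(sentence, quantity_values):
        """Find spans matching (JJR IN):cr + QuantityValue, i.e., instances
        of 'ComparativeRelation' such as 'less than'."""
        tags = nltk.pos_tag(nltk.word_tokenize(sentence))
        hits = []
        for i in range(len(tags) - 2):
            (w1, t1), (w2, t2), (w3, _) = tags[i:i + 3]
            if t1 == "JJR" and t2 == "IN" and w3 in quantity_values:
                hits.append(f"{w1} {w2}")
        return hits

    # comparative_relations("The supply air system provides less than 300 cfm",
    #                       quantity_values={"300"})  # -> ["less than"]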
A gold standard was manually built to evaluate the extraction results. The
performance was measured in terms of recall and precision (Moens 2006). Recall
is the percentage of correctly extracted instances out of the total number of correct
instances in the gold standard. Precision is the percentage of correctly extracted
instances out of the total number of extracted instances.
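In the usual contingency notation (true positives TP, false positives FP, false negatives FN), these definitions reduce to:

    recall = TP / (TP + FN)
    precision = TP / (TP + FP)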
ACKNOWLEDGEMENT
The authors would like to thank the National Science Foundation (NSF).
This material is based upon work supported by NSF under Grant No. 1201170.
Any opinions, findings, and conclusions or recommendations expressed in this
material are those of the authors and do not necessarily reflect the views of NSF.
REFERENCES
Zhong, B. T., Ding, L. Y., Luo, H. B., Zhou, Y., Hu, Y. Z., and Hu, H. M. (2012).
“Ontology-based semantic modeling of regulation constraint for automated
construction quality compliance checking.” Autom. Constr., 28(2012), 58-70.
Zhou, P., and El-Gohary, N. (2014). “Ontology-based, multi-label text
classification for enhanced information retrieval for supporting automated
environmental compliance checking.” 2014 Int. Conf. on Comput. Civ. and
Build. Eng., ASCE, Reston, VA, 2238-2245.
INTRODUCTION
areas made from roof shingles, roof felt paper, and plywood (typical visible materials
on a damaged roof) was made and scanned at different angles of incidence. In the
second test, a building damaged by the 2013 Moore, OK tornado was scanned from
different locations. Using these lidar datasets, the influence of the incidence angle on
lidar intensity measurements and on wind-induced damage detection was investigated.
LABORATORY TEST
The laboratory test was run by scanning a model of a damaged roof panel at
varying angles of incidence. Figure 1 shows the laboratory test setup. The roof
panel was constructed by nailing shingles onto a 1.22 m x 2.44 m (4 ft x 8 ft) sheet of
plywood. To represent wind-induced defects, roof felt papers and plywood pieces
were cut and placed on the roof panel (see the close-up view in Figure 1). Then, as
shown in Figure 1, the roof panel was hung on the laboratory wall at a height of 4
meters off the ground. Five different scans were run, and a crane system was used to
change the roof panel angle for each scan. A Leica C10 scanner was used, and all
scans were collected from 10 meters away from the roof panel at a medium scanning
resolution (i.e., 1-centimeter point spacing at a 10-meter distance).
In each scan, the mean angle of incidence for the points captured on the roof
panel was determined. The RANSAC plane fitting algorithm was used to extract the
normal vector of the roof panel. Next, the mean angle of incidence was determined by
calculating the angle between the panel normal vector and the vector connecting the
scanner position to the panel centroid point. The points captured on shingles, felt
paper, and plywood were then manually separated and their intensity values were
extracted for analysis.
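A minimal numpy sketch of this angle computation, assuming the RANSAC inliers have already been selected (a plain SVD plane fit stands in here for the full RANSAC loop, which would wrap it with random sampling and inlier counting):

    import numpy as np

    def mean_angle_of_incidence(panel_points, scanner_pos):
        """Angle (degrees) between the panel normal and the scanner-to-centroid ray.

        panel_points: (N, 3) lidar points on the panel (RANSAC inliers)
        scanner_pos : (3,) scanner position in the same frame
        """
        centroid = panel_points.mean(axis=0)
        _, _, vt = np.linalg.svd(panel_points - centroid)
        normal = vt[-1]                  # unit normal of the best-fit plane
        ray = scanner_pos - centroid
        cos_a = abs(normal @ ray) / np.linalg.norm(ray)
        return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))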
Analysis of the test results indicated that increasing the angle of incidence
decreases the reliability of separating shingle and plywood areas based only on
intensity information. Figure 2 illustrates how the lidar intensity measurements of
each material changed with the angle of incidence. As shown in Figure 2, unlike the
intensity values obtained from shingles and felt papers, the intensity values obtained
from plywood areas decreased with increasing angle of incidence. Therefore, when
the angle of incidence was above 70 degrees, the intensity values obtained from
shingles and plywood were very similar, making it difficult to properly classify
shingle and plywood points based only on intensity information.
Figure 3 shows this degradation of intensities with increasing angle of
incidence in another illustration. Figures 3(a) and (b) show the normal distributions
fitted to lidar intensity values obtained from scans with angles of incidence of 50 and
80 degrees, respectively. Intensity values captured from the three materials clearly
fell in different ranges when the angle of incidence was 50 degrees. But when the
angle of incidence reached 80 degrees, the intensity values obtained from shingle and
plywood areas converged with significant overlap.
FIELD TEST
The field test was performed by scanning a building damaged by the 2013
Moore, OK tornado. Figure 4 shows the point cloud data and scanning locations. Scans were
collected with the same scanner and resolution used in the laboratory test. The two
roof surfaces shown by the red polygons in Figure 4 were selected for the analysis.
The lidar points obtained on the target roof polygons comprised four registered scans
captured from the locations shown in Figure 4. The angle of incidence was
determined for each point in the cloud. Points measured on shingle, felt paper, and
plywood areas were then manually separated, and their intensity values were
extracted for the analysis. Figure 5 shows a scatter plot of intensity values versus the
angle of incidence, with points on different materials shown in different colors.
Regression lines were also added to illustrate the general trend of change in the
intensity values of each material.
The field test results supported the results of the laboratory test and indicated
that when the angle of incidence is close to or above 70 degrees, it is hard to separate
shingle and plywood areas based only on intensity information. As shown in Figure
5, the scanning angles of incidence for the target roof points were between 65 and 75
degrees, and the intensity values obtained from shingle and plywood areas were very
close. Similar to the observations in the laboratory test, Figure 5 shows that the
trends of change in the intensity values obtained from shingle and plywood areas
(i.e., the slopes of the regression lines) are different. Therefore, it can be expected
that in scans with lower angles of incidence, shingle and plywood points can be
separated based on their intensity.
CONCLUSIONS
This paper presented laboratory and field tests to investigate the influence of
the scanning angle of incidence on lidar intensity measurements, and the implications
for automated building wind damage detection. These tests indicated that in close-
range scanning, when the scanning angle of incidence is under 70 degrees, the lidar
intensity information captured with a Leica C10 scanner can be used reliably to
detect and separate the three exposed materials (roof shingles, felt paper, and
plywood) on damaged building roofs. However, when the scanning angle of
incidence gets close to or above 70 degrees, distinguishing the shingle and plywood
areas based only on intensity measurements becomes more difficult.
Based on the results of this study, we suggest the following strategies for
employing lidar intensity measurements in building wind damage detection: (1) the
position and height of scanners during data collection should be determined
properly to reduce the scanning angle of incidence as much as possible; (2) when the
angle of incidence is above 70 degrees, other spectral information, such as color data,
should be used in conjunction with the intensity data; and (3) scanning should be
completed at close range to minimize the influence of the angle of incidence.
To properly employ lidar intensity information in point cloud
segmentation and object extraction techniques, further studies are needed to
comprehensively analyze the factors affecting intensity measurements. In this study,
we focused on a specific application of lidar intensity in wind damage detection and
studied the impact of the angle of incidence. Also, we only used one type of scanner
(Leica C10), and our scans were run within close ranges. Lidar intensity analysis is
an active research area, and efforts are needed to investigate other influential factors,
such as the scanning range, internal sensor system parameters, etc. Normalization
and calibration methods should be developed to reduce the impacts of these factors
on lidar intensity values and to extract true reflectance information.
REFERENCES
Antonarakis, A. S., Richards, K. S., & Brasington, J. (2008). Object-based land cover
classification using airborne LiDAR. Remote Sensing of Environment, 112(6),
2988-2998.
Coren, F., & Sterzai, P. (2006). Radiometric correction in laser
scanning. International Journal of Remote Sensing, 27(15), 3097-3104.
Ding, Q., Chen, W., King, B., Liu, Y., & Liu, G. (2013). Combination of overlap-
driven adjustment and Phong model for LiDAR intensity correction. ISPRS
Journal of Photogrammetry and Remote Sensing, 75, 40-47.
González, D., Rodríguez-Gonzálvez, P., & Gómez-Lahoz, J. (2009). An automatic
procedure for co-registration of terrestrial laser scanners and digital
cameras. ISPRS Journal of Photogrammetry and Remote Sensing, 64(3), 308-
316.
González, J., Riveiro-Rodríguez, B., González-Aguilera, D., & Rivas-Brea, M. T.
(2010). Terrestrial laser scanning intensity data applied to damage detection
for historical buildings. Journal of Archaeological Science, 37(12), 3037-3047.
Höfle, B., & Pfeifer, N. (2007). Correction of laser scanning intensity data: Data and
model-driven approaches. ISPRS Journal of Photogrammetry and Remote
Sensing, 62(6), 415-433.
Jelalian, A. V. (1992). Laser radar systems. Artech House.
Kashani, A. G., Crawford, P., Biswas, S., Graettinger, A., Grau, D. (2014).
Automated Tornado Damage Assessment and Wind Speed Estimation Based
on Terrestrial Laser Scanning. Journal of Computing in Civil Engineering,
ASCE.
Kashani, A. G., Graettinger, A. (expected 2015) Cluster-Based Roof Covering
Damage Detection in Lidar Point Clouds. Submitted to Journal of Automation
in Construction, Elsevier (Under Review)
Abstract
INTRODUCTION
Recognizing and localizing traffic signs are among the important components
of a roadway asset management system. Such data collection and analysis must be
performed for millions of miles of roads, and the practice needs to be repeated
periodically. Moreover, the sheer number of traffic signs can negatively impact the
quality of any manual data collection and analysis method. The time-consuming
process involved in manual data collection practices can also create potential
safety hazards for the inspectors.
To streamline current processes, many state Departments of Transportation
(DOTs) and local agencies have proactively looked into a variety of methods for
inventorying roadside assets. These methods vary based on equipment type and the
time requirements for data collection and data reduction, and can be categorized into
integrated GPS/GIS mapping systems, aerial/satellite photography, street-level
photography using camera-mounted vehicles, terrestrial laser scanners, and mobile
mapping systems (i.e., vehicle-based LiDAR and airborne LiDAR) (Balali et al.
2012). These techniques have their own benefits and limitations. For example,
vehicle-mounted LiDAR, a relatively new type of mobile mapping system, is capable
of rapidly collecting large amounts of detailed highway inventory data in 3D, but is
expensive and involves significant data reduction to extract the desired highway
inventory. Among all these techniques, collecting high-resolution street-level images
using inspection vehicles equipped with high-resolution cameras has received the
most attention from DOTs. These images can provide detailed and dependable
information on both the location and condition of existing traffic signs, yet analyzing
them is still done manually and involves painstaking and subjective processes.
To address the current limitations associated with manual analysis, many
computer vision methods have been developed that can detect and classify one or a
few types of traffic signs from large collections of street-level images (Balali and
Golparvar-Fard 2015; Hu and Tsai 2011; Huang et al. 2012). These methods have the
potential to minimize the subjectivity of the current processes (Balali and Golparvar-
Fard 2015). However, the task remains challenging. To make these detection and
classification methods useful, additional research is needed to minimize False
Positive (FP) and False Negative (FN) rates, and also to localize the detected signs
in 3D. The task is particularly challenging due to the inter-class variability of traffic
signs and the expected changes in illumination, occlusion, and sign
position/orientation (Figure 1). To this end, this paper presents and validates an
efficient pipeline for detecting and classifying traffic signs from 2D street-level
images and triangulating their locations in a 3D point cloud environment. In the
following, an overview of the state-of-the-art methods is provided. Next, the new
method and the experimental results are presented.
RELATED WORK
METHOD
The new method for traffic sign recognition and localization from street-level
images involves several steps: (1) detecting and classifying traffic signs in each 2D
image; (2) reconstructing 3D point clouds from overlapping images; and (3)
localizing the detected traffic signs in the generated 3D environment. In particular,
using a standard Structure from Motion (SfM) procedure, the method generates both
sparse and dense 3D point cloud models of the roadway assets. Each detected traffic
sign is localized in 2D using a bounding box. From multiple detections of the same
traffic sign, the location of the detection bounding boxes is triangulated in 3D using
linear and non-linear methods, and the corresponding 3D points are labeled with the
category of the detected sign. In the following, each step is discussed in detail.
Different from the state-of-the-art methods, the proposed method does not
make any prior assumption on the 2D location of traffic signs in images. Rather, a 2D
template at multiple scales slides across the entirety of each image and extracts
candidates for traffic sign detection. While the aspect ratio of the detection window
is held constant (1:1), three different scale factors (0.75, 1.00, 1.25) are applied to
account for the different scales of traffic signs. To balance accuracy in 2D
localization against computational efficiency, the overlap between the sliding
windows is 6.67%.
(Figure: overview of the detection pipeline. A multi-scale sliding window extracts candidates from each real image; HOG and color histogram features are extracted and concatenated to form HOG+C descriptors; trained linear SVM classifiers then detect whether a sign exists in each candidate window.)
Inspired by (Balali and Golparvar-Fard 2015), for each candidate, the gradient
orientations and color information are locally histogrammed and concatenated into
HOG+Color descriptors. These descriptors are then fed into multiple one-vs.-all
linear Support Vector Machine (SVM) classifiers, trained in an offline process, to
classify the detected signs into multiple categories. Finally, a non-maximum
suppression step removes false positives and keeps high-score detections for accurate
localization. As shown in (Balali and Golparvar-Fard 2015), the performance of the
method is independent of image resolution and is robust to noise and changes in
illumination.
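As an illustration of the pipeline just described, the sketch below pairs a multi-scale 1:1 sliding window with a HOG+Color descriptor and one-vs.-all linear SVMs. It is a minimal approximation rather than the authors' implementation; the window size, histogram bin count, and function names are assumptions, with scikit-image and scikit-learn standing in for whatever feature and classifier code was actually used.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC  # one-vs.-rest multi-class by default

def hog_color_descriptor(window_rgb, bins=16):
    """Concatenate a HOG descriptor with per-channel color histograms,
    i.e., the HOG+Color (HOG+C) descriptor described above."""
    gray = window_rgb.mean(axis=2)
    g = hog(gray, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2))
    color = [np.histogram(window_rgb[..., c], bins=bins, range=(0, 255),
                          density=True)[0] for c in range(3)]
    return np.concatenate([g] + color)

def sliding_windows(image, base=64, scales=(0.75, 1.00, 1.25),
                    overlap=0.0667):
    """Yield square (1:1) candidate windows at the three scale factors;
    the step leaves ~6.67% overlap between consecutive windows."""
    for s in scales:
        w = int(base * s)
        step = max(1, int(w * (1.0 - overlap)))
        for y in range(0, image.shape[0] - w + 1, step):
            for x in range(0, image.shape[1] - w + 1, step):
                yield (x, y, w), image[y:y + w, x:x + w]

# Offline training and detection (X_train, y_train assumed available):
# clf = LinearSVC().fit(X_train, y_train)
# for (x, y, w), win in sliding_windows(image):
#     scores = clf.decision_function([hog_color_descriptor(win)])
```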
Image-Based 3D Reconstruction
filters false matches, and eventually expands them to make the patches dense. The
default parameters used for 3D reconstruction are as follows (a minimal
configuration sketch follows the list):
a) Images per segment: The maximum number of images to be clustered together for
simultaneous matching. This number is limited by availability of memory and
varies based on the density of the reconstruction.
b) Reconstruction level and Voxel size: These parameters vary the density at which
the matching is carried out. Denser matching requires more memory allocation
and is much slower than less dense matching. The level determines the number of
times images are decimated before matching. Finally, the voxel size determines
how often a match is attempted, i.e., every nth pixel in the x and y directions of
the sampled images.
c) Reconstruction threshold: A threshold related to correlation values in the
matching process, used here to filter out bad matches. Larger values (up to 1.0)
mean fewer but more reliable points; smaller values retain more points, but
the quality can be lower.
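As a rough illustration, the parameters above map onto the option names used by the public PMVS2 distribution. The sketch below writes a hypothetical option set; the values echo the best combination reported later in Table 1 (level 2, cell size 1, minimum of 4 images) plus a typical correlation threshold, and should not be read as the authors' exact settings.

```python
# Hypothetical PMVS2-style options mirroring the parameters described above.
pmvs_options = {
    "level": 2,         # times images are decimated before matching
    "csize": 1,         # voxel/cell size: attempt a match every csize-th pixel
    "threshold": 0.7,   # correlation threshold; larger -> fewer, better points
    "wsize": 7,         # photometric-consistency window size
    "minImageNum": 4,   # minimum images a patch must be visible in
}

# PMVS2 reads these as "key value" lines in an option file.
with open("pmvs_options.txt", "w") as f:
    for key, value in pmvs_options.items():
        f.write(f"{key} {value}\n")
```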
The above SfM pipeline outputs camera projection matrices and a dense 3D
point cloud model from the street-level images. Assuming that the intrinsic and
extrinsic camera matrices are accurate, the Direct Linear Transform method of
(Hartley and Zisserman 2003) is used to triangulate the detected traffic signs in 3D.
To do so, the camera matrices are extracted from the Bundler output file. The view
list begins with the length of the list and is followed by quadruplets <camera>
<key> <x> <y>, where <camera> is a camera index, <key> is the index of the SIFT
keypoint where the point is detected in that camera, and <x> and <y> are the
positions of the detected keypoint. The pixel positions are floating point numbers in a
coordinate system whose origin is the center of the image.
For each detected traffic sign, the SIFT features within the 2D detection
bounding boxes are extracted. Among these features, those whose feature tracks pass
through at least four images are chosen to achieve reasonably accurate triangulation.
On average, ten SIFT features are triangulated per traffic sign detection per image.
The corresponding features are individually projected onto the 3D coordinates of the
generated point cloud model by solving a system of equations involving the
following transformation per SIFT feature:
s = PS (1)
where s refers to the point in the image (x, y, 1), S refers to the point in 3D
(X, Y, Z, 1), and P is the projection matrix defined as K[R | T], where K is the
intrinsic camera matrix, R is the rotation of the camera with respect to world
coordinates, and T is the translation with respect to world coordinates.
Solving these equations with a Direct Linear Transform (DLT) is followed by
Levenberg-Marquardt non-linear refinement. Once all feature points relevant to a
particular traffic sign are triangulated in 3D, a 3D box is placed over these points and
is texture-mapped with an image that represents the category of the detected traffic
sign. Figure 3 illustrates the process of feature
detection and matching followed by triangulating the location of the matched features
in 3D.
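A minimal sketch of this two-stage triangulation is given below, assuming calibrated 3x4 projection matrices P = K[R | T] and recentered pixel observations as in the Bundler convention; the function names are illustrative and this is not the authors' code.

```python
import numpy as np
from scipy.optimize import least_squares

def triangulate_dlt(Ps, xs):
    """DLT triangulation of one 3D point from two or more views.

    Ps: list of 3x4 projection matrices P = K [R | T]
    xs: list of (x, y) pixel observations of the same SIFT feature
    """
    A = []
    for P, (x, y) in zip(Ps, xs):
        A.append(x * P[2] - P[0])   # x * p3^T - p1^T = 0
        A.append(y * P[2] - P[1])   # y * p3^T - p2^T = 0
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]             # dehomogenize

def refine_lm(X0, Ps, xs):
    """Levenberg-Marquardt refinement of the DLT estimate, minimizing the
    reprojection error over all views (Eq. 1, s = PS, per view)."""
    def residuals(X):
        r = []
        for P, (x, y) in zip(Ps, xs):
            p = P @ np.append(X, 1.0)
            r += [p[0] / p[2] - x, p[1] / p[2] - y]
        return np.asarray(r)
    return least_squares(residuals, X0, method="lm").x
```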
The performance of the method was validated for accuracy in detection and
classification as well as in 3D reconstruction. In particular, precision and recall
metrics are used to measure the accuracy of classification for different types of traffic
signs. The average precision and recall among all types of traffic signs are 90.15%
and 99%, respectively. Figure 5 shows example results for multi-class traffic sign
detection and classification on a part of the dataset that was collected on a street on
the campus of the University of Illinois. This dataset contained 138 images.
The proposed 3D reconstruction and localization methods were also tested. In
these experiments, the level had the biggest impact on computational time, and the
cell size had the biggest impact on the quality of the generated 3D model. The
minimum number of images had no major impact on time. The best combination of
parameters is shown in Table 1. The new GPU-based implementation has
significantly reduced the computation time (10-fold), making it feasible to
reconstruct the large areas that are typical of highway infrastructure assets. 89%
of the points of interest were successfully projected into the point cloud.
Table 1. 3D Reconstruction Parameters.
Parameter               Value   Description
Cell size               1       Optimal image quality
Min. number of images   4       Optimal image quality
Level                   2       Optimal computational time
The results of 3D reconstruction for the detected signs in Figure 5 are shown
in Figure 6. Here, based on the procedure described in the method section, the
locations of the visual features are triangulated in 3D and their associated bounding
boxes are labeled with the traffic sign categories derived from the detection method.
In Figure 6, the locations of the two detected signs, the regulatory sign and
the warning sign shown by the red and yellow boxes in Figure 5, are texture-mapped
using images of regulatory and warning signs, respectively. Such visualizations
enable users to select an asset category of interest and review the asset locations in
3D. Users can also navigate through the geo-registered images or conduct visual
observations in the 3D point cloud.
CONCLUSION
REFERENCES
Ai, C., and Tsai, Y. J. (2011). "Hybrid Active Contour–Incorporated Sign Detection
Algorithm." Journal of Computing in Civil Engineering, 26(1), 28-36.
Ali, N. M., Sobran, N. M. M., Ghazaly, M., Shukor, S., and Ibrahim, A. T. "Traffic Sign
Detection and Classification for Driver Assistant System." Proc., The 8th Int. Conf.
on Robotic, Vision, Signal Processing & Power Applications, Springer, 277-283.
Balali, V., Depwe, E., and Golparvar-Fard, M. (2015). "Multiclass Traffic Sign Detection and
Classification Using Google Street View Images." Transportation Research Board
94th Annual Meeting, TRB, Washington, DC, USA.
Balali, V., and Golparvar-Fard, M. (2014). "Video-Based Detection and Classification of US
Traffic Signs and Mile Markers using Color Candidate Extraction and Feature-Based
Recognition." International Conference on Computing in Civil and Building
Engineering, ASCE, Orlando, FL, USA, 858-866.
Balali, V., and Golparvar-Fard, M. (2015). "Evaluation of Multi-Class Traffic Sign Detection
and Classification Methods for U.S. Roadway Asset Inventory Management."
Journal of Computing in Civil Engineering.
Balali, V., Golparvar-Fard, M., and de la Garza, J. M. (2012). "Video-based highway asset
recognition and 3D localization." International Workshop on Computing in Civil
Engineering, ASCE, Los Angeles, CA, USA, 379-386.
Brilakis, I., Fathi, H., and Rashidi, A. (2011). "Progressive 3D reconstruction of
infrastructure with videogrammetry." Automation in Construction, 20(7), 884-895.
Brkic, K. (2013). "An overview of traffic sign detection methods." Department of Electronics,
Microelectronics, Computer and Intelligent Systems, Unska, 3, 10000.
Cimpoi, M. (2014). "Traffic sign detection and classification in video mode."
Furukawa, Y., Curless, B., Seitz, S. M., and Szeliski, R. "Towards internet-scale multi-view
stereo." Proc., Computer Vision and Pattern Recognition, 1434-1441.
Gallup, D., Frahm, J.-M., and Pollefeys, M. "A heightmap model for efficient 3d
reconstruction from street-level video." Int. Conf. on 3D Data Proc., Visualization
and Transmission.
Golparvar-Fard, M., Balali, V., and de la Garza, J. M. (2012). "Segmentation and recognition
of highway assets using image-based 3D point clouds and semantic Texton forests."
Journal of Computing in Civil Engineering, 04014023.
Hartley, R., and Zisserman, A. (2003). Multiple view geometry in computer vision,
Cambridge university press.
Hu, Z., and Tsai, Y. (2011). "Generalized image recognition algorithm for sign inventory."
Journal of Computing in Civil Engineering, 25(2), 149-158.
Huang, Y.-S., Le, Y.-S., and Cheng, F.-H. "A Method of Detecting and Recognizing Speed-
limit Signs." Proc., Intelligent Information Hiding and Multimedia Signal Processing
(IIH-MSP), 2012 Eighth International Conference on, IEEE, 371-374.
Kim, J. W., Jung, K. H., and Hyun, C. C. (2005). "A study on an efficient sign recognition
algorithm for a ubiquitous traffic system on DSP." Computational Science and Its
Applications–ICCSA 2005, Springer, 1177-1186.
Maldonado-Bascon, S., Lafuente-Arroyo, S., Gil-Jimenez, P., Gomez-Moreno, H., and
López-Ferreras, F. (2007). "Road-sign detection and recognition based on support
vector machines." Intelligent Transportation Systems, IEEE Transactions on, 8(2),
264-278.
Oskouie, P., Becerik-Gerber, B., and Soibelman, L. "Automated Cleaning of Point Clouds for
Highway Retaining Wall Condition Assessment." Proc., 2014 International
Conference on Computing in Civil and Building Engineering.
Rashidi, A., Brilakis, I., and Vela, P. (2014). "Generating Absolute-Scale Point Cloud Data of
Built Infrastructure Scenes Using a Monocular Camera Setting." Journal of
Computing in Civil Engineering, 04014089.
Snavely, N. (2014). "Bundler: Structure from Motion (SfM) for Unordered Image
Collections.", <http://www.cs.cornell.edu/~snavely/bundler/>.
Timofte, R., Zimmermann, K., and Gool, L. V. (2014). "Multi-view traffic sign detection,
recognition, and 3D localisation." Mach. Vision Appl., 25(3), 633-647.
Yangxing, L., and Ikenaga, T. (2007). "Geometrical, physical and text/symbol analysis based
approach of traffic sign detection system." IEICE, 90(1), 208-216.
Zhu, S.-d., Zhang, Y., and Lu, X.-f. (2006). "Detection for triangle traffic sign based on
neural network." Advances in Neural Networks-ISNN 2006, Springer, 40-45.
Abstract
INTRODUCTION
The recent advent of Inertial Measurement Unit (IMU) sensors (sensor units
with built-in accelerometers and gyroscopes used to measure tilt, acceleration, etc.)
has resulted in their utilization for various applications. Within the construction
industry, the components of IMU sensors (accelerometer and gyroscope) have so far
been used for activity analysis of construction workers (Cheng et al. 2013; Joshua et
al. 2011), construction vehicle tracking (Lu et al. 2007), and detecting the 3-D
orientation of construction equipment (Akhavian et al. 2012). Most of these studies
were aimed at resource (equipment and material) tracking for construction planning
and control. Although their findings can be useful for performance monitoring, they
mainly focused on tracking rather than analyzing operations at the activity level to
make decisions regarding productivity.
Ahn et al. (2013) first employed a MEMS accelerometer for estimating the
working and idle modes of construction equipment. They relied on the assumption
that any engine operation of construction equipment, including idling, creates
distinguishable patterns of acceleration signals compared to the background noise
generated during the engine-off mode. Similarly, stationary operation (e.g., of an
excavator) creates acceleration patterns that are distinguishable from idling. They
tested this approach on the operation of an excavator, whose actions were classified
into working and idle modes. The activity recognition accuracy was found to be
around 93%, which resulted in less than 2% error in the measurement of idle time.
However, as discussed above, this outcome is not adequate for detailed productivity
analysis; the various activity modes and the duty cycle of construction equipment
must therefore be studied in depth.
The objective of this research is to test the hypothesis that the signals
captured by a smartphone (equipped with IMU sensors) mounted on construction
equipment can be used to classify the operation into various activity modes (for
example, boom movement, cabin rotation, and wheel base motion for an excavator)
and to determine the duty cycle of the operation. We extend the work of Ahn et al.
(2013) from broad-based equipment action classification (i.e., working, idle, and
engine-off modes only) to the multi-mode classification discussed above. The
underlying assumption behind this hypothesis is that every operation of construction
equipment (for example, boom movement, cabin rotation, and wheel base motion for
an excavator) generates unique and distinguishable signal patterns compared to the
idling and engine-off modes.
Crane. The experiments were performed on a CAT 330CL hydraulic excavator
working on actual project sites to test the feasibility of this approach in a real-world
scenario. The research framework is shown in Figure 1. The smartphone with built-in
sensors (accelerometer and gyroscope) was mounted on a rigid block inside the cabin
of the excavator. The accelerometer and gyroscope collected data in the form of
three-dimensional acceleration at fixed intervals based on the sampling frequency.
The raw data are segmented into windows with 50% overlap between consecutive
windows. The second-by-second operation of the excavator was videotaped, and the
starting time stamps of the data collection and the video were matched. We used
detrending to remove the gravity effect from the raw data for FFT (Fast Fourier
Transform) processing.
Various time- and frequency-domain features were extracted from the data.
Time-domain features included resultant acceleration, mean acceleration (3 axes),
standard deviation of acceleration (3 axes), peak acceleration (3 axes), correlation
(3 axes), zero crossing rate (3 axes), kurtosis (3 axes), skewness (3 axes), and
interquartile range (3 axes). Frequency-domain features included spectral entropy
(3 axes), spectral centroid (3 axes), short time energy (3 axes), and spectral roll-off
(3 axes). The same set of features was extracted from both the accelerometer and
gyroscope data.
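A minimal sketch of the windowing and a few of the listed features follows, assuming a single acceleration channel sampled at a fixed rate; the window length, function names, and exact feature definitions are our assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import detrend

def segment(signal, window, overlap=0.5):
    """Split a 1-D signal into fixed-length windows with 50% overlap."""
    step = int(window * (1.0 - overlap))
    return np.array([signal[i:i + window]
                     for i in range(0, len(signal) - window + 1, step)])

def features(window, fs):
    """A few of the time- and frequency-domain features listed above."""
    w = detrend(window)                        # remove gravity/trend for FFT
    zcr = np.mean(np.abs(np.diff(np.sign(w))) > 0)   # zero crossing rate
    power = np.abs(np.fft.rfft(w)) ** 2
    p = power / power.sum()
    entropy = -np.sum(p * np.log2(p + 1e-12))        # spectral entropy
    freqs = np.fft.rfftfreq(len(w), d=1.0 / fs)
    centroid = np.sum(freqs * p)                     # spectral centroid
    return [np.mean(window), np.std(window), np.ptp(window),
            zcr, entropy, centroid]

# Example: a 100 Hz accelerometer channel cut into 2-second windows
# X = np.array([features(w, fs=100.0) for w in segment(acc_z, window=200)])
```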
DATA CLASSIFICATION
(Figure: example raw accelerometer traces; each panel plots acceleration (m/sec2) against time (sec) for different segments of the excavator operation.)
to work extensively on feature engineering in WEKA using filter and ranker methods
for refining the accuracies.
A video approximately 34.48 min long (2069 data points) was selected to
demonstrate the applicability of our approach. The features extracted from the raw
data were divided into training and testing sets (60-40 split). The predictions of
activity actions were made on the testing set. The classification accuracy with the
Random Forest algorithm was observed to be 74.7%. The whole duration of the
operation corresponding to the testing set was divided into small time segments. To
compute the cycle times based on the predicted labels, we first calculated the number
of cycles in a particular segment by identifying and enumerating the points in time at
which the labels change from cabin rotation to bucket/arm movement and vice versa.
This can be achieved by calculating the difference between consecutively occurring
labels in time. The system returns a flag value for motions irrelevant to the cycle
(e.g., wheel base motion). A particular set of label differences always occurs in a
pattern, and one such pattern represents one duty cycle. We can estimate the start and
end of each cycle by tracking this pattern. Once the number of cycles is known, the
cycle time is calculated by dividing the duration of the time segment by the number
of cycles.
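The cycle counting just described can be sketched as follows; the label strings and function names are hypothetical placeholders for the classifier's output classes, not the authors' code.

```python
def count_cycles(labels, cabin="cabin_rotation", arm="bucket_arm",
                 flag="wheel_base"):
    """Count duty cycles in a sequence of per-window predicted labels.

    Transitions between cabin rotation and bucket/arm movement are
    enumerated after skipping labels irrelevant to the cycle (the flagged
    wheel base motion); one back-and-forth pattern is one duty cycle.
    """
    relevant = [lbl for lbl in labels if lbl != flag]
    transitions = sum(1 for a, b in zip(relevant, relevant[1:])
                      if a != b and {a, b} == {cabin, arm})
    return transitions // 2          # two transitions per duty cycle

def mean_cycle_time(labels, segment_seconds, **kw):
    """Cycle time = segment duration / number of cycles in the segment."""
    n = count_cycles(labels, **kw)
    return segment_seconds / n if n else float("nan")

# Example with hypothetical labels for a 60-second segment:
seq = ["cabin_rotation", "bucket_arm", "wheel_base", "cabin_rotation",
       "bucket_arm", "cabin_rotation"]
print(count_cycles(seq), mean_cycle_time(seq, 60.0))  # 2 cycles, 30 s each
```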
A comparison between predicted and actual labels is shown in Table 2.
For most of the cycles, the cycle time from the predicted labels matched that
from the video exactly (or very closely), but for some of the others, the variation was
significant. After analyzing the video, we concluded that this variation is mainly due
to the many mixed motions (bucket/arm movement happening simultaneously with
other motions) in these duty cycles, which resulted in incorrect prediction of labels
by the classification algorithm and subsequently affected the cycle time
measurement. The higher standard deviation of the duty cycle measurement is
probably because of confusion among classes such as arm/bucket movement and
cabin rotation. We are exploring other features that will help us better distinguish
between cabin rotation and bucket/arm movement, which is expected to improve the
cycle time measurement accuracy.
CONCLUSION
An attempt has been made in this paper to classify the various actions of a
hydraulic excavator (CAT 330CL) using a smartphone-sensor-based approach. A
novel method is presented for calculating the cycle time of the excavator from the
data classification results for various cycles. We observed that the estimation of the
mean cycle time has better accuracy than the classification accuracy. This is
promising because estimating the cycle time is what we ultimately aim to achieve for
improving the performance of equipment fleet operations. In the future, we are
planning to conduct a
REFERENCES
Abolhasani, S., Frey, H.C., Kim, K., Rasdorf, W., Lewis, P., Pang, S.H. “Real-world
in-use activity, fuel use, and emissions for nonroad construction vehicles: a
case study for excavators” J Air Waste Manag Assoc. 2008 Aug;58(8):1033-
46.
Ahn, C. R., Lee, S.H., and Peña-Mora, F. (2013). “Accelerometer-based measurement
of construction Equipment operating efficiency for monitoring Environmental
performance.” Computing in Civil Engineering (2013).
Ahn, C. R., Lee, S.H., and Peña-Mora, F. (2012). “Monitoring System for
Operational Efficiency and Environmental Performance of Construction
Operations, Using Vibration Signal Analysis.” Construction Research
Congress 2012, West Lafayette, Indiana, May 21–23.
Akhavian, R., and Behzadan, A. H. (2012). "Remote Monitoring of Dynamic
Construction Processes Using Automated Equipment Tracking." Construction
Research Congress 2012, 1360-1369.
Bennink, C. (2011). "Dig Into Excavator Productivity." Equipment Today,
<http://www.forconstructionpros.com/article/10212146/dig-into-excavator-
productivity> (Nov. 15, 2014).
Chao, L.C. (1999). "Simulation of construction operation with direct inputs of
physical factors." Construction Informatics Digital Library, paper w78-1999-
2287.
Chen, J., Ahn, C. R., and Han, S. (2014). "Detecting the Hazards of Lifting and
Carrying in Construction through a Coupled 3D Sensing and IMUs Sensing
System." Computing in Civil Engineering (2014).
Gong, J., and Caldas, C. H. (2011). "An object recognition, tracking, and contextual
reasoning-based video interpretation method for rapid productivity analysis of
construction operations." Automation in Construction, 20(8), 1211-1226.
Heydarian, A., Golparvar-Fard, M., “Automated Visual Recognition of Construction
Equipment Actions Using Spatio-Temporal Features and Multiple Binary
Support Vector Machines”. Construction Research Congress (2012).
Joshua, L., Varghese, K. (2011), “Accelerometer-Based Activity Recognition in
Construction”. Journal of Computing in Civil Eng. 2011.25:370-379.
Pradhananga, N. and Teizer, J. (2013). “Automatic spatio-temporal analysis of
construction site equipment operations using GPS data”, Automation in
Construction, 29, p.107-122.
Zou, J., and Kim, H. (2007). "Using Hue, Saturation, and Value Color Space for
Hydraulic Excavator Idle Time Analysis." 10.1061/(ASCE)0887-3801(2007)21:4(238).
Abstract
Sustainable building system design techniques aim to find an optimal balance
between occupant comfort and the energy performance of HVAC systems. The
design and implementation of effective heating, ventilating, and air conditioning
(HVAC) controls are key to achieving these optimal design conditions. Any anomaly
in the functioning of a system component or a control system results in occupant
discomfort and/or energy wastage. While occupant discomfort can be directly sensed
by occupants, measuring wasted energy use requires additional sensing and analysis
infrastructure. One way of identifying such waste is to compare as-designed system
requirements with the actual performance of the systems. This paper presents an
analysis of an air handling unit (AHU) in a five-story office building and compares
the design requirements against the sensor data corresponding to the AHU
parameters. One year of sensor data for the AHU parameters was analyzed to assess
the correctness of the implementation of the design intent. The design intent was
interpreted from the sequence of operations (SOOs) and confirmed with a
commissioning engineer who worked on the project. The design intent was then
graphically represented as a pattern that the sensor data corresponding to the controls
is expected to follow if the controls implement the design intent. Any deviation of
the sensor data from this expected operation pattern indicated incorrect operation of
the system with incorrectly implemented controls. The findings in this paper
substantiate the need to formally define the sequence of operations and also point to
the need to verify the implemented controls in a given project to detect any
deviations from the actual design intent.
INTRODUCTION
According to the American Society of Heating, Refrigerating and Air-
Conditioning Engineers (ASHRAE) handbook, the primary purpose of HVAC
equipment and systems is to provide the desired indoor environment to the occupants
of a facility, as it greatly impacts their productivity. HVAC design engineers calculate
the thermal loads based on the internal thermal loads and the thermal resistance of the
building envelope. They select an appropriate HVAC system design that meets the
standard design guidelines as well as the goals related to indoor environment and
energy performance requirements (ASHRAE, 2012). The controls necessary for the
optimal functioning of these systems are conveyed to the other project team members
through construction documentation. The construction documents contain textual
narratives called Sequences of Operations (SOOs) to specify HVAC controls.
Research by the National Building Controls Information Program (NBCIP)
highlighted the frequency of occurrence of control problems and their energy impact
(Barwig et al., 2002). However, the NBCIP study identified control-related problems
in retrospect, without examining the design intent of the controls; these control-
related issues could have been introduced as part of the designed controls for these
buildings. This paper presents a case study of the controls for an AHU in a newly
constructed office building.
The designed and implemented controls for the AHU were compared to
identify any possible deviations. Within current practice, the sequences of operations
(SOOs) written by mechanical engineers are the main medium for sharing the design
intent, and the SOOs and design drawings for the studied AHU were examined to
understand the designer's intent for controlling the system. The design intent
interpreted from the SOOs was then verified with the commissioning authority (CxA)
for the project. The sensor data from the AHU was collected over a period of one
year and graphically compared against the design intent of the mechanical designer.
This comparison of the design intent and the actually implemented controls indicated
inefficiencies in the present approach to exchanging information related to HVAC
controls. The findings substantiate a need for a formal representation of SOOs to
foster the exchange of information related to HVAC controls.
chiller in the building. Energy efficiency was of prime importance for this project, as
the owner aimed for a LEED™ Platinum rating. Apart from an efficient HVAC
system, the design also incorporates a high-quality envelope, automated solar shading
devices on all windows, and occupancy and daylight sensors to achieve the energy
efficiency goals.
a. View of the case study building; b. View of the rooftop packaged AHU
position are plotted against each other, the plot can be compared to the expected
operation pattern to assess whether the control follows the design intent. This can be
graphically represented as shown in Figure 3a. Any points falling in the green section
of the graph are predicted to follow the designed controls, and the points falling in
the red region of the graph are predicted to deviate from the designed controls.
Figure 3. Deviation analysis of the implemented control against the design intent
DEVIATION ANALYSIS OF IMPLEMENTED CONTROLS
The Pia tool in MATLAB can be used for deviation analysis of the
implemented controls by visualizing the corresponding sensor data. The tool allows
the creation of a matrix of scatter plots and provides an option for coloring the data
points interactively within the scatter plot matrix (Isakson, 2002). Trend data
collected from April 2013 to March 2014 at a 15-minute time interval were used in
this analysis for the following parameters: by-pass damper position, hot water valve
position, hot water pump status, supply fan status, hot water return temperature, hot
water return temperature set point, return air temperature, and return air temperature
set point. The heat exchanger is provided to reduce the heating load on the heating
coil and save energy in the AHU.
The deviation analysis presented here analyzes the implemented control to
verify whether the design intent of saving energy is being achieved during system
operation. The data collected at a 15-minute time interval is classified as "Follows
design intent" (green area) or "Doesn't follow design intent" (red area) based on the
interpreted designed controls. The expected operation pattern of the by-pass damper
and heating coil valve position based on the interpreted controls is graphically
presented in Figure 3a. The hot water valve is expected to modulate only when the
by-pass damper is completely closed, during occupied heating hours, when neither
the heating coil nor the heat exchanger is in freeze protection mode.
The sensor trend data points were plotted against one another in a scatter plot
matrix. The plot of the heating coil valve position vs. the by-pass damper position
(Figure 3b) was chosen from the matrix, and the data points that corresponded to the
green region of the operation pattern (as shown in Figure 3a) were colored green,
with the rest of the points colored red. Further, all instances corresponding to
unoccupied hours (supply fan off), no hot water flow (hot water pump off), freeze
protection of the heating coil (HWRT - HWRT:SP < 0), and freeze protection of the
heat exchanger (RAT - RAT:SP < 0) were filtered out. These filtered-out instances
are shown as black points in Figure 3b. The remaining red points indicate the
instances when the implemented control deviated from the design intent. In this
particular example, the system loses opportunities to save energy at these
instances marked in red. According to the design intent, the heating coil valve should
modulate only when the heat recovery by-pass damper is completely closed;
however, both are open above 0% for the instances marked in red. The majority of
these instances occurred in September and October 2013, when the room
temperatures modulated between 20°C and 26°C.
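A minimal sketch of this classification and filtering is shown below, assuming the trend data sit in a pandas DataFrame; the column names are hypothetical stand-ins for the BAS point names, and the green/red rule encodes the interpreted design intent described above.

```python
import pandas as pd

def classify_deviation(df):
    """Color each 15-minute BAS sample against the interpreted design intent.

    Column names are hypothetical stand-ins for the trend points named in
    the text: bypass_damper / hw_valve (percent open), supply_fan / hw_pump
    (1 = on, 0 = off), HWRT / HWRT_SP and RAT / RAT_SP (degrees C).
    """
    filtered = (
        (df["supply_fan"] == 0)              # unoccupied hours
        | (df["hw_pump"] == 0)               # no hot water flow
        | (df["HWRT"] - df["HWRT_SP"] < 0)   # heating coil freeze protection
        | (df["RAT"] - df["RAT_SP"] < 0)     # heat exchanger freeze protection
    )
    # Design intent: the hot water valve modulates only once the heat
    # recovery by-pass damper is completely closed.
    follows = (df["hw_valve"] == 0) | (df["bypass_damper"] == 0)
    color = pd.Series("red", index=df.index)   # deviates from design intent
    color[follows] = "green"                   # follows design intent
    color[filtered] = "black"                  # filtered out of the analysis
    return color
```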
It is impossible to assess why the heat exchanger does not reach its maximum
capacity by only looking at the room temperatures, as the heating requirement would
be compensated by the heating coil. Hence it is important to look at the control
relationship between the heating coil valve and the by-pass damper, which may not
always be explicitly described in the SOOs. In the case example studied, the control
relationship was determined from pseudo-code developed based on discussions with
the commissioning engineer for the project. A formal representation of SOOs can be
used to clearly interpret all the control relationships required to implement the design
intent.
for twelve months. Both the active and passive testing approaches help in identifying
differences between the implemented and designed controls. However, these
diagnostic approaches give little information about the changes required to the
implemented controls to eliminate the deviations. Formally associating the sensor
data points with the respective control parameters can further help in identifying the
changes that need to be made to the implemented controls in the event of deviations.
CONCLUSION
This paper highlights the need for a formal representation of SOOs and for
formal approaches to comparing the design intent from the SOOs with the
implemented controls. Presently, several challenges are associated with interpreting
the design intent of HVAC controls from the SOOs, such as missing information
items, missing set points, or insufficient descriptions. Hence the implemented
controls cannot always be compared to the design intent during commissioning. The
deviation analysis from the case study presented in this paper shows the importance
of testing the implemented controls, as such tests help identify control issues that
may lead to energy wastage. A formal representation of SOOs can greatly improve
this process of controls testing by enabling a clear interpretation of the design intent.
Also, the testing approaches presently used for commissioning HVAC control
systems do not clearly indicate the changes required to resolve any identified
deviations of the implemented controls from the design intent. The analysis of the
case study shows that associating the trend data points from the BAS with the control
parameters can effectively identify any deviations between the designed and
implemented controls. Once the correct implementation of the controls is established,
the controls can be further optimized to achieve energy savings of up to 35% (Wang
et al., 2011). Future work will focus on formalizing the deviation analysis of the
designed and implemented controls to exactly identify the changes needed in the
BAS programming in the event of any deviations.
ACKNOWLEDGEMENT
The authors thank the team at Baumann Consulting for their expert advice and for
supporting this research.
REFERENCES
Barwig, F. E., House, J. M., Klaassen, C. J., Ardehali, M. M., & Smith, T. F. (2002).
The national building controls information program. In Proc. ACEEE
Summer Study on Energy Efficiency in Buildings, Washington D.C., August
18 – 23, 2002.
Baumann, O. (2003). Operation Diagnostics – Verification and Optimization of
Building and System Operation by Multi-Dimensional Visualization of
BEMS Data, ICEBO – International Conference for Enhanced Building
Operations, , Berkeley CA, October 13-15, 2003.
ASHRAE. (2004). Guideline 13-2000: Specifying Direct Digital Control Systems.
ASHRAE Publications, Atlanta, GA.
ASHRAE. (2012). HVAC Systems and Equipment. ASHRAE Handbook, American
Society of Heating, Refrigerating, and Air Conditioning Engineers, ASHRAE
Publications, Atlanta, GA.
Abstract
Micro-management refers to a management style whereby managers closely observe and
control the work details of subordinates or employees. Although micro-management generally
has a negative connotation, the implications of adopting micro-management in construction
projects remain unclear. This paper proposes the use of Agent Based Modeling (ABM) to
investigate the impacts of micro-management on efficiency, effectiveness, quality, and
employee stress levels in construction projects. A comprehensive simulation platform, Virtual
Organizational Imitation for Construction Enterprises (VOICE), has been developed to simulate
the proposal development of an EPC (Engineering, Procurement and Construction) project. The
simulation results show that micro-management has complex effects in the studied project,
whereby decisional, behavioral, technical, and institutional factors are interdependent. Micro-
management in certain cases improves the efficiency and quality of proposal development. This
paper contributes to the simulation studies investigating social and behavioral problems in
construction.
INTRODUCTION
Management styles can be grouped into two categories according to how coordinators involve
themselves in decision making and managerial actions: micro-management and its opposite
(Burton et al. 1998). Micro-management is the custom of being heavily engaged in the daily
affairs and specific tasks of subordinates, while the opposite is giving a degree of autonomy to
subordinates. The organizational literature often refers to micro-management as a "bad
management" practice (Alvesson and Sveningsson 2003). In general, "it takes away the decisions
from the people that should take the decisions" (Alvesson and Sveningsson 2003), and it
interferes with the productivity of people and the efficiency of projects and processes (Chambers
2009). Despite the evidence from the general organizational literature, the implications of
adopting micro-management in construction projects remain unclear. This study proposes the
use of Agent Based Modeling (ABM) to investigate the implications of micro-management in
construction projects.
LITERATURE REVIEW
ABM is an emerging tool for use in social research to study human and organizational issues in a
diversity of areas (North and Macal 2007). It is a computational method that builds a common
environment for heterogeneous and autonomous agents to share, and allows the agents to
interact simultaneously with each other in pursuit of their own interests (Ligmann-Zielinska and
Jankowski 2007). Unlike top-down modeling approaches (e.g., System Dynamics), in ABM the collective
behavior of the simulated system is not predefined, but emerges from individual agents who act
based on what they perceive to be their own interests. Thus, ABM is capable of reproducing the
emergent properties of the studied systems (Macal and North 2007). ABM has been utilized by a
small but growing community of scholars to tackle a range of difficult problems in the
construction area, including engineering design (Soibelman and Pena-Mora 2000), project
organizations and network (Du and El-Gafy 2010, 2012; Horii et al. 2005; Jin and Levitt 1996;
Taylor and Levitt 2007), construction operations (Kim 2010; Mohamed and AbouRizk 2005;
Watkins et al. 2009), project management (Christodoulou 2010), supply chain (Xue et al. 2005),
and construction safety (Walsh and Sawhney 2004).
METHODOLOGY
An ABM model has been developed, namely Virtual Organizational Imitation for Construction
Enterprises (VOICE). VOICE tailors Robbins’ model of organizational behaviors (Robbins 2005)
to suit construction organizations, with three main components modeled (Fig.1): (1) Work:
construction organizations are project based organizations (PBOs), and thus projects and
corresponding tasks are modeled as the sole input as that in Robbins’ model; (2) Actors: project
tasks are performed by the individuals in a construction organization, whose personalities, value
and attitudinal factors affect the perceptions toward the tasks, leading to diverse micro-level
behaviors directly related to the work performance; and (3) Organization: a variety of
organizational structures that arranges lines of authority, work and communications, and
allocates rights and duties. In addition, key performance indicators of project team performance
are modeled as the main output. The architecture illustrated in Fig. 1 reflects the bottom-up
process of organizational behavior (input-individual level process - group process -
organizational process - output) as suggested by Robbins (2005). VOICE conceptualizes and
integrates all components into a comprehensive and integral model. Table 1 summarizes the
model rules of VOICE.
• Task amount is measured by “hours”, i.e., how many hours it takes to finish a task by a team
member with average competence;
• Some tasks need approval from managers or the president, or additional information, before
processing;
• If a task is dependent on another task that has not been finished, it cannot be processed;
Actors
There are three major roles of actors in a construction project team: president, manager, and
staff member. In VOICE, an actor will first examine the situation; then, based on its judgment of
the situation and its preferences, a certain behavioral module is triggered.
• Prioritizing: An actor can only process one task at a time; therefore, prior to further actions, an
actor may order all tasks in hand based on their priorities;
• Processing: Processing a task means reducing a certain amount from the task every simulation
tick. The amount reduced depends on the task difficulty and the competence of the actor. During
this process, actors may commit mistakes, recorded as a mistake percentage of the task;
• Submission: Once a task is finished and there are no successive actors (according to the work
process), the actor will submit the task to his/her superior;
• Assigning: A manager may assign a task to his/her subordinates based on the assigning
preference (e.g., speed driven or quality driven);
• Requesting/approving: Some tasks require approval from superiors; in this case, the actor will
render the task to a superior, who approves the task or renders it again to his/her own superior
based on the technical information of the task and the authority level of the actor;
• Conflict management: If a conflict cannot be solved by staff members, it will be raised to the
manager or coordinator for further action;
• Information exchange: If the available information for a task is less than the required information,
the actor will send the task to another actor. After a while, the task will be returned to the requestor
with the necessary information;
• Meeting: If the number of all exceptions in a team is greater than the president's threshold, a
meeting will be held. After a meeting, all tasks are approved, information is provided, and
exceptions are cleared;
• Monitoring: If the mistake percentage of a task is greater than the threshold of a staff member or a
manager, the task will be returned to the original actor, or will be corrected at a cost of additional
time, depending on the preference;
• Correction/rework: If an actor receives a returned task marked as unqualified, he/she will redo it
to improve quality. The time spent on correcting/redoing a task depends on the mistake percentage
of the task and the competence of the actor;
• Stress-coping: An actor sums up the total amount of tasks (burden) in hand; if this number is
greater than his/her capacity, he/she will suspend working and return new tasks to the manager.
The manager will reassign them to a staff member with a smaller burden.
Organization
• Reporting structure: It is assumed that the construction project team has a three-level
hierarchical organizational structure;
• Work process: The procedure for processing a task; it shows the sequence of delivering a task
among team members. It always starts from a manager;
• Information flow: The channel connecting information requestors and providers. Information
refers only to task-related information, i.e., the information needed for processing a task.
Performance
• Effectiveness: Ratio of productive time to total time. Productive time is defined as the time
directly spent on processing tasks;
• Quality: The mistake percentage of a project, which equals the weighted average of the mistake
percentages of all its tasks;
• Work pressure: Total work amount of tasks in hand for an actor.
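To make the rules in Table 1 concrete, the following toy sketch encodes a few of them (prioritizing, processing, mistakes, and stress-coping) in a single agent step. It is illustrative only, not the VOICE implementation, and all names and numeric values are assumptions.

```python
import random

class Actor:
    """Toy VOICE-style agent; encodes a few Table 1 rules per tick."""

    def __init__(self, competence=1.0, capacity=40.0, mistake_rate=0.05):
        self.competence = competence    # task hours cleared per tick
        self.capacity = capacity        # burden threshold for stress-coping
        self.mistake_rate = mistake_rate
        self.tasks = []                 # list of (remaining_hours, priority)

    def burden(self):
        return sum(hours for hours, _ in self.tasks)

    def step(self):
        # Stress-coping: suspend work when burden exceeds capacity.
        if self.burden() > self.capacity:
            return "suspended"
        if not self.tasks:
            return "idle"
        # Prioritizing: order tasks in hand, then process the top one.
        self.tasks.sort(key=lambda t: -t[1])
        hours, priority = self.tasks.pop(0)
        hours -= self.competence        # processing reduces the task amount
        if hours > 0:
            self.tasks.insert(0, (hours, priority))
        # Mistakes accrue with some probability while processing.
        if random.random() < self.mistake_rate:
            return "working (mistake)"
        return "working"

a = Actor()
a.tasks = [(3.0, 2), (1.0, 5)]
print([a.step() for _ in range(4)])  # processes the priority-5 task first
```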
SIMULATION ANALYSIS
A case study was conducted with a large EPC company, denoted as A. In order to enhance its
competitive power in the EPC market, Company A acquired an engineering design firm several
years ago to design all of A's new EPC jobs. Proposal development is the responsibility of A's
project proposal team, but because of the specialty of the work, A's proposal team relies heavily
on the technical and quantity information from the engineering team to develop proposals. This
study utilized VOICE to explore the implications of micro-management in the proposal
development at A, especially in the interdepartmental cooperation between the engineering and
proposal teams (Fig. 2). The magnitude of micro-management was measured by the acceptable
number of iterations of information exchange before raising the issue to coordinators
(Kristof-Brown and Stevens 2001); a smaller acceptable number of iterations means the
coordinators prefer micro-management. In addition, two other sociotechnical factors have been
found to affect the implications of micro-management: 1) goal congruence, i.e., aligned
perceptions of behavioral standards and ranking of management criteria among stakeholders
(Thomsen et al. 2005); and 2) task dependence, i.e., the relationships among tasks, which
determine the order in which activities need to be performed (Jin and Levitt 1996). In the
simulation, goal congruence was quantified as a value from 0 to 1, where 1 means the best goal
congruence. As for task dependence, a probability was used to determine whether a newly
generated task can be processed while preceding tasks are ongoing. Monte Carlo simulation was
performed to explore the entire uncertainty space, with uniform distributions used to simulate
the changes in micro-management, goal congruence, and task dependence.
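The Monte Carlo design can be sketched as follows; the sampling bounds for the acceptable number of iterations and the run_voice stand-in are assumptions, since the text specifies uniform distributions but not the exact ranges.

```python
import random

def sample_scenario():
    """One Monte Carlo draw over the three varied factors (uniform, as in
    the text); the iteration bounds are assumed, not taken from the study."""
    return {
        # fewer acceptable iterations => stronger micro-management
        "acceptable_iterations": random.randint(1, 10),
        "goal_congruence": random.uniform(0.0, 1.0),   # 1 = best congruence
        "task_dependence": random.uniform(0.0, 1.0),   # processing probability
    }

# run_voice() is a hypothetical stand-in for executing the VOICE model:
# results = [run_voice(**sample_scenario()) for _ in range(10_000)]
```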
(Fig. 2: interdepartmental structure of the case study, showing the engineering coordinator and engineers on one side, and the proposal coordinator and proposal staff on the other.)
analysis (Table 2) found that micro-management’s influence shows different features under different
levels of goal congruence:
• Efficiency: The effects of micro-management on efficiency vary depending on the level of goal
congruence. When goals are less congruent between the two teams (0.1 and 0.2), micro-management
can help improve efficiency. But when goals are highly congruent between the two teams (0.8 and
0.9), too much micro-management hurts efficiency.
• Effectiveness: The effect of micro-management on effectiveness also depends on the level of goal
congruence. When goals are less congruent, such as at a level of 0.1 or 0.2, micro-management
improves effectiveness. Otherwise, micro-management sacrifices effectiveness. This indicates that
micro-management helps with effectiveness only when goals are incongruent.
• Quality: Results show that autonomy sacrifices quality in most cases; micro-management can
generally help reduce mistakes. However, this is not true when the goals of the two teams are highly
congruent (e.g., greater than 0.9); in this case, micro-management slightly increases the chance of
committing more mistakes. This indicates that when teams share the same goals, micro-management
leads to mistakes.
• Work-related pressure: The ANOVA indicates a significant relationship (p-value < 0.0001)
between micro-management and work-related pressure at each level of goal congruence: less micro-
management, or a higher level of autonomy for the staff, means a higher level of work-related pressure.
• Efficiency: The influence of micro-management becomes less noticeable when task dependence is
considered. Only when tasks are very independent (task dependence of 0 through 0.2) is micro-
management able to improve efficiency; otherwise, it exerts no influence. This indicates that micro-
management is beneficial only when tasks are highly independent.
• Effectiveness: Similar to efficiency, the influence of micro-management on the effectiveness of
proposal development is not significant when task dependence is considered.
• Quality: Autonomy sacrifices quality. When coordinators prefer the autonomy of team members,
the team will commit more mistakes. Worth noting, however, is that the opposite trend occurs when
task dependence equals 0, which is due to abnormal data points in the simulation.
• Work pressure: The ANOVA does not show a significant relationship between micro-management
and work-related pressure under most task dependence levels. Only when tasks are highly independent
(dependence smaller than 0.4) do the results show that micro-management can reduce work-
related pressure.
Table 3. p-values of micro-management’s influence under levels of task dependence
Dependence Efficiency Effectiveness Quality Pressure
0.0 <0.0001* <0.0001* 0.003* <0.0001*
0.1 0.0027 <0.0001* <0.0001* <0.0001*
0.2 0.0343 0.0238* <0.0001* <0.0001*
0.3 0.0789 0.0862 <0.0001* 0.0253*
0.4 0.1237 0.1658 <0.0001* 0.0448*
0.5 0.3364 0.0822 <0.0001* 0.1864
0.6 0.6238 0.1561 0.0134* 0.3762
0.7 0.1595 0.1231 0.0013* 0.4579
0.8 0.4423 0.0684 0.0002* 0.2688
0.9 0.5864 0.0402* 0.0002* 0.3357
1.0 0.3522 0.0355* 0.0052 0.4238
The general organizational science literature refers to micro-management as a bad practice;
however, the implications of adopting micro-management in construction projects remain unclear.
This study investigates micro-management and its implications in project proposal development
from a behavioral perspective. Unlike previous efforts, it also highlights the importance of
considering the diverse human behaviors relevant to proposal development in a comprehensive
manner rather than just one or several critical behaviors, as the interactions of various human
behaviors set the foundation for understanding complex institutional and behavioral phenomena.
An ABM model, called VOICE, was built to perform a series of simulation experiments on the
impacts of micro-management, including the implications of goal congruence and task
interdependence. Results indicate that the impacts of micro-management are complex and depend
on a variety of factors. For example, when team members share congruent goals, micro-management
will hurt performance, but it will improve performance when team members have incongruent goals.
Admittedly, this work is in its infancy. Future work will focus on expanding the factors and
processes modeled by VOICE to capture a wider range of organizational behaviors. More real data
from different companies will be collected to define behaviors, work processes, and interactions,
which will yield more realistic results.
APPENDIX: SUPPLEMENTARY INFORMATION
The behavior rules in VOICE were based on a survey conducted in 2011 and a meta-analysis. For
summaries please refer to https://sites.google.com/site/dujresearch/working-papers.
REFERENCES
Alvesson, M., and Sveningsson, S. (2003). "Good visions, bad micro-management and ugly ambiguity:
contradictions of (non-) leadership in a knowledge-intensive organization." Organization Studies,
24(6), 961-988.
Burton, R. M., Obel, B., Hunter, S., Søndergaard, M., and Døjbak, D. (1998). Strategic organizational
diagnosis and design: Developing theory for application, Kluwer Academic Pub.
Chambers, H. E. (2009). My Way Or the Highway: The Micromanagement Survival Guide: Easyread
Super Large 18pt Edition, ReadHowYouWant.com.
Abstract
Laser scanners provide high-precision geometrical information; however, due
to various scan errors, the generated point clouds often do not meet the data quality
criteria set by project stakeholders for accurate data processing. Although several
studies on identifying scan errors exist in the literature, there is limited research on
defining a data-quality-driven scan plan for accurate detection of geometrical
features. The authors propose a novel framework that integrates image-processing
methods with point cloud processing techniques to define a data-quality-driven
approach to scan planning. The framework includes the following steps: 1) capturing
images of a target using a commercially available unmanned aerial vehicle (UAV)
and generating a 3-D point cloud, 2) recognizing the project's geometrical
information using the point cloud, 3) extracting the features of interest (FOI) using
the point cloud, 4) generating multiple scanning scenarios based on the data
processing requirements and the extracted features, 5) identifying the best scan plan
through iterative simulation of different scanning scenarios, and 6) validating the
scan plan using real-life data. The framework was evaluated using the preliminary
results of a case study, which showed that the data quality requirements of the case
study were met using the proposed framework.
INTRODUCTION AND BACKGROUND
Visual remote sensing technologies, such as Light Detection and Ranging
(LiDAR) systems and digital cameras, are prevalently used for generating as-built/as-
is 3-D models. Reconstructing accurate 3-D models using such technologies has
been extensively studied in recent years (Anil et al. 2011; Balali and Golparvar-Fard
2014; Balali and Golparvar-Fard 2015; Tang et al. 2010). The performances of 3-D
laser scanners and digital cameras in creating realistic 3-D models have also been
compared in many studies (Dai et al. 2012; Zhu and Brilakis 2009). According to Dai
et al. (2012), the performance of laser scanners is more consistent and their accuracy
is 10 times higher than that of image- and video-based methods. Researchers have
identified factors that influence scan-data accuracy. For example, Shen et al. (2013)
found that scan resolution, the scanner's distance, and the color and intensity of the
scanned objects are the
factors that contribute most to scan errors. In order to have an accurate 3-D point
cloud and to minimize the errors for different purposes, such as Scan to BIM, it is
essential to have an accurate scan plan, which takes all parameters that could affect a
scan’s accuracy into account. Most of the scan planning studies focus on analyzing
the visibility requirements and do not consider the data quality requirements (e.g., data density and tolerance). The use of a uniform scan plan can result in a redundant level of detail (LOD) in parts of a point cloud and a lack of the required details in other parts. A
scan plan should also consider the project specific LOD requirements, which may
vary in a project. The LOD requirements of a geometrical feature of a project could
be initially defined based on the asset condition assessment (ACA) goals. For
instance, an ACA goal for a bridge could be the inspection of concrete columns for
crack detection. In order to monitor the columns using a laser scanner, point clouds
should provide detailed data points on the columns so the data processor could
analyze the severity of cracks or damages. If the rest of the bridge is scanned with
settings similar to the columns, a scan plan based solely on the visibility requirements
could potentially result in a time-consuming and costly scan process.
Recently, researchers have developed new scan planning approaches focusing
on scan data quality. Pradhan and Moon (2013) introduced a simulation-based
framework to identify scan locations for better capturing the critical components of a
bridge, such as piers and girders. Based on their findings, capturing geometrical measurements of certain parts of a structure is more critical than of others, since different portions of structures have different levels of importance in terms of
performance/response metrics. Song et al. (2014) proposed a novel approach for scan
planning, which integrated 3-D data quality analysis and clustering methods to group
the geometrical features based on their LOD requirements. Their study showed that
the automatic scan planning algorithm results in a denser point cloud without the
need to increase the data collection time. However, their study is based on the assumption that a BIM model of the project is available, which is not always the case, especially when the as-is condition of an infrastructure differs from the archived as-built/designed models. Moreover, the line-of-sight and portability limitations of
terrestrial laser scanners have to be considered in the scan plan. For instance, a scan
plan has to provide a solution for capturing the features that are located in the blind
spots of a laser scanner. Therefore, a hybrid scan plan including different LiDAR
equipment such as long range scanners and aerial imaging sensors may be required to
scan infrastructure systems with accessibility limitations.
FRAMEWORK
In order to define a high-quality scan plan, there is a need for a holistic approach that takes into account project-specific characteristics and data quality requirements. The framework proposed in this paper was designed to enable project stakeholders to define an integrated scan plan centered on data quality requirements (Figure 1). The input data of the proposed framework comprises each project's specific information. The main information that drives the scan plan is the realization of the condition assessment goals. The LOD requirements for different
geometrical features of the project are defined by project stakeholders based on these
goals. LOD requirements are usually not constant and may change throughout the
project. The next input data to be derived is the list of all available LiDAR sensors
and their parameters. This information is directly tied to the project’s constraints,
such as time and budget, weather conditions, and accessibility. For instance, in order to overcome the project's accessibility constraints, an alternative solution would be using long-range terrestrial laser scanners, aerial LiDAR sensors, or UAV imaging.
The last input data for the framework is the 3-D model of the project (if available). In
this paper, the authors propose using an image-based 3-D model for planning the scan
when the updated BIM model is not available.
position across different sides of the project. The output of the decision-making process is a scan scenario, which is then evaluated by sensor simulation software (i.e., Blensor). The sensor simulation output enables evaluation of the proposed scan
scenario by measuring the data quality for different features. If the data quality is not satisfactory, the decision-making process is repeated using the new information from the sensor simulation output. The other module of the proposed framework focuses on integrating LiDAR and UAV-based imaging data to generate a single coherent 3-D point cloud in which all the blind spots of the target are covered. Because point clouds from different sensors might have unequal spatial resolution, precision, and accuracy, their registration is challenging. When point clouds come from multiple sensors, an iterative registration process using multiple common features/control points could improve the accuracy of the final point cloud. Once the registration process is completed, if there are missing data in the final point cloud, the 3-D model needs to be augmented by extrapolating the missing data from the surrounding points' coordinates and RGB values.
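The extrapolation step can be realized in several ways; below is a minimal sketch, assuming a k-nearest-neighbor averaging scheme with SciPy (our assumption for illustration, not necessarily the authors' exact method), in which missing points inherit the mean color of their surrounding registered points.

```python
# Hedged sketch: fill missing RGB values from surrounding points' coordinates.
# The k-nearest-neighbor averaging scheme is an assumption for illustration.
import numpy as np
from scipy.spatial import cKDTree

xyz = np.random.rand(10000, 3)                 # registered point coordinates
rgb = np.random.randint(0, 256, (10000, 3))    # colors of the registered points
holes = np.random.rand(50, 3)                  # locations where data are missing

tree = cKDTree(xyz)                            # spatial index over known points
_, idx = tree.query(holes, k=8)                # 8 surrounding points per hole
filled_rgb = rgb[idx].mean(axis=1)             # average the neighbors' colors
```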
CASE STUDY
In order to provide a preliminary evaluation of the proposed framework, the
authors selected the Mudd Hall building's courtyard on the University of Southern California campus. The Mudd Hall building has been named one of the Historic-Cultural Monuments by the Los Angeles City Council, and the fact that it is a feature-rich building makes it a suitable case study for the purpose of this research. Figure 2b shows some of the architectural features in the building's courtyard.
UAV Image Collection. 3-D scene reconstruction using image sequences has
been a growing field of inquiry across multiple disciplines such as cultural heritage
preservation (Stanco et al. 2011), archaeology (Kersten and Lindstaedt 2012), and
construction (Golparvar-Fard et al. 2009; Ham and Golparvar-Fard 2013; Rodriguez-
Gonzalvez et al. 2014). Generating 3-D models from 2-D images follows the structure-from-motion (SfM) technique, which includes: 1) feature extraction and description using the Scale-Invariant Feature Transform (SIFT) algorithm, 2) pairwise matching of images using SIFT descriptors, 3) estimation of motion and structure using the matched images, 4) refining the estimates using bundle adjustment, and 5) creating surface meshes using image-based triangulation.
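As a rough illustration of steps 1-3 of this pipeline, the sketch below uses OpenCV to extract SIFT features, match them pairwise with Lowe's ratio test, and recover the relative camera motion from the essential matrix. The camera matrix K is a placeholder that would come from the calibration described later, and, depending on the OpenCV build, SIFT may instead live under cv2.xfeatures2d.

```python
# Hedged sketch of SfM steps 1-3: SIFT extraction, pairwise matching,
# and relative pose from the essential matrix. K is an illustrative value.
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[1400.0, 0, 960], [0, 1400.0, 540], [0, 0, 1]])  # assumed intrinsics

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # step 1: features + descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher()                       # step 2: pairwise matching
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]      # Lowe's ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)  # step 3: relative motion
```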
The authors used a DJI Phantom 2 Vision Plus drone to take images of the case study building. The drone comes with a 14-megapixel built-in camera mounted on a gimbal, making it stable during flight. The camera has a large field of view (FOV = 125°); therefore, the images are highly distorted. We selected the lowest available FOV option (85°); even then, the collected images were slightly distorted. We then calibrated the camera to circumvent the distortion effect and to rectify the images. We installed 10 ground control points (GCPs) and surveyed them with a Leica TS06-plus total station to be able to geo-reference the 3-D reconstructed model (Figure 2a). A total of 236 images were taken of the building's courtyard using the drone.
Image-based 3-D Reconstruction. We used commercially available and
open source tools, such as VisualSfM (Wu 2011), Autodesk Recap 360, and Agisoft
Photoscan Pro to generate a dense point cloud and a 3-D mesh of the courtyard. After
visual inspection of the reconstructed models, we decided to use Agisoft’s output as it
provided a denser point cloud compared to VisualSfM, and because Autodesk ReCap provides only a 3-D mesh. Note that the selection of the software tools might affect the results; however, optimal selection of the tools will be part of the future directions of this research. We then geo-referenced the model by assigning surveyed coordinates to the GCPs in the point cloud. The 3-D reconstruction process was completed in 155 minutes on a Microsoft Windows workstation laptop with an Intel Core i7 processor, 32 GB of RAM, and an NVIDIA K2000M graphics card.
which is a low number and therefore has resulted in large jumps in precision and
recall values. Once the FOIs were detected in the images, the corresponding points must be localized in the point cloud. A reverse-engineering method using the SfM principles could be employed to address the localization problem. Figure 3 illustrates the steps for matching 2-D pixels in the images to 3-D points in the point cloud. The images were previously paired by matching similar SIFT features during the SfM process, and a tree data structure was built for the paired images. Therefore, a tree-based search can identify co-visible images that contain a particular FOI. Using the previously estimated camera poses of the co-visible images, the fiducial planes (G1 and G2) of the two viewpoints along with the rotation matrix (R1,2) are computed and
used to localize the detected object in the 3-D model. For this preliminary study, we
manually localized the FOI in the point cloud to ensure the accuracy of the feature
classification and the following steps. As part of the future work, we will examine the
proposed automated localization approach.
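The automated localization idea can be sketched as follows, under the assumption (ours, for illustration) that each reconstructed 3-D point keeps its SfM track, i.e., the set of (image, pixel) observations it was triangulated from; 3-D points observed inside a detected FOI bounding box are then collected directly.

```python
# Hedged sketch: map a 2-D FOI detection to 3-D points via SfM tracks.
# The track data structure is an assumption made for illustration.
def localize_foi(tracks, image_id, bbox):
    """tracks: list of (xyz, {image_id: (u, v)}); bbox: (umin, vmin, umax, vmax)."""
    umin, vmin, umax, vmax = bbox
    hits = []
    for xyz, observations in tracks:
        if image_id in observations:
            u, v = observations[image_id]
            if umin <= u <= umax and vmin <= v <= vmax:
                hits.append(xyz)      # 3-D point seen inside the detection box
    return hits
```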
Table 1. Results and LOD definition: (a) Haar-like results; (b) scan results; (c) LOD definition
Kersten, T., and Lindstaedt, M. (2012). "Image-Based Low-Cost Systems for Automatic 3D Recording
and Modelling of Archaeological Finds and Objects." Progress in Cultural Heritage
Preservation, Springer Berlin Heidelberg, 1-10.
Pradhan, A., and Moon, F. (2013). "Formalized Approach for Accurate Geometry Capture through
Laser Scanning." Computing in Civil Engineering (2013), 597-604.
Rodriguez-Gonzalvez, P., Gonzalez-Aguilera, D., Lopez-Jimenez, G., and Picon-Cabrera, I. (2014).
"Image-based modeling of built environment from an unmanned aerial system." Automation
in Construction, 48(0), 44-52.
Shen, Z., Tang, P., Kanaan, O., and Cho, Y. (2013). "As-Built Error Modeling for Effective 3D Laser
Scanning on Construction Sites." Computing in Civil Eng., 533-540.
Song, M., Shen, Z., and Tang, P. (2014). "Data Quality-oriented 3D Laser Scan Planning."
Construction Research Congress 2014, 984-993.
Tang, P., Huber, D., Akinci, B., Lipman, R., and Lytle, A. (2010). "Automatic reconstruction of as-
built building information models from laser-scanned point clouds: A review of related
techniques." Automation in Construction, 19(7), 829-843.
US-GSA (2009). "BIM Guide for 3D Imaging." http://www.gsa.gov.
Viola, P., and Jones, M. (2001). "Rapid object detection using a boosted cascade of simple features." Proc., IEEE Conf. on Computer Vision and Pattern Recognition (CVPR 2001).
Wu, C. (2011). "VisualSFM: A Visual Structure from Motion System."
Zhu, Z., and Brilakis, I. (2009). "Comparison of optical sensor-based spatial data collection techniques
for civil infrastructure modeling." Journal of Computing in Civil Engineering, 23(3), 170-
177.
ABSTRACT
In the past few years, performance-based building design (PBD) has been adopted by the structural engineering community, especially for tall buildings [1, 2]. The main concept of PBD is to define multiple performance objectives that a building has to satisfy in order for the design to be accepted [1]. The method helps stakeholders make better decisions about building design, as it takes into account the uncertainties involved in predicting building performance, resulting in a more robust design procedure [3].
In performance-based design, analyses with at least three different ground motions are generally required. Several design guidelines (such as the Tall Buildings Initiative [2]) mandate at least 7 ground motions. The Los Angeles Tall Buildings Structural Design Council (LATBSDC) recommends using 7 ground motions or more [4]. If fewer than 7 ground motion simulations are used, the absolute maximum of the evaluated responses from these simulations shall be used to evaluate the performance. However, if 7 ground motions or more are used, then the average of the responses from the different simulations can be used to evaluate the performance, as using more simulations produces a design that is more robust against uncertainties in the ground motions [2]. Dynamic earthquake simulations of buildings require a large amount of data to be analyzed, processed, and stored; oftentimes they can easily require tens of gigabytes of storage. For instance, a simulation of the 1940 El Centro earthquake requires evaluating responses at 1501 time steps (30 seconds with a 0.02-second time step), that is, 1501 times the amount of storage of a static load case simulation. For a model that consists of 5000 nodes, 2000 frame elements, and 200 shell elements, a single dynamic simulation would require around 750 megabytes (MB) of storage. It can easily be seen that the multiple dynamic simulations required for PBD would consequently consume gigabytes of storage. For some models, it might also be necessary to evaluate nonlinear response time histories, which normally require substantially more storage than linear time histories, as smaller time steps are often required to accurately evaluate nonlinearities [5]. It is imperative for design engineers to be able to handle these big datasets efficiently, in order to make better decisions by exploring different design alternatives.
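The storage figure can be checked with a back-of-the-envelope calculation like the one below; the number of response quantities per entity and the use of double precision are our assumptions, chosen so the estimate lands near the ~750 MB quoted above.

```python
# Hedged estimate of dynamic-simulation storage; quantities per entity and
# 8-byte doubles are assumptions, not values stated by the authors.
steps = 1501                                   # 30 s at a 0.02 s time step
nodes, frames, shells = 5000, 2000, 200
quantities_per_entity = 8                      # assumed responses per entity
bytes_per_value = 8                            # double precision
total = (nodes + frames + shells) * quantities_per_entity * bytes_per_value * steps
print(f"{total / 1e6:.0f} MB")                 # ~692 MB, the order of the ~750 MB cited
```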
In this section, the two big data libraries used for this study are discussed. First, the format of the simulation data that needs to be stored is described; then, an explanation of how the two libraries can be adapted to store the data is presented. In addition, the performance metrics used later for testing are defined.
HDF
HDF (Hierarchical Data Format) is an open source library developed and maintained by the HDF Group [6]. The library is used for handling large, complex data very efficiently
in different scientific domains. For instance, NASA has developed its own variant of
HDF that is being used to store data from the Earth observation system, which is a collection of satellites gathering data about the Earth [7].
HDF uses a hierarchy to store data that is very similar to the directory/file structure of modern computers. The library relies on three main entities: groups, datasets, and attributes. A group can contain other groups and/or datasets (i.e., subsets). A dataset is a table that contains data. Finally, attributes can be attached to groups and datasets in order to store additional metadata about the contents.
In order to accommodate the data format mentioned in Table 1, an upper layer of groups is used to define the different load cases, and an intermediate layer of groups is used to define the different kinds of elements (e.g., nodes, frames, etc.). Inside each element group, datasets store the results in table form (Figure 1). Finally, attributes are used to store additional metadata about the load cases or elements. For instance, an attribute can be added to the nodes group to define the units of the displacements stored for each node.
Figure 1. Hierarchy used for HDF files to store simulation results data (a model contains load case groups; each load case contains Nodes, Frames, and Shells groups; each node has its own dataset)
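A minimal sketch of this hierarchy, written with the h5py Python bindings (our choice for illustration; the paper does not state which HDF API was used), might look as follows, with group names and array shapes purely illustrative:

```python
# Hedged sketch of the group/dataset/attribute hierarchy in Figure 1,
# using h5py. Names and shapes are illustrative assumptions.
import h5py
import numpy as np

n_steps = 1501                                  # time steps per load case

with h5py.File("results.h5", "w") as f:
    case = f.create_group("LoadCase1")          # upper layer: load cases
    nodes = case.create_group("Nodes")          # intermediate layer: element kind
    for node_id in (1, 2, 3):                   # one dataset (table) per node
        data = np.random.rand(n_steps, 6)       # e.g., 3 translations + 3 rotations
        nodes.create_dataset(f"Node{node_id}", data=data)
    nodes.attrs["units"] = "mm"                 # metadata stored as an attribute
```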
SQLite
SQLite is an open source, lightweight library available in the public domain [8]. The library is a relational database that does not require a server and uses SQL as its query language. It is popular for being very efficient in handling big data and for its very small footprint, as its size can be less than 500 kilobytes (KB).
In order to accommodate the data formats mentioned in Table 1, the SQLite database file includes two basic tables, one to hold information about the different load cases and another to hold information about the different elements and the types of quantities stored for each of the elements (Figure 2). It has to be noted that these two tables can also hold other metadata, such as the units used for each of the stored quantities or the number of steps for each load case. In addition, the file has three tables to hold element results, one table for each kind of element. Each row in the element tables represents the results of one element for a specific load case and a specific time step, so each table has two additional columns to hold indices defining these two values, in addition to a column that contains the index (identification number) of the element.
Figure 2. Schemas of the Groups, Subgroups, and Nodes tables used for the SQLite database. Note: "PK" indicates the primary key column of the table, REAL indicates a double-precision floating-point number, and VARCHAR(n) is a character array (string) with a variable length less than or equal to n.
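A minimal sketch of this layout with Python's built-in sqlite3 module is shown below; the table and column names are assumptions for illustration, standing in for the exact schema of Figure 2.

```python
# Hedged sketch of the SQLite layout: two metadata tables plus one results
# table per element kind. Table/column names are illustrative assumptions.
import sqlite3

con = sqlite3.connect("results.db")
cur = con.cursor()
cur.execute("""CREATE TABLE LoadCases (
                   id INTEGER PRIMARY KEY, name VARCHAR(50), steps INTEGER)""")
cur.execute("""CREATE TABLE Elements (
                   id INTEGER PRIMARY KEY, kind VARCHAR(20), units VARCHAR(10))""")
cur.execute("""CREATE TABLE NodeResults (
                   load_case INTEGER,          -- index of the load case
                   step INTEGER,               -- index of the time step
                   node_id INTEGER,            -- element identification number
                   ux REAL, uy REAL, uz REAL)""")
cur.execute("INSERT INTO NodeResults VALUES (1, 0, 101, 0.0, 0.0, 0.0)")
con.commit()
con.close()
```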
Figure 3. Schematic diagram of (A) reading/writing files without caching, and (B) the automatic caching process for reading/writing files (adapted from Microsoft Windows documentation [9])
The main purpose of this study is to compare the performance of the two big data libraries considered, HDF and SQLite. Two different performance tests were performed for each data library: one for a hypothetical model with nodes only, and the other for a typical structural model of a building that includes results for 5000 nodes, 2000 frame elements, and 1000 shell elements, based on two dynamic load cases. All the tests were based on result values obtained from a pseudorandom number generator. All the tests in this section were carried out on a Lenovo ThinkPad W510 mobile workstation, with an Intel i7 Q820 processor clocked at 1.73 GHz, 16 GB of memory, and a 7200 RPM hard drive, running a 64-bit Windows 8.1 operating system.
The first test is conducted for reading and writing results for nodes based on different caching schemes (warm or cold). Tables 2 and 3 compare the performances of both libraries for writing and reading node results, respectively. In addition, a closer comparison for writing and reading results of 10,000 nodes is portrayed graphically in Figures 5 and 6, respectively.
Table 2. Time to write node results to the data file, in seconds, for both libraries, when writing to disk and when using the cache

Nodes    HDF Disk   HDF Cache   SQLite Disk   SQLite Cache
10       0.11       0.075       0.22          0.21
100      0.34       0.311       1.49          1.33
1000     5.3        3.14        13.25         13.23
10000    55         35.9        141.1         137.6

Table 3. Time to read node results from the data file, in seconds, for both libraries, with cold and warm caches

Nodes    HDF Cold   HDF Warm   SQLite Cold   SQLite Warm
10       0.04       0.0013     0.065         0.026
100      0.45       0.014      0.55          0.23
1000     2.42       0.13       7.73          2.3
10000    26         1.23       47.5          22.9
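The kind of measurement behind these tables can be sketched as below: wall-clock write times for pseudorandom node results with each library. This is a hedged reconstruction of the test idea, not the authors' benchmark code; sizes and file names are illustrative.

```python
# Hedged timing sketch: write pseudorandom node results with each library
# and report wall-clock times. Not the authors' benchmark; sizes illustrative.
import time
import sqlite3
import numpy as np
import h5py

n_nodes, n_steps = 1000, 1501
data = np.random.rand(n_nodes, n_steps, 3)      # 3 displacement values per step

t0 = time.perf_counter()
with h5py.File("bench.h5", "w") as f:
    for i in range(n_nodes):
        f.create_dataset(f"Nodes/Node{i}", data=data[i])
hdf_seconds = time.perf_counter() - t0

t0 = time.perf_counter()
con = sqlite3.connect("bench.db")
con.execute("CREATE TABLE NodeResults (node INTEGER, step INTEGER, "
            "ux REAL, uy REAL, uz REAL)")
con.executemany("INSERT INTO NodeResults VALUES (?, ?, ?, ?, ?)",
                ((i, s, *data[i, s]) for i in range(n_nodes)
                 for s in range(n_steps)))
con.commit()
con.close()
sqlite_seconds = time.perf_counter() - t0
print(f"HDF: {hdf_seconds:.2f} s, SQLite: {sqlite_seconds:.2f} s")
```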
Figure 5. Time to write results for 10,000 nodes in seconds
Figure 6. Time to read results of 10,000 nodes in seconds
Table 4. Time to write typical model results to the data file, in seconds, for both libraries, when writing to disk and when writing to the cache

Elements   HDF Disk   HDF Cache   SQLite Disk   SQLite Cache
Nodes      60.2       4.14        142           29.02
Frames     20.7       6.12        84.6          43.8
Shells     14         1.60        58.7          18.5
Total      95         12.04       285.6         91.5

Table 5. Time to read typical model results from the data file, in seconds, for both libraries, with cold and warm caches

Elements   HDF5 Cold   HDF5 Warm   SQLite Cold   SQLite Warm
Nodes      26.1        1.27        46.5          22.8
Frames     13.6        0.59        31            14.2
Shells     11.6        0.43        19.6          11.9
Total      51.3        2.3         97.2          48.9
Figure 8. Time to write results for the typical structural model in seconds
Figure 9. Time to read results of the typical structural model in seconds
CONCLUSIONS
Based on the results discussed in this paper, it is found that HDF is faster than SQLite
in writing and reading simulation results, especially when a large amount of data is
being transferred in a single process. In addition, it is observed that HDF is more
efficient in terms of storage requirements, as it produced data files that were 30%
smaller in size than SQLite databases. Although the tests were performed on building simulation data, the previous conclusions might also be applicable in other disciplines with data of a format similar to that described in this paper.
Additional comparative studies are needed to evaluate the merits of these libraries in
other disciplines or applications with different data formats.
Further studies might also be needed to fully understand the performance of big data libraries. The current study was carried out on a single hardware configuration; performance testing on different hardware configurations may be needed. In particular, performance testing on solid-state drives (SSDs) is necessary. Both the HDF and SQLite libraries offer data compression capabilities that also need to be investigated. Although compression can save storage space, it can significantly hinder reading/writing performance. Additional tests are required to investigate performance with different compression algorithms. More advanced performance optimizations, such as exploiting concurrency or parallelism, can also be investigated.
Finally, it has to be noted that SQLite supports advanced queries (e.g., Max, Min, Count, Sum, etc.) through the SQL query language, whereas HDF does not provide queries. To perform queries on HDF data, custom code has to be written that first reads the data from the file and then computes the result of the query. Further research needs to be performed to compare the performances of both libraries in such scenarios.
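The contrast can be sketched as follows: with SQLite the aggregate runs inside the engine, while for HDF the data are read back and reduced in user code (here with NumPy); the file, table, and group names continue the illustrative naming used earlier.

```python
# Hedged sketch: an aggregate query in SQLite versus a manual reduction
# over HDF data. Names follow the earlier illustrative examples.
import sqlite3
import h5py
import numpy as np

con = sqlite3.connect("results.db")              # SQLite: engine-side MAX
(max_ux,) = con.execute(
    "SELECT MAX(ux) FROM NodeResults WHERE load_case = 1").fetchone()
con.close()

with h5py.File("results.h5", "r") as f:          # HDF: read, then reduce
    nodes = f["LoadCase1/Nodes"]
    max_ux_hdf = max(np.max(nodes[name][:, 0]) for name in nodes)
```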
REFERENCES
ABSTRACT
Sentiment analysis of social media has become a popular approach in many research areas. Most existing works are based on data collected from Twitter, a data source composed of text and geographic information. Many transportation researchers have attempted to enrich and provide more accurate transport statistics through this approach. However, text data alone cannot fully explain human behavior. In this paper, Instagram, a data source containing not only text and geographic information but also photographs, is utilized for the analysis. With computer vision techniques, there is the potential to better understand human travel behavior.
Keywords: Data Mining, Spatial analysis, Temporal Pattern, Image processing,
Social Media, Instagram
INTRODUCTION
The Origin-Destination (OD) table is a common type of transportation data. It is mostly generated by public transportation systems, and reflects the number of trips between places in the network from both ticket and card records. However, this commonly used data cannot reveal traveler behavior or trip purposes. To collect trip-purpose data, transportation agencies need to spend a great amount of time and money on public surveys. With the improvement of mobile devices and the increasing number of social media users, travelers' behavior and tendencies can be inferred from their posts and geospatial references. Social media such as Twitter, Facebook, and Foursquare provide users' rich textual and geo-referenced data, which can be useful for understanding or estimating trip purposes. In addition, a relatively new form of social media called Instagram provides not only the data mentioned but also photographs. It is an application through which users share their life by uploading
LITERATURE REVIEW
Image Features. Feature selection and extraction are important factors in pattern recognition. The basic task of feature extraction is to find the most significant features. Two feature selection methods are reviewed in this paper. The first is SURF, a speeded-up local feature selection method (Bay et al. 2008). The second is SIFT (Lowe 1999), a method invariant to uniform scaling and orientation, and partially invariant to affine distortion and illumination changes.
Table 1 is based on research conducted by Luo and Gwun (2009). SURF has better performance in terms of speed and on blurred images; SIFT can find similar descriptors in rotated and scaled images.
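For illustration, both kinds of descriptors can be extracted with OpenCV roughly as below; SURF sits in the opencv-contrib build (and is patent-encumbered), the Hessian threshold is an illustrative value, and in older OpenCV versions SIFT is created via cv2.xfeatures2d instead.

```python
# Hedged sketch: extract SIFT and SURF features with OpenCV. SURF requires
# an opencv-contrib build; the threshold value is illustrative.
import cv2

img = cv2.imread("instagram_photo.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_sift, des_sift = sift.detectAndCompute(img, None)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_surf, des_surf = surf.detectAndCompute(img, None)

print(len(kp_sift), "SIFT features;", len(kp_surf), "SURF features")
```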
METHODOLOGY
The crawled Instagram dataset we used contains 147,269 posts from 37,912 user IDs, covering the period from 2014-11-10 to 2014-11-21. The dataset was collected in the proximity of each MRT station using a 2-km buffer. The data structure of the Instagram data is shown in Figure 1.
The data contain the user ID, location, photo link, comments, captions, users tagged in the photo, the number of likes, and the relationships between users. However, some private users' information is not included. In this paper, tagged users are considered friends of the user who posted the photo. Since the case study is an MRT line in Taipei City, a description of each station was added to the dataset.
Given the Instagram data, an individual's movement can be observed. The number
of geo-tagged posts in a certain area can be much higher than in other locations; in other words, a user posts in this area much more often. There is a good chance that such an area is either the user's workplace or home.
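This inference can be sketched with a simple per-user count over the geo-tagged posts, as below; the field names and the 50% concentration threshold are our illustrative assumptions, not values from the paper.

```python
# Hedged sketch: flag the station area where a user posts most often as a
# likely home/workplace. Field names and threshold are assumptions.
from collections import Counter

posts = [                      # (user_id, station) pairs from the crawled data
    ("u1", "Taipei Main"), ("u1", "Taipei Main"), ("u1", "Zhongshan"),
    ("u2", "Shilin"), ("u2", "Shilin"), ("u2", "Shilin"), ("u2", "Daan"),
]

by_user = {}
for user, station in posts:
    by_user.setdefault(user, Counter())[station] += 1

for user, counts in by_user.items():
    station, n = counts.most_common(1)[0]
    if n / sum(counts.values()) > 0.5:          # posts concentrated in one area
        print(f"{user}: likely home or workplace near {station}")
```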
Stations are plotted with different colors in Figure 4. This figure presents the distribution of the spatial and temporal data. The visualization is constructed on CartoDB (cartodb.com), an online Geographic Information System platform.
Photographs in Figure 5 and Figure 6 were randomly picked to showcase the computer vision feature representations. There are 1470 (left) and 2803 (right) SURF features in Figure 5, and 2153 (left) and 2499 (right) SIFT features in Figure 6. The number of descriptors determines the performance of feature extraction. In these pictures, the SURF features take brightness into account, so highlighted spots are easy to detect; when the picture is well illuminated, SURF can easily detect the outlines of objects. The SIFT features in Figure 6 are almost the same as those detected by SURF, so in this random case the performance is almost the same. In large-scale image detection, SURF is used more often because it requires less computational time.
Visualization and data mining are both important. Visualization provides a more
comprehensive representation of the data. For example, the patterns of aggregated
traveler data or individual traveler behavior are visualized in this study. The presented topic is ongoing work. Currently, we have acquired several datasets on public transport volume and related demographic data. The main future direction of this work is to incorporate the aggregated Instagram data as input features for more accurate traffic demand forecasting. He et al. (2013) used tweet semantics to improve the estimation of traffic flow volume, and showed the possibility of estimating traffic flow volume by incorporating social media. Modeling of the data will also be conducted for purposes such as capturing system behavior for transportation.
REFERENCES
Alesiani, F., Gkiotsalitis, K., and Baldessari, R. (2014). “A Probabilistic Activity Model
for Predicting the Mobility Patterns of Homogeneous Social Groups Based on
Social Network Data.” The 93rd Annual Meeting of Transportation Research
Board.
Bay, H., Ess, A., Tuytelaars, T., and Van Gool, L. (2008). “Speeded-up robust features
(SURF).” Computer vision and image understanding, 110(3), 346-359.
Collins, C., Hasan, S., and Ukkusuri, S. V. (2013). “A Novel Transit Rider
Satisfaction Metric: Rider Sentiments Measured from Online Social Media
Data.” Journal of Public Transportation, 16(2).
Hasan, S. (2013). “Modeling urban mobility dynamics using geo-location data.” (Doctoral dissertation, Purdue University).
Hasan, S., Zhan, X., and Ukkusuri, S. V. (2013). “Understanding urban human
activity and mobility patterns using large-scale location-based data from online
social media.” Proceedings of the 2nd ACM SIGKDD International Workshop
on Urban Computing. ACM(p. 6).
Hasan, S., and Ukkusuri, S. V. (2014). “Urban activity pattern classification using
topic models from online geo-location data.” Transportation Research Part C:
Emerging Technologies, 44, 363-381.
He, J., Shen, W., Divakaruni, P., Wynter, L., and Lawrence, R. (2013). “Improving
traffic prediction with tweet semantics.” In Proceedings of the Twenty-Third
international joint conference on Artificial Intelligence, AAAI Press, pp.
1387-1393
Hu, Y., Manikonda, L., and Kambhampati, S. (2014). “What We Instagram: A First
Analysis of Instagram Photo Content and User Types.” The International AAAI
Conference on Weblogs and Social Media
Hochman, N., and Manovich, L. (2013). Zooming into an instagram city: Reading the
local through social media. First Monday.
Jin, P. J., Cebelak, M., Yang, F., Ran, B., and Walton, C. M. (2013). “Urban Travel Demand Analysis for Austin TX USA using Location-based Social Networking Data.” Transportation Research Board 92nd Annual Meeting (No. 13-2374).
Jin, P. J., Cebelak, M., Yang, F., Ran, B., and Walton, C. M. (2014). “Location-Based
Social Networking Data: An Exploration into the Use of a Doubly-Constrained
Gravity Model for Origin-Destination Estimation.” Transportation Research
Board 93rd Annual Meeting (No. 14-5314).
Luo, J. and Gwun, O. (2009). “A comparison of sift, pca-sift and surf.” International
Journal of Image Processing (IJIP), 3(4), 143-152.
Lowe, D. G. (1999). “Object recognition from local scale-invariant features.” Computer Vision, 1999: Proceedings of the Seventh IEEE International Conference on (Vol. 2, pp. 1150-1157). IEEE.
Manikonda, L., Hu, Y., and Kambhampati, S. (2014). Analyzing User Activities,
Demographics, Social Network Structure and User-Generated Content on
Instagram. arXiv preprint arXiv:1410.8099.
Ni, M., He, Q., and Gao, J. (2013). “Using Social Media to Predict Traffic Flow under Special Event Conditions.” (Doctoral dissertation, State University of New York at Buffalo).
Yang, F., Jin, P. J., Wan, X., Li, R., and Ran, B. (2014). “Dynamic Origin-Destination
Travel Demand Estimation Using Location Based Social Networking Data.”
Transportation Research Board 93rd Annual Meeting (No. 14-5509).
Abstract
INTRODUCTION
RELATED STUDIES
collection of accurate, complete, and reliable field data is not only essential for active
management of construction projects, but also for facility and civil infrastructure
management. In the facility management area, Lee and Akin (2009) found that missing jobsite information, caused by poor-quality data from manual documentation, could lead to huge fieldwork inefficiencies.
Laser scanning, as an efficient and effective measuring method, addresses the desire for quality data in the field and has many applications in construction. Park, Lee, Adeli, and Lee (2007) used laser scanners to measure the deformation of structures. In the domain of civil engineering, Haas et al. investigated the technical feasibility of integrating a CAD model and laser-scanned data for automated construction progress monitoring (Bosche & Haas, 2008; Turkan, Bosche, Haas, & Haas, 2012). As another method of acquiring 3D imaging data, photogrammetry is gaining the interest of researchers: Hwang, Weng, and Tsae (2008) and Yang and Kang (2014) used digital photos to reconstruct as-built BIM and assess the accuracy of point clouds, and Gore, Song, and Eldin (2012) used photogrammetric techniques to assist space planning on complex and congested construction sites.
Although laser scanning technologies are becoming popular in the construction industry, few research projects have studied how to acquire laser scanning data of unknown site conditions with a specified data quality. The process of visual complexity analysis and data collection planning shown in Figure 2 fills this gap by acquiring quality laser-scan data in dynamic and unpredictable environments. This paper focuses on visual complexity, while data collection planning will be the focus of a case study.
In this approach, we use sparse imagery, namely jobsite photos and a low-resolution laser-scanned point cloud, to generate a 3D point cloud for visual complexity analysis. Photography has the advantages of ease of use, fast data collection, portable devices, and wide adoption. Sparse photos allow fast detection of potential changes and spatial complexities across a job site. On the other hand, the disadvantage of photos is that they do not contain sufficient information about the absolute geometries of objects. Therefore, we fuse a sparse laser-scanned point cloud with the point cloud from jobsite photos for scaling and for more detail in certain areas.
First, we generate 3D point clouds from the jobsite images using Scale-Invariant Feature Transform (SIFT) feature points, because SIFT feature points are generally salient features containing potential visual complexities. The SIFT algorithm is used for feature extraction and image matching (Lowe, 2004). SIFT feature points are robust and invariant to spatial scale, rotation angle, and image brightness, which means SIFT can detect the same object in different images by matching SIFT feature points. As a result, SIFT will detect all the distinguishable features of an object, so we can perform visual complexity analysis based on SIFT feature points instead of the original images, for time and space efficiency.
In practice, we used Microsoft Photosynth, a software tool based on SIFT feature point detection and matching, to accomplish 3D point cloud generation from the jobsite images. This point cloud keeps all of the matched SIFT feature points of
the images, which is the source information for the visual complexity analysis. Figure 4 shows the point cloud from photos of a campus building named College Avenue Commons (CAVC) at Arizona State University (ASU).
Second, the authors integrate the 3D point clouds generated from photos with sparse laser-scan data for absolute geometric information of job sites and more details of complicated areas. The laser scanner used in this research integrates a camera so that it can produce 3D laser scanning point clouds along with a panoramic photo aligned with the point cloud. Through SIFT we match images taken by the LiDAR with images used for point cloud generation. Using three pairs of matching SIFT feature points, we can conduct point-to-point registration between the point clouds from photos and from the laser scan. After this process, we will have a fused point cloud with a real-world scale and more details about the complicated part of the jobsite.
We could also use other methods to scale the point cloud from images (e.g., setting artificial targets). In this approach, we just use point clouds from photos. We are in the process of testing the use of laser scanning data for scaling photo-based spatial information, and will present the results in future publications.
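The point-to-point registration from three matched feature pairs can be sketched with a Kabsch/Procrustes-style solution, as below; the uniform scale factor and the NumPy formulation are our illustrative choices, not necessarily the authors' implementation.

```python
# Hedged sketch: rigid registration with uniform scale from three matched
# SIFT point pairs (Kabsch/Procrustes style). An illustration only.
import numpy as np

def register(src, dst):
    """src, dst: (3, 3) arrays of matched 3D points (one point per row)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d                 # centered point sets
    U, _, Vt = np.linalg.svd(A.T @ B)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflections
    R = (U @ D @ Vt).T                            # rotation src -> dst
    s = np.linalg.norm(B) / np.linalg.norm(A)     # approximate uniform scale
    t = mu_d - s * R @ mu_s                       # translation
    return s, R, t                                # dst ≈ s * R @ src + t
```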
VISUAL COMPLEXITY ANALYSIS: USING DISCONTINUITY TO QUANTIFY VISUAL COMPLEXITY
Figure 4. Point cloud from photos
Figure 5. Three different discontinuities
Noticing two features of the point cloud in civil engineering applications, we can start by analyzing the visual discontinuity in the X-Y plane, and then analyze discontinuity in 3D space to get the coordinates of visually complex locations. These two features are: 1) the z-axis of the point cloud always points to zenith, since every picture used provides precise zenith orientation information because of the gravity sensor in the devices; and 2) the actual jobsite often consists of walls that are perpendicular to the ground, which is different from the natural landscape.
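One plausible realization of the X-Y plane analysis is sketched below: rasterize the point cloud into a 2D occupancy grid (walls project to dense lines because the z-axis points to zenith) and flag cells with abrupt density changes; the grid cell size and threshold are illustrative assumptions.

```python
# Hedged sketch: detect density discontinuities of the point cloud projected
# onto the X-Y plane. Cell size and threshold are illustrative assumptions.
import numpy as np

def xy_discontinuities(points, cell=0.2, thresh=20.0):
    """points: (N, 3) array; returns grid indices of abrupt density changes."""
    xy = points[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1)
    np.add.at(grid, (ij[:, 0], ij[:, 1]), 1)       # point count per cell
    gx, gy = np.gradient(grid)                     # density change per cell
    return np.argwhere(np.hypot(gx, gy) > thresh)  # candidate discontinuities
```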
VISUAL COMPLEXITY ANALYSIS: IN X-Y PLANE
VISUAL COMPLEXITY ANALYSIS: IN 3D SPACE
Figure 7. 3D discontinuity detection (z+ and z- direction example)
Figure 8. Visually complex locations of the CAVC building
LOD DETERMINATION OF VISUALLY COMPLEX LOCATIONS AND LASER SCAN PLANNING
In this step, the authors use the detected visually complex locations and frequency analysis of photos to derive the data quality requirement, the level of detail (LOD), and then guide a comprehensive laser-scan planning (determination of scanning positions and resolutions). We will discuss the mathematical definition of the LOD of a point cloud in future publications. Here we use LOD to represent the point density of the neighborhood of a certain point of visual complexity.
LOD determination is to determine the sampling rate of 3D-imaging data collection for reconstructing the original signal (the real-world jobsite, represented by the original photos) with acceptable detail loss. From the detected visually complex locations, we can find the corresponding points in the original images. By applying 2D discrete frequency analysis (Fourier transform or wavelet transform) around the neighborhood of these points, we get the frequency-domain information of a certain visually complex location. Then we can apply the Nyquist-Shannon sampling
theorem (Shannon, 1949) to determine the sampling rate of the image, with which the original image can be reconstructed from the sampling points with acceptable detail loss. If the algorithm detects more details in the neighborhood of a feature point, that area needs a higher data density of 3D imaging measurements, and vice versa. The last step is translating the sampling rate of the image to the real-world scale. If the LOD
requirement of each visually complex location is satisfied, the point cloud collected
will have approximately the same amount of detailed information as the 2D jobsite
image at each point of complexity. Future publications will give detailed description
of LOD determination.
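A hedged sketch of that determination: take the 2D FFT of the image patch around a visually complex point, find the radial frequency that holds most of the spectral energy, and sample at twice that frequency per the Nyquist-Shannon theorem. The 99% energy cutoff and the normalization are our illustrative choices.

```python
# Hedged sketch: derive a sampling rate for an image patch from its 2D FFT
# and the Nyquist-Shannon theorem. The energy cutoff is illustrative.
import numpy as np

def required_sampling_rate(patch, energy=0.99):
    """patch: 2-D grayscale array; returns a rate in cycles per pixel."""
    F = np.fft.fftshift(np.fft.fft2(patch))
    power = np.abs(F) ** 2
    h, w = patch.shape
    v, u = np.indices((h, w))
    r = np.hypot(v - h / 2, u - w / 2)            # radial frequency bins
    order = np.argsort(r.ravel())
    cum = np.cumsum(power.ravel()[order])
    f_max = r.ravel()[order][np.searchsorted(cum, energy * cum[-1])]
    return 2.0 * f_max / max(h, w)                # sample at twice f_max
```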
Finally, the scan planning algorithm uses the visually complex locations and LOD requirements as inputs, and calculates the optimal data collection plan. In dynamic construction environments, the data collection plan would guide civil engineers to adjust scanning parameters (resolution, scanning locations, etc.) to ensure the data quality needed to support proactive construction progress monitoring, safety analysis, and quality control. Due to space limits, details of the scan planning algorithm will appear in related publications. Real-world experiments validate that the proposed complexity analysis and scan planning can help guarantee quality jobsite data.
REFERENCES
Bosche, F., & Haas, C. T. (2008). Automated 3D data collection (A3dDC) for 3D
building information modeling. The 25th International Symposium on
Automation and Robotics in Construction. ISARC-2008, 279–285.
Gore, S., Song, L., & Eldin, N. (2012). Photo-modeling for construction site space planning, 1350–1359.
Hwang, J., Weng, J., & Tsae, Y. (2008). 3D modeling and accuracy assessment: A case study of Photosynth, 3–8.
Lee, S., & Akin, Ö. (2009). Shadowing tradespeople: Inefficiency in maintenance
fieldwork. Automation in Construction, 18(5), 536–546.
Lowe, D. G. (2004). Distinctive Image Features from Scale-Invariant Keypoints.
International Journal of Computer Vision, 60(2), 91–110.
Park, H. S., Lee, H. M., Adeli, H., & Lee, I. (2007). A New Approach for Health
Monitoring of Structures: Terrestrial Laser Scanning. Computer-Aided Civil and
Infrastructure Engineering, 22(1), 19–30.
Shannon, C. E. (1949). Communication in the presence of noise. Proceedings of the IRE, 37(1), 10–21.
Taneja, S., Akinci, B., Asce, M., Garrett, J. H., & Soibelman, L. (2012). Sensing and
Field Data Capture for Construction and Facility Operations, 137(10), 870–881.
Turkan, Y., Bosche, F., Haas, C. T., & Haas, R. (2012). Automated progress tracking
using 4D schedule and 3D sensing technologies. Automation in Construction, 22,
414–421.
Yang, L., & Kang, J. (2014). Application of Photogrammetry: 3D Modeling of a
Historical Building (pp. 219–228).
Mahdi Safa1,*; Arash Shahi2; Mohammad Nahangi3; Carl Haas4; and Majeed Safa5
1,2,3,4
Department of Civil and Environmental Engineering, University of Waterloo, 200
University Avenue West Waterloo, ON, Canada N2L 3G1.
5
Department of Agricultural Management and Property Studies, Lincoln University, New
Zealand.
Abstract
Prefabrication has been gaining popularity in the construction industry over the past decade, as it provides a safer and more sustainable operation as well as higher-quality and cheaper construction components, due to its controlled conditions during fabrication, in-house quality assurance systems, and lower amounts of waste. Despite the advantages of prefabrication
methods, the current quality management processes, particularly in piping fabrication, are
labour-intensive, time-consuming, expensive and rather inaccurate. This paper investigates
automated solutions for improving the quality management system associated with the
prefabrication of piping assemblies in the industrial sector of the construction industry. The
scope of this research is the quality management processes conducted at the post-production
stage of the fabrication process. The findings of this paper indicated that 3D laser scanning and
photogrammetry techniques can both be successfully used in quality assurance systems for post-
fabrication of pipe-spools. It was concluded that these methods exceeded the accuracy
requirements of the current systems, while substantially improving the efficiencies of the quality
assurance processes for prefabrication operations.
INTRODUCTION AND BACKGROUND
Within the context of the construction industry, a quality assurance (QA) system refers to the
method by which owners and contractors use systematic quantitative and qualitative
measurements to ensure, with adequate confidence level, that a product, process, or service
conforms to design specifications or contract requirements (Burati et al., 1992). Given the
dynamic construction environment, failure to achieve adequate quality levels in construction
processes has long been an obstacle to the delivery of projects on time and on budget. Current
QA processes primarily involve paper forms and manual human operations, which are
inaccurate, time-consuming, expensive, and labour intensive (Fredericks, 2005). The associated
problems may cause delays in completion of the project and may trigger claims by the owner and
other parties. Effective improvements in the quality assurance systems associated with
construction processes therefore offer significant promise (Rounds and Chi, 1985). Also, the
construction industry has been exploring alternatives to its traditional operations in order to deal with the challenges inherent in working in a dynamic, unique, and continuously evolving construction environment.
The application of pre-fabrication techniques, as an alternative to traditional construction
practices, has resulted in a profound change in the construction industry worldwide (Tucker,
1982; Safa et al., 2014). Prefabrication can be defined as "a manufacturing process, generally
taking place at a specialized facility, in which various materials are joined to form a component
part of a final installation" (Tatum et al., 1987). Any component that is manufactured offsite and
is not a complete system can be considered prefabricated, including pipe spools and pipe
modules. Benefits include improved quality, enhanced design, reduced project time, and less
reliance on site labour. For some projects, these benefits come with increased costs, which could,
however, be minimized in the future, as the construction industry becomes more familiar with
the technology (Yeung et al., 2002). In general, modularization and prefabrication represent
aspects of a trend toward more efficient construction that has been developing in the construction
industry in Canada and the US over the past few decades. Improved productivity is a key driver
for the use of prefabrication (Eastman and Sacks, 2008). In addition, supply chain requirements
in Canada and abroad necessitate the use of modular design and prefabricated systems across
nationally and often globally distributed supply chains.
Due to the logistical challenges of global supply chains, successful transportation and delivery of prefabricated materials has always been a key concern. Despite recent substantial advances in modularization and prefabrication and the use of an extra 10% to 20% of structural material for bracing and supporting modules, significant damage still occurs during shipment, requiring rework after arrival at the site: an undesirable secondary effect. Other factors that lead to rework and increased costs are fabrication errors and inaccuracies, which are due mostly to human interaction and challenging material behaviours in the fabrication process. The Construction Industry Institute (CII) has reported that in 2001 the cost of rework across the entire construction industry was approximately US $15 billion (CII, 2002). Defects frequently become evident
during the construction phase, which is costly for both the contractors and the owners. It has
been estimated that approximately 10 % of the cost of construction rework is caused by delays in
detecting the defects (Akinci et al., 2006). Timely and accurate quality management practices
can hence save money and expedite project schedules. Current approaches for quality control
measurements on piping fabrication and installation are not as effective as they could be in
identifying defects early in the construction process. As a result, defects can go undetected until
later phases of construction or even to the maintenance phase. It was reported by the
Construction Industry Institute in 2003 that 13.3% of all fabricated pipe spools in the industrial
sector required rework (CII, 2003; Safa et al., 2011).
Addressing deviations due to fabrication errors and shipping damage requires an automated,
integrated, and continuous inspection and quality control management system. Combining 3D
imaging with 3D design information opens up a wide range of potential solutions. Applying this
technique for automating the measurement process of QA provides project management with an
outstanding opportunity for visualization of the as-built point-cloud data, which needs to be
properly registered with the as-planned point-cloud (Golparvar-Fard et al., 2011). The feasibility
of the use of these technologies has been the subject of numerous research studies involving the
analysis of construction progress and other computer vision and construction management
applications (Brilakis et al, 2011; Shahi et al. 2014). Such tools make it possible to automate
tasks related to quality control and quality assessment, including (1) the automated quality
assessment of fabricated assemblies, (2) the remote identification of exceeded tolerances, and (3)
the remote and continuous quality control of assemblies as they are being fabricated. Solving
these problems and automating these tasks will reduce the risk of fabrication errors and thus
decrease project cost as well as enhance schedule and productivity performance.
While the use of prefabrication is increasing in all construction sectors, the scope of the research
presented in this paper was limited to the development of a QA model of the built dimensional
quality of prefabricated pipe spools and pipe modules that have been produced using a
prefabrication process. However, a comprehensive quality control system would need to be able
to identify and prevent production, post-production, and on-site defects. Detecting on-site defects
was beyond the scope of this research. Instead, this paper focuses on using automated systems to
assist with post-production quality control processes. Historically, this is the stage where the
majority of the quality issues can be identified, while providing sufficient warning for resolving
the issues before the prefabricated elements are shipped to the construction site.
AUTOMATED QUALITY ASSURANCE FOR PREFABRICATED SPOOLS
In current QA practice, pipe spools are tested and measured only when the production
department places them at the end of the production line. At this post-production stage, the pipe
spools are considered end products of the fab-shop if they pass the QA tests. The automated
post-production QA process was developed in this research with the potential to improve on the
current practice.
Pipe spools should be assembled in a way that avoids forcing, cutting, or otherwise weakening
the structural members. Even one pipe-spool with small deficiencies and defects could cause
multiple problems and challenges: rework costs, process or service audits, supplier surveillance,
client complaints, etc. Given the QA requirements surrounding piping construction, it ranks
ahead of other construction QA categories with respect to the need for technological
advancement (Kim et al., 2013). Unfortunately, the construction industry as a whole is about 20-
30 years behind some other industries, including manufacturing, when it comes to adopting new
technological tools (Shahi et al., 2012). While several new technologies could be applied as a means of improving QA in piping construction, in this research the use of 3D laser scanning and photogrammetric techniques was investigated.
For the use of 3D laser scanning, three suitable laser scanner locations were identified in the corners of the quality control room at the end of the production process, as shown in Figure 1. The appropriate location of the scanners minimized occlusion and enabled as many sides of the spools as possible to be covered in the scans. Separate scans were merged together by finding the common points among the separate point clouds. The ambient temperature was about 15°C to 20°C, which would not affect the scanner results. One of the advantages of laser scanners is that they can tolerate small temperature changes (0-5°C) without the need for lengthy calibrations. Of course, large temperature changes could negatively affect the accuracy (Ahmed et al., 2011).
Figure 3: (a) Original 3D CAD drawing; (b) point cloud generated by the laser scanner; (c) point cloud generated using the photogrammetry approach
In order to have the 3D CAD model and the scanned as-built status aligned, a coarse registration is performed using Principal Component Analysis (PCA), as the PCA is quick and robust. The registration process is programmed in MATLAB and then applied to sample spools. A computer with a 3.7 GHz processor and 32 GB RAM can process the registration in 5 to 6 seconds.
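The PCA-based coarse registration can be sketched as follows (in NumPy rather than the authors' MATLAB; axis-sign disambiguation is omitted for brevity): align the centroids and principal axes of the scanned and as-planned point clouds.

```python
# Hedged sketch of PCA-based coarse registration: match centroids and
# principal axes. Axis-sign disambiguation omitted; an illustration only.
import numpy as np

def pca_coarse_register(scan, model):
    """scan: (N, 3) as-built points; model: (M, 3) as-planned points."""
    def principal_axes(P):
        centered = P - P.mean(axis=0)
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        return Vt                           # rows = principal axes
    R = principal_axes(model).T @ principal_axes(scan)   # scan -> model rotation
    t = model.mean(axis=0) - R @ scan.mean(axis=0)       # translation
    return R, t                                          # model ≈ R @ scan + t
```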
While the application of the photogrammetry technique has known limitations when it comes to flat and featureless surfaces, such as pipe spools, 3D laser scanning is very powerful in this context and its accuracy levels do not deteriorate in such conditions. The results of the laser scanning process are dependent on a number of factors, such as the object's distance from the scanner and
measurement angle. Laser scanners can output extremely high-resolution models compared to photogrammetry results in both laboratory and actual field experiments, while both approaches allow the as-built environment to be visualized from different viewpoints. In general, photogrammetry offers a good alternative to laser scanning considering its much cheaper start-up cost, if the accuracy level desired for a certain application is moderate. For this study, the results obtained by implementing photogrammetry are as accurate as the results of using laser scanning
The proposed automated measurement system for the post-production QA process was
performed for several pipe spools. Sample results for three pipe spools are shown in Table 1 for
both the laser scan and the photogrammetry data. The measurement results are the diameters and
lengths of the varied sections of the pipe spools which are determined by the QA department as
the critical areas. There were five critical points identified on pipe-spools 1 and 2, and four
points on pipe-spool 3 to be controlled as part of the quality assurance program. The QA
department ensures the fabricated spools meet final assembly requirements through determining
the critical test area and dimensional tolerances for completed fabricated piping. There is no
standard or specification for determining the critical areas according to the one-off nature of
these projects. As a basic rule, typically the standard parts of the pipe spools such as fittings,
pipes, and flanges should not be considered as the critical areas.
Table 1: Comparison of the dimensions; Dap: as-planned dimension from the original 3D drawing; DL: dimension measured in the 3D point cloud from the laser scanner; DP: dimension measured from the 3D point cloud from photogrammetry (mm)

Pipe Spool #   Dap (mm)   DL (mm)   DP (mm)   ΔL = Dap-DL   %ΔL       ΔP = Dap-DP   %ΔP
1              14         15        13        -1            -7.14%    1             7.69%
               374        378       379       -4            -1.07%    -5            -1.32%
               117        120       122       -3            -2.56%    -5            -4.10%
               38         38        38        0             0.00%     0             0.00%
               168        174       176       -6            -3.57%    -8            -4.55%
2              133        131       132       2             1.50%     1             0.76%
               1881       1879      1881      2             0.11%     0             0.00%
               214        216       218       -2            -0.93%    -4            -1.83%
               165        175       174       -10           -6.06%    -9            -5.17%
               89         90        93        -1            -1.12%    -4            -4.30%
3              38         43        44        -5            -13.16%   -6            -13.64%
               191        189       187       2             1.05%     4             2.14%
               23         23        22        0             0.00%     1             4.55%
               14         15        13        -1            -7.14%    1             7.69%
Based on the QA department procedure, the %ΔL or %ΔP must be less than 5% for a specific pipe-spool to be accepted. If the result is between 5% and 7%, it is referred to the QA department for a decision. If it is above 7%, the pipe spool is deemed unacceptable.
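The acceptance rule can be written directly as a small helper, sketched below with the deviation taken relative to the as-planned dimension (the paper's table mixes denominators, so this is our assumption):

```python
# Hedged sketch of the stated QA acceptance rule; the percent deviation is
# computed against the as-planned dimension (an assumption).
def qa_decision(d_planned_mm, d_measured_mm):
    pct = abs(d_planned_mm - d_measured_mm) / d_planned_mm * 100
    if pct < 5:
        return "accept"
    elif pct <= 7:
        return "refer to QA department"
    return "reject"

print(qa_decision(38, 43))   # spool 3 critical point: ~13.2% -> 'reject'
```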
criteria, spool 2 would be sent to the QA department for further review and spools 1 and 3 would be rejected. These results matched exactly the results reported by the manual inspection of the spools. However, there are a few distinct advantages of using these automated systems. First, while there are general rules for identifying the critical areas, there are no set and proven standards. Therefore, many localized defects may go undetected at this point, simply because their exact position was not chosen as a critical point. With the automated systems, by contrast, the complete length of the spool can be checked against its as-planned dimensions, through the use of the 3D model as a priori knowledge, and therefore the reliability of the QA will be substantially improved.
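As a concrete illustration of the acceptance rule above, the following minimal Python sketch classifies a spool from the percent deviations at its critical points; the function name and list inputs are illustrative, and only the 5% and 7% thresholds come from the QA procedure described in this study.

```python
def classify_spool(as_planned, measured, review_limit=5.0, reject_limit=7.0):
    """Classify a pipe spool from percent deviations at its critical points.

    as_planned, measured: dimensions (mm) at the critical points, in the same
    order. Deviations are taken relative to the as-planned dimension (an
    assumption for illustration; Table 1 mixes denominators).
    """
    worst = max(abs(p - m) / p * 100.0 for p, m in zip(as_planned, measured))
    if worst < review_limit:
        return "accept"
    if worst <= reject_limit:
        return "review"   # refer to the QA department for a decision
    return "reject"

# Spool 1, laser-scan measurements from Table 1: worst deviation is 7.14% > 7%
print(classify_spool([14, 374, 117, 38, 168], [15, 378, 120, 38, 174]))  # reject
```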
The second advantage is the elimination of the human factor and its associated errors in obtaining the dimensional measurements. The accuracy of the systems used, through both photogrammetry and 3D scanning, is reported to be between 1-2% in controlled conditions similar to those used in this research (Ahmed et al. 2011; Nahangi et al. 2015). Given the many sources of error that exist for the current practice of measuring dimensions manually with a measuring tape, including worker fatigue, eye-sight errors, and even expansion and contraction of the tape due to temperature changes, the results obtained through the automated system are both more accurate and more reliable. Even though the decisions made through the manual system matched those made through both laser scanning and photogrammetry, it is possible that this was due to the particular attention paid to the manual process in this investigation. It is expected that in normal operations, with hundreds of spools to inspect, the accuracy of the manual system would drop substantially, while the automated system can continue to function with the same consistency and accuracy levels.
CONCLUSION
In industrial construction, piping activities account for a significant portion of construction project work. The current QA processes for piping activities are characterized by numerous limitations due to their inherent complexity and labour-intensive procedures. This research
investigated 3D laser scanning and photogrammetry as automated tools for post-production QA
for the particular application of prefabricated pipe-spools. The implementation of these
automated tools and the associated field study results indicate that the system has the potential to
be a valuable tool for QA tasks related to prefabricated pipe-spools, by improving accuracy,
consistency and reliability of the QA process. It was further concluded that either
photogrammetry or 3D laser scanning techniques would be suitable for this application. With the
promising results of this research in automating the post-production quality assurance process, it
is recommended for future research to consider the entire spectrum of QA activities in order to
provide a comprehensive automation system for QA processes related to prefabricated
construction elements.
REFERENCES
Ahmed, M., Guillemet, A., Shahi, A., Haas, C.T., West, J.S., and Haas, R.C.G. (2011).
“Comparison of Point-Cloud Acquisition from Laser-Scanning and Photogrammetry
Based on Field Experimentation.” CSCE 3rd International/9th Construction Specialty
Conference. Ottawa, Ontario, June 14-17.
Akinci, B., Boukamp, F., Gordon, C., Huber, D., Lyons, C., Park, K. (2006). "A Formalism for
Utilization of Sensor Systems and Integrated Project Models for Active Construction
Quality Control." J. Autom. Constr., 15(2), 124-138.
Brilakis, I., Fathi, H., Rashidi, A. (2011). Progressive 3D reconstruction of infrastructure with
videogrammetry, J. Autom. Constr. 20 (7) 884–895.
Burati, J. L., Farrington, J. J., and Ledbetter, W. B. (1992). Causes of quality deviations in design
and construction. J. Constr. Eng. Manage. ASCE. 118(1), 34–49.
Construction Industry Institute. (2003). New Joining Technology for Metal Pipe in the
Construction Industry, Breakthrough Strategy Committee, BTSC Document.
Construction Industry Institute (2002), New Joining Technology for Metal Pipe in the
Construction Industry, University of Texas at Austin, Austin, TX.
Dai, F., and Lu, M. (2010). Assessing the accuracy of applying photogrammetry to take
geometric measurements on building products. J. Constr. Eng. Manage. ASCE. 136(2),
242-250.
Eastman, C. M., and Sacks, R. (2008). Relative Productivity in the AEC Industries in the United
States for on-Site and Off-Site Activities. J. Constr. Eng. Manage. ASCE. 134(7), 517-
526.
Fredericks, T., Abudayyeh, O., Choi, S., Wiersma, M., Charles, M. (2005). Occupational injuries
and fatalities in the roofing contracting industry, J. Constr. Eng. Manage. ASCE. 131
(11) 1233– 1240.
Golparvar-Fard, M., Bohn, J., Teizer, J., Savarese, S., and Peña-Mora, F. (2011). Evaluation of
image-based modeling and laser scanning accuracy for emerging automated performance
monitoring techniques. J. Autom. Constr., 20(8), 1143-1155.
Nahangi, M., Haas, C., West, J., and Walbridge, S. (2015). "Automatic Realignment of Defective
Assemblies Using an Inverse Kinematics Analogy." J. Comput. Civ. Eng., 10.1061/(ASCE)CP.
Son, H., Kim, C., and Kim, C. (2014). "Fully Automated As-Built 3D Pipeline Extraction
Method from Laser-Scanned Data Based on Curvature Computation." J. Comput. Civ.
Eng., 10.1061/(ASCE)CP.
Rounds, J., and Chi, N. (1985). Total Quality Management for Construction. J. Constr. Eng.
Manage. ASCE. 111(2), 117–128.
Safa, M., Gouett, M.C., Haas, C.T., Goodrum, P.M., Caldas, C.H. (2011), “Improvement of
Weld-less Innovation on Construction Project,” 3rd International/9th Construction
Specialty Conference. Ottawa, Ontario, Canada.
Safa, M., Shahi, A., Haas, C. T., & Hipel, K. W. (2014). Supplier selection process in an
integrated construction materials management model. Automation in Construction, 48,
64-73.
Shahi, A., Aryan, A., West, J. S., Haas, C. T., & Haas, R. C. G. (2012). Deterioration of UWB
positioning during construction. J. Autom. Constr., 24, 72-80.
Shahi, A., Safa, M., Haas, C.T., and West, J.S. (2014) Workflow-Driven Data Fusion Framework
for Automated Construction Management Applications. J. Comput Civil Eng. ASCE.
Tatum, C. B., Vanegas, J. A., and Williams, J. M. (1987). Constructability Improvement Using
Prefabrication, Preassembly, and Modularization, Construction Industry Institute, The
University of Texas at Austin.
Tucker, R. (1982), Construction Technology Needs and Priorities - Construction Industry Cost
Effectiveness Project Report, The University of Texas at Austin, Austin, TX.
Yeung, N. S. Y., Chan, P.C.& Chan, D. W. M. (2002). “Application of prefabrication in
construction – a new research agenda for reform by CII-HK”. Conference on Precast
concrete Building System, Hong Kong.
ABSTRACT
INTRODUCTION
METHODOLOGY
Data collection and synthesis. This study is based on data that the authors collected during an afternoon shift lasting over 5 hours, using inexpensive GPS loggers, from a push-loading scraper operation with three scrapers, a pusher (i.e., a dozer), and a grader. The observed operation was performed as part of the US-231 road construction project in 2011 near Purdue University. The travel duration data shown in this paper were acquired by executing an activity recognition algorithm that the authors implemented. The identified activity durations were thoroughly validated using video that recorded the operation and a 3D-visualization tool that can re-create the operation from GPS data as a 3D animation. The 3D-visualization tool is a byproduct of another of the authors' research projects. In the remainder of this paper, the term "travel" in scraper operation includes the sequential activities Haul, Spread, and Return. Scraper travel times thus start at the end of Activity Load and end when Activity Return terminates the cycle.
Most distribution fitting techniques are based on the assumption that the observed data are IID (Martinez 2010). However, in many instances the data as observed are not IID. Given the GPS observation data, the travel distance and speed data were fitted to distributions (assuming that the data are IID) using @Risk. Because the fits shown in Figure 2 are very good (the Chi-Square, Kolmogorov-Smirnov, and Anderson-Darling tests do not reject the hypothesis at the 0.10 level of significance), it may seem a good idea to use the fitted distributions in a model.
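As a sketch of this fitting step, the fragment below uses Python's scipy in place of @Risk (a substitution of convenience, not the authors' tooling); the `distances` array and the lognormal candidate are purely illustrative.

```python
import numpy as np
from scipy import stats

# Illustrative travel-distance observations (m) for one scraper
distances = np.array([410.0, 395.2, 428.7, 401.3, 417.9, 388.5, 422.1])

# Maximum-likelihood fit of a candidate lognormal distribution
shape, loc, scale = stats.lognorm.fit(distances)

# Kolmogorov-Smirnov goodness-of-fit test against the fitted distribution;
# a p-value above 0.10 fails to reject the fit at the 0.10 level of
# significance, mirroring the tests reported for Figure 2
stat, p_value = stats.kstest(distances, 'lognorm', args=(shape, loc, scale))
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
```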
and the best option was selected using the Akaike Information Criterion (AIC) (Akaike 1974). The travel distance data were thus modeled using ARIMA(0,1,1), ARIMA(0,1,2), and ARIMA(1,1,2) models for Scrapers #1, #2, and #3, respectively, and the parameters were estimated using the R software.
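The order selection can be sketched as follows; this is a Python/statsmodels analogue of the R workflow described above, with illustrative candidate orders and series names.

```python
import itertools
from statsmodels.tsa.arima.model import ARIMA

def best_arima(series, max_p=2, d=1, max_q=2):
    """Fit candidate ARIMA(p, d, q) models and keep the lowest-AIC fit."""
    best = None
    for p, q in itertools.product(range(max_p + 1), range(max_q + 1)):
        try:
            res = ARIMA(series, order=(p, d, q)).fit()
        except Exception:
            continue  # skip candidate orders that fail to converge
        if best is None or res.aic < best.aic:
            best = res
    return best

# e.g., best_arima(scraper1_distances) would return an ARIMA(0,1,1) fit if
# that order minimizes AIC, as reported for Scraper #1.
```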
Travel time data. The plot of total travel time against travel distance (Figure 7) and the sequentially ordered plot of speed (Figure 8) indicate that, while operating, drivers tend to sustain their own speed. Thus, travel time input modeling needs to incorporate this individual behavior by having different distributions for each operator. To cope with this perspective in sampling travel distance, time series models were derived for each scraper to capture this characteristic, as described in the previous section.

Scatter diagrams in Figure 9 show that travel distance and travel time are correlated. To find a formal relationship, we further explored the coefficient of variation (CV) of the two parameters, as shown in Figure 10. A step size of three cycles per sample was used to study the CVs. The step size was determined under the assumption that operators are likely to decide the travel distance of the following trip at the current instance based on the previous and current routes.
travel time which corresponds to the travel distance generated from the ARIMA
model.
Figure 9. Scatter diagrams for scraper travel distances vs. travel times
Data sampling. At the first instance of travel distance sampling from the derived ARIMA model (before the start of the first instance of Activity Haul-Spread-Return per scraper), we sample two additional instances so that the mean and SD of travel distance can be calculated from the three generated samples, which represent the travel distances of three sequential instances of the corresponding machine. From the second activity instance onward, the ARIMA model generates only one sample per instance, for the travel distance of the activity that will be initiated after two instances. With the mean and SD of travel distance and the relationship between the CV of travel time and the CV of travel distance, a travel time can be derived as shown below:

T_i = D_i / S    (T: travel time; D: travel distance; S: speed; i: activity instance)

with the CV of T_i obtained from the CV of D_i through the empirical relationship described above. In this manner, with one activity defined in DES, we can exhibit different travel paths in terms of travel distance and speed by characterizing individual travel time per machine.
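A minimal sketch of this sampling scheme is given below; it assumes a linear mapping between the two CVs and a normal spread of travel time around the nominal value, neither of which is specified above, so both are labeled as assumptions.

```python
import numpy as np

def sample_travel_time(distances, mean_speed, cv_slope=1.0, seed=None):
    """Derive one travel-time sample from three sequential distance samples.

    distances: three sequential travel-distance samples from the ARIMA model.
    cv_slope: assumed slope of the empirical CV(travel time) vs. CV(distance)
    relationship (an assumption for illustration).
    """
    rng = np.random.default_rng(seed)
    d = np.asarray(distances, dtype=float)
    cv_d = d.std(ddof=1) / d.mean()      # coefficient of variation of distance
    cv_t = cv_slope * cv_d               # mapped coefficient of variation of time
    t_nominal = d[-1] / mean_speed       # nominal time, T_i = D_i / S
    return rng.normal(t_nominal, cv_t * t_nominal)  # assumed normal spread
```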
The authors have explored the observed travel data collected from a scraper earthmoving operation. Some challenges were identified in defining model elements for a DES study and in modeling travel time input data when the travel pattern changes at every cycle, which is common in scraper earthwork operations. Therefore, this study has proposed a simple but statistically robust methodology to characterize travel time distributions with a one-activity element model. The proposed methodology uses the relation between travel time and travel distance, with regard to coefficient of variation measures expressed in two separate distributions, to capture the information needed to construct speed and path scenarios. This input modeling strategy will be studied further with various validation techniques.
ACKNOWLEDGEMENT
The authors would like to acknowledge DJ McQuestion & Sons, Inc., in particular Rick McQuestion (Project Manager on the US-231 project), for their considerable help in collecting data.
REFERENCES
Akaike, H. (1974). "A new look at the statistical model identification." IEEE Transactions on Automatic Control, 19(6), 716–723.
Law, A. M., and Kelton, W. D. (1991). Simulation modeling and analysis. McGraw-Hill, New York.
Martinez, J. (2010). “Methodology for Conducting Discrete-Event Simulation Studies
in Construction Engineering and Management.” Journal of Construction
Engineering and Management, 136(1), 3–16.
Martinez, J. C. (1996). “STROBOSCOPE: State and resource based simulation of
construction processes.” Doctoral Dissertation. Department of Civil and
Environmental Engineering, University of Michigan, Ann Arbor, MI.
Martinez, J. C., and Ioannou, P. G. (1999). “General-purpose systems for effective
construction simulation.” Journal of construction engineering and
management, 125(4), 265–276.
Puri, V. (2012). "Incorporation of continuous activities into activity cycle diagram based discrete event simulation for construction operations." Doctoral Dissertation, Purdue University.
Puri, V., and Martinez, J. (2013). “Modeling of Simultaneously Continuous and
Stochastic Construction Activities for Simulation.” Journal of Construction
Engineering and Management, 139(8), 1037–1045.
@Risk, Palisade Corporation, 31 Decker Rd, Newfield, NY 14867, USA.
INTRODUCTION
Computers and associated digital technologies are radical inventions with the ability to transform our lives (Winograd and Flores, 1987). Indeed, there is no doubt that the working environment of building design engineers has been profoundly changed by the introduction of computer drawing, modelling and analysis packages (Hayne et al., 2014). However, it is suggested that computer technologies 'may have unintended as well as intended impacts' (Zhou et al., 2012, p103).
The findings of a brief literature review, which identifies some of the negative impacts of computer technologies on the engineering design industry, are set out below. In an
BACKGROUND
Karl Weick (1985) sets out various psychological procedures that people adopt to
increase learning and efficiency at work including triangulation, affiliation,
deliberating and consolidating. He goes on to question if these are achievable when
working in a digital environment. ‘People using information technologies are
susceptible to cosmology episodes because they act less, compare less, socialize less,
pause less, and consolidate less…As a result, the incidence of senselessness
increases’ (Weick,1985, p56).
The issue for software designers is how to code experience and tacit knowledge. It is
argued that tacit knowledge cannot be codified and, therefore, engineering becomes a
combination of art and standardised coding (Henderson, 1999). Henderson further
questions who codes the information. Is it software houses or engineers? If the
former, there is a distinct danger that control of the software is lost with the computer
becoming ‘the bearer of knowledge’ (p20).
The use and outputs of computers need to be considered in context. For example, someone using a word processor is not merely producing a document but is writing a letter or a report … (Winograd and Flores, 1987). Likewise, an engineer analysing a complex structure is in fact undertaking a part of the much wider design process.
It is generally accepted that design is an iterative process (Alexander, 2000; Wilpert, 2007; Whyte, 2013). This process relies upon the use of visual images for the evaluation of ideas (Goldschmidt, 1994). This operation was often completed using tracing paper overlays, where changes could be quickly sketched and adjusted, with the designers able to see the interaction of the existing and the proposed (Henderson, 1999). The use of digital models can lead to an over-reliance on a single data source that may not be the most appropriate (Perrow, 1999, cited in Whyte, 2013). Viewing models and images on screen can cause visualisation problems as it's '…difficult to see very much … you have to move the thing around and then you can't remember what was on the other side' (Whyte, 2013, p51). Different perspectives and data sources are required to validate and ensure the accuracy of data. 'The illusion of accuracy can be created if people avoid comparison (triangulation), but…illusions of accuracy are short-lived, and they fall apart without warning. Reliance on a single, uncontradicted data source can give people a feeling of omniscience' (Weick, 1985, p57). To evaluate ideas the designers must challenge the outputs of the computer but
researchers have identified that ‘digital systems do not encourage the active
challenging of assumptions' (Zhou et al., 2012).
METHODOLOGY
A purposive sample of practising consulting engineers who began their careers in the pre-digital era was identified. Older engineers were selected because, unlike younger engineers, they were able to provide comparisons of the two eras. The sample consisted of six engineers and a technical manager from a steelwork fabricator, as set out in Table 1. The initial analysis of the interviews indicated that saturation had been achieved, and additional interviews were not considered necessary.
The interviewees were geographically disparate, requiring five of the interviews to be conducted using Skype, which still gave the opportunity to observe body language and facial expressions (Bryman & Bell, 2007).
The interviews were sound recorded and later transcribed before being coded using NVivo software. The common themes were drawn together in memos (Miles & Huberman, 1994; Dey, 1993), which facilitated a subjective, inductive analysis of the data.
Table 1. Details of interview sample.

Pseudonym | Role / Position | Years in industry | Route into industry
John | Experience in consulting engineering in UK, North America and Gulf states | 24 | HND and BSc Civil Engineering
Alan | Experience in consulting engineering in the UK but with some time in Gulf States | 40 | BSc Civil Engineering
Phil | Experience in consulting engineering in the UK prior to last 3 years in China | 27 | BSc Civil Engineering
Paul | Experience in consulting engineering in the UK | 27 | BSc Civil Engineering
Adam | Experience in consulting engineering in the UK with some time in KSA and USA | 33 | BSc Civil Engineering
Richard | Experience in consulting engineering in the UK | 29 | BSc Civil Engineering
Glen | All work experience in steel fabricators in the UK | 30 | Apprenticeship and ONC
FINDINGS OF INTERVIEWS
Positive aspects of computers
There was an overwhelming belief that computer technology has brought some
distinct advantages to the industry (John, Phil, Adam, Richard). The general view
was that much more complex structures were now being analysed and modelled that
would have been impossible to design in a pre-digital world. ‘The likes of Gehry
buildings and Zaha’s buildings and that is purely enabled by technology’ (Adam).
This view was echoed by Phil who interestingly provided the caveat that ‘it’s very
hard to always bring it back to something that’s real’.
The ability of computers to remove the drudgery of hand calculations was seen as an
improvement in the design process (Phil). Similarly, the ability to understand the
spatial interactions of buildings in a 3D environment was seen as an advantage, not
only for the engineers but also to explain the engineering principles to the architects
(Richard).
Over-reliance on computers
That most graduate engineers are over-reliant on computers was a point raised by several of the engineers. As one participant put it: 'In the olden days you would have liked to rationalise the structure…but now they just tend to put it straight into the computer and believe the results' (Alan). This sentiment was echoed by Richard: 'Their natural instinct is always to go to the computer.'
Several of the engineers had concerns that graduates were feeding designs into
computers with little understanding of what the expected output would be (John).
One respondent suggested that ‘…people put it in the computer, don’t really know
what’s going on and then believe the output’ (Alan). This is not a view shared by all
the engineers as Adam believes such concerns are now outdated. He emphasised his
belief that: ‘the computer is the tool that helps you understand it…Modern
technology allows you to challenge the structure more easily and I think that can
give you a better understanding of structural behaviour rather than the old fashioned
way would...” (Adam). Paradoxically, this is not a view shared by Richard who
explains that ‘it is quite difficult to try and get younger graduate engineers to
actually interrogate what comes out of the computer because they don’t know how to
interrogate it’ (Richard).
The importance of good mentoring was raised by several of the interviewees (John,
Paul, Richard). Richard was quite clear in his belief that it is important for
experienced engineers to pass on their knowledge and experience (Richard).
Paul repeatedly highlighted the potential problem that inexperienced engineers could work for too long without the input of more experienced engineers (Paul). It was interesting that he also stated that the nature of the questions asked by graduates has changed, as they are '…about how to fly the machine which aren't about engineering. In the days of hand calcs I suppose it [questions] was more about engineering' (Paul).
Issues relating to precision and accuracy were raised by four of the consulting
engineers, three of whom suggested that it was a potential problem for inexperienced
engineers (Alan, Adam, Richard).
Three arguments are put forward by the engineers to explain why the level of accuracy sought by graduates is not appropriate: (1) site conditions; (2) buildability and tolerances; and (3) design assumptions.
Alan, who is working as the site team leader for the design team on a large complex
project in the Middle East, raises the issue of site conditions stating that the computer
generated accuracy is inappropriate “…particularly when you see the work carried
out on site” (Alan).
The issue of buildability and tolerances is raised by Richard, who highlights the issue
that technicians constructing 3D models often have “…a lack of understanding of
tolerances. A lack of understanding of how things fit together.” (Richard). He
contends that a lack of understanding also leads to the creation of details that, whilst
buildable in a digital world, are impossible to construct in the real world.
However, a view that differs from the previous thoughts is expressed by Phil, who suggests that a computer must be accurate: 'It can only be accurate; it cannot give you a vague answer.' He goes on to argue that checks must ensure that the sum of the resultants equals the applied loads. Any discrepancy, however small, could be a symptom of a much larger problem with the analysis model, which must be explored and resolved. Whilst requiring accuracy for the analysis phase, it is interesting that this point does not in fact contradict the three main points raised above.
Good Engineering Practice / Engineering Philosophy
There was widespread recognition that the fulcrum of engineering is the ability to
solve problems (Phil, Alan, Adam). Phil went on to state that the ability to complete
a structural analysis using a piece of software was not, in his mind, problem solving.
Indeed, the use of computers in such a way, linked with BIM modellers who were experts in IT as opposed to construction, was perceived to be a potentially dangerous situation: 'I think we will end up with some very unsafe designs and there will be a major failure' (Richard). It is, therefore, important to recognise the role that experienced engineers have in mentoring and teaching younger engineers (Richard). The essence of problem solving would be using an engineering mind (Alan) to identify what should be analysed, how, and using what software (Phil).
Adam’s position encompassed these views as he suggested that engineers must be
able to break down complex structures into more manageable pieces or simplify the
structures using approximations. By undertaking this process it should be possible to
produce analysis models that could be constructed in reasonable timeframes but
would give solutions of an acceptable accuracy considering the approximate nature
of building design (Adam).
The persistent use of computers can also affect the graduate’s ability to understand
the underlying engineering principles within their designs. ‘You see evidence of this
with jobs that come out of the office designed by young guys and you wonder how
much raw engineering has gone into it.’ (Alan). Alan also suggests that in the pre-
digital world engineers would have rationalised a structure using engineering
judgement (Alan). This is a view echoed by Phil, who recalls that complex structures
were previously broken down into smaller more manageable sections allowing
engineers to understand the structure. He suggests that ‘you might be losing the
ability to see what’s going on’ (Phil) by analysing the entire structure digitally. This
is often lost on some younger engineers who ‘feel that they are good engineers
because they can do the software analysis…rather than taking responsibility for the
solution' (John).
Glen believes that the quality of engineering he witnesses as a steel fabricator is
probably at the same level as in the pre-digital era. However, the design is often
poorly transmitted on the drawings, requiring a significant number of questions to be
raised with the engineers to fully understand the design. Worryingly, these omissions
often include critical information germane to temporary stability.
Experience and feel
Not surprisingly, experience was discussed by all the engineers. Whilst experience is the generally understood phenomenon of gaining knowledge through witnessing or undertaking specific actions, feel is somewhat more subjective. Feel was raised several times by four of the engineers, in the context that designers can have an almost innate ability to understand the structural actions of a system and/or the magnitudes of elements within detailed designs. '…you still get that gut feel if they are right…in
some respects it’s feel more than anything else’ (Adam). The ability of younger
engineers to relate a 3D model to reality was also questioned (Richard, Phil, John).
The 3D visualisation of models provides a benefit in understanding how buildings fit
together (Paul). This alone was not considered sufficient by other engineers. Phil questioned the engineer's ability to truly understand what the model represents if they lack experience: '…if you don't have the grounding of the nuts and bolts of how it all fits together it must be quite hard because it's all so abstract.' Richard, questioning the lack of experience, stated that '…you need the experience behind it. You still have to relate that 3D model to reality' (Richard). He pointed out that 'Even
though you have a 3D model you wouldn’t necessarily perceive that risk unless you
have that experience’ (Richard). This view was echoed by others who argued that if
you did not have explicit experience of how elements should be connected in theory
and on site, issues were going to be missed (John).
DISCUSSION
As expected, the overwhelming view of the interviewees was that computer
technology has brought significant benefits to the industry by removing the drudgery
of hand calculations and by facilitating the design of geometrically complex
buildings. The latter point raises a wider question as to whether these buildings have
architectural merit, or, are some of them simply designed because they can be?
CONCLUSION
Digital technologies are now central to the working practices of building design
offices. Whilst major benefits have been manifest, it is also apparent that a significant number of unintended impacts have surfaced. The default position of younger engineers to automatically turn to the computer, without the underlying experience to understand the structural issues, is leading to inappropriate designs being produced.
These designs can lack the application of sound engineering principles and become
over concerned with inappropriate levels of accuracy. The representation of the
design is often being lost due to the lack of experience of the engineers and
technicians who now focus their attention on software operations rather than building
construction.
There is a clear and acknowledged need for mentoring of younger engineers and
technicians to ensure that un-coded tacit knowledge is passed on before dangerous
situations arise on site.
REFERENCES
Alexander, C., (2000), Notes on the synthesis of form, Harvard University Press,
London.
Blockley, D. I., (1980), The nature of structural design and safety, Ellis Horwood Ltd,
Chichester.
Bryman, A., Bell, E., (2007), Business research methods. Oxford, Oxford University
Press.
Cooke, T., Lingard, H., Blismas, N., (2008), ToolSHeD™: The development and
evaluation of a decision support tool for health and safety in construction
design, Engineering, Construction and Architectural Management, vol.15,
no. 4, pp. 336-351.
Dey, I., (1993), Qualitative data analysis. A user friendly guide for social scientists.
London, Routledge
Goldschmidt, G., (1994), On visual thinking: the viz kids of architecture, Design
Studies, vol.15, no.2 pp. 158-174
Hayne, G., Kumar, B., Hare, B., (2014) The Development of a Framework for a
Design for Safety BIM tool. Computing in Civil Engineering and Building
Engineering pp. 49-56
Henderson, K., (1999), Online and on paper, Visual representations, visual culture,
and computer graphics in design engineering, The MIT Press, Cambridge
MA
Miles, M., Huberman, A., (1994), Qualitative data analysis. London, Sage
Publications Ltd
Perrow, C., (1999 [1984]) Normal Accidents:Living with high-risk technologies.
Princeton University Press, Princeton, New Jersey
Miguel Mora1; Semiha Ergan2; Hanzhi Chen1; Hengfang Deng1; An-Lei Huang1; Jared Maurer1; and Nan Wang1

1Department of Civil and Environmental Engineering, Carnegie Mellon University, Pittsburgh, PA 15213-3890. E-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]
2Department of Civil and Urban Engineering, NYU Polytechnic School of Engineering, Brooklyn, NY 11201. E-mail: [email protected]
Abstract
INTRODUCTION
Monitoring and analyzing the energy consumption and performance of heating, ventilation and air conditioning (HVAC) systems in facilities can give insights about building behaviors for improving facility operations. However, due to the custom sensor infrastructure needed to monitor building performance parameters, additional costs and design changes would be needed to fully instrument facilities. Detailed analysis of building performance in highly sensed facilities can give insights about the behavior of similar facilities that lack the budget for such sensing infrastructure. Through a capstone project course, students had access to sensor data from a highly sensed facility, and they evaluated the performance of several algorithms in predicting energy consumption in order to propose a prediction model that can be used in similar buildings in similar climate zones. The main objective of the capstone project was to help students implement the knowledge
they acquired in the core courses of their graduate program, to apply data analysis approaches, and to reinforce their knowledge of sensor and monitoring applications while learning about energy use in buildings.
The project scope included analysis of energy consumption and HVAC performance data in a highly sensed building and the development of prediction models for building system performance and energy consumption useful for similar buildings. Since air handling units (AHUs) are the primary components of an HVAC system for heating and cooling in buildings, the research team focused on their parameters to predict energy consumption and assess their performance. A set of data mining algorithms, including weighted least square linear regression, random forest, and k-nearest neighbor, was applied to the available datasets to develop energy consumption prediction models. In order to incorporate the parameters related to heating and cooling, the parameters were grouped into environmental parameters, HVAC system parameters (e.g., air supply, exhaust, and recirculation), AHU performance parameters, and spatial parameters of the building/system.
The findings include the performance of such algorithms in identifying patterns, and they give insights about the suitability of such algorithms for predicting energy use and system performance in similar buildings with no sensor infrastructure. By comparing these algorithms, the team also aims to determine the influence of building physical and spatial parameters on the predictions.
BACKGROUND RESEARCH
Previous research on this topic shows different ways to predict energy consumption; some are based on physical parameters of the building while others are data-driven (e.g., Mustafaraj et al., 2011). The former requires broad knowledge of a building and its components (e.g., Tariku et al., 2010), while the latter does not require that knowledge and uses algorithms based on sensed and measured data to create forecasts.
The capstone project focused on the data-driven approaches, both to implement and consolidate knowledge learnt in previous graduate courses and to provide hands-on experience with a large volume of sensor data. Supervised and classification algorithms as well as machine learning algorithms were applied based on the knowledge acquired in the graduate classes and from previous research studies.
On the supervised side, because of their simplicity, regression analyses have mainly been utilized in the literature for similar purposes (e.g., Dong et al. 2005). Such studies try to predict a target value based on a given set of parameters. The simplest regression used is multivariate linear regression, a generalization of linear regression to more than one input parameter. More complex regressions, such as weighted least square regression, which applies different weights to each data point (Bishop, 2006), are also available and were tested for their performance in predicting energy use in buildings as part of this project.
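As an illustration of the weighted variant, a short statsmodels-based sketch is shown below; the synthetic data and the weighting scheme are assumptions made purely for demonstration.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-ins for AHU input parameters (X) and energy use (y)
rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.5]) + rng.normal(0, 0.1, 200)

X_design = sm.add_constant(X)       # add intercept term
weights = 1.0 / (0.1 + np.abs(y))   # example weighting scheme (assumed)
wls_fit = sm.WLS(y, X_design, weights=weights).fit()
print(wls_fit.params)               # estimated coefficients
```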
On the classification side, the algorithms that are available and mainly utilized in similar studies in the literature were k-nearest neighbor (k-NN) and random forest. The former is an instance-based algorithm, which means it classifies new data by recombining the defined training dataset (Brown et al., 2011). It is commonly used
in the fields of finance, hydrology, and earth science applications, but it has also been found useful in forecasting electricity loads (e.g., Al-Qahtani and Crone, 2013). Random forest is an ensemble method that can predict class labels for unseen data by aggregating sets of classifiers (predictors) learned from the training data (Breiman and Cutler, 2004). The essence of the random forest algorithm is the random selection of independent variables in the tree-growing process, which reduces the correlation between trees (Breiman and Cutler, 2004).
Finally, on the machine learning side, an artificial neural network (ANN) was utilized, as it is one of the most commonly used data-driven methods for energy use prediction. It has an average success ratio between 90-99%, and it can identify interconnected patterns from historical data while handling multiple objective functions/results. It is, however, a black-box approach, meaning that it is hard to know what the generated model looks like (Liu and Garrett, 2014).
Figure 1. HVAC system primary components: ducts, AHUs, and CUs are colored according to the zone they serve. The figure only shows the supply air ducts for simplicity.
Data from the sensors were collected every minute, except for the hot water boiler, which was measured every 15 minutes. Thus, in order to have a consistent time interval within the datasets, the data were adjusted to 15-minute time intervals.
[Figure: Prediction modeling overview. Input parameters: operational (27), environmental (4), physical (10). Prediction models: multivariate regression, artificial neural network, random forest, k-nearest neighbor, and weighted least square. Outputs: energy consumption (kWh) for individual AHUs, all AHUs, and per zone and total building. Model selection: mean absolute error and root mean square error.]
All of these parameters were stored as input and output datasets and separated into training and test datasets. This division was done to be able to cross-validate the models with the test datasets and assess the performance of the forecast (Bishop, 2006). The training set was used to determine the coefficients of the models related to the parameters, and the second set was used to test the model and analyze how well the model fits or is able to predict the test data (Bishop, 2006). The parameters measured for AHUs every 15 minutes are shown in Table 2. The table also shows instances of data from the spring dataset. The data for the training and test datasets were selected randomly in order to fully represent the whole datasets. Several tests were performed with partitions from 50/50 to 70/30 to find the optimal splitting ratio. The splitting ratio of 50/50 gave the best prediction performance and was chosen for the datasets.
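The split-ratio search can be sketched with scikit-learn as below; the model choice, names, and data are illustrative, as the exact tooling used in the course is not stated.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

def score_split(X, y, test_fraction, seed=0):
    """Train on one random split and return the test MAE for that ratio."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_fraction, random_state=seed)
    model = RandomForestRegressor(random_state=seed).fit(X_tr, y_tr)
    return mean_absolute_error(y_te, model.predict(X_te))

# Compare partitions from 50/50 to 70/30, as described above:
# for frac in (0.50, 0.45, 0.40, 0.35, 0.30):
#     print(frac, score_split(X, y, frac))
```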
Table 2. Parameters measured for AHUs, with example values from AHU3 during spring time measurements

Return Air Temp | Supply Air Temp | Mixed Air Temp | Heating Valve Status | Return Air CO2 Conc. | Supply CO2 Conc. | Reference Pressure | Return Static Press. | Supply Static Press.
71.39 | 56.25 | 72.85 | 20.74 | 0.26 | 516.13 | 517.48 | -0.80 | 1.05
70.60 | 55.36 | 72.17 | 20.74 | 0.26 | 509.65 | 517.44 | -0.52 | 0.86

Static Press. after Filter | Static Press. after Heat | Static Press. after Cool | Return Air RH | Supply Air RH | Mixed Air RH | Fan Speed | Condensate Flow Rate | HW Coil Flow
-0.88 | -0.93 | -1.22 | 47.88 | 75.38 | 49.67 | 0.00 | 0.00 | 0.00
-0.54 | -0.57 | -0.78 | 46.38 | 83.78 | 48.91 | 758.0 | 0.00 | 0.00

HW Temp Return | HW Temp Supply | HW Heating | Fresh Air Velocity | Fresh Air Temp | Fresh Air Flow | Avg Supply Velocity | Avg Supply Temp | Supply Flow
72.14 | 72.43 | 0.00 | 142.95 | 82.16 | 1786.86 | 433.70 | 72.79 | 16286.46
72.00 | 72.42 | 0.00 | 110.46 | 81.66 | 1380.73 | 356.14 | 72.16 | 13373.87
Step 3: Model comparison and evaluation of the outputs: The models' performance evaluation was based on the mean absolute error (MAE) and root mean square error (RMSE) obtained when the models were evaluated on the test datasets (Bishop, 2006). These two measures are used together to assess errors and variations between the model predictions and the eventual outcomes.
In this step, the high performing models are selected and the outputs are evaluated for the defined zones with and without the impact of physical parameters. The averaged results of the error calculations for the high performing models are provided in Figure 3.
They show that the Random Forest based prediction model has the lowest MAE and RMSE, and hence it was chosen as the most accurate model to predict the energy consumption of the building. One possible reason for its highly accurate prediction results is its random selection of independent variables during the tree-growing process, which reduces the correlation between them. The ANN based prediction model was found to be the second best, with slightly higher MAE and RMSE values. Its accuracy can be attributed to the adjustments of the weights that the algorithm makes to create connections between neurons. The multivariate linear regression based prediction model performed well, and because of its simplicity, effectiveness, and low computational requirements, it can also be considered for energy consumption predictions. Finally, based on the MAE and RMSE values, k-NN and weighted least square regression are not recommended for energy prediction models.
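A compact sketch of this comparison step is given below, using scikit-learn analogues of four of the five models (weighted least squares would follow the statsmodels sketch shown earlier); the implementations actually used by the students are not specified.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

models = {
    "random forest": RandomForestRegressor(random_state=0),
    "ANN": MLPRegressor(max_iter=2000, random_state=0),
    "multivariate regression": LinearRegression(),
    "k-NN": KNeighborsRegressor(n_neighbors=5),
}

def compare(models, X_train, y_train, X_test, y_test):
    """Report test-set MAE and RMSE for each candidate prediction model."""
    for name, model in models.items():
        pred = model.fit(X_train, y_train).predict(X_test)
        mae = mean_absolute_error(y_test, pred)
        rmse = np.sqrt(mean_squared_error(y_test, pred))
        print(f"{name:25s} MAE = {mae:.3f}  RMSE = {rmse:.3f}")
```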
Figure 3. Season averages of RMSE and MAE (B: Boiler, CU: Condensing Unit)
CONCLUSIONS
This capstone project provided opportunities for graduate students to
understand the implications of data mining methods and their performance in
predicting the energy use of AHUs. The students had hands-on experience with sensor data and applied their data analysis knowledge to build prediction models in order to forecast energy consumption in similar buildings that share the same climate zone and functionalities as the studied building.
In this capstone project, various prediction models were tested and proved to be useful tools for energy consumption prediction. Among the studied models, and based on MAE and RMSE, Random Forest had the best performance, and it should be useful for forecasting the energy consumption of AHUs with similar capacities running in buildings with similar heating and cooling loads and comparable physical parameters.
Further analyses with the well-performing algorithms should be carried out through cross-validation with other buildings, with a larger set of physical parameters, multiple years of data, and different building control logic in order to improve the prediction models.
REFERENCES
Al-Qahtani, F.H., and Crone, S.F. (2013). “Multivariate k-nearest neighbor regression
for time series data — A novel algorithm for forecasting UK electricity
demand.” The 2013 International Joint Conference on Neural Networks
(IJCNN). Dallas, TX, 4-9 August 2013, pp. 1-8.
Bishop, C. (2006). Pattern Recognition and Machine Learning. Cambridge: Springer.
Breiman, L., and Cutler, A. (2004). “Random Forests.”
<https://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm>
(Dec. 2, 2014).
Brown, M., Barrington-Leigh, C., and Brown, Z. (2011). “Kernel regression for real-
time building energy analysis.” Journal of Building Performance Simulation,
Vol. 5, 263-276.
Dong B., Lee, S.E., and Sapar M.H. (2005). “A holistic utility bill analysis method
for baselining whole commercial building energy consumption in Singapore.”
Energy and Buildings, 37(2), 167-174.
Heaton, J. (2014). “A feed forward neural network.”
<http://www.heatonresearch.com/articles/5/page2.html> (Dec. 2, 2014).
Liu, P., and Garrett, J. (2014). “Computer-based approaches for search & decision
support in civil infrastructure.” Lecture notes, Pittsburgh, PA: Carnegie
Mellon University.
Mustafaraj, G. J., Lowry, G., and Chen, J. (2011). “Prediction of room temperature
and relative humidity by autoregressive linear and nonlinear neural network
models for an open office.” Energy and Buildings, 43(6), 1452-1460.
Tariku, F., Kumaran, K., and Fazio, P. (2010). “Integrated analysis of whole building
heat, air and moisture transfer.” International Journal of Heat and Mass
Transfer, 53(15-16), 3111-3120.
Wu, X., Kumar, V., Quinlan, J., Gosh, J., Yang, Q., Motoda, H., McLachlan, G., Ng, A., Liu, B., Yu, P., Zhou, Z., Steinbach, M., Hand, D., and Steinberg,
D. (2008). “Top 10 algorithms in data mining.” Knowledge and Information
Systems, Vol. 14, 1-37.
Xu, K. (2012). “Assessing the minimum instrumentation to well tune existing
medium sized office building energy models.” PhD. Thesis, Pennsylvania
State University.
Abstract
The design and construction industry has the responsibility to provide the client with the product
of highest value. This is a complex task. Project stakeholders typically propose discipline
specific optimal solutions and have many conflicting interests. To provide an overall successful
solution they need to correlate the discipline specific goals and constraints and resolve many
conflicts. These are resolved through a negotiation process in which each party typically takes a
self-interested position. An improved approach is to set discipline specific targets and develop
solutions that meet these targets. This paper presents three project delivery scenarios and their
impact on product value to the client: (1) no set target values; (2) designs that meet domain
specific targets - progressive cost estimating Target Value Design (TVD), Sustainable Target
Value (STV), and Life Cycle Cost (LCC); and (3) a preliminary integrated target value driven
approach, linking TVD, STV, and LCC, to engage all stakeholders in an iterative process of
exploration and decision making to provide the client with the highest product value. The AEC
Global Teamwork course offered at Stanford with partners worldwide is used as a testbed.
INTRODUCTION
The design and construction industry has the responsibility to provide the client with the product
of highest value that they can conceive and create. This is a complex task. Project stakeholders
typically propose discipline specific optimal solutions and have many conflicting interests that
need to be resolved before they can provide and deliver a high quality product. To provide an
overall successful solution they need to correlate the discipline specific goals and constraints and
resolve these conflicts. Owners, architects, engineers, and contractors constantly engage in many
negotiations. Generally, the negotiation process is one in which each party takes a self-interested
position and tries to seek agreement through a series of demands and concessions that often
result in decisions that favor the most powerful or outspoken party and not necessarily the
overall value of the project. An improved approach is to set discipline specific objective targets
and develop solutions that meet these targets. This paper presents three project delivery scenarios
and their impact on product value to the client: (1) no set target values; (2) designs that meet
domain specific targets; and (3) a preliminary integrated target value driven approach. More
specifically, the integrated target value driven approach focuses on linking and maximizing three
key aspects of the design and life cycle of a building - first cost of construction, sustainability,
and life cycle cost. The main objective of this approach is to engage all stakeholders in an
iterative process of exploration and decision making to provide the client with the highest value.
This study builds on practical and research points of departure grounded in three areas related
to progressive first cost Target Value Design (TVD), Sustainable Target Value (STV), and Life
Cycle Cost (LCC). Figure 1 illustrates three formalized project delivery scenarios that will be
further discussed in this paper: (a) The state-of-practice sequential and reactive design,
construction, operation and maintenance process, which does not provide the client with
quantitative evidence about how well the building will perform. (b) The state-of-the-art proactive project delivery process in which domain specific target values are set; this engages team members and the client in collaborating to meet the set targets within each specific domain. (c) An integrated target value driven project delivery approach that links TVD, STV, and LCC, engaging all project stakeholders not only to meet the targets, but to use the targets as a point of departure to further collaborate, explore alternatives, and make informed decisions based on the correlation of design impacts on TVD, STV, and LCC to maximize the value for the client. We describe the points of departure and the developed and integrated tools for TVD, STV, and LCC.
We illustrate the impact of the three project delivery approaches on the quality of the building
products with examples from the AEC Global Teamwork education testbed (Fruchter, 2006).
TESTBED AND METHODOLOGY
We used project data from the AEC Global Teamwork course to develop and test the TVD, STV,
and LCC framework and project delivery scenarios presented in this paper. The AEC student
teams engage in a project-based learning (PBL) experience focused on problem-based, project-organized activities that produce a product for a client through a re-engineered process that engages faculty, practitioners, and students from different disciplines, who are geographically distributed. It has been offered annually, January through May, since 1993. It engages architecture, structural engineering, MEP engineering, lifecycle-cost management (LCFM), and construction management students from universities in the US, Europe, and Asia (Fruchter 2006). The AEC student teams work on a university building project. The project specifications include: (1) building program requirements for a 30,000 sq. ft. university building; (2) a real-world university campus site that provides local conditions and challenges for all disciplines, e.g., local architecture style, climate and environmental constraints, earthquake, wind and snow loads, flooding zones, access roads, and local materials and labor costs; (3) a budget for the construction of the building; and (4) a timeframe for construction and delivery. The project progresses from
conceptual development in Winter Quarter to project development in Spring Quarter. The teams
experience a fast track project process with intermediary milestones and deliverables. They
interact with industry mentors who critique and provide constructive feedback (AEC project
gallery: http://pbl.stanford.edu/AEC%20projects/projpage.htm). We used the data of AEC global student teams, which we clustered into two groups: (1) a control group represented by AEC teams that participated in the course before the deployment of the TVD, STV, and LCC tools, target value
design thinking, and the new project delivery process; and (2) an experimental group of AEC teams that had access to the new tools and new project delivery processes, i.e., LCC deployed in 2006, TVD deployed in 2011, STV deployed in 2013, and the preliminary integrated target value driven approach TVD-STV-LCC deployed in 2014. The team members jointly determined the targets for TVD and LCC for their specific projects, and the location specific targets for STV were determined by our research team members and given to the 12 experimental AEC teams in 2013 and 2014. Note that in terms of STV, AEC teams prior to 2013 were considered control teams and had a qualitative sustainability project requirement to design at least a silver LEED certified “green building.”
maximizing profit for the design and construction team, while our proposed method focuses on maximizing value for the client. Given that maximizing profit is a key goal for design and construction companies and maximizing value is a key goal for clients, the challenge rests in aligning the financial incentives so that both goals can be simultaneously realized.
A critical success factor is how the data are analyzed and used by the project team to inform the design process, as well as how the team members interact with the data. The data should help the team members understand the impact of their discipline decisions and help determine the next steps the team should follow to meet the cost targets set for the overall building and each sub-system. Figure 2 illustrates the most effective visualization of the updated and synthesized TVD information, which kept the AEC team members focused and engaged in the process. After rapid prototyping cycles for the TVD graphical user interface, we found that the most effective way to engage the team in the TVD process was to provide them with updated, simple, and clear visual representations of the critical information.
indicators primary energy and water consumption, as major resources used during building operation, and global warming potential (GWP) and ozone depletion potential (ODP), as global pollutants (Table 1). The hypothesis was that setting site specific targets for these indicators would lead to reductions in life cycle environmental impact. The site specific sustainability targets were provided by the research team according to the construction location of each project, e.g., San Francisco and LA, California; Reno, Nevada; Madison, Wisconsin; San Juan, Puerto Rico; and Weimar, Germany (Russell-Smith et al., 2014a, 2014b). The proposed STV prototype integrates life cycle assessment into building design and provides a set of environmental targets, including greenhouse gas emissions, water, and energy, based on the ecological carrying capacity of the planet and accounting for the full building life cycle. These targets are set to enable the development of building designs that perform at or below the site-specific STV targets.
Table 1. STV indicators
The objective was to make STV a team effort. The STV tool aims to engage all team members in a continuous process of exploration and decision-making to design buildings to specified environmental targets in order to reduce building life cycle impacts to sustainable levels. STV makes the design decisions explicit and transparent in an iterative process. It does not require a complete design solution to evaluate how the building performs; rather, the STV tool is used as an exploration and decision-making tool, comparing different design options (e.g., materials, subassemblies, systems) and their impact on the different sustainability indicators.
The STV tool was developed in MS Excel. The project specific data inputs include information about the location of the construction site, the “Materials and Construction Phase” and “Use Phase,” on-site natural gas and electricity consumption, and on-site electricity generation that may offset power drawn from the grid. The STV tool automatically updates and displays the location specific targets and then calculates and displays the different impact categories by referencing the underlying dataset sheets. Based on the total building floor area, the STV tool calculates yearly water consumption from commercial building water use data. The results are shown on a dashboard that summarizes the building impacts and displays the aggregated impact values for CO2e emissions, water, and energy in a triangular spider diagram. Each triangle outside the dashed triangle representing the STV targets represents an extra 100% above the target. This allows the team to visualize the environmental impacts of a specific building design in comparison to the site specific STV targets - the CO2 emissions, water, and energy target values.
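The dashboard normalization can be expressed compactly as below; this is a minimal Python analogue of the Excel tool, and the target and impact values are placeholders rather than actual site-specific STV targets.

```python
# Aggregated building impacts vs. site-specific STV targets (placeholder values)
impacts = {"CO2e": 42.0, "Water": 950.0, "Energy": 120.0}
targets = {"CO2e": 30.0, "Water": 800.0, "Energy": 100.0}

for indicator, value in impacts.items():
    ratio = value / targets[indicator]
    # ratio 1.0 sits on the dashed target triangle; each additional 1.0
    # corresponds to one more triangle, i.e., an extra 100% above the target
    status = "within target" if ratio <= 1.0 else f"{(ratio - 1) * 100:.0f}% over target"
    print(f"{indicator}: {status}")
```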
Table 2. STV Project Performance – Experiment and Control Group Examples
The 12 AEC teams in the experimental group were provided with the STV design targets and the STV tool. The 15 AEC teams in the control group did not have access to the STV targets or tool but were given a qualitative target of a “green design” LEED silver building. The control and experimental groups had the same physical building requirements on the same sites. The experimental designs
were within the targets, as were the control designs aiming for net-zero use phase energy; projects with specific energy targets out-performed them. Table 2 illustrates typical examples of STV performance results of AEC projects from the experimental and control groups. The results show that setting specific sustainability targets prior to design, and providing support resources that allow designers to iteratively improve and validate designs, reduces the impact of the building for each environmental indicator studied as compared to the control designs.
provides the justification for the rent. Rent, the annual amount paid by the university to the building provider, is the main source of income. The target rent plays a key role in the calculation of the net operating income, which is used to compare the efficiency of the building design options against the financial stability of the building service provider (in this case the AEC team) and the client's desired building functionality. To determine the efficiency of any solution, the TVD, STV, and rent targets and Excel tools were linked to the cash flow model. This allowed for exploration, iteration, and informed decision making that engaged all project team members and the client.
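A minimal sketch of the linked cash-flow check is shown below; all rent, cost, and rate figures are illustrative placeholders, not course data.

```python
def net_present_value(rate, cash_flows):
    """Discount a list of yearly cash flows (year 0 first) at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

first_cost = 12_000_000   # TVD construction cost target ($, illustrative)
annual_rent = 1_400_000   # target rent paid by the university ($/yr, assumed)
annual_opex = 450_000     # operating cost from the LCC model ($/yr, assumed)
years, discount_rate = 30, 0.05

noi = annual_rent - annual_opex        # net operating income
flows = [-first_cost] + [noi] * years  # construction outlay, then yearly NOI
print(f"NOI = ${noi:,.0f}/yr, NPV = ${net_present_value(discount_rate, flows):,.0f}")
```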
Figure 3: Iterative process of exploration and decision making illustrating alternative Services Subsystems
and their impact on STV, LCC, and TVD based on an integrated target value design approach
Industry state-of-practice and the control AEC student projects used the allocated budget from the client as the metric. The architect, structural engineers, and MEP members designed the building; then the construction managers produced the cost estimates, and the LCFM member proposed a rent that maximized profits and revenue. Figure 2 illustrates a case study typical of AEC team solutions after we deployed the TVD tool and process in 2011, in which the team pursued the second project delivery approach (Figure 1b), focused on meeting the set overall and sub-system cost targets
by exploring alternative subsystem solutions. Figure 3 illustrates a case study in which the
team pursued the integrated project delivery approach (Figure 1c), focused on a decision process
based on design exploration and correlation of TVD, STV, and LCC.
CONCLUSION
This paper presents three project delivery approaches – design that did not have set targets,
domain specific target design, and integrated target value design. It illustrates the importance of
setting specific targets for construction costs, sustainability, and life cycle cost targets prior to
design by providing and integrating the respective assessment tools. These were developed by
leveraging and expanding LCA, target value design, and life cycle costing methodologies. They
transform the state-of-the-practice delivery process by engaging all stakeholders in determining
the targets, exploring alternatives, and making informed cost-benefit decisions based on
performance indicators for first cost, sustainability, and life cycle cost. Unlike generic metrics
such as a given construction budget or LEED point-rating, TVD, STV, and LCC enabled the
teams to understand the quantified cost and environmental performance improvements associated
with design decisions. Each design assessment iteration generated cost and environmental
performance data that the teams used to compare alternatives and inform decisions. Each of the
20 AEC teams that used these target value design tools produced building designs that performed
better than similar scope and size buildings for the same construction sites developed by the
AEC teams in the control group that did not have access to these tools (Russell-Smith et al.
2014a, 2014b; Castillo and Fruchter 2012). The proposed integrated target value
approach increased the active engagement of all stakeholders in continuous and concurrent
exploration, assessment, and decision making, leading to reduced environmental impacts and
costs, as illustrated in Figure 3.
REFERENCES
Ballard, G. (2008) The Lean Project Delivery System: an update. Lean Construction Journal, 1-19.
Castillo Cohen, F.J. and Fruchter, R. (2012) “Engaging global multidisciplinary project teams in
target value design” Proc. ICCCBE-XIV, Moscow, Russia.
Fruchter, R., (2006) The Fishbowl: Degrees of Engagement in Global Teamwork. LNAI, 2006:
241-257.
Finnveden, G., Hauschild, M., Ekvall, T., Guinée, J., Heijungs, R., Hellweg, S., Koehler, A.,
Pennington, D., and Suh, S., Recent Developments in LCA, J. Envir. Man., Vol. 91, No.
1, 1-21, 2009.
Hunt, R., and Franklin, W., LCA – How it came about, Int. J. of LCA, Vol. 1, No. 1, 4-7, 1996.
Intergov. Panel on Climate Change (IPCC), Climate Change 2007: Synthesis Report, IPCC 4th
Assessment Report.
EU Climate Foundation, Roadmap 2050: A Practical Guide to a Prosperous, Low-Carbon EU,
Vol. 1, 2010.
Energy Policy Act of 2005, Pub. L. No. 109-58, 119 Stat. 1143, 2005.
Bannister, P., Munzinger, M., and Bloomfield, C., (2005) Water Benchmarks for Offices and
Public Buildings, Ed. 1.2, Exergy Australia Pty Limited, Canberra.
Macomber, H., and Barberio, J., (2007). Target-Value Design: Nine Foundational Practices for
Delivering Surprising Client Value. Lean Project Consulting.
Russell-Smith, S., Lepech, M., Fruchter, R., Littman, A. (2014a) Impact of progressive
sustainable target value assessment on building design decisions, Journal of Building and
Environment. In Press.
Russell-Smith, S., Lepech, M., Fruchter, R., Meyer, Y., (2014b). Sustainable Target Value
Design: Integrating Life Cycle Assessment and Target Value Design to Improve Building
Energy and Environmental Performance. Journal of Cleaner Production, In Press.
Schade, J. (2007) Proc. 4th Nordic Conference on Construction Economics and Organisation:
Development Processes in Construction Management, ed. B. Atkin and J. Borgbrant, Luleå.
Research report, Luleå University of Technology 2007:18, 321–329.
Weber, B., and Alfen, H.W. (2010) Infrastructure as an asset class: Investment strategies, project
finance and PPP. Wiley, Chichester, West Sussex, U.K.
Woodward, D. G. (1997) Life Cycle Costing: theory, information acquisition and application.
International Journal of Project Management 15, no. 6, 335–344.
Tiwari, S., Odelson, J., Watt, A., and Khanzode, A., (2009). Model Based Estimating to Inform
Target Value Design. AECbytes “Building the Future.”
Ivan Mutis
Abstract
INTRODUCTION
The lack of exposure to construction processes on the job site results in
students' lack of understanding of the dynamic, complex spatial constraints (e.g., how
construction products are related to one another in a particular contextual space) and
the temporal constraints (e.g., the dependencies for coordinating subcontractors'
processes). In fact, the deficiencies in understanding construction products and
processes are widely acknowledged by construction program graduates, including a
lack of experience in applying construction-related concepts to real-world problems
(McCabe, et al. 2008) and the exclusion of important contextual constraints typically
found on jobsites (Sawhney, et al. 2000). Even though these constraints commonly
exist in the management activities of projects, there is a lack of appropriate
pedagogical materials and media to enable instructors to effectively bring those job
experiences (Jestrab, et al. 2009) into classrooms in a way that reflects what has been
learned about the dynamics and complexities of such constraints.
METHODOLOGY AND APPROACH
The set of stored video clips and images is composed (connected) with the
virtual model objects. These virtual model objects are superimposed on selected
images from the video clips and the set of stored video clips and images. The composed
virtual objects allow users to contrast real-world images with the virtual objects,
generating the AR experience. Figure 2 shows examples of video frames used by the
researchers to compose the virtual objects. The generation of videos includes aerial
images taken from unmanned flying devices to provide visualizations of the jobsite
context with broader views, since fixed locations limit the ability to capture
such views (see Figure 2). The researchers use an AR platform to coordinate the
images and the virtual objects for each case study. The virtual objects' designs
occupy a continuum within the construction activity to meet the intended learning aims
by including the activity context in the scenario (i.e., the objects' designs incorporate
specific learning goals, such as the number of overlaid objects that represent
temporary structures to build an assembly for a particular activity).
The focus is placed on what learners need to acquire about spatial and temporal
constraints for CEM activities. The learner-ART interaction is made possible through the
development of an AR video composition application. The video captures the
main components and CEM activities of interest and the context (physical and social)
of the construction environment. The video output is a set of video images, which are
edited, tagged and stored using computer applications such as SynthEyes (Anderson-
Technologies 2014). As this research develops, the next step is to develop an in-house
AR computer application, using an AR development platform such as Vuforia®
(Vuforia 2014) over the Unity-3d extension (Unity-Technologies 2014), a game engine
and integrated development environment, for broad compatibility.
The sensory input data is mostly visual.
Figure 2: Images captured from an Unmanned Aerial Vehicle (UAV): (a) Exterior Wall Section; (b) Roof Section.
(2) Simulations. In the classroom, students manipulate superimposed real-
world images as they access and retrieve the images through a computer application.
At the same time, the instructor uses a multi-display panel to show
the view of the same field images/streaming video with or without the virtual models
(i.e., the instructor will illustrate and assist the learning process by showing the
composite or non-composite set of images, including the superimposition of the
computer generated objects and other auxiliary information).
Since this research uses a video device to capture particular scenes on the
jobsite, the device is positioned in a strategic, safe place to allow capturing the
main components of the construction project environment. As an example, see
the superimposed steel column structure over an image in-situ in Figure 3. The
simulation in the figure aims at enhancing the understanding of the location/orientation
of the structure on the footings. The data output consists of a set of video clips and
images, classified according to the construction process and materials used to execute
a particular construction activity (e.g., steel erection in the figure).
Figure 3: Superimposed virtual objects (steel columns) over an in-situ image.
F
(3) Validation and Assessment. This step assesses the learners' problem
solving capabilities related to CM spatial-temporal constraints. It is aimed at
validating the learners' ability to solve problems when using the ART intervention
and when learners use traditional materials and methods. The latter is used as a control
group. CM problem solving capability denotes here the learners' ability to apply
CM background knowledge to existing constraints by articulating the problem
through technical communication to formulate feasible solutions.
The strategy to assess problem solving is based on formulating an ill-defined CM
situation (e.g., estimating and scheduling) where the learners' spatial-temporal
abilities are essential. Poorly conceived CM definitions, for instance, consist of
poorly defined, incomplete parameters where fundamental CM knowledge is required
to anticipate an outcome.
The student assessment strategy also incorporates questions in a semi-
structured interview for a more consistent and accurate assessment. The questions
will be built according to the following assessment aims (American Society of Civil
Engineers Body of Knowledge Committee 2008; Anderson and Krathwohl 2001):
• CM Knowledge: The student's ability to remember the CM material.
It implies recalling the appropriate specific facts and the learned CM concepts (e.g.,
fundamental concepts of construction techniques and methods) to contribute to a
problem-solving outcome.
• CM concepts comprehension: The learners' abilities to summarize
and explain are characterized by their ability to translate and map from one system of
symbols (geometries in the drawings) to another system such as a text-based system.
This is a basic level of understanding of the CM problem, where the learner is able to
explain and discuss details (e.g., give examples, perform generalizations from details)
and to describe and distinguish differences (element interactions, including
properties and qualities).
• CM concepts application: This aim refers to the learner's ability to
use the learned CM concepts, methods, and theoretical principles in concrete
situations, including new or innovative ways. Learners are able to solve (find an answer or
explanation), apply (find uses for particular purposes or cases), use tools (for a
particular purpose), plan (articulate, contribute, implement, and provide a solution or
outcome), conduct related activities (carry out reports), organize (order elements in
the construction process), and explain (broad clarification to give a reason or cause).
• CM problem analysis: This aim denotes the CM student's ability to
separate the component parts of an assembly by breaking down
elements for close examination. The analysis activity relates (maps) parts to
other parts or groups of parts for a better understanding of the structure or
organization as a whole (e.g., mapping the assembly parts to understand the
components' functionality within the assembly-wall structure). The analysis leads to
the organization of parts into systematic planning and coordination of the systems
involved and to looking at the parts' functionality as a whole. To reach the proper
outcome, the learners compare the parts' quality and properties by identifying
particular features in order to contrast the differences.
• Synthesis of CM concepts: This aim refers to the learners' ability to
assemble a new whole set or group (by properly and orderly fitting parts) as an
outcome through interpretations. It is bringing into existence a new plan (arranging
parts, devising the realization of CM concepts) and a design or creation according to a
plan. It is putting together factors into consistent and complex settings to reveal and
develop hidden features, and to assemble and fit (adapt by means of changes and
modifications) disparate elements into specified and suitable conditions.
• CM conceptualization evaluation: This aim refers to the learner's
ability to judge the value of concepts, which are definitions of elements for a given
purpose, by meeting defined criteria (e.g., subcontractors' required conditions for
execution of an assembly task due to constraints, such as costs, and quality/property
conditions). Learners judge and analyze the significance of CM elements and
concepts by justifying their use and by determining their importance. It involves
critical analyses and self-assessments.
CONCLUSIONS
follow-on ART design environments within the classroom will facilitate ART use and
its implementation within CM programs at other US institutions).
ACKNOWLEDGEMENT
This research has been funded by the National Science Foundation (NSF)
grant No. 1245529.
REFERENCES
American Society of Civil Engineers. Body of Knowledge Committee. (2008). Civil
engineering body of knowledge for the 21st century : preparing the civil
engineer for the future, American Society of Civil Engineers, Reston, Va.
Anderson, L. W., and Krathwohl, D. R. (2001). A taxonomy for learning, teaching,
and assessing : a revision of Bloom's taxonomy of educational objectives,
Longman, New York.
Anderson-Technologies (2014). "SynthEyes. 3-D camera-tracking (match moving)
application." <http://www.ssontech.com/synovu.html> (Accessed April 2014;
last updated March 2014).
Azuma, R., et al. (2001). "Recent Advances in Augmented Reality." IEEE Comput.
Graph. Appl., 21(6), 34-47.
Barfield, W., and Caudell, T. (2001). "Fundamentals of Wearable Computers and
Augmented Reality." CRC Press, 836.
Bhattacharjee, S., et al. (2013). "Comparison of Industry Expectations and Student
Perceptions of Knowledge and Skills Required for Construction Career
Success." International Journal of Construction Education and Research, 9(1),
19-38.
Bimber, O., et al. (2005). "Spatial augmented reality: merging real and virtual worlds."
A K Peters, Wellesley, Mass.
Chen, H.-M., and Huang, P.-H. (2013). "3D AR-based modeling for discrete-event
simulation of transport operations in construction." Automation in
Construction, 33, 123-136.
Glick, S., et al. (2012). "Student Visualization: Using 3-D Models in Undergraduate
Construction Management Education." International Journal of Construction
Education and Research, 8(1), 26-46.
Haller, M., et al. (2007). "Emerging technologies of augmented reality: interfaces and
design." Idea Group Pub., Hershey, PA.
Hou, L., et al. (2013). "Using Augmented Reality to Facilitate Piping Assembly: An
Experiment-Based Evaluation." Journal of Computing in Civil Engineering.
Issa, R. R. A., and Haddad, J. (2008). "Perceptions of the impacts of organizational
culture and information technology on knowledge sharing in construction."
Construction Innovation: Information, Process, Management, 8(3), 182-201.
Jae-Young, L., et al. "A Study on Construction Defect Management Using Augmented
Reality Technology." Proc., Information Science and Applications (ICISA),
2012 International Conference on, 1-6.
Jestrab, E. M., et al. (2009). "Integrating Industry Experts into Engineering Education:
One way around this issue is to have tools that can develop knowledge earlier in the
design process. The coincidence of high knowledge and a large number of design
options is hypothesized to lead to better design decisions.
The plugin being developed in this project aims to provide more knowledge to the
architects in the preliminary design phase. By developing an energy modeling toolset
that is compatible with the tools, constraints, and methods of early building
architecting and design, early design decisions can be made effectively to improve
building energy efficiency in the final product.
Design decisions based on HVAC system
Minor selections in a design can have a major impact on the final energy consumption
of a building after its completion. For instance, the appropriate orientation of a
building can reduce energy consumption by 30-40% (Cofaigh et al. 1999). Similarly,
proper sizing and positioning of a building's fenestration also have a major impact
on HVAC loads, and therefore on total energy consumption. In general practice,
architects use rules of thumb and previous experience to design these features. For
example, it is common knowledge that windows facing east and west have maximum
solar gain; hence, to reduce solar heat gain, windows are generally avoided on these
faces, and if used, proper shading is applied. Some of the early design decisions that
architects can make to reduce energy impact include building orientation, wall
insulation material, and window properties like size, shading and glazing type.
Previous Tools
Numerous tools have been developed to assist designers in making energy efficient
buildings, but many of these tools require previous training and technical
know-how about the software and its limitations. SketchUp® is a software package
that is widely used by architects for generating initial CAD models of buildings.
Though SketchUp® has a plugin called OpenStudio®, it requires the entire model to
be constructed again in the plugin and carries considerable computational cost for
each simulation. The computational cost of such programs both increases the time
required for design and makes these tools incompatible with the early stages of
sketching, architecting and design.
One specific tool that follows the same simulation methodology as this plugin is
Green Building Studio® developed by Autodesk®. This tool has the benefit of being
able to import data from any CAD software using gbXML and exporting the data into
EnergyPlus to carry out a detailed simulation; it also takes into account real-world
weather data accumulated over more than a million virtual weather stations
(Autodesk®). However, a major concern while using this software is the unreliability
of gbXML when converting geometry. A study by Stanford University (Maile T. et
al. 2007) shows that some geometry/wall properties are lost during conversion. In
terms of computing power requirements, since this tool utilizes cloud computing, it is
more robust but depends on internet access. In contrast, the tool in development
does not require file conversion and gives the preliminary simulation results in the
same CAD environment, thereby allowing an architect to make the first few decisions
in a more informed way.
Thus this plugin aims at changing the general workflow of design by integrating the
simulation stage within the CAD environment and reducing the time consumed in the
numerous detailed iterations needed to reach an optimal solution. Figure 2 compares
the usual workflow with the modified workflow.
Hence, having a plugin that uses the existing CAD data to give instantaneous
preliminary results within the same environment/computer window would help
remove many of the barriers inhibiting the application of energy modeling tools.
• Wall/Window/Floor/Roof areas
• Wall orientation angle with respect to south (azimuth angle)
• GPS location of the site from Google® Earth
All of the above data is already generated by default when an architect creates a CAD
model. The only extra input the user has to provide is the building type, as per the
16 DoE classifications of commercial buildings (US DoE 2011). Based on the building
type, the material properties are pulled from a database that is hard-coded into the
plugin.
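A hedged sketch of that lookup step follows (the type names and property values below are placeholders, not the plugin's actual database):

# Hypothetical lookup: DoE commercial building type -> envelope properties.
MATERIAL_DB = {
    "MediumOffice":     {"wall_u": 0.124, "window_u": 0.55, "shgc": 0.40},  # invented values
    "StandAloneRetail": {"wall_u": 0.124, "window_u": 0.65, "shgc": 0.45},  # invented values
}

def properties_for(building_type):
    """Return the hard-coded envelope properties for a DoE building type."""
    try:
        return MATERIAL_DB[building_type]
    except KeyError:
        raise ValueError(f"Unknown DoE building type: {building_type!r}")

print(properties_for("MediumOffice"))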
vertical wall in each of the standard 8 orientations along the geographic axes, the
transmitted and diffused heat gains were calculated and a database was created for all
the TMY3 locations. Table 1 shows the net solar irradiance for the city of Denver for
the first 10 daylight hours of the year for the standard orientations.
Figure 4. Flow chart for the RTS method including NSRDB-informed hourly load
calculations
Table 1 – Total Irradiance including direct, diffused and reflected components
for each wall orientation in Btu/(h·ft²)
Hour N NE E SE S SW W NW
7 0 0 0 0 0 0 0 0
8 1.45 2.67 7.41 8.87 5.83 1.56 1.45 1.45
9 13.99 16.00 41.85 56.34 44.50 17.88 13.99 13.99
10 25.67 26.11 112.06 194.07 176.40 71.95 25.67 25.67
11 31.66 31.66 83.65 195.48 213.08 123.49 32.19 31.66
12 40.40 40.40 47.52 178.67 236.46 176.79 46.18 40.40
13 38.22 38.22 39.14 114.84 187.28 171.02 79.58 38.22
14 28.70 28.70 28.70 80.30 191.06 208.26 119.23 29.09
15 29.34 29.34 29.34 40.55 128.75 164.68 118.32 33.15
16 12.05 12.05 12.05 12.95 91.41 143.63 117.57 31.45
17 1.69 1.69 1.69 1.69 19.48 38.82 36.31 13.59
18 0 0 0 0 0 0 0 0
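To illustrate how the tabulated irradiance feeds the load estimate, the sketch below applies the simplified fenestration gain q = SHGC × A × I to the hour-10 row of Table 1; in the full RTS method these gains would additionally be split into radiant and convective parts and delayed by the radiant time series. The SHGC and window area are assumed values.

# Hour-10 row of Table 1 (Denver), total irradiance in Btu/(h.ft2) by orientation.
irradiance_h10 = {"N": 25.67, "NE": 26.11, "E": 112.06, "SE": 194.07,
                  "S": 176.40, "SW": 71.95, "W": 25.67, "NW": 25.67}

SHGC = 0.40          # assumed glazing solar heat gain coefficient
window_area = 200.0  # assumed ft2 of glazing on the wall being tested

for orientation, irradiance in irradiance_h10.items():
    q = SHGC * window_area * irradiance  # instantaneous gain through the glazing, Btu/h
    print(f"{orientation}: {q:,.0f} Btu/h")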
For the demonstration of the calculation procedure, consider a medium sized office
located in the city of Denver. The outer shell has dimensions of 120' by 80' and
consists of 3 floors of 10' each. Two possible orientations are depicted in Figure 5.
As can be seen from Figure 6 and Table 1, the maximum heat gain occurs at the south
face and the minimum heat gain on the north [the graph only shows the direct beam
component of solar radiation].
adding the internal heat gains, the plugin displays the result within the same window.
Since the calculations consist of only multiplications, the simulation time is on the
order of seconds. Though the results may not be as accurate as those obtained using
higher fidelity software packages, they give the designer an idea of whether a change
made to the design would improve the building or not. He/she can then select
the top five designs and send them for further analysis in more sophisticated software
packages.
Conclusions
This paper has presented the methods by which a light-weight building HVAC energy
model is being developed from the ASHRAE RTS method and integrated into a
SketchUp® plugin. The goal of the plugin is not to develop a calculation model, but
to develop a new workflow for an architect that lets him/her make better informed
design decisions. Once some preliminary designs are selected using this plugin, they
can be run in more standardized and accurate software packages to reach a final
decision. This toolset has advantages over the current state of the art in building
HVAC system modeling tools in that it has fewer information requirements and lower
computational time requirements, and presents the results within the CAD
environment.
Future work will concentrate on integrating other aspects of the energy consumption
of the entire building lifecycle into the plugin. Through integration of materials'
embedded energy calculations, HVAC energy consumption, lighting energy
consumption (daylight utilization), water consumption, and end-of-life, a building
carbon footprint metric tool can be developed.
References
ASHRAE (2013) Handbook – Fundamentals
Autodesk Green Building Studio Features, http://www.autodesk.com/products/green-
building-studio/overview
Cofaigh E. O., Fitzgerald E., Alcock R., McNicholl A., Peltonen V., Marucco A.
(1999) A green Vitruvius - principles and practice of sustainable architecture
design. James & James (Science Publishers) Ltd., London
Hasegawa, T. (2002). “Policy Instruments for Environmentally Sustainable
Buildings,” Proc., CIB/iiSBE Int. Conf. on Sustainable Building, EcoBuild, Oslo,
Norway
Maile T., Fischer M., Bazjanac V. (2007). “Building Energy Performance Simulation
Tools - a Life-Cycle and Interoperable Perspective” - CIFE Working Paper
#WP107 December 2007
Malmqvist, T. et al. (2009) "Life cycle assessment in buildings: The ENSLIC
simplified method and guidelines” Energy, Volume 36, Issue 4, April 2011, Pages
1900-1907
NSRDB TMY3 data compiled by National Renewable Energy Laboratory (NREL)
U.S. Department of Energy Commercial Reference Building Models of the National
Building Stock (2011) Technical report, NREL/TP-5500-46861, February 2011
Abstract
INTRODUCTION
Air conditioners are used for cooling or heating air in buildings and houses. It
is generally known that air temperature is not uniform and varies from place to place
in the same office or room. Monitoring the spatial thermal environment would help in
conceiving effective countermeasures to such temperature variation. For example, if a
certain place is hot and others are cool in a room, one countermeasure may be to
deploy electric fans to mix air of different temperatures. The thermal environment
can be understood by either numerical analysis simulations or monitoring.
Although the thermal environment can be predicted for various conditions by
changing parameters in numerical analysis, high-level professional knowledge and
expensive analysis software are necessary to execute simulations. In addition, it takes
much time and effort to make a 3D model suitable for analysis, and difficulty exists
in verifying the simulated result.
On the other hand, grasping the thermal environment by monitoring
temperatures enables us to compile a database of actual conditions, so it is possible to
obtain real-time information. Recently, it has become easier to monitor temperatures
using a Wireless Sensor Network (WSN) thanks to technological improvements in
wireless communication devices and miniaturized, power-saving sensors. In the
future, it is expected that various types of information will be obtained and utilized by
adding more sensors to the WSN. However, sensing data is usually displayed in charts
or tables, where users find it difficult to know the corresponding relationships between
sensors and their locations. Thus, when a countermeasure to make the temperature
uniform in a room is performed, it takes time to check whether the employed
countermeasure works.
One method to represent sensor data in a 3D environment is to display
the data of temperature sensors in a 3D model. However, it takes time and effort to
create a 3D building model. Meanwhile, Augmented Reality (AR) has been drawing
attention recently. AR is a technology that adds information to the actual
environment by overlaying digital information such as Computer Graphics (CG) on
video camera images. AR can keep the model overlaid at the designated position in
the video images and follow changing perspectives by automatically estimating the
location and pose of the camera from the video images in real time. The model to be
overlaid can be updated in real time so that time consistency is kept with the
real world. One of the advantages of AR is that the video camera can be moved
during real-time operation thanks to the registration techniques.
In this research, a system was developed which visualizes, with AR
technology, the information obtained in real time by monitoring the indoor thermal
environment using a WSN. This system allows users to observe the thermal environment
in a 3D space, in order for them to intuitively grasp the updated distribution of
temperature and humidity. As a result, the effectiveness of the measures conducted
can be seen, and more effective measures than conventional ones can be devised. The
expected users of this system are workers at an office, residents of a house, students
in a classroom, etc. The feasibility of this system as an environmental
improvement method was studied through an experiment.
PREVIOUS RESEARCH
and monitoring its resolution based on the analysis of the measures. By indicating the
result of the air flow analysis using VR before and after taking measures, it is possible
to correctly understand and easily grasp the 3D behavior.
received data in its own SQLite database and sends it to a client computer via wireless
LAN.
The server computer and client computer communicated using XML
(Extensible Markup Language); the client computer parsed the XML and
extracted only the necessary information.
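A minimal sketch of that exchange in Python (the tag names below are illustrative assumptions, not the system's actual schema):

import xml.etree.ElementTree as ET

# Example payload as the server might send it over the wireless LAN (invented schema).
payload = """
<sensors>
  <sensor id="12"><temperature>26.4</temperature><humidity>41.0</humidity></sensor>
  <sensor id="13"><temperature>24.1</temperature><humidity>45.5</humidity></sensor>
</sensors>
"""

# Parse the XML and keep only the necessary information: sensor id -> (temp, humidity).
readings = {}
for node in ET.fromstring(payload).iter("sensor"):
    readings[node.get("id")] = (float(node.findtext("temperature")),
                                float(node.findtext("humidity")))
print(readings)  # {'12': (26.4, 41.0), '13': (24.1, 45.5)}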
The method mentioned in the previous chapter (Hatori et al., 2013) was used
as the adjustment method. The markers are sized so that each marker can be detected,
as the adjustment accuracy does not depend on the relative sizes of the markers. Four or
more different artificial markers were set on the same level. An arbitrary 3D coordinate
system is set in the indoor space to obtain the relative coordinate values between the
standard marker and each marker. When the system is on and four or more markers
are recognized, the model can be overlaid. When the markers reflect
light and cannot be detected, the images from the video camera are converted to binary
images and displayed, so that the detection status can be confirmed and the
threshold value changed as needed.
The relationship between R, G, B rates (0.0 - 1.0) and each item of thermal data was
defined. The relationship between temperature and R, G, B rates is shown in Table 1.
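Table 1 itself falls on the preceding page, so the sketch below only illustrates the kind of piecewise mapping such a table defines, with assumed breakpoint temperatures (cool readings render blue, hot readings render red):

def temperature_to_rgb(t_c, t_min=20.0, t_max=30.0):
    """Map a temperature [deg C] onto (R, G, B) rates in 0.0-1.0 (assumed breakpoints)."""
    x = min(max((t_c - t_min) / (t_max - t_min), 0.0), 1.0)  # normalize to 0..1
    if x < 0.5:                       # blue -> green over the cool half
        return (0.0, 2.0 * x, 1.0 - 2.0 * x)
    return (2.0 * (x - 0.5), 1.0 - 2.0 * (x - 0.5), 0.0)      # green -> red over the hot half

print(temperature_to_rgb(22.0))  # bluish
print(temperature_to_rgb(29.0))  # reddish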
EXPERIMENT
prepared. In this research, regular markers with internal patterns containing the
letters "me," "so," "po," "ta," "mi," "a," "bun," and "mei," written in both Japanese
Hiragana and Kanji in the Kozuka Gothic Pro font, were used. The markers measured
350 x 350 mm each. To enhance detection in the actual environment, white frames
were created around them.
In this research, NeoMote by Sumitomo Precision Products Co., Ltd. was used
to build the WSN. In the experiment, temperature measurements were conducted at 34
different points. Sensors were hung in the air to measure the temperature of the interspace.
Discussion
The system developed made it possible to grasp the temperature distribution in
an indoor space intuitively, and also to observe its changes in real time. By
observing the process of the changes, it was possible to understand the results of each
measure immediately, so that comparisons between measures could be conducted
easily. It is expected that these results will contribute to creating a better
environment, because users can understand the current status at once and can
come up with various types of measures on the spot.
Obviously, the snapshots in Figure 4 do not show all sensors. Other sensors
are not in the field of view in this case. The user of the system can move the video
camera to see the condition of the other sensors.
CONCLUSIONS
In this research, a system was developed that allows users to intuitively grasp
the indoor thermal environment by monitoring it with a WSN and compiling the data
into a database. An improvement measure for an indoor environment was also
conducted using this system.
The conclusions of this research are summarized as follows:
A new visualization system was built which displays, by AR in the actual indoor
space, the data measured and collected via the Wireless Sensor Network.
By observing the indoor thermal environment with this system, the changes
resulting from attempts to improve the environment could be observed, yielding an
effective environmental improvement method.
REFERENCES
Hatori, F., Yabuki, N., Komori, E., and Fukuda, T. (2013). “Application of the
augmented reality technique using multiple markers to construction sites.”
Journal of Japan Society of Civil Engineers, Ser. F3 (Civil Engineering
Informatics), 69 (2), I_24-I_33.
Ota, N., Yabuki, N., and Fukuda, T. (2010). “Development of an accurate positioning
method for augmented reality using multiple markers.” Proceedings of the
13th International Conference on Computing in Civil and Building
Engineering (ICCCBE), Nottingham, UK, Paper 4, Memory, 1-6.
Sato, Y., Ohguro, M., Nagataki, Y., and Morikawa, Y. (2009). “Development of
advanced VR system Hybridvision.” Technical center report of Taisei Corp.,
No. 42.
Yabuki, N., Furubayashi, S., Hamada, Y., and Fukuda, T. (2012). “Collaborative
Visualization of Environmental Simulation Result and Sensing Data Using
Augmented Reality.” Proceedings of the International Conference on
Cooperative Design, Visualization, and Engineering (CDVE 2012), 227-230.
Abstract
INTRODUCTION
Outage and the 2008 China Winter Storm. Therefore, the need to build cities with
greater resilience has been widely recognized in both academia and practice.
However, there is no universally accepted definition of the concept of 'resilience'
in urban studies (Francis and Bekera, 2014). Throughout the remainder of this
paper, 'resilience' refers to the overall response of CI systems in the short term after
disturbances, rather than to long-term recoverability.
Despite the recognition of the importance of urban resilience and the
active research seen in the past decade, the following seemingly basic questions
still remain daunting: how to model the complex CI interdependency, and how to
systematically analyze the resilience of interdependent CI systems. To address
these challenges, this paper proposes a conceptual framework. The framework is
composed of a multilayer network model for representing technological networks
of CI systems, and a targeted attack-based approach for resilience analysis.
LITERATURE REVIEW
CONCEPTUAL FRAMEWORK
focus of this paper. They are selected because of their typical technological
network structure. Network-based modeling is often used for these networks.
Criticality measurement
Measuring node criticality is an important step in interdependency
modeling and resilience analysis. Prior research concerning network-based
modeling often uses the degree centrality of nodes to measure criticality (Newman,
2003). However, in a directed network a node has both an out-degree and an in-degree,
and link directions make a significant difference to criticality
measurement (Wang et al., 2012b). In terms of CI interdependency, out-degree
represents how many adjacent facilities a facility supports, while in-degree
represents how many adjacent facilities it relies on for support. Thus,
out-degree reflects more about the importance of a facility to a network. The
simple sum of out- and in-degrees, in other words degree centrality, cannot fully
reflect the criticality of a node. Therefore, in this network model, the PageRank (PR)
algorithm is introduced to calculate node criticality. PR is an iterative algorithm
first used by Google search to measure website criticality. Note that for websites, a
link is a hyperlink from one page to another, so the link direction represents 'being
supported' instead of 'supporting'. Therefore, in order to apply the PR algorithm to
a CI network, all link directions in the CI network should first be reversed.
Each node in the directed network can be assigned a numerical weight
(PR value). The basic idea of the PR algorithm is that a more important node is
likely to receive more links from other nodes, which is also applicable in the CI
network case. Through the PR algorithm, each node in a layer is assigned a PR
value, which is its criticality, after an iterative computation. The PR value lays
the basis for the targeted removal-based resilience analysis discussed later in the
paper.
The implementation of the PR algorithm in calculating node criticality is
explained as follows.
1. Set the adjacency matrix of the layer (with all link directions reversed), whose
element a_ij = 1 if node j points to node i, and 0 otherwise;
2. Set initial PR values PR_i(0) (i = 1, 2, ..., N) for all nodes in the layer, and make
sure that Σ_i PR_i(0) = 1;
3. Set the damping factor d as the recommended 0.85 (Brin and Page, 1998). In
Iteration Step t, divide every node's PR value at Step (t-1) equally and give
shares to the nodes it points to. Correct every node's PR value as the sum of
the shares it receives multiplied by 0.85. The remaining 0.15 value is equally
distributed to all the nodes. Thus,
PR_i(t) = 0.15/N + 0.85 × Σ_{j ∈ B(i)} PR_j(t-1)/L_j
(where N is the number of all nodes in a layer, B(i) is the set of nodes pointing
to node i, and L_j is the number of nodes that node j points to);
4. Iterate until convergence.
It should be noted that the selection of initial PR values does not influence
the convergence result, but does influence computational time (Page et al., 1999).
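Steps 1-4 above can be condensed into a few lines. Below is a minimal Python sketch, assuming the links are supplied in the already-reversed direction (an edge (u, v) means u depends on v, so v accumulates criticality); the uniform redistribution from dangling nodes is a standard convention assumed here, and the example network is invented.

def pagerank(nodes, links, d=0.85, tol=1e-8):
    """links: list of (src, dst) pairs in the reversed direction; src passes PR shares to dst."""
    n = len(nodes)
    out = {v: [dst for src, dst in links if src == v] for v in nodes}
    pr = {v: 1.0 / n for v in nodes}                     # Step 2: uniform initial PR, sums to 1
    while True:
        new = {v: (1.0 - d) / n for v in nodes}          # the 0.15/N share given to every node
        for v in nodes:
            if out[v]:                                   # Step 3: divide PR equally among targets
                share = d * pr[v] / len(out[v])
                for dst in out[v]:
                    new[dst] += share
            else:                                        # dangling node: spread its share evenly
                for u in nodes:
                    new[u] += d * pr[v] / n
        if max(abs(new[v] - pr[v]) for v in nodes) < tol:  # Step 4: iterate until convergence
            return new
        pr = new

nodes = ["substation", "pump", "router", "base_station"]           # invented CI layer
links = [("pump", "substation"), ("router", "substation"), ("base_station", "router")]
print(pagerank(nodes, links))   # the substation, relied upon by others, scores highest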
Resilience analysis
Network response measurement
In prior research, characteristic path length has been widely regarded as an
important metric to measure network response (Dueñas-Osorio et al., 2007;
Ouyang et al., 2009), and changes to its value indicate changes in global
efficiency due to disturbances. Characteristic path length is the harmonic mean of
all geodesic paths between any pairs of nodes, and can be defined as follows:
L = N(N-1) / Σ_{i≠j} (1/d_ij),
where N is the number of nodes in a layer, including the two virtual nodes, and
d_ij is the geodesic path length from node i to node j. For a directed network, the
shortest path between two nodes is direction sensitive.
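A short sketch of this metric for a directed, unweighted layer using breadth-first search follows; unreachable pairs contribute 1/∞ = 0 to the sum, which is what keeps the harmonic mean defined when disturbances disconnect parts of the network. The example adjacency is invented.

from collections import deque

def characteristic_path_length(adj):
    """adj: dict mapping node -> list of successor nodes (directed, unweighted)."""
    nodes = list(adj)
    inv_sum = 0.0
    for src in nodes:
        dist = {src: 0}
        queue = deque([src])
        while queue:                     # BFS respecting link direction
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        inv_sum += sum(1.0 / d for n, d in dist.items() if n != src)
    n = len(nodes)
    return n * (n - 1) / inv_sum if inv_sum else float("inf")

adj = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["a"]}   # invented layer
print(characteristic_path_length(adj))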
Targeted attack-based resilience analysis
To investigate the potential response of CI to hypothetical disturbance, a
common procedure is to first determine the criticality of every infrastructure asset
prior to disturbances based on certain criteria (Grubesic et al., 2008). Based on the
criticality ranking, targeted node removal is used to mimic disturbances to CI and
examine the responses of CI when exposed to the disturbances. Prior to resilience
analysis, disturbances are first classified into two types based on their propagation
manner: functional disturbances and geographical disturbances.
Functional disturbances refer to disturbances whose impact propagates
to another layer via functional interdependency only. Typical functional
disturbances include mechanical breakdowns, material shortages, operational
mistakes at infrastructure assets, and small-range man-made damage, such as fires
of small scale. When a functional disturbance occurs, the infrastructure asset
loses its function, either partially or completely, and the subsequent impact
propagates within the CI system and/or to another system via intra- and inter-links.
Geographical disturbances, on the contrary, refer to disturbances whose impacts
propagate first via geographical proximity and then via functional interdependency.
Typical geographical disturbances include climate change, natural disasters, and
man-made damage of large scale (e.g. explosions).
ACKNOWLEDGMENTS
REFERENCES
Brin, S., Page, L., 1998. The anatomy of a large-scale hypertextual Web search
engine. Computer networks and ISDN systems 30, 107-117.
Buldyrev, S.V., Parshani, R., Paul, G., Stanley, H.E., Havlin, S., 2010.
Catastrophic cascade of failures in interdependent networks. Nature 464,
1025-1028.
Dueñas-Osorio, L., Craig, J.I., Goodno, B.J., Bostrom, A., 2007. Interdependent
response of networked systems. Journal of Infrastructure Systems 13, 185-
194.
Dueñas-Osorio, L., Vemuru, S.M., 2009. Cascading failures in complex
infrastructure systems. Structural safety 31, 157-167.
Francis, R., Bekera, B., 2014. A metric and frameworks for resilience analysis of
engineered and infrastructure systems. Reliability Engineering & System
Safety 121, 90-103.
Grubesic, T.H., Matisziw, T.C., Murray, A.T., Snediker, D., 2008. Comparative
approaches for assessing network vulnerability. International Regional
Science Review 31, 88-112.
Johansson, J., Hassel, H., 2010. An approach for modelling interdependent
infrastructures in the context of vulnerability analysis. Reliability Engineering
& System Safety 95, 1335-1344.
Marsh, R. T., 1997. Critical foundations: Protecting America’s infrastructure.
President's Commission on Critical Infrastructure Protection.
Newman, M.E., 2003. The structure and function of complex networks. SIAM
review 45, 167-256.
Oh, E.H., Deshmukh, A., Hastak, M., 2012. Criticality Assessment of Lifeline
Infrastructure for Enhancing Disaster Response. Natural Hazards Review 14,
98-107.
Ouyang, M., Hong, L., Mao, Z.-J., Yu, M.-H., Qi, F., 2009. A methodological
approach to analyze vulnerability of interdependent infrastructures.
Simulation Modelling Practice and Theory 17, 817-828.
Page, L., Brin, S., Motwani, R., Winograd, T., 1999. The PageRank citation
ranking: Bringing order to the web.
Rinaldi, S.M., Peerenboom, J.P., Kelly, T.K., 2001. Identifying, understanding,
and analyzing critical infrastructure interdependencies. Control Systems,
IEEE 21, 11-25.
Thomas, W., North, M., Macal, C., Peerenboom, J., 2003. From physics to
finances: Complex adaptive systems representation of infrastructure
interdependencies. Naval Surface Warfare Center Technical Digest, 58-67.
Wang, S., Hong, L., Chen, X., 2012a. Vulnerability analysis of interdependent
infrastructure systems: A methodological framework. Physica A: Statistical
Mechanics and its Applications 391, 3323-3335.
Wang, X., Li, X., Chen, G., 2012b. Network Science: An Introduction. Higher
Education Press Beijing.
INTRODUCTION
Construction project organizations are complex systems-of-systems consisting
of interconnected networks of different node entities: human agents, resources,
information, and tasks (Zhu and Mostafavi, 2014). The complex and dynamic
interactions between these node entities affect the ability of project organizations to
cope with uncertainties. Similar to other complex networks, the ability of construction
projects to cope with uncertainty can be understood based on investigation of the
structural properties of the network topology. Over the past two decades, social
network analysis (SNA) has been used for better understanding of the performance in
construction projects by taking social elements of project organizations into
consideration (Loosemore, 1998; Pryke, 2004; Chinowsky and Victor, 2008). Despite
the efforts made in investigating construction projects using SNA, there are certain
limitations in the existing studies. The major limitations are twofold: (i) the lack of
consideration of different types of entities and relationships in construction project
networks. SNA mainly focuses on the interactions between human agents. Other node
entities in construction project organizations (i.e., resource, information, and task) and
their interdependencies (e.g., interdependencies between human agents and resources,
or information and task) are not fully considered; (ii) the lack of consideration of the
impact of uncertainty. The unpredictability of performance in project organizations is
mainly due to their complexity as well as the existing uncertainties (Jin and Levitt
1996). Uncertainty-induced perturbations would disrupt the original topology of the
network, and subsequently, affect the project performance. SNA, as a descriptive
approach, has failed to capture the impacts of uncertainty on the performance of project
organizations. Hence, an integrated framework for analysis of complex interactions and
uncertainty in construction projects is missing in the existing literature. This paper
proposes a multi-node, multi-link and dynamic network simulation framework to
address this important gap in knowledge.
Network and node metrics: A large number of metrics can be used to assess the
node and network features and properties in meta-networks (Carley and Jeff, 2004).
The use of appropriate metrics depends on the objective of analysis. For example, for
analysis of performance, a significant metric is task completion. The value of the task
completion metric ranges from 0 to 1. It measures the percentage of tasks that can be
completed by the agent assigned to them, based on whether the agents have the requisite
information and resource to do the tasks (Carley and Jeff, 2004). Equations (1)-(3)
show the knowledge-based task completion calculation steps when all the individual
networks are represented as binary matrices. Resource-based task completion is
calculated in a similar way, by replacing matrix AI with AR and matrix IT with RT.
The overall task completion is then obtained as the average of information-based task
completion and resource-based task completion.
Need = [(AT × AI) − IT]    (1)
S = { i | 1 ≤ i ≤ |T|, ∃ j: Need(i, j) < 0 }    (2)
Knowledge-based task completion = (|T| − |S|) / |T|    (3)
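Equations (1)-(3) can be exercised directly on small binary matrices. The numpy sketch below is a hedged reading of them: the transposes are an assumption made so the dimensions line up (AT: agents × tasks, AI: agents × information, IT: information × tasks), and the example matrices are invented.

import numpy as np

AT = np.array([[1, 0], [0, 1]])           # agent-task assignment (2 agents, 2 tasks)
AI = np.array([[1, 1, 0], [0, 1, 0]])     # agent-information possession (3 info items)
IT = np.array([[1, 0], [1, 1], [0, 1]])   # information-task requirements

need = AT.T @ AI - IT.T                   # Eq. (1): info available minus info required, per task
S = {i for i in range(need.shape[0]) if (need[i] < 0).any()}   # Eq. (2): tasks missing some info
completion = (need.shape[0] - len(S)) / need.shape[0]          # Eq. (3)
print(S, completion)                       # here task 1 lacks one info item -> completion 0.5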
NUMERICAL EXAMPLE
A numerical example of a tunneling project is used here to illustrate the
application of the proposed framework. In this example, the New Austrian Tunneling
Method (NATM) is adopted. The major significance of this approach is dynamic design
based on rock classification as well as deformation observed during construction. By
adjusting the initial design during construction, a tunnel can be constructed at a
reasonable cost compared with the conventional method, which uses the suspected
worst rock condition for design. To achieve this purpose, in NATM, rock samples are
first collected by geologists during the early stage of the design process. Rock condition
is tested and analyzed in the laboratory, and the classification of rock mass type is
determined by comparing the test result with the rock quality designation index. Using
the rock type data, the designer conducts the initial design. According to the initial
design, the excavation crew conducts excavation into the tunnel face, followed by loading
explosives and blasting. The safety supervisor does the safety inspection and issues the
safety approval before the blasting. After the excavation crew finishes the excavation
process, the support installation crew continues the work with the initial lining, which
includes applying shotcrete and installing the initial support (e.g., rockbolts, lattice
girders or wire mesh). Once the initial support elements are installed, measurement
instrumentation is installed to observe the behavior of the rock. Geologists undertake the
observation and report the rock deformation to the designer. The designer then decides
whether a design change is needed. If not, a final lining process composed of traditional
reinforced concrete is conducted; otherwise, the designer revises the initial design for
both the initial lining and final lining. In this case, the support installation crew will use
the revised design to implement the initial and final lining. The whole tunneling project
is constructed in sections. At the end of each section, the risk manager reviews the
initial design, the revised design, and the rock deformation reports to assess the risks,
and makes a decision on the step length for excavation of the next section. For example,
if a relatively large deformation is observed, the risk manager will decrease the step
length to eliminate the risk of collapse. Using the proposed framework, the complex
interactions in this project can be conceptualized as a meta-network. Table 3
summarizes the node entities in the project organization’s meta-network.
Table 3. Basic Entities in the Tunneling Project Case.
Agent: geologists, designer, excavation crew, support installation crew, risk manager, safety supervisor
Information: rock condition, rock quality designation index, safety approval, initial design, rock deformation, revised design, step length
Resource: excavator, explosive, loader, truck, boomer, shotcrete machine, initial support, concrete, reinforcement, measurement instrument, electric power system
Task: laboratory test, initial design, excavation, safety inspection, blasting, mucking, apply shotcrete, install initial support, observe deformation, revise design, final lining, risk assessment
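As a concrete, hedged illustration of how these entities can be held in code, the sketch below stores a small subset of Table 3's nodes as typed sets plus directed, typed links; the specific link instances and type labels are assumptions, and the density formula mirrors the undirected-style value reported under Results (112 / (36·35/2) ≈ 0.178).

meta_nodes = {
    "agent": ["geologists", "designer", "excavation crew"],
    "information": ["rock condition", "initial design"],
    "resource": ["excavator", "electric power system"],
    "task": ["laboratory test", "excavation"],
}

meta_links = [  # (source, target, link type) -- illustrative subset only
    ("geologists", "rock condition", "agent-information"),
    ("designer", "initial design", "agent-information"),
    ("excavation crew", "excavation", "agent-task"),
    ("excavator", "excavation", "resource-task"),
    ("initial design", "excavation", "information-task"),
]

n = sum(len(v) for v in meta_nodes.values())
density = len(meta_links) / (n * (n - 1) / 2)   # undirected-style density, as reported
print(n, f"{density:.3f}")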
This project is exposed to various uncertain events that will cause perturbations
in the node entities, which would eventually affect the performance of the project.
Different scenarios of uncertain events were simulated to evaluate the vulnerability of
the project organization. Events were defined based on two features: perturbation effect
(Table 2) and likelihood of occurrence. In the simulation scenarios for this case study,
three levels of uncertainty were defined: low, medium, and high. Events of the three
levels of uncertainty have 10%, 20%, and 50% likelihood of occurrence, respectively.
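A sketch of how such scenarios can be sampled is shown below: each uncertain event fires with its assigned likelihood and perturbs a target node over many Monte Carlo trials. The event-to-node mapping and the completion check are placeholders for illustration, not the study's actual simulation model.

import random

EVENTS = [  # (target node, likelihood of occurrence per scenario) -- hypothetical mapping
    ("electric power system", 0.50),   # high uncertainty
    ("designer", 0.20),                # medium uncertainty
    ("excavator", 0.10),               # low uncertainty
]

def run_trials(trials=10_000, seed=0):
    rng = random.Random(seed)
    unperturbed = 0
    for _ in range(trials):
        removed = {node for node, p in EVENTS if rng.random() < p}
        # stand-in for recomputing task completion on the perturbed meta-network
        if not removed:
            unperturbed += 1
    return unperturbed / trials

print(f"P(no perturbation at all) ~ {run_trials():.3f}")   # ~0.5 * 0.8 * 0.9 = 0.36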
Results
First, the meta-network (i.e., without perturbation) was analyzed and the key
node entities were identified. Then, the impacts of random perturbation on the
performance of the project were investigated.
Analysis of the meta-network: The project's meta-network is composed of 36
nodes (of four different types) and 112 links (of ten different types). The density of the
meta-network as a whole is 0.178, which implies a relatively low level of
connectedness. The overall task completion is 1, which means that under no perturbations
the project organization is capable of completing all the tasks. The top five critical
agent, information and resource nodes were identified and are shown in Figure 2. The
criticality of the nodes is evaluated by their total degree centrality values, which are
determined based on the connections each node has in the meta-network. The
designer and excavation crew are the most critical agent nodes in the project, since they
undertake multiple tasks using different resources and information in the project. The
initial and revised designs are two critical information nodes in the project. This is
because the design information in this adaptive construction project is connected to and
used by many other node entities (e.g., designer, rock deformation, excavator, and
initial support). The electric power system is the most critical resource node. It is used
by the geologists, excavation crew and support installation crew in different tasks such
as excavation, safety inspection, and rock deformation observation. Critical nodes
in the meta-network are crucial for project success, and thus any uncertainty-
induced perturbations in the critical nodes could lead to significant impacts on
project performance. This information is important for developing mitigation plans to
eliminate or reduce the likelihood of perturbations in the critical nodes.
CONCLUSION
In this paper, a dynamic network analysis and simulation framework is
proposed. The proposed framework is based on a multi-node, multi-link meta-network
structure that facilitates considering different types of entities and links in construction
projects. The framework provides a novel approach for quantitative and
probabilistic assessment of project performance under uncertainty based on
consideration of the dynamic interactions between agents, information and resources.
The results of the analysis enable identification of: (i) critical node entities; (ii)
significant uncertain events; and (iii) the vulnerability of project performance to
different uncertain events, individually and collectively. This information would be
critical in identifying effective mitigation actions and prioritizing them to optimize
the allocation of resources for reducing the impacts of uncertain events on project
performance. In future work, the project organization's speed and
capability of returning to a new steady state following perturbations will be considered.
This would ultimately foster a paradigm shift toward proactive assessment and
management of project performance using an integrated approach.
REFERENCES
ABSTRACT
Given the uniqueness and diversity of constraints from each construction site,
tower crane planning can be a challenging task, especially in terms of collecting and
presenting multiple sets of information with varying levels of detail. Several research
studies have been carried out to investigate information categories necessary for
tower crane planning. This paper investigates challenges related to information
representation of tower crane planning in a computational environment. Also, a site-
specific information representation schema for tower crane planning is presented.
Requirements and information resources for each category of information are
specified with the objective of visualizing and automating tower crane planning. This
information representation schema was tested on a building construction project and
is subject to continuous refinement based on additional case studies of building
construction projects that have different characteristics and constraints. Uniqueness
from individual projects is expected; however, critical constraints and characteristics
are commonly shared by the entire building construction field.
INTRODUCTION
For construction site layout planning activities, among the many categories and
resources of information concerned (Zolfagharian and Irizarry 2014), there are
three critical components: temporary facility layout planning (Elbeltagi et al. 2004;
Razavialavi et al. 2014), material handling (Pheng and Hui 1999), and construction
equipment planning, where optimization and automation are highly desired (David et
al. 2009). Effective site layout planning has been recognized as an important factor in
improving project safety and performance (Hornaday et al. 1993). Also, the tower crane is
typically one of the most expensive pieces of equipment on a construction site and is central to
the majority of structure-related material handling (e.g., equipment, tools and material
transportation) as well. Therefore, tower crane planning is key to controlling the pace
of construction activities (Gray and Little, 1985).
The state-of-practice approach to tower crane planning is highly ad hoc and
relies heavily on planners' "cognitive capabilities" (Tommelein et al. 1991) and
professional experience. Generally speaking, tower crane planning involves defining
and refining both the configuration and the location of tower cranes for different
construction phases and for each work zone. Given the dynamic nature and the
considerable number of unique constraints of each construction site, tower crane
planning requires a variety of information to derive an acceptable plan. Typically,
preliminary tower crane planning requires input from the owner organization and
engineering consultants, if applicable (Hornaday et al. 1993), while the major
responsibility rests with the general contractor (specifically, the project manager and
superintendents).
For building construction projects, tower crane planning goes hand in hand
with other aspects of site layout planning to achieve optimal cost, schedule, and
safety performance. In other words, tower crane planning requires much iteration
and interaction with the layout planning of other jobsite facilities (e.g., size,
location). A tower crane plan can fail in many ways: partially, due to safety
concerns, or as a whole, against the three criteria mentioned above.
Tower crane planning lacks a commonly agreed definition of the desired optimum.
One reason is that each construction site has its own challenges that are hard to
generalize. A second reason is that there are many uncertainties during the course
of the construction process, such as material delivery delays, severe weather
conditions, and human resource issues. Any of these factors can undermine the
project in terms of cost, schedule, and safety performance. Hence, evaluating the
success of a tower crane plan is difficult in practice.
Despite the dynamism and uncertainty of construction projects, lift demand
analysis, which is the fundamental input for tower crane planning, should be
investigated and developed. Well-structured lift demand analysis can relieve
planners of tedious and error-prone quantity takeoff and help them focus
on "means and methods" and many other constraints.
However, existing research has not explicitly explored a framework for
synthesizing the information necessary for efficient building information modeling (BIM)
implementation (e.g., 3D modeling, 4D simulation, and tower-crane-related constraint
detection). Information resources, the level of detail for each information category,
and modeling methods have not been thoroughly understood.
BACKGROUND RESEARCH
During the last few decades, many research studies have focused
on developing mathematical models to improve decision-making tools for tower
crane planning. For example, Rodriguez-Ramos and Francis (1983) developed an
algorithm that captures trolley movement and boom swing to model material
transportation activities; the associated transportation cost is then used to optimize
the location of a single tower crane. This means that tower crane location can
be optimized according to quantitative metrics. Afterwards, many research studies
RESEARCH APPROACH
CASE STUDY
The developed case study is a 430,000 sq. ft. teaching and research facility on
a university campus with a total project cost of $310 million. The project includes
two major phases: demolition of existing facilities and construction of the new
building (at the same location). The site is complex and congested. Access to the
jobsite is limited in all directions for logistics, especially for tractor-trailer trucks. To
accommodate these site-specific constraints and the structural configuration, the
superintendent decided to use two stand-alone hammerhead tower cranes from the
general contractor's in-house inventory. Due to availability issues, the two tower
cranes are of different types.
employed to make sure that the ranges of both tower cranes cover all lift pick-up
and set-down locations, and to avoid lifted loads swinging over major work zones,
reducing safety risks.
Detailed tower crane planning focuses on refining tower crane location and
configuration with respect to spatial conflict avoidance, tower crane structural and
foundation design, the tower crane service plan (e.g., erection, dismantling, alteration,
and maintenance), and constructability. Figure 2 is a flowchart of the detailed planning
process. When this planning step is finished, tower crane planning is complete in
terms of location selection and configuration refinement. Further detailed sequencing
and scheduling of tower crane activities are determined in day-to-day tower crane
meetings.
Day-to-day tower crane meetings are held one day ahead of planned tower
crane-related activities to address specific activity sequencing, hook hour
assignment, and safety protocol compliance.
Also, critical lift information, one of the tower crane planning inputs, is
prepared by reviewing design documents and integrating previous experience and
engineering judgment. The comprehensiveness of such information rests on the
engineers' own judgment and risk. This process varies in approach and has not been
formalized for automation.
Lastly, the tower crane schedule cannot be updated automatically when the
project schedule changes; tower crane operation hours and related costs can hardly
be updated automatically either. Construction engineers and estimators have to work
together, on a case-by-case basis, to understand the time and cost consequences that
any project schedule change imposes on tower crane usage, and vice versa.
REFERENCES
Abstract
In order to meet the CO2 emission requirements of sustainable construction, both
energy performance and carbon accounting should be incorporated into
construction information. This study proposes an energy-enhanced BIM (eeBIM) framework that
integrates carbon finance theory. With integrated cost evaluation according to
carbon finance theory, the proposed eeBIM-based platform can help achieve lower
carbon emission cost while enhancing energy efficiency, and can further estimate the
carbon consumption of the building design during construction. The project team will thus
be able to evaluate overall sustainability and develop an optimized building
design solution at the design stage. In particular, this paper discusses the analysis of
existing sustainable building design systems through an intensive literature review, identifies
key challenges and gaps that remain unaddressed, and introduces the model proposed to
address them.
Keywords: eeBIM; Carbon finance; Cost evaluation; Energy efficiency; Sustainable
construction
INTRODUCTION
According to reports published by the Intergovernmental Panel on Climate Change
(IPCC), proposed policies focus on emissions targets as well as temperature
change: for example, limiting the global average temperature increase to 2 °C above
preindustrial levels (IPCC 2014) and reducing greenhouse gas (GHG) emissions to 80%
below 2005 levels by 2050 (White House 2013). Among the activities responsible for
carbon emissions and energy consumption, construction plays a significant role. In the
United States, residential and commercial buildings consume about 40% of the country's
primary energy and produce 20% of its carbon dioxide emissions (Stadel et al.
2011). In Hong Kong, with the development of the 'ten major infrastructure projects'
in the Policy Address 2007/08, the fuel-based carbon emissions during the construction
period led to widespread concern about the negative impacts of these projects on the
environment and on energy consumption (Wong et al. 2013). These facts show that
both energy performance and carbon accounting should be taken into consideration for
sustainable construction.
With the development of computational and monitoring technology, computers
and many software tools have been adopted in the architecture, engineering and construction
(AEC) industry. Building Information Modeling (BIM) represents the process of
developing and using a computer-generated model to simulate the planning, design,
construction, and operation of a facility (Azhar 2011). It can also act as a digital
database with comprehensive building information, work simulation, and schedule control
properties. Thus, BIM offers an effective and creative path toward sustainability. Mah et al.
(2011) aim to establish a baseline for carbon dioxide (CO2) emissions quantification
based on a BIM platform in the current residential construction process. Chen and Li (2012)
combine BIM with energy simulation tools to confirm the lifecycle cost and limit the
carbon emissions of a building during the construction and operation stages. Since the
construction industry in Ireland intends to meet high energy-saving
and 'nearly zero' standards, researchers plan to build an integrated BIM platform
to support a low-carbon energy future (McAuley et al. 2012). The McGraw-Hill Green BIM
Report (McGraw-Hill Construction 2010a) also indicated that BIM is a smart tool
that can assist in controlling and reducing carbon emissions through energy performance
analysis. Furthermore, several projects plan to propose new strategies based
on the integration of BIM and green building rating systems. For example, Wu
(2010) tried to achieve green building certification (such as LEED) and Motawa and Carter
(2013) improved post-occupancy evaluation processes using BIM to meet sustainable
construction goals.
To date, efforts have focused mainly on either improving energy efficiency or
reducing carbon-related cost through the adoption of BIM, but an integrated platform is
needed to meet both key requirements of sustainable construction. To reach this
objective, this paper focuses on integrating energy simulation and cost management
methods so that the project team can consistently evaluate energy
performance, verify that the design satisfies overall environmental requirements, and
optimize lifecycle energy performance estimation and decision-making.
LITERATURE REVIEW
The eeBIM platform
The 'Intelligent Use of Buildings' Energy Information (IntUBE) project, funded
by the European Commission through EU FP7, focused on the application of BIM to
sustainable design. In this project, Crosbie et al. (2009) noted that performing
energy simulation with BIM and energy simulation tools can consume
almost 50% of the project team's time. Given this time cost and the lack of
interoperability in energy simulation, Gökçe and Gökçe (2012) developed an
energy-enhanced BIM (eeBIM) platform. The eeBIM is an extended platform based on
BIM, built specifically to enhance energy efficiency. The eeBIM
integrates different information related to energy performance and creates a bridge model
between the BIM platform and energy simulation tools. It uses the Industry Foundation
Classes (IFC), an international standard, to exchange information conveniently.
Further, it incorporates sensor systems, which can enhance and complete the energy-related
database during the operation period (ibid.).
In detail, the eeBIM platform involves three steps: (1) Simplify the BIM model
into an energy-related BIM model. The data contained in BIM are
too rich for energy simulation, and the needs of the energy domain differ, so
a dedicated energy model must be produced for simulation; (2) Extend the original BIM
information library and link it with external data, since energy performance analysis
requires various additional computations and information; (3) Build a linking
bridge model between the BIM platform and the computational application models (the energy
simulation, energy monitoring, and cost models) and transfer
data between them.
In general, the eeBIM platform aims at creating an integrated environment for
extending energy-related information, achieving interoperability with energy
simulation tools, and providing a highly efficient working environment focused on the
energy field (Gökçe and Gökçe 2013).
The carbon finance theory
Carbon finance means putting a price on carbon. When pollution becomes more
expensive, sustainable construction becomes a way to decrease costs.
Carbon finance has proven to be an efficient tool for controlling greenhouse gases.
Two approaches are commonly used: the market-based approach and
command-and-control regulation (Feldman and Mellon 2011). In the market-based approach,
there are two methods to reduce carbon emissions: (1) cap and trade, and (2)
baseline and credit. In the cap-and-trade scheme, a maximum emission allowance is
set under the market scheme; any unused allowance can be traded freely in the
market, but if a project exceeds its cap, the additional allowances must be purchased
from the global carbon market. The European Emissions Trading Scheme (EU ETS)
is the largest carbon market based on this scheme. This market promotes the
development of a low-carbon economy worldwide and intends to support and
encourage effective policies and programs for reducing emissions (Stern 2007).
The baseline-and-credit scheme requires project owners to keep their
emissions under a baseline; any extra credits earned can likewise be traded
in the carbon market. The Clean Development Mechanism (CDM) created
by the Kyoto Protocol belongs to this scheme. Command-and-control
regulation, by contrast, relies mainly on government policies and guidance to reduce the
carbon emissions associated with human activities and operations (Bosi et al. 2010).
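As an illustration of the cap-and-trade mechanism described above, the following sketch prices a project's surplus or deficit of allowances. It is a minimal sketch, not from the paper; the cap, emission figures, and market price are illustrative assumptions.

```python
def cap_and_trade_cost(emissions_tCO2: float,
                       cap_tCO2: float,
                       price_per_tCO2: float) -> float:
    """Positive result: cost of buying extra allowances on the market.
    Negative result: revenue from selling the unused allowance."""
    return (emissions_tCO2 - cap_tCO2) * price_per_tCO2

# Example: a project emitting 12,000 tCO2 against a 10,000 tCO2 cap
# at an assumed market price of 25 $/tCO2 must buy allowances for $50,000.
print(cap_and_trade_cost(12_000, 10_000, 25.0))  # 50000.0
```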
Possibilities and Advantages of Carbon Cost Evaluation based on the eeBIM Platform
in Sustainable Construction
Since BIM is an integrated digital platform for architects, engineers, contractors,
owners, and facility managers, it contains a wealth of building information. Thus,
BIM can provide an intelligent library of multi-dimensional objects with
comprehensive properties (e.g., quantity and specification details). It is likewise a
promising platform for managing a low-carbon energy future (McAuley et al.
2012). Based on related research, emissions can be reduced by increasing energy
efficiency, changing demand, and adopting clean power, heat, and low- or zero-carbon
technologies. Among the aspects of sustainable construction, improving
energy performance is the most effective. However, a variety of
limitations remain in analyzing building energy performance with BIM. The primary
limitation is that most BIM applications still lack clear processes to transfer
information between BIM and energy simulation programs (GSA 2009), which can be
addressed by adopting the eeBIM platform. In addition, carbon finance theory (i.e.,
giving a price on carbon) has proven to be the most efficient way to lower carbon
emissions (Stern 2007).
A common global carbon price means that people pay
for their actions and activities, so this method economically motivates the adoption of
low-carbon technologies and improvements in energy efficiency (Bosi et al.
2010). To enable the project team to produce an energy-efficient design
solution that also satisfies the carbon cost constraint, this paper introduces carbon cost
evaluation based on the eeBIM platform.
Research Objectives
This study aims to develop a BIM platform that integrates the eeBIM framework
with carbon finance requirements: the proposed eeBIM + Carbon Finance Theory.
Based on this integrated platform, an extended sustainable building information library, an
extended carbon emission data library, and an extended carbon finance information
library will be created to enrich design solutions and improve design
plans, yielding energy-efficient designs that satisfy the carbon cost
constraint for sustainable construction.
RESEARCH FRAMEWORK
Simulation workflow
To achieve the objective of this study, the simulation workflow of the research
framework is designed as shown in Fig. 1, and the integrated BIM platform with extended
libraries is illustrated in Fig. 2. The process includes six steps:
1) Create a BIM model. Create a building information model and check whether it
is valid for simulation.
2) Create analysis models for energy simulation. Based on the BIM model, two
energy-enhanced models are produced: one using the traditional method, set as
a baseline for analyzing building energy performance; the other based on
the proposed eeBIM + Carbon Finance Theory and designed to support sustainable
construction. Since energy simulation models need information including construction,
operating schedules, equipment loads (lighting, equipment, etc.), heating, ventilating, and
air-conditioning (HVAC) systems, local weather data, and utility rates, both models
are checked in this step to confirm that all details are complete.
3) Provide alternative design plans aimed at the goals of sustainable construction.
From the extended sustainable building information library, data are provided
on the impacts of site location, building shape, envelope choices, building orientation,
alternative energy sources, low-carbon technologies, and so on. When the project team
selects information using the proposed eeBIM + Carbon Finance Theory,
they should follow three rules: (a) clarify the primary goal of the project and establish
the decision objectives; (b) specify the types and level of detail of information that
are reasonable and affordable in current practice; and (c) based on this library, compare
energy performance.
4) Compare the different preliminary models and confirm the design plan. There are two key
criteria for the decision: (a) energy usage and cost; and (b) the energy performance of the
design plan.
5) Conduct energy simulation and carbon cost evaluation. Using the extended
carbon emission data library and the extended carbon finance information library, energy
consumption and carbon cost are calculated. These are the two key decision-making
criteria for sustainable construction.
6) Compare and evaluate the proposed model. In the last step, check whether the proposed
model performs better than the baseline model and confirm that it is
satisfactory. A skeletal sketch of this comparison workflow is given below.
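The following sketch, under assumed energy and carbon-price figures, illustrates how steps 2, 5, and 6 could be wired together. The class, field names, and values are hypothetical placeholders rather than actual eeBIM tooling; in practice the energy figures would come from simulation and the factors from the extended libraries.

```python
from dataclasses import dataclass

@dataclass
class EnergyModel:
    name: str
    annual_energy_kwh: float          # step 2/5: output of energy simulation
    emission_factor: float = 0.5      # assumed kg CO2 per kWh consumed
    carbon_price: float = 0.025       # assumed $ per kg CO2 (carbon finance)

    def carbon_cost(self) -> float:
        # Step 5: carbon cost evaluation from the extended carbon
        # emission and carbon finance information libraries.
        return self.annual_energy_kwh * self.emission_factor * self.carbon_price

def proposed_beats_baseline(baseline: EnergyModel, proposed: EnergyModel) -> bool:
    # Step 6: accept the proposed design only if it improves on both
    # decision criteria, energy use and carbon cost.
    return (proposed.annual_energy_kwh < baseline.annual_energy_kwh
            and proposed.carbon_cost() < baseline.carbon_cost())

baseline = EnergyModel("traditional baseline", annual_energy_kwh=500_000)
proposed = EnergyModel("eeBIM + Carbon Finance", annual_energy_kwh=420_000)
print(proposed_beats_baseline(baseline, proposed))  # True
```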
CONCLUSIONS
The eeBIM platform integrating carbon finance theory (eeBIM + Carbon
Finance Theory) developed in this study takes both the CO2 emission requirements of
sustainable construction and improved energy performance into account in decision
making and construction information. First, this study adapted an eeBIM
platform, which can help build energy simulation models in a timely manner. Then, three
extended libraries were proposed and connected with this energy-enhanced BIM
platform: the extended sustainable building information library, the extended
carbon emission data library, and the extended carbon finance information library. These
three libraries provide information related to sustainable construction and carbon
cost, aiming at lower carbon emissions while enhancing energy efficiency.
Furthermore, the developed platform estimates carbon consumption as well. The
proposed platform can evaluate overall sustainability and help develop an
optimized building design solution at the design stage. In particular, this paper discussed
the analysis of existing sustainable building design systems through an intensive literature
review, presented the research framework, and provided a case study with integrated cost
evaluation according to carbon finance theory. Future work includes a series of
complete case studies in which the effectiveness and efficiency of the proposed approach
are evaluated.
REFERENCES
Azhar, S. (2011). “Building information modeling (BIM): Trends, benefits, risks, and
challenges for the AEC industry.” Leadership and Management in Engineering,
11(3), 241-252.
Bosi, M., Cantor, S., and Spors, F. (2010). “10 years of experience in carbon finance:
insights from working with the Kyoto mechanisms”.
Chen, P. and Li, Y. (2012). “BIM-based integration of carbon dioxide emission and cost
effectiveness for building in Taiwan.” National Taiwan University, Taiwan.
Crosbie, T., Dawood, N. and Dean, J. (2009). “Energy profiling in the life-cycle
assessment of Buildings.” Management of Environmental Quality: An
International Journal, 21(1), 20-31.
Feldman, A., and Mellon, R. (2011). “Putting a price on pollution: What it means for
Australia’s property and construction industry.” Green Building Council of
Australia, Australia.
Gökçe, K. U., and Gökçe, H. U. (2012). “eeBIM for energy efficient building operations.”
Proc., 14th Int. Conf. on Computing in Civil and Building Engineering, Moscow
State Univ. of Civil Engineering, Moscow, Russia.
Gökçe, H. U., and Gökçe, K. U. (2013). “Integrated System Platform for Energy Efficient
Building Operations.” Journal of Computing in Civil Engineering.
GSA (2009). “05 – GSA BIM Guide for Energy Performance. Version 1.0.” GSA
Building Information Modeling Guide Series.
Mah, D., Manrique, J., Yu, H., Al-Hussein, M. and Nasseri, R. (2011). “House
construction CO2 footprint quantification: a BIM approach.” Construction
Innovation: Information, Process, Management, 11(2), 161-178.
McAuley, B., Hore, A. V., and West, R. (2012). “Use of Building Information Modelling
in Responding to Low Carbon Construction Innovations: an Irish Perspective.”
International Conference on Management of Construction: Research to Practice,
Montreal.
McGraw-Hill Construction (2010a). Green BIM: How Building Information Modeling is
Contributing to Green Design and Construction. Smart Market Report.
McGraw-Hill Construction (2010b). The Business Value of BIM in Europe – Getting
BIM to the bottom line in the UK, France and Germany. Smart Market Report.
Motawa, I., and Carter, K. (2013). “Sustainable BIM-based evaluation of buildings.”
Procedia-Social and Behavioral Sciences, 74, 419-428.
Stern, N. (Ed.). (2007). “The economics of climate change: the Stern review.” Cambridge
University Press.
Stadel, A., Eboli, J., Ryberg, A., Mitchell, J., and Spatari, S. (2011). “Intelligent
sustainable design: integration of carbon accounting and building information
modeling.” Journal of Professional Issues in Engineering Education and Practice,
137(2), 51-54.
Wong, J. K., Li, H., Wang, H., Huang, T., Luo, E., and Li, V. (2013). “Toward
low-carbon construction processes: the visualization of predicted emission via
virtual prototyping technology.” Automation in Construction, 33, 72-78.
Wu, W. (2010). “Integrating building information modeling and green building
certification: the BIM-LEED application model development.” Doctoral
dissertation, University of Florida.
Zhang, J., Zhang, Y., Yang, Z., Fath, B. D., and Li, S. (2013). “Estimation of
energy-related carbon emissions in Beijing and factor decomposition analysis.”
Ecological Modelling, 252, 258-265.
Abstract
INTRODUCTION
phases to achieve a successful disaster recovery for the host community (Smith and
Wenger 2007, Olshansky et al. 2006). The participation of the different stakeholders
increases individual utility from the development process by reducing the
implementation of plans undesired by the host community (Boz and El-Adaway 2014,
Boz et al. 2014). Moreover, communication between the recovery agencies and
system users increases the recovery rate, the quality of the disaster recovery
outcome, and the host community's resilience (Chang and Rose 2012, Olshansky et al.
2006). It is therefore recommended that the different governmental sectors, NGOs,
and local residents participate with the other host community stakeholders in the
planning and decision-making phases to attain a successful disaster recovery process.
With respect to the disaster recovery strategies commonly used by the different
stakeholders, government strategies often include repair,
retrofitting, rebuilding, and changing land-use patterns. From the residents' point of
view, some decisions are preferred over others; residents may even move
out of the affected region and start a new life elsewhere (Cutter et al. 2006,
Olshansky 2006). These different post-disaster strategies, at both the governmental and
residential levels, need to be optimized to increase the overall community's welfare.
This paper develops a holistic agent-based disaster recovery model that takes
into account the different stakeholders' attributes and vulnerability to hazardous events
as well as their different recovery strategies. Ultimately, this research will help better
support the community's welfare by decreasing the community's vulnerability and
increasing the individuals' utility functions.
BACKGROUND INFORMATION
Olshansky et al. (2006), Olshansky (2006), and Cutter et al. (2006) illustrated
several disaster examples and their recovery processes. Through their studies, one
can identify the key elements of successful recovery and the strategies commonly
used by governments and residents. Local governments' interaction with the
different stakeholders in the host communities played a significant role in the
recovery stages of both the 1995 Kobe earthquake in Japan and the 2005
Hurricane Katrina in the United States. Plans that had been negotiated and
discussed with residents in the impacted regions achieved high approval rates
and increased the host communities' welfare. The commonly used government
recovery plans included financial compensation and the repair, rebuilding, and
upgrading of the affected infrastructure. Further, insurance policies purchased by
residents and subsidized by the government played an important role in the
preparedness phase, which strongly affected the recovery rate.
The goal of a sustainable disaster recovery is not merely to restore system
functionality, but also to increase the impacted region's resilience to hazardous events.
As such, considerable research has been carried out to investigate and quantify
communities' resilience and vulnerability to hazards (Burton 2012, Cutter et al.
2003, Gilbert 1995). In the social sciences, vulnerability is generally considered
"the potential for loss" (Mitchell 1989). To this end, different models were
METHODOLOGY
Model Overview
residents also chose the actions of repairing their damaged houses or leaving the
impacted region. Meanwhile, the State Disaster Recovery Agency (SDRA) offers
different residential disaster recovery action plans. These plans are transmitted to the
LDRAs, which are in direct contact with the local residents. The LDRAs propose the
state's action plans to the residents, who choose the one that will increase their
objective functions.
Residents
Resident agents tend to increase their current wealth by maintaining their
household value, decreasing potential expenses, and increasing income. Residents
tend to maintain household value by repairing it through their own investment,
insurance coverage, or government aid. In this context, the
aforementioned actions act as the agents' decision variables. The proposed ABM
defines the resident's utility function as shown in the following equation:
Ui = Hi + Ii – T – P + C – R
where Ui is the utility function of resident i; Hi is the household value of
resident i; Ii is the monthly income of resident i; T is the monthly tax amount
(income and property taxes); P is the insurance premium cost, if any; C is the
insurance compensation value, if any; and R is the self-paid repair cost.
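As an illustration, the utility computation above can be sketched in a few lines of code. This is a minimal sketch of the equation as defined in the text; the dollar figures are hypothetical.

```python
def resident_utility(household_value, income, tax, premium,
                     compensation, repair_cost):
    """Ui = Hi + Ii - T - P + C - R, as defined in the text."""
    return household_value + income - tax - premium + compensation - repair_cost

# Example: a resident with a $120,000 house, $3,000 monthly income, $800 tax,
# a $100 insurance premium, $20,000 insurance compensation after a storm,
# and $5,000 of self-paid repairs.
print(resident_utility(120_000, 3_000, 800, 100, 20_000, 5_000))  # 137100
```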
Thus, to maximize the objective function, residents communicate with
other residents to learn which decision variables increased the others'
utility functions. To this end, the authors investigated different learning
techniques for the resident agents, including
Roth-Erev reactive learning, Derivative-Follower, Q-learning, Bayesian learning,
Maximum-Likelihood, and Genetic Algorithms. Genetic Algorithms (GAs) were selected
because GAs can demonstrate the social learning of a group of individuals from
each other by mimicking the fittest among them (Riechmann 2001, Vriend 1998).
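A minimal sketch of how GA-based social learning could be realized for resident agents is given below, assuming binary decision variables and a caller-supplied fitness (utility) function. The population structure, fitness, and parameter values are illustrative assumptions, not the paper's implementation.

```python
import random

def evolve(population, fitness, mutation_rate=0.05):
    """population: list of binary decision-variable tuples, e.g. (1, 0, 1)."""
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[: max(1, len(ranked) // 4)]         # the most fit agents
    new_pop = []
    for _ in population:
        a, b = random.choice(elite), random.choice(elite)
        child = tuple(random.choice(pair) for pair in zip(a, b))   # crossover
        child = tuple(g if random.random() > mutation_rate else 1 - g
                      for g in child)                              # mutation
        new_pop.append(child)
    return new_pop

# Example: hypothetical variables (self-repair, insurance claim, relocate);
# the fitness weights below are arbitrary for demonstration.
pop = [tuple(random.randint(0, 1) for _ in range(3)) for _ in range(20)]
pop = evolve(pop, fitness=lambda d: 2 * d[1] + d[0] - d[2])
```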
algorithm determines which decision variable has been used and the reward achieved
(positive or negative) by applying this plan, as shown in the following equation:

E_j(k) = +1 if j = k and the application is approved; −1 if j = k and the
application is denied; 0 otherwise,

where, for each available action j, E_j(k) is the reward given the actually used
action k.
Thus, the model can update the propensities of the decision variables and
eventually their selection probabilities as shown below:

Resident action's propensity: q_j(t+1) = q_j(t)[1 − φ] + E_j(k) × (1 − ε)
Resident action's probability: pr_j(t) = q_j(t) / Σ_j q_j(t)

where q_j(t) is the propensity of action j at time t, φ is the forgetting parameter
of this plan, and ε is the experimenting parameter. Both φ and ε allow the agent to
keep exploring further options. pr_j is the selection probability of action j.
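The reconstructed update above can be sketched as follows. The values chosen for the forgetting parameter φ and experimenting parameter ε are illustrative, and propensities are assumed to stay positive so that they normalize to probabilities.

```python
def update_propensities(q, used_action, approved, phi=0.1, eps=0.2):
    """q: dict mapping each action j to its current propensity q_j(t).
    phi = forgetting parameter, eps = experimenting parameter."""
    reward = 1.0 if approved else -1.0                 # E_j(k) for j == k
    new_q = {}
    for j in q:
        e = reward if j == used_action else 0.0        # E_j(k)
        new_q[j] = q[j] * (1 - phi) + e * (1 - eps)    # q_j(t+1)
    total = sum(new_q.values())                        # assumed positive
    probs = {j: new_q[j] / total for j in new_q}       # pr_j(t)
    return new_q, probs

q = {"repair": 1.0, "relocate": 1.0, "elevate": 1.0}
q, probs = update_propensities(q, "repair", approved=True)
print(probs)  # the approved action's selection probability rises
```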
Model Implementation
To this end, social data were collected for the three counties following the SoVI
methodology to calculate their relative social vulnerability, which guides the
recovery process. The data were collected from the US Census Bureau for each
census tract in the three counties. Moreover, data were gathered from
the Mississippi Development Authority (MDA) to determine the set of action plans
it followed for the housing sector. The most prominent disaster recovery
strategies were: (1) Housing Assistance, which includes repair, rebuilding, and relocation
funding for the damaged household; (2) Public Home Assistance, which
essentially targeted low-income families in order to rebuild and house them; and (3)
the Elevation Grant, which elevates the household by up to 6 feet 4 inches,
making it more flood resilient. Finally, Hurricane Katrina data were gathered
regarding the damage inflicted by the hurricane on the study region's households. The
data were input to the computer model to determine the optimum funding proportions
for each of the action plans introduced by the SDRA.
The outcome of the model was compared to the actual data gathered from the
MDA regarding the expenditure proportions and the changes in the host
community's social vulnerability calculated from the actual social properties of the
three coastal counties. The comparison spans 2007 to 2012, as the MDA began
federal reporting in 2007 and 2012 is the most recent year with data available for the
SoVI calculation. Figure 2 and Figure 3 illustrate the funding proportions
used by the MDA and the proposed ABM outcome, respectively. The ABM starts
with a uniform funding distribution among the three action plans. Figure 2 shows
the dominance of the Housing Assistance plan over the other
two plans. This can be explained by the pressure on disaster management at the
time, as the Housing Assistance plan had the highest rates of approval and recovery
among the disaster recovery plans. In contrast, the proposed
ABM presented an alternative funding pattern that increases the residents'
utilities and the community's resilience. As shown in Figure 3, the Public Home
Assistance and Elevation Grant plans receive a significant share. This is because Public
Home Assistance targets low-income families, strongly affecting social vulnerability,
while the Elevation Grant increases household value, which in turn affects both the
utility function and the resilience of the households. Still, because the Housing
Assistance plan offers larger compensation awards than the other plans and is more
easily approved by the LDRA, more residents tend to apply for it.
Figure 4 and Figure 5 illustrate the change in SoVI (minimum, average, and maximum)
through the years and point out the difference between the actual SoVI and
the SoVI projected using the proposed ABM. The proposed ABM achieved a better
SoVI score, with maximum and minimum vulnerability of 8.42 and −9.43,
respectively, while the actual post-recovery maximum and minimum SoVI scores are
13.851 and −6.728, respectively.
At this point, the model can facilitate the decision-making process for
disaster recovery agencies by examining the complex system and the outcomes of the
different disaster recovery strategies. Moreover, the system enables the disaster
CONCLUSION
This paper presented an agent-based model for optimizing disaster recovery
strategies. The model represents the residents of the impacted region as well as
the local and state government disaster recovery agencies, and it captures the
interrelations between the different stakeholders in the disaster recovery
process. The model also utilizes a comprehensive social vulnerability assessment tool
to better guide the recovery process toward optimal funding proportions, thus
increasing the community's welfare. The model was implemented as a Java-based
computer model utilizing a GIS interface for Mississippi's coastal counties.
Nevertheless, the current model's assumptions create certain limitations. The
model treats the residents and the state recovery agency as the main
controlling agents, while the local government acts as an assessor of applicants'
eligibility. Thus, the model does not fully capture the negotiation process between the
local government and the residents. Moreover, the model does not address the federal
agency's role in the recovery process, which strongly affects the available recovery funding.
To this end, future work for this research is to fully capture the disaster
recovery stakeholders' interrelationships, including the federal government, the
negotiation process between the local government and the residents, and the
insurance companies' relationship to the recovery process. Moreover, further research
on the different policies utilized in other states (handling various types of
disasters) will be carried out and incorporated into the model. Finally,
adjustments to the current agents will be implemented, including: (1) better learning
modules capturing spatial and economic learning barriers; and (2) the
irrationality of some agents. Also, to provide a holistic disaster recovery decision
tool, future work targets the utilization of different economic and environmental
indicators at the different recovery and agent levels. These indicators, along with the
social indicator, can give a broader understanding of the complex system and a better
prediction of the outcome, thus achieving a sustainable disaster recovery.
REFERENCES
Abstract
INTRODUCTION
for the case of temporary lifts. Since the greatest peak traffic, more specifically up-peak
traffic, develops in the morning (Park et al. 2013), it is reasonable to analyze the
morning up-peak traffic to handle the aforementioned three issues. The ultimate goal
of this research endeavor is to create a method for developing cost-effective,
demand-meeting lifting plans. To that end, the study presented in this paper
concentrates on creating an analytical approach for evaluating the effectiveness of the
configuration of service floors and lifts based on lifting time, with attention to the
aforementioned three issues.
Figure 1 depicts the typical round trip cycle of a lift. A lift begins its trip
cycle from ‘Wait’ on the lowest floor of its service zone, during which passengers are
loaded. While traveling upward, it makes a certain number of stops to unload or
load passengers until it reaches the highest floor of its service zone. As it returns to
the lowest floor, it may repeat what it did during the up-trip. While this pattern of
round trip is typical, the length of the round trip time (or cycle time) varies depending on
the service requested per trip. The total lifting time is equal to the sum of the individual
cycle times. When a lift is in a mode of inter-floor traffic, it changes its traveling
direction in the middle of an up- or down-trip (Figure 1). Park et al. (2013) report that
inter-floor traffic is not frequently observed on building construction sites. This can
be explained by two reasons: (1) unlike with permanent elevators, passengers cannot cause
direction changes while a lift is in motion, and (2) inter-floor traffic normally occurs
during non-peak periods when lifts serve material transportation. Minimizing
lifting time during peak traffic is one of the primary goals of the configuration of
service zones and lifting operation. Up-peak traffic is particularly important; it
normally develops in the early morning at the beginning of each work day. This is because
daily lifting demand experiences its greatest surge during the morning peak
period, and that surge can delay lifting workers, resulting in the loss of labor hours.
Model Development
In this study, RTT is classified based on the events occurring in the typical
pattern of a round trip, as shown in Figure 1: waiting time, travel time, operator time,
and passenger time. Travel time is further classified into up- and down-trip time.
The up-trip is divided into two types: travelling through non-service floors (out of zone)
and travelling through service floors. Operator time accounts for the time consumed by
the operator at each stop to perform a few simple tasks, such as opening and closing the
lift door and gate and communicating with passengers. Passenger time accounts for
passengers boarding or exiting the lift. Waiting time is incurred
while a lift briefly stays on the lower or upper departing floor.
This detailed classification of the operational sequence of a round trip, along with the
conditions and assumptions for lift operation discussed in the previous section, allows
for identification of essential parameters to create a mathematical model for
calculation of RTT. The model is expressed as Eq. (1):

RTT_i = 2(d_n/v_n + d_s/v_s) + t_o·S_i + t_p·P_i + w_t + w_l    Eq. (1)

where i = the ith round trip; RTT_i = the ith round trip time; d_s = distance across
service floors; d_n = distance across non-service floors; v_s = average velocity of the
lift car when it travels through service floors; v_n = average velocity of the lift car
when it travels through non-service floors; t_o = average operator time per stop;
t_p = average passenger time per person; S_i = number of stops; P_i = number of
passengers transported per round trip; w_t = average waiting time at the top floor of
a zone; and w_l = average waiting time at the lowest floor. Accordingly, the total
lifting time can be represented as T = Σ RTT_i (i = 1, …, R), where T = time for
completing lifting and R = total number of round trips. As shown above, the model
accounts for parameters relevant to lift capacity in a deterministic manner. Parameter
values for waiting time, operator time, and passenger time can be estimated as a
The number of stops per cycle depends on the number of passengers on board
in each cycle. For example, assume that ten passengers are on board. If all ten
passengers want to go to different floors, the lift has to make ten stops. On the
other hand, if all of them go to the same floor, the lift will make only one stop.
Thus, the range of the number of stops in this example is 1 (minimum) to 10
(maximum). It is hard to estimate the number of stops deterministically, because it
is essentially random. To handle this randomness, a technique was created by applying
a probability distribution. The following delineates the technique in sequential order.
Given the range based on the number of passengers on board, find the number
of intervals and the width of each interval. For this, Sturges' rule, a rule for
determining the number of classes, is effective for handling a small sample size:
n = 1 + log2(N) and w = (maximum − minimum)/n, where n =
number of intervals (the smallest integer greater than or equal to the right side of the equation),
N = number of passengers on board, and w = interval width (maximum and minimum
represent the largest and smallest values in the data). Once the intervals are
established, find the median value of each interval to represent the number of
stops for that interval. Select a probability distribution over the median values, then
calculate the corresponding probability mass function and cumulative distribution function.
The frequency of lift stops is assumed to follow the calculated probability distribution.
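A minimal sketch of this technique follows. For simplicity it draws uniformly over the interval medians, which is purely illustrative; the study applies a normal distribution over the medians, as noted in the conclusions.

```python
import math
import random

def stop_intervals(n_passengers):
    lo, hi = 1, n_passengers                      # min/max possible stops
    n = math.ceil(1 + math.log2(n_passengers))    # Sturges' rule
    w = (hi - lo) / n                             # interval width
    # The median of each interval represents the number of stops for it.
    return [round(lo + (k + 0.5) * w) for k in range(n)]

def sample_stops(n_passengers):
    medians = stop_intervals(n_passengers)
    return random.choice(medians)                 # illustrative uniform draw

print(stop_intervals(12))   # [2, 4, 6, 9, 11] for a 12-passenger lift
print(sample_stops(12))
```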
AN APPLICATION EXAMPLE
The mathematical approach was applied to a hypothetical case to test its
applicability and effectiveness in measuring the lifting time of each zoning configuration
and the impact of overlapping service floors. It was assumed that two lifts are
assigned to two zones dividing a 29-story building from the 1st floor to the rooftop,
progressively. Figure 2 shows the zoning plan and configuration of the two lifts. The
example considers two different periods, Period 1 (P1) and Period 2 (P2). For each
period, two zoning plans were considered: P1-1 and P1-2 for Period 1, and P2-1 and
P2-2 for Period 2. P1-1 and P1-2 each consider four configurations of the two lifts (L1
and L2); the same applies to P2-1 and P2-2. It was assumed
that ten percent of the daily manpower would come down to the 1st floor and go up again
during the morning peak. In terms of lift capacity, the following parameters are
applied to all configurations: d_s = variant; d_n = variant; v_s = 1 m/sec;
v_n = 3 m/sec; t_o = 5 sec; t_p = 10 sec; w_t = 5 sec; w_l = 5 sec; S_i = variant; P_i =
12 men; height of a floor = 4 m. The range of the number of stops is 1 (minimum)
to 12 (maximum) when a lift is fully loaded. Table 1 shows the number of
stops of each lift (L1, L2), randomly generated by applying the Sturges' rule-
based technique.
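Using the parameter values above, the round trip time of Eq. (1), as reconstructed earlier, can be evaluated as sketched below. The zone distances and the stop count chosen here are illustrative, and the factor of 2 reflects the assumption that the lift traverses the zone once up and once down per cycle.

```python
def round_trip_time(d_s, d_n, v_s, v_n, t_o, t_p, stops, passengers,
                    w_top, w_low):
    travel = 2 * (d_n / v_n + d_s / v_s)       # up- and down-trip travel time
    service = t_o * stops + t_p * passengers   # operator and passenger time
    return travel + service + w_top + w_low    # plus waiting at both ends

# Hypothetical zone: 10 service floors above 5 non-service floors
# (4 m per floor), 6 stops, and a full load of 12 passengers.
rtt = round_trip_time(d_s=40, d_n=20, v_s=1.0, v_n=3.0,
                      t_o=5, t_p=10, stops=6, passengers=12,
                      w_top=5, w_low=5)
print(round(rtt, 1))  # 253.3 seconds for this round trip
```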
The results for 2C-3 of P2-1 are particularly notable. In this case, the Tc values of L1
and L2 were well balanced and much shorter than those of L1 and L2 in 2C-0.
Furthermore, this lift configuration required only about half as many landings as 2C-0.
Generally, when demand is balanced, it is preferable to reduce the length of a zone.
CONCLUSIONS
complicated. From the application example results, the following conclusions can be
drawn. Although the velocity of a lift is an important parameter affecting its lifting
time, lifting time depends more on the demand served by the lift. Meanwhile,
as building height grows, lifting time depends on the combination of the number
of floors served and the travel distance, since lifting demand and travel time are
primarily determined by these two parameters. Thus, balancing demand between lifts
that serve different zones is critical to shortening overall lifting time. Given the
selected capacity of a lift, a practical way to shorten lifting time is to reduce the
number of stops, since passenger time and operator time contribute more to
the growth of lifting time. It is worth noting that landings interfere
with many trades' work by occupying the building's external façade and nearby
space. Thus, the number of landings should be taken into consideration
when determining the number of service floors served by a lift. It is also critical to
know when to install additional lifts as work progresses and lifting demand grows,
and vice versa. In conclusion, accurately evaluating the lifting capacity of each
configuration can greatly assist in coordinating schedules for the installation and
removal of lifts.
A few items remain to be investigated in future studies. In the
present study, the normal probability distribution was applied to account for the
randomness of the number of lift stops. The influence of other distributions, such as
the Beta or Poisson distribution, needs to be examined. It is also worth considering a
probability distribution for the number of passengers transported per cycle. Finally,
future study will look into permanent elevators jointly with temporary lifts,
as permanent elevators are normally used along with temporary lifts when building
enclosure and internal finish work approach completion.
REFERENCES
Barney, G.C. (2003). “Elevator Traffic Handbook: Theory and Practice.” Taylor &
Francis Routledge.
Daewoo E&C (2002). “Telekom Malaysia Project.” Technical Report, Daewoo
Engineering & Construction, CD version.
Hakonen, H., and Siikonen, M. L. (2008). “Elevator Traffic Simulation Procedure.”
Elevator World, 57(9), pp. 180-190.
Hwang, S. (2009). “Planning temporary hoists for building construction.” Proc. of the
2009 Construction Research Congress, ASCE, 2009, pp. 1300–1307.
Ioannou, P.G., and Martinez, J.C. (1996). “Scalable simulation models for
construction operations.” 1996 Winter Simulation Conference (28th), IEEE
Computer Society, pp. 1329–1336.
Newell, G.F. (1998). “Strategies for serving peak elevator traffic.” Transportation
Research Part B: Methodological, 32(8), 583–588.
Park, M., Ha, S., Lee, H., Choi, Y., Kim, H., and Han, S. (2013). “Lifting demand-
based zoning for minimizing worker vertical transportation time in high-rise
building construction.” Automation in Construction, 32, pp. 88–95.
Siikonen, M. (2000). “On Traffic Planning Methodology.” Elevator Technology,
27(3), pp. 267-274.
Abstract
BACKGROUND
work, healthcare facility information needs to be unified into one model. Sharing the
information with all stakeholders frees the FM personnel from having to seek
information from various systems, divisions, or people, and from producing repeated
communication, documentation, and reports, thus saving FM personnel time (Lucas
et al., 2011).
BIM use has been explored for healthcare facilities in the early programming period,
with benefits in visualization, time saved on concept updates, and quantity
takeoffs (Manning et al., 2008), but several problems have to be addressed in order to
apply BIM successfully to healthcare FM. For example, some systems (such
as the fire alarm, HVAC, and elevator systems) are managed by different
contractors, and the FM personnel have to go to each proprietary system to locate
needed information. Various types of information are stored in different formats
(such as printed documents, engineering drawings, handwritten papers, and oral
information), originating from different phases of the project, from
different parties, and at different levels of detail. In addition, the FM
procedures need to follow the guidelines or standards of different organizations, such
as the Joint Commission on Accreditation of Healthcare Organizations (JCAHO),
the Occupational Safety and Health Administration (OSHA), and the Facility Guidelines
Institute (FGI). Former work (Lucas et al., 2013c) has analyzed major
healthcare standards and guidelines and discussed how they can be incorporated
into a healthcare facility information framework to support facility operation and
performance compliance.
This paper discusses a case study analysis to identify the information needs and
communication methods of the FM group in a healthcare setting. It examines how
BIM can be implemented to help close the gaps and seamlessly transfer information.
Similar case studies have been conducted on the links between facility
information and the healthcare delivery process (Mohammadpour et al., 2012);
BPMN flow charts and UML use cases have been created to examine information
needs for existing healthcare facility maintenance operations and to identify the
information origin along the facility lifecycle (Lucas et al., 2013a); and a
product model has been developed as a result of various case analyses (Lucas et
al., 2013b). Building on this previous work on identifying information links, this paper
discusses case study research on the analysis of routine maintenance and
emergency reactions in a clinical organization.
This case study research is built upon a one-week work shadowing experience
in a healthcare organization. The observation took place at a clinic housed in a four-
floor building that serves around 1,500 patients and their companions per day. The
shadowing was conducted with FM personnel to learn the details of their regular
work activities and their interactions with the web-based FM system, clinical groups,
the mechanical systems, and the HVAC system. The work of the FM personnel
includes preventive maintenance tasks, building inspections, emergency responses,
furniture repair, and some cleaning. Two kinds of tasks were selected for the case
study: a preventative (routine) task and an emergent task. The preventative task is
analyzed with respect to the interaction of the systems under normal conditions. The
emergent response analysis involves the interaction of different systems when there is
an inherent danger to the occupants or systems and response time is limited. The aim
of preventative maintenance tasks is to maintain and replace systems before a
problem occurs and to reduce the possibility of an emergent situation. The aim of
emergent responses is to contain the damage to people and the facility as quickly as
possible. Among the possible incidents, a fire event was selected for analysis.
The preventative maintenance task for this event is the monthly inspection of all fire
extinguishers in the building. The emergent task is responding to a small fire
caused by events such as a motor burnout, a vacuum filter bag bursting (as dust
particles passing into the motor could ignite or cause an internal explosion), or
equipment overheating.
After the above two scenarios were identified, the work processes for the FM
personnel to finish the tasks, and their interactions with other personnel and systems,
were documented using the Business Process Modeling Notation (BPMN). BPMN is a
flow-chart-based notation for defining business processes at various levels of
complexity (White & Miers, 2008). Specifically, a BPMN diagram separates the different
participants of a system into separate lanes. Arrow lines represent the flow of
the activity, and dotted arrow lines show the communication between different
participants. After completion of the scenarios and BPMN diagrams, an interview with
the FM manager was arranged to finalize the diagrams.
cost (hours spent) is also entered into the system for tracking, and reports can be created
easily for a person, a team, a place or area, a piece of equipment, or a time period.
The narrative of the preventative maintenance process, in activity
sequence, is listed below. The BPMN diagram is shown in Figure 1.
Figure 1: Routine fire case BPMN diagram
Figure 2: Emergent fire case BPMN diagram (partial)
From this shadowing experience, we observed that much of the FM
personnel's time is spent simply walking: from the maintenance shop to the place of the
issue, back for drawing sheets and other information, to the place of the issue again, back
for materials and tools, then completing the work, and back again to update the
work order in the FM management system. Further time was lost in confirming the
right location and the right piece of equipment. Extending BIM to support FM activities
can provide an intuitive virtual representation of the facility and save FM personnel
time by integrating various sources of information and providing quick,
visual access to dynamic and static facility data. The functionality and information
necessary for the routine case and the emergent case are discussed below.
Routine case: The functionality expected from a BIM-based FM support
system for this routine job includes: storing and checking maintenance history,
indicating the position and serial numbers of the equipment, checking for omitted
equipment, storing equipment service life information, and automatically reminding
personnel of equipment maintenance needs and service life limits. The information
needed includes: the maintenance history; the equipment locations on the installation
drawings; equipment information such as serial numbers, service life start dates, and
service life limits; suppliers' contacts; and possible problems and corresponding
solutions. Among these, equipment information such as the serial number and the location
in spaces or zones could be stored using the structure and data format of COBie's
Component worksheet in BIM. Such information is not difficult to identify and is available at the
beginning of the equipment's service life. Although the fire extinguisher case does
not seem very difficult and does not involve complex problems to detect and fix, it
is representative of various FM activities, such as elevator inspection, HVAC
unit inspection, and generator inspection. Further, in a larger building or a set of
buildings, the quantity of these activities is substantial and prone to error when
performed manually.
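As an illustration, a fire extinguisher's identity and location could be carried in a COBie-style Component record along the lines sketched below. The field subset mirrors typical COBie Component columns, but the class and all values here are illustrative assumptions rather than the study's implementation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Component:                    # cf. the COBie "Component" worksheet
    name: str
    type_name: str                  # reference to a COBie Type row
    space: str                      # location: room or zone
    serial_number: str
    installation_date: date
    warranty_start: date

# Hypothetical record for one extinguisher on the third floor.
ext_3f_01 = Component(
    name="FireExt-3F-01",
    type_name="ABC Dry Chemical Extinguisher, 10 lb",
    space="3F-Corridor-East",
    serial_number="SN-48213",
    installation_date=date(2013, 5, 1),
    warranty_start=date(2013, 5, 1),
)
```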
Emergent case: The functionalities expected from BIM in the emergency
case include: presenting the fire location, receiving alarm signals and providing
reaction guidance according to the life safety plan, providing patient location
information and room availability for evacuation purposes, and the ability to access
this information using mobile devices. The sources of information needed for the
emergent case include: instant instructions in case of an incident, according
to the life safety plan and training modules; dynamic data from the fire alarm
panel; and dynamic information on patient location, mobility, and room number.
The life safety plan is easy to obtain and can be integrated into the 3D model in support
of decision-making. However, the dynamic data from the fire alarm system and the
clinical system are not easy to obtain. Moreover, considering other kinds of
incidents, more complex systems, such as the mechanical system and the HVAC
system, may also be involved. Many of these systems may be equipped
with automated sensors from different suppliers and managed on different proprietary
systems; they are independent of each other and of BIM, so there is no
automatic flow of information among them. With dynamic system data
captured and updated as additional properties of BIM objects, the FM personnel can
access all data instantly and conduct maintenance activities with a comprehensive
understanding of the facility and their tasks.
CONCLUSION
This paper presents a case study based on the shadowing experience in a
healthcare clinic. The case scenarios regarding the fire safety issues are generated.
BPMN diagrams and narratives are developed to analyze the sequence of activities
and interaction among different participants. Points of improvement or
communication gaps are identified for the purpose of using BIM to further facilitate
the FM performance in consideration of patient and staff member safety, cost
reduction and efficiency. The information needed to realize the functionality and the
information origins are analyzed. For future research, the feasibility of linking the
BIM system to systems in a healthcare facility to obtain and update dynamic
information in the 3D model will be further studied. The former ontology research (Lucas et al. 2013a) laid the foundation of how to organize the collected data and what functions to perform using the data. Moreover, the feasibility of using
device support (such as a mobile smart phone or tablet to provide immediate
information to the FM personnel without them leaving the jobsite) to run the adapted
BIM tool will also be studied. As for the information to be stored or included in a BIM-based model, more cases need to be created to reflect the various aspects of the
healthcare environment. More case studies will be created for risk assessment and
damage assessment to support decision making of the FM personnel.
REFERENCES
EN 15221-1: 2006. (2006). Facility management–Part 1: terms and definitions.
Lucas, J., Bulbul, T., & Anumba, C. (2013). (c) Gap Analysis on the Ability of
Guidelines and Standards to Support the Performance of Healthcare Facilities.
Journal of Performance of Constructed Facilities.
Lucas, J., Bulbul, T., & Thabet, W. (2013). (a) An object-oriented model to support
healthcare facility information management. Automation in Construction, 31,
281-291.
Lucas, J., Bulbul, T., Thabet, W., & Anumba, C. (2013). (b) Case Analysis to Identify
Information Links between Facility Management and Healthcare Delivery
Information in a Hospital Setting. Journal of Architectural Engineering, 19
(2), 134-145.
Lucas, J. Bulbul, T. Thabet, W. (2011) “A Facility Information Model to Support
Operations and Maintenance of a Healthcare Environment” Proceedings of the
CIB W078, W102 Computer Knowledge Building Conference, October 2011,
Sophia Antipolis, France.
Manning, R., & Messner, J. (2008). Case studies in BIM implementation for
programming of healthcare facilities. ITcon, 13 (special issue), 246-257.
Mohammadpour, A., Anumba, C., Bulbul, T., & Messner, J. (2012). Facilities
Management Interaction with Healthcare Delivery Process. Construction
Research Congress 2012, 728-736.
White, S. A., & Miers, D. (2008). BPMN modeling and reference guide:
understanding and using BPMN. Lighthouse Point: Future Strategies Inc.
Abstract
Reducing fuel consumption on roadway networks can have a huge impact
on the nation’s economy and environment. Existing ad-hoc transportation
planning efforts that allocate limited funding on need-based criteria are
insufficient for providing a significant reduction in fuel consumption. Therefore, there is an urgent need for new research to analyze the impact of planning efforts on fuel consumption and support transportation decision making. This paper
presents the development of a new model for estimating fuel consumption in
transportation networks under budget constraints by taking into consideration the
effect of pavement deterioration on fuel consumption. The model is composed of
three main modules to (1) estimate vehicle fuel consumption of transportation
networks; (2) allocate limited funding to competing highway rehabilitation
projects; and (3) evaluate the impact of pavement roughness and deterioration on
fuel consumption. An application example is analyzed to evaluate the developed
model and illustrate its capabilities. The application results demonstrate
the significant impact of highway rehabilitation planning on fuel consumption on
roadway networks. This study should prove useful to planners and decision
makers in evaluating the impact of highway rehabilitation efforts on fuel
consumption.
INTRODUCTION
According to the U.S. Energy Information Administration (EIA),
transportation ranked second among all economic sectors with 28% of total
energy consumption in 2014 (EIA 2014). The transportation sector is also the largest consumer of petroleum (EIA 2014) and is therefore the second highest greenhouse gas (GHG) generating sector (EPA 2013). Accordingly, reducing fuel consumption on
transportation networks can have a direct and significant impact on improving the
nation’s economy and environment. Substantial reduction of fuel consumption can
be achieved not only by developing energy-efficient engines, but also by
optimizing consumption related to vehicle-pavement interaction. However, current highway programming and planning efforts are based on ad-hoc models that allocate limited budgets following need-based criteria such as worst pavement condition or highest traffic volume. These models are therefore insufficient for providing a significant reduction in fuel consumption and GHG emissions.
Accordingly, there is a pressing need for new models for evaluating and minimizing the fuel consumption resulting from highway programming and planning decisions.
Existing research on fuel consumption in the transportation sector has focused
on: (1) analyzing the impact of several factors (e.g. vehicle type and pavement
roughness) on fuel consumption (Watanatada et al. 1987; Epps et al. 1999; Amos
2006); (2) estimating vehicle fuel consumption (Bennett and Greenwood 2003;
Chatti and Zaabar 2012; Zaabar and Chatti 2010); (3) modeling life-cycle energy
consumption at the road level (Zhang et al. 2009; Yu and Lu 2012); and (4)
modeling life-cycle energy consumption for network rehabilitation programs
(Zhang et al. 2012; Wang 2013). Despite the significant contributions of these
studies, no reported research enables evaluating and estimating fuel consumption
at the network level using data readily available to planners and decision makers
in developing highway rehabilitation programs.
In order to address this important research gap, this paper presents the
development of a novel model for estimating fuel consumption of highway
rehabilitation programs. The model is developed in three main stages that are
designed to (see Figure 1): (1) allocate limited funding to competing rehabilitation
projects; (2) model the impacts of pavement condition deterioration on fuel
consumption; and (3) evaluate and estimate the impact of highway rehabilitation
decisions on total fuel consumption in transportation networks. The following
sections describe these three stages in detail.
TC = Σp=1..P Cp ≤ F (1)
where TC = total cost (in dollars) of the rehabilitation program; P = number of projects; Cp = cost of rehabilitation project (p); and F = available funding.
IRIp,y = f(SNC, ESALp,y) (2)
where IRIp,y = pavement roughness index (in meters per kilometer) of road section (p) at year (y) after rehabilitation; SNC = modified structural number; and ESALp,y = cumulative equivalent standard axle loads (ESAL) of road section (p) at year (y) after rehabilitation (in million ESAL per lane).
To this end, a new pavement performance algorithm is developed to
forecast the deterioration in pavement conditions. Figure 2 shows a flowchart of
this new algorithm that consists of eight main steps as follows:
1. Check whether road section (p) is selected for rehabilitation. If the
section is included in the rehabilitation program, proceed to step 2; otherwise,
proceed to step 3.
2. Set the initial pavement roughness index (IRIp,I) of section (p) to the
expected value after rehabilitation. It is assumed that rehabilitation efforts will
decrease IRI to 0.5 m/km for flexible pavement, according to MnDOT (2003) and
Utah LTAP (2004). Go to step 4.
3. Set the initial pavement roughness index to the current IRI (IRIp,N) if the
section (p) is not included in the rehabilitation program. For example, a road
section with a current IRI of 3 m/km that is not selected for rehabilitation
implementation will have its initial pavement roughness index set to 3 m/km.
4. Calculate section pavement roughness index (IRIpy) of the road section
(p) at the end of each year (y) after rehabilitation over an analysis time span
identified by the user using equation (2).
5. Check whether the current year is the last of the analysis span (Y). If
yes, proceed to step 7; otherwise, go to step 6.
6. Verify whether the road section is due for periodic maintenance during the current year. In this paper, periodic maintenance is implemented in 5-year cycles. If the processing year falls in a maintenance cycle year (MD), road section (p) moves to the check in step 7; otherwise, steps 1-6 are repeated for the next year (y+1). Note that the performance improvement due to periodic maintenance is not included in this study.
Figure 2. Flowchart of the pavement performance algorithm (steps 1-8).
7. Check whether the current estimated IRI (IRIpy) is higher than the
maximum acceptable value according to applicable practices and guidelines. If
yes, assume this road section will undergo a new rehabilitation effort and go to step
8; otherwise, proceed to step 6. In this paper, the maximum acceptable IRI value
is assumed to be 4 m/km representing an average value for poor pavement
conditions, according to MnDOT (2003).
8. Adjust the current pavement roughness index (IRIpy) to 0.5 m/km as a result of implementing a new rehabilitation effort if the road section (p) has an IRI
greater than the maximum acceptable value (4 m/km). All steps are repeated for
each road section of the transportation network.
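The eight steps can be condensed into a short routine. The following Python sketch is a compact reading of the algorithm, with the deterioration function of equation (2) left abstract because only its inputs (SNC and cumulative ESAL) are defined here; all variable names are illustrative assumptions.

# IRI values in m/km; thresholds follow the text above.
IRI_AFTER_REHAB = 0.5    # post-rehabilitation roughness (MnDOT 2003)
IRI_MAX = 4.0            # maximum acceptable roughness (step 7)
MAINT_CYCLE = 5          # periodic maintenance cycle in years (step 6)

def forecast_iri(sections, deteriorate, Y):
    """Forecast the IRI of each section over Y years (steps 1-8).

    sections: list of dicts with keys 'selected', 'iri', 'snc', 'esal'.
    deteriorate: callable implementing equation (2).
    """
    history = {}
    for p, s in enumerate(sections):
        # Steps 1-3: choose the initial roughness.
        iri = IRI_AFTER_REHAB if s["selected"] else s["iri"]
        series = []
        for y in range(1, Y + 1):
            iri = deteriorate(iri, s["snc"], s["esal"], y)   # step 4, eq. (2)
            # Steps 5-7: at the last year or a maintenance-cycle year,
            # compare against the maximum acceptable value.
            if (y == Y or y % MAINT_CYCLE == 0) and iri > IRI_MAX:
                iri = IRI_AFTER_REHAB                        # step 8: new rehab
            series.append(iri)
        history[p] = series
    return history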
ESTIMATION OF TOTAL FUEL CONSUMPTION
The objective of this stage is to evaluate and estimate the impact of
decisions made during programming of highway rehabilitation efforts on total fuel
consumption in the transportation network. To this end, the model evaluates the
impact of planning decisions, such as selection of rehabilitation projects from a
group of roads due for repair, on the total network fuel consumption. The fuel
consumption is estimated as a function of traffic volume, highway mileage,
vehicle speed, vehicle type, and pavement condition. An average vehicle fuel
consumption rate is estimated for each road section depending on the free flow
speed, type of vehicles, and pavement surface conditions of that section. In this
study, equation (3) was developed for passenger cars based on the HDM-4 model described in Chatti and Zaabar (2012).
FCp,y = f(Sp, IRIp,y) (3)
where FCp,y = average vehicle fuel consumption rate on road section (p) in year (y) of the rehabilitation efforts (in milliliters per vehicle-kilometer); Sp = free flow speed on road section (p) (in miles per hour); and IRIp,y = pavement roughness index (in meters per kilometer) of road section (p) at year (y) after rehabilitation.
In this model, pavement conditions are measured using the international
roughness index (IRI). As discussed in the previous section, the pavement surface
conditions deteriorate over time, which will in turn significantly affect fuel
consumption. Therefore, the average vehicle fuel consumption rate for each road
section is estimated on a yearly basis. The total fuel consumption of the
transportation network resulting from the implementation of the selected highway
rehabilitation program is therefore estimated using equation (4), as follows:
TF = Σy=1..Y Σp=1..P FCp,y × Tp × Lp (4)
where TF = total fuel consumption of the transportation network (in liters); Y = number of years of the rehabilitation effort; P = number of road sections undergoing rehabilitation; FCp,y = average vehicle fuel consumption rate from equation (3); Tp = traffic volume (in terms of AADT) on road section (p); and Lp = length of road section (p) (in kilometers).
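For illustration, equations (3) and (4) can be combined as in the Python sketch below. The HDM-4-based rate function of equation (3) is passed in as a parameter; the annualization of AADT by 365 days and the milliliter-to-liter conversion are assumptions made here only to obtain a runnable example.

def total_fuel_liters(sections, iri_history, fuel_rate, Y, days_per_year=365):
    """Equation (4): total network fuel consumption over Y years, in liters."""
    TF = 0.0
    for p, s in enumerate(sections):
        for y in range(Y):
            # Equation (3): rate in ml per vehicle-kilometer.
            rate = fuel_rate(s["speed_mph"], iri_history[p][y])
            # AADT (veh/day) x days x length (km) x rate (ml/veh-km) -> liters.
            TF += s["aadt"] * days_per_year * s["length_km"] * rate / 1000.0
    return TF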
APPLICATION EXAMPLE
An application example is analyzed to evaluate the performance of the
developed model and illustrate its capabilities in estimating the total fuel
consumption of transportation networks as a result of highway rehabilitation
decisions. The example analyzes a hypothetical rehabilitation program applied to
a transportation network in Miami-Dade County, Florida. Several road sections
throughout the network are assumed to be suffering from varying surface conditions and to be in need of rehabilitation. In order to improve the overall pavement
conditions of the network, ten candidate rehabilitation projects are being
considered by the decision makers under budget limitations. Table 1 shows the
current pavement roughness index, section length, free flow speed, number of
lanes, rehabilitation cost, and total equivalent standard axle load for each of these
projects.
Table 1. Candidate Rehabilitation Projects.
Project | IRI (m/km) | Traffic Volume (veh/day) | Length (km) | Speed Limit (mph) | No. of Lanes | Construction Cost (million dollars) | Total ESAL (million ESAL/lane)
1 | 4.50 | 45,500 | 4.61 | 40 | 4 | 8.02 | 0.3546
2 | 3.20 | 55,000 | 3.40 | 40 | 3 | 4.44 | 0.5715
3 | 2.80 | 37,500 | 6.52 | 40 | 2 | 5.67 | 0.5845
4 | 3.00 | 50,500 | 3.22 | 45 | 3 | 4.20 | 0.5247
5 | 4.00 | 35,000 | 3.28 | 35 | 2 | 2.85 | 0.5455
6 | 4.00 | 48,500 | 2.60 | 40 | 3 | 3.39 | 0.5039
7 | 3.80 | 33,500 | 2.72 | 45 | 3 | 3.55 | 0.3481
8 | 5.00 | 63,000 | 4.28 | 45 | 3 | 5.58 | 0.6546
9 | 4.00 | 13,000 | 2.80 | 40 | 1 | 1.22 | 0.4052
10 | 3.80 | 71,000 | 3.60 | 45 | 3 | 4.70 | 0.7377
[Figure: Total fuel consumption of the possible highway programs, contrasting need-based programs (based on traffic volume or on pavement conditions) with Alternatives #1 and #2; horizontal axis: possible highway programs.]
Key Laboratory of Civil Engineering Safety and Durability of the China Education
Ministry, Department of Civil Engineering, Tsinghua University, Beijing, P.R. China
100084. E-mail: [email protected]
Abstract
Keywords
INTRODUCTION
In recent years, more and more large-scale structures (e.g., high-rise buildings,
super-tall buildings, large-span bridges and high dams) are being designed and built
worldwide. Concurrently, research on the seismic behavior of large-scale structures
has become increasingly common, owing to the important social function of these
structures and the frequent occurrence of earthquakes. In addition, research to date
indicates that numerical simulation, using finite element (FE) analysis, is an effective
method of investigating the nonlinear seismic behavior of such structures (Lu et al.
2011, 2013a; Xu et al. 2013; Li et al. 2014; Xie et al. 2014a). Among existing FE
software, OpenSees, an open-source FE program, has been widely used because it is
versatile, extensible and shareable (McKenna et al. 2009).
A multi-layer shell element, based on the principles of composite material
mechanics, has been developed in OpenSees for shear walls, core tubes and floor
slabs, which are important components of large-scale structures (Lu et al. 2013b; Xie
et al. 2014a, 2014b). This multi-layer shell element has been applied in investigating
the seismic behavior of super-tall buildings and large-span bridges, providing useful
references and an effective tool for further research on the seismic behavior of
large-scale structures. However, the abovementioned research also indicated that the
computational efficiency of OpenSees based on CPU computing cannot satisfy the
demand for numerical simulation of large-scale structures. This restricts further
investigation on the seismic behavior of such structures using this software package.
Recently, GPUs have been rapidly developed and applied in the general computing field, owing to their powerful parallel computing capability and low cost (Owens et al. 2007). Further, seismic damage simulations of urban areas have been
conducted by Lu et al. (2014a) using GPU/CPU cooperative computing. Their
benchmark cases indicate that the computing efficiency of GPU could be up to
almost 40 times that of a traditional CPU. Accordingly, a GPU may have the
potential to provide a high-performance alternative for the seismic performance
analysis of large-scale structures based on OpenSees.
In this study, two new parallel-iterative solvers for sparse systems of equations (SOEs) are proposed and implemented in OpenSees, based on GPU-powered high-performance computing. The nonlinear time history analyses (THA) of a 141.8 m frame-core tube building and a super-tall building (the Shanghai Tower, with a height of 632 m) are performed using the proposed solvers. In comparison with the existing CPU-based SparseSYM solver in OpenSees, the speedup ratios achieved with the proposed solvers reach 9 to 15, with highly accurate results. The outcome of this study can provide an
effective computing technology for numerical analysis of the seismic behavior of
large-scale structures.
COMPUTING METHOD
The matrix is integrated on the CPU and then copied into the graphics memory, where the parallel computing is performed. Finally, the results are returned to the CPU for subsequent computations.
[Figure: CPU-GPU workflow. The preceding steps build the linear SOE and integrate matrix A and vector b on the CPU; the SOE is solved in parallel on the GPU; and the results return to the CPU for the subsequent steps.]
Several storage formats exist for sparse matrices. Among these, the compressed sparse row (CSR) format is commonly adopted. This storage format can be quickly converted into the coordinate (COO) format and requires less storage space than alternatives. Some useful matrix characteristic values, such as the number of
non-zero elements in a row of the matrix, can be quickly obtained. In addition, the
CSR format is convenient for the parallel computing of matrix multiplication and
vector-matrix multiplication. Hence, this format is used to store the sparse matrices
in this study. A SparseGenRowLinSOE class is provided in OpenSees, which can
store sparse matrices in the CSR format. Therefore, this class is used directly as the
designed SOE class in this work.
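The CSR arrays can be illustrated with a small example. The following snippet uses SciPy purely to show the format; the OpenSees classes themselves are implemented in C++.

import numpy as np
from scipy.sparse import csr_matrix

A = np.array([[4., 0., 0., 1.],
              [0., 3., 0., 0.],
              [0., 2., 5., 0.],
              [1., 0., 0., 6.]])
A_csr = csr_matrix(A)

print(A_csr.data)     # non-zero values, row by row: [4. 1. 3. 2. 5. 1. 6.]
print(A_csr.indices)  # column index of each value:  [0 3 1 1 2 0 3]
print(A_csr.indptr)   # row start offsets:           [0 2 3 5 7]
# indptr[i+1] - indptr[i] gives the number of non-zeros in row i, one of
# the matrix characteristic values mentioned above.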
The designed Solver class should be written in a language suitable for GPU
computing. Therefore, two GPU-accelerated solving libraries for the sparse SOEs,
based on Compute Unified Device Architecture (CUDA) (NVIDIA, 2014a), are
adopted, namely: (1) CulaSparse, which is a linear algebra function library
(Humphrey et al. 2010) and (2) CuSP, which is an open source library for sparse
linear algebra (NVIDIA, 2014b). The corresponding scripts of CuSP are provided in
detail at http://opensees.berkeley.edu/wiki/index.php/Cusp. The corresponding
source codes that illustrate the implementation procedure of these two solvers in
detail are available at the website of OpenSees (PEER, 2014). This will facilitate the
reproduction of this research.
CASE STUDY
[Figure: Top displacement time histories (in meters) over 0-45 s for the two analyzed buildings, computed with the CuSP, CulaSparse, and CPU SparseSYM solvers; the three curves nearly coincide.]
Table 2. Comparison of computing times and speedup ratios.
Platform | Solver | Computing time, TBI2N model | Speedup ratio | Computing time, Shanghai Tower model | Speedup ratio
CPU | SparseSYM | 168 h | - | 409 h | -
GPU | CulaSparseSolver | 18 h | 9.33 | 38 h | 10.76
GPU | CuSPSolver | 11 h | 15.27 | 27.5 h | 14.87
solvers, which is acceptable. Table 2 compares the computing times and the speedup ratios. A speedup ratio of up to 9-15 is achieved using the two proposed GPU-based solvers. Evidently, the GPU-based solvers in OpenSees exhibit high reliability and efficiency for the nonlinear THA of large-scale structures.
CONCLUSIONS
In this study, two new parallel-iterative solvers for the sparse SOEs are
proposed and implemented in OpenSees, based on two GPU-based libraries (CuSP
and CulaSparse). The THAs of two high-rise buildings are conducted using the two
proposed GPU-based solvers and an existing CPU-based solver. The results indicate that the GPU-based solvers agree well with the CPU-based solver in accuracy. Furthermore, a speedup ratio of 9-15 is achieved using the proposed solvers.
This work provides an important computing technology for the high performance
analysis of large-scale structures based on OpenSees.
ACKNOWLEDGEMENTS
REFERENCES
Humphrey, J. R., Price, D. K., Spagnoli, K. E., Paolini, A. L., and Kelmelis, E. J.
(2010). “CULA: hybrid GPU accelerated linear algebra routines.” SPIE
Defense, Security, and Sensing. International Society for Optics and
Photonics, 770502-770502.
Li, M. K., Lu, X., Lu, X. Z., and Ye, L. P. (2014). “Influence of the soil-structure
interaction on the seismic collapse resistance of super-tall buildings.” Journal
of Rock Mechanics and Geotechnical Engineering,
doi:10.1016/j.jrmge.2014.04.006.
Lu, X., Lu, X. Z., Zhang, W. K., and Ye, L. P. (2011). “Collapse simulation of a super
high-rise building subjected to extremely strong earthquakes.” Sci. China. Ser.
A., 54(10), 2549-2560.
Lu, X., Lu, X. Z., Guan, H., and Ye, L. P. (2013a). “Collapse simulation of reinforced
concrete high-rise building induced by extreme earthquakes.” Earthq. Eng.
Struct. D., 42(5), 705-723.
Lu, X. Z., Xie, L. L., Huang, Y. L., Lin, K. Q., Xiong, C., and Han, B. (2013b).
“Nonlinear simulation of super tall building and large span bridges based on
OpenSees.”
<http://www.luxinzheng.net/download/OpenSees_Workshop/LuXinzheng.pdf> (Feb. 6, 2015).
Lu, X. Z., Han, B., Hori, M., Xiong, C., and Xu, Z. (2014a). “A coarse-grained
parallel approach for seismic damage simulations of urban areas based on
refined models and GPU/CPU cooperative computing.” Adv. Eng. Softw., 70,
90-103.
Lu, X. Z., Li, M. K., Lu, X., and Ye, L. P. (2014b). “A comparison of the seismic
design of tall RC frame-core tube structures in China and the United States.”
Proceedings of the 10th National Conference in Earthquake Engineering,
Earthquake Engineering Research Institute, Anchorage, AK, doi:
10.4231/D3DZ03252.
Lu, X. Z. (2014c). “OpenSees Tall Buildings.”
<http://www.luxinzheng.net/download/OpenSeesTallBuildings.zip> (Feb. 6, 2015).
McKenna, F., Scott. M. H., and Fenves, G. L. (2009). “Nonlinear finite-element
analysis software architecture using object composition.” J. Comput. Civil
Eng., 24(1), 95-107.
NVIDIA (2014a). “CUDA C programming guide.”
<http://docs.NVIDIA.com/cuda/pdf/CUDA_C_Programming_Guide.pdf> (Feb. 6, 2015).
NVIDIA (2014b). "CuSP Home Page." <http://cusplibrary.github.io/> (Feb. 6, 2015).
Owens, J. D., Luebke, D., Govindaraju, N., Harris, M., Krüger, J., Lefohn, A. E., and
Purcell, T. J. (2007). “A survey of general-purpose computation on graphics
hardware.” Comput. Graph. Forum., 26, 80–113.
PEER (2014). “Subversion Repositories.”
<http://opensees.berkeley.edu/WebSVN/listing.php?repname=OpenSees&path=%2Ftrunk%2FSRC> (Feb. 6, 2015).
Xie, L. L., Huang, Y. L., Lu, X. Z., Lin, K. Q., and Ye, L. P. (2014a). “Elasto-plastic
analysis for super tall RC frame-core tube structures based on OpenSees.”
Engineering Mechanics, 31(1), 64-71. (in Chinese)
Xie, L. L., Lu, X., Lu, X. Z., Huang, Y. L., and Ye, L. P. (2014b). “Multi-layer shell
element for shear walls in OpenSees.” Computing in Civil and Building
Engineering, ASCE, 1190-1197.
Xu, L. J., Lu, X. Z., Guan, H., and Zhang, Y. S. (2013). “Finite element and
simplified models for collision simulation between over-height trucks and
bridge superstructures.” J. Bridge. Eng., ASCE, 18(11), 1140-1151.
Abstract
The recurrent underground utility incidents (e.g., utility strikes and utility
conflicts) highlight two underlying causes: failure to comply with spatial rules
prescribed in utility specifications and unawareness of utility locations. It is critical to
address these two causes to prevent utility incidents. This paper presents a framework
that integrates natural language processing (NLP) and spatial reasoning to infer the
vertical positions of underground utilities from textual utility specifications and plans,
and to automate the utility compliance checking. The natural language processing
(NLP) algorithm extracts the spatial rules specified in textual utility documents, and
converts the extracted spatial rules to a computer-interpretable format. The spatial
reasoning scheme models the spatial components in a spatial rule as topological,
directional, and proximate relations, and executes the extracted spatial rules in a
geospatial information system (GIS) to automate the depth estimation and the
compliance checking. Several examples are presented to prove the concepts.
INTRODUCTION
Underground utility incidents such as utility conflicts and utility strikes are
long-standing and worldwide problems, which cause time and cost overruns in
construction projects, property damages, environmental pollution, and personnel
injuries and fatalities. In the United States from 1994 to 2013, 5,623 significant
pipeline incidents were reported with a total of 362 fatalities, 1,397 injuries, and $6.6 billion in property damage (PHMSA 2014). The actual total cost is significantly higher than the reported amounts because numerous incidents that are not classified as significant or serious go unreported. The recurrent incidents
highlight two underlying causes. One is the failure to comply with provisions
prescribed in engineering principles, utility industry codes, and government
regulations. The second cause is the unawareness of the location, especially the
vertical position or buried depth of many utility lines. To address the root causes of
underground utility incidents, this paper presents a framework that integrates natural
language processing techniques with spatial reasoning schemes in a geographical
information system (GIS) to infer the vertical positions of underground utilities from
textual utility specifications and plans, and to automate the utility compliance
checking.
LITERATURE REVIEW
language, which enables the spatial analysis of building information models and the
extraction of partial models subject to certain spatial constraints. These research
efforts pave the way for our study, which complements previous research by incorporating an NLP module to automatically provide semantic spatial rules for reasoning.
SYSTEM FRAMEWORK
Utility documents contain many domain-specific terms such as "encased" and "ground rod". It is difficult and error-prone to recognize or tag these terms using off-the-shelf NLP packages. However, these terms are important for the follow-up
data manipulation in GIS. Our study adopts an underground utility taxonomy as a
corpus database to assist domain terminology recognition. In addition, the spatial
relations documented in this taxonomy further facilitate the recognition of spatial
terms in the specification language through enhancing the part-of-speech tagging in
the NLP module, to be detailed later. In the spatial reasoning module—Module 3, the
vocabulary used for an entity archived in GIS might be slightly different from that in
utility documents (e.g., “gas line” and “natural gas pipeline”). The taxonomy can
match the multiple vocabularies that refer to a single entity, and thus eliminate the
ambiguity in the spatial reasoning in GIS.
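A minimal Python sketch of this vocabulary reconciliation is shown below; the synonym sets are illustrative assumptions, not the actual taxonomy.

TAXONOMY_SYNONYMS = {
    "natural_gas_pipeline": {"gas line", "natural gas pipeline", "gas main"},
}

def canonical_entity(term):
    """Map a document or GIS vocabulary term to a single taxonomy entity."""
    term = term.lower().strip()
    for entity, synonyms in TAXONOMY_SYNONYMS.items():
        if term in synonyms:
            return entity
    return None  # unknown term: no taxonomy match

# "gas line" in a specification and "natural gas pipeline" in GIS resolve
# to the same taxonomy entity, eliminating the vocabulary ambiguity.
assert canonical_entity("Gas line") == canonical_entity("natural gas pipeline")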
extracting these short phrases from a part-of-speech tagged sentence (Steven et al.,
2009). Chunking is a required task in information extraction (Morarescu, 2007). The
main rationale behind chunking is that by investigating the patterns of part-of-speech
tags, separate pieces of information (e.g., single words) are organized or grouped
together to form a more meaningful phrase (e.g., noun phrase). Hence, entities and
relations between them can be identified and extracted from these meaningful phrases.
In this study, we use regular expressions in Python to specify the chunk patterns and
use NLTK's built-in methods to chunk the main components of the SLM. These
elements not only carry critical information but also constitute the functional structure
of the specification language. The NLTK built-in methods are limited in correctly
recognizing and labeling the domain-specific terms. One option to deal with this
problem would be to look up the words, especially the un-chunked words, in an
appropriate list of terms. Such a list is also known as a gazetteer. In this study, a number
of gazetteer lists are compiled to group the domain-specific vocabularies under the
classes of the taxonomy. For example, terms including, but not limited to, “encased”,
“non-encased”, “high voltage”, “low voltage” are stored in the utility attribute
gazetteer list. The gazetteer lists serve as a corpus database where the domain-specific
terms can be inquired and identified by using gazetteer lookup. Thereafter, the
targeted information (i.e., the essential components in the SLM) is retrieved based on
a set of well-designed rules. For example, the noun phrase that directly follows a spatial preposition is regarded as the landmark. After retrieving the information, a morphological analysis is conducted to map various nonstandard forms of a word (e.g., the plural form of a noun) to its lexical form (e.g., the singular form). In addition, the comparative relation is mapped to its symbolic form (e.g., "minimal" is mapped to >=). The retrieved information is stored in tuples.
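A minimal sketch of this chunking step is given below, using NLTK as described; the chunk grammar and the example sentence are illustrative assumptions rather than the authors' exact patterns.

import nltk  # requires the punkt and averaged_perceptron_tagger data

sentence = ("Water lines shall be buried with a minimum depth of "
            "30 inches below grade.")
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))

# Cascaded chunk grammar: noun phrases first, then preposition + NP.
grammar = r"""
  NP: {<DT>?<JJ.*>*<CD>?<NN.*>+}   # e.g., "a minimum depth", "30 inches"
  PP: {<IN><NP>}                   # e.g., "below grade"
"""
tree = nltk.RegexpParser(grammar).parse(tagged)

# The NP directly following a (spatial) preposition is taken as the landmark.
for pp in tree.subtrees(filter=lambda t: t.label() == "PP"):
    preposition = pp[0][0]
    landmark = " ".join(word for word, tag in pp[1].leaves())
    print(preposition, "->", landmark)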
rules will be extracted using the NLP techniques and fed into the spatial reasoning module. After implementing the spatial rules, the figures and landmarks are supposed to be selected in GIS. In a 2D GIS (e.g., ArcMap), the program will insert a column in the attribute table of the figure (i.e., the utility) and name it buried depth. Then the quantitative requirement associated with the depth in the main spatial rule will be copied to the attribute table. In a 3D GIS (e.g., ArcScene), this quantitative value will be directly used to modify the Z value of the geometry, which will move it to the right place in the 3D scene. Figure 4 presents an example of estimating utility depth.
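A hedged sketch of the 2D depth-assignment step is given below using Esri's arcpy; the layer name, field name, and depth value are illustrative assumptions.

import arcpy

utility_layer = "gas_lines"   # the figure selected by the spatial rule
required_depth = 0.9          # quantitative requirement extracted by NLP (m)

# Insert a "buried depth" column in the figure's attribute table.
arcpy.AddField_management(utility_layer, "BURIED_DEPTH", "DOUBLE")

# Copy the quantitative requirement of the main spatial rule into the table.
with arcpy.da.UpdateCursor(utility_layer, ["BURIED_DEPTH"]) as cursor:
    for row in cursor:
        row[0] = required_depth
        cursor.updateRow(row)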
Figure 5. Compliance checking example.
CONCLUSIONS
ACKNOWLEDGEMENT
This research was funded by the National Science Foundation (NSF) via
Grant CMMI-1265895. The authors gratefully acknowledge NSF’s support. Any
opinions, findings, conclusions, and recommendations expressed in this paper are
those of the authors and do not necessarily reflect the views of NSF or Purdue
University.
REFERENCES
Abuzir, Y., and Abuzir, M.O. (2002). “Constructing the civil engineering thesaurus
(CET) using ThesWB.” Proc., Intl. Workshop on Info. Tech. in Civ. Eng.
2002, ASCE, Reston, VA, 400-412.
Al Qady, M. and Kandil, A. (2010). “Concept Relation Extraction from Construction
Documents Using Natural Language Processing” Journal of Construction
Engineering and Management, 136 (3), 294-302.
Borrmann, A. and Rank, E. (2009), Topological analysis of 3D building models using
a spatial query language, Advanced Engineering Informatics, 23, 370-385.
Logcher, R.D., Wang, M.T., and Chen, F.H.S. (1989). “Knowledge Processing for
Construction management data base.” Journal of Construction Engineering
and Management, 115(2), 196-211.
Nguyen, T.H. and Oloufa, A.A. (2002). “Spatial information: classification and
application in building design” Computer-aided Civil and Infrastructure
Engineering, 17, 246-255.
PHMSA (Pipeline & Hazardous Materials Safety Administration). (2014).
“Significant pipeline incidents.”
http://primis.phmsa.dot.gov/comm/reports/safety/sigpsi.html?nocache=1494
Salama, D. M., and El-Gohary, N.M. (2013). "Semantic text classification for
supporting automated compliance checking in construction." Journal of
Computing in Civil Engineering.
Yu, W. and Hsu, J. (2013). “Content-based text mining techniques for retrieval of
CAD documents.” Automation in Construction (31), 65-74.
Zhang, J. and El-Gohary, N. M. (2014). "Semantic NLP-based information extraction from construction regulatory documents for automated compliance checking." Journal of Computing in Civil Engineering.
Abstract
The ability to retrieve accurate information from databases without an
extensive knowledge of the contents and organization of each database is extremely
beneficial to the dissemination and utilization of freight data. Advances in artificial intelligence and the information sciences provide an opportunity to develop
query capturing algorithms to retrieve relevant keywords from freight-related natural
language queries. The challenge is correctly identifying and classifying these
keywords. On their own, current natural language processing algorithms are
insufficient in performing this task for freight-related queries. High performance
machine learning algorithms also require an annotated corpus of named entities which
currently does not exist in the freight domain. This paper proposes a hybrid named
entity recognition approach which draws on the individual strengths of models to
correctly identify entities. The hybrid approach resulted in a greater precision for
named entity recognition of freight entities – a key requirement for accurate
information retrieval from freight data sources.
INTRODUCTION
Natural Language Processing (NLP) applications provide users with the
opportunity to ask questions in conversational language and receive relevant
answers—rather than formulating a query into possibly unfriendly (or “unnatural”)
formats that machines can understand (Safran 2013). NLP allows individuals who have no in-depth knowledge of a particular area or domain to ask questions and receive answers, either by using a search engine or, more popularly in recent times, through speech recognition. Numerous advances in this area have been made over the years
but challenges still remain (Liddy 2001), particularly in identifying domain-specific keywords from a multitude of questions.
As search engines and consumer electronic products become more accessible, NLP applications will continue to have an increasing role in both our social and work activities. Policy makers making decisions about transportation
infrastructure improvements would benefit if they could ask questions such as “How
many accidents occurred on Interstate 35 [at Dallas] in 2013 compared to 2012?”,
“How many trucks crossed the border between the U.S. and Mexico in the first
quarter of 2014?”, “Which are the top commodities exported from the U.S. to Brazil
in the last decade?” – and receive answers instantaneously. Interestingly, the answers
to the questions provided above are stored in some of the available freight databases.
A two-stage process is required if NLP advances are to offer decision makers this tool; specifically, the approach must:
1. Correctly identify only the relevant information and keywords when dealing
with multiple sentence structures, and
2. Understand multiple data sources to determine which ones best answer a
user’s query.
This paper addresses the former challenge as off-the-shelf domain-independent NLP
systems can identify entities such as a person, a location, date, time, and a
geographical area, but cannot extract information for specific questions in the freight
planning domain. In freight planning, entities such as unit of measurement, mode of
transport, route names, commodity names, and trip origin and destination are
predominant when performing information extraction tasks as shown in Table 1.
RESEARCH APPROACH
The research task is to represent multiple natural language queries in a format that a computer can understand and process. This requires converting
unstructured data from natural language sentences into structured data, and
identifying specific kinds of information relating to the freight planning domain. In
IDEF0 nomenclature, the input for this task is any naturally formed question relating
to freight planning. The control is the grammar for the query language, which in this
case is the English language. The reasoning mechanism involves i) developing an IE
and NER approach that addresses freight-related queries, ii) ensuring ambiguities in names are correctly handled, e.g., relevant roadway names are constrained to only the places specified in the query, and iii) resolving conflicting query items, e.g., pipelines
move only liquid and gas commodities. The expected output from this task is a list of
data items with very high categorization accuracy of named entities—ideally greater
than 95%.
Table 2. Sample Queries Used in Testing and Comparing IE and NER Models
1. What are the top five commodities/industries utilizing IH-35 as a major freight
corridor?
2. What is the average travel time and level of service on major arterial roads
during peak hours?
3. What is the number of truck related accidents which occurred on IH-35 from
May 2013 to June 2013?
4. What is the total number of oversize/overweight vehicle permit fees collected in
Texas for FY 2013?
5. What is the number of bridges along the IH-45 corridor requiring improvements?
6. Where are Amazon shipping facilities?
7. How has the focus on freight changed in the various highway trust fund bills?
8. With the expansion of the Panama Canal, what mode of freight will see the
greatest change within the US?
9. If $500 million was available for freight infrastructure nationally, where and how
would you suggest the money be spent?
10. Who pays for freight?
An approach similar to the one used for the "TRANSPORT MODE" category was used in developing the "LINK" category. Regular expression patterns were developed from a list of roadway suffixes and data dictionary values. Examples of keywords identified by this approach include roadway designations such as "IH-35" and "IH-45".
The DATE and TIME regular expression patterns were also developed by modifying an existing temporal expression pattern developed by Bird (2009b) to include, amongst others, terms such as "non-peak period", "past month", "last N years", and the four seasons.
The UNIT OF MEASURE category was also developed using data values from the various freight data dictionaries. Examples of keywords identified include "tons", "value", "level of service", "AADT", "truck traffic", "travel time", "percentage", and "count". In addition to the above, descriptive texts such as "average", "number of", "top five", "most", and "cheapest" are included in this category.
COMPARISON OF MODELS
The performance of an NER model is based on the model’s ability to correctly
identify the exact words in a sentence that belong to a specific named entity type or
category. The commonly used metric for quantitative comparison of NER systems are
Precision, Recall, and F-measure. The trained and untrained Stanford CRF models
and the dictionary-based and feature-based rules were tested with 100 questions
collected and used in developing the initial freight data corpus. The output of the
hybrid model is shown in Figure 1.
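For reference, a minimal exact-match implementation of these metrics over entity spans might look as follows; the scoring conventions are assumptions, not the authors' evaluation script.

def precision_recall_f(predicted, gold):
    """Exact-match scoring over entity spans."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                 # correctly identified entities
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# Entities as (start, end, category) spans for one query.
pred = {(5, 7, "LINK"), (10, 12, "TIME")}
gold = {(5, 7, "LINK"), (10, 12, "TIME"), (0, 2, "COMMODITY")}
print(precision_recall_f(pred, gold))          # (1.0, 0.666..., 0.8)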
The trained CRF model recorded a high precision for the categories it is
familiar with – in this case the LOCATION and TIME – with f-measures of 77.08
and 69.52, respectively. When combined with the dictionary-based rules, a slight improvement was recorded for the TIME category (f-measure of 78.43). The trained CRF model also performs well with the UNIT OF MEASURE category, which
recorded an f-measure of 59.65. The dictionary-based rules perform best with the
COMMODITY and MODE categories recording 61.36 and 68.33 f-measures,
respectively. Concerning ORIGIN and DESTINATION, the feature-based rules
provided the best opportunity to classify these categories though the current setup
showed very low f-values. With more robust rules the classification of these entities
can be improved and additional training of the CRF model may assist with better
classification of this category. The LINK category was classified equally well by both the
trained CRF model and the dictionary-based rules, which when combined record an f-measure of
70.69. Based on the observations from the result, the best combination sub-models for
freight transport entity classification as shown in Figure 2, are:
• Dictionary-based rules for the COMMODITY and MODE categories
• A combination of dictionary-based rules and a trained CRF for the LINK
category
• A trained CRF model to handle TIME, UNIT OF MEASURE, and
LOCATION entities, and
• Feature based-rules to handle ORIGIN and DESTINATION entities. It is
probable that should a larger corpus be eventually developed, the trained CRF
model may be able to better handle this category.
CONCLUSION
The technological shift towards NLP usage is inevitable considering the
investments being made by large technology companies and consumer adoption of
speech recognition products. Applications in the civil engineering domain are, however, lagging because an annotated corpus for training machine learning algorithms, for example, does not currently exist. This paper presents two main contributions to NLP usage in
the civil and transport data domains. The first contribution is the development of an
NER approach to correctly identify and classify keywords from freight-related natural
language expressions and queries. Future research on freight database querying can
utilize this work to develop applications that do not require stakeholders to
necessarily have in-depth knowledge of each database to get answers to their
questions. The second contribution is the beginning of a collection of freight-related
questions to develop a freight specific corpus similar to what has been done in the
bio-medical field. This annotated corpus can be further expanded to the broader
transportation planning domain. Providing decision-makers with the ability to ask
questions in conversational language and receive relevant answers is an exciting
prospect for policy development, planning, management, and funding of
infrastructure projects.
REFERENCES
Bickel, P. J., Ritov, Y., and Ryden, T. (1998). “Asymptotic normality of the
maximum-likelihood estimator for general hidden Markov models.” The
Annals of Statistics, 26(4), 1614–1635.
Bird, S., Klein, E., and Loper, E. (2009). "Natural Language Processing with Python." O'Reilly Media, Inc.
Boldyrev, A., Weikum, G., and Theobald, M. (2013). “Dictionary-Based Named
Entity Recognition.”
Borthwick, A. (1999). "A Maximum Entropy Approach to Named Entity Recognition." PhD dissertation, New York University.
Calì, D., Condorelli, A., Papa, S., Rata, M., and Zagarella, L. (2011). "Improving intelligence through use of Natural Language Processing. A comparison between NLP interfaces and traditional visual GIS interfaces." Procedia Computer Science, 5, 920-925.
Finkel, J. R., Grenager, T., and Manning, C. (2005). “Incorporating non-local
information into information extraction systems by gibbs sampling.”
Proceedings of the 43rd Annual Meeting on Association for Computational
Linguistics, Association for Computational Linguistics, 363–370.
Florian, R., Ittycheriah, A., Jing, H., and Zhang, T. (2003). “Named entity recognition
through classifier combination.” Proceedings of the seventh conference on
Natural language learning at HLT-NAACL 2003-Volume 4, Association for
Computational Linguistics, 168–171.
Gao, L., and Wu, H. (2013). "Verb-Based Text Mining of Road Crash Report." Transportation Research Board 92nd Annual Meeting Compendium of Papers.
Lafferty, J., McCallum, A., and Pereira, F. C. (2001). "Conditional random fields: Probabilistic models for segmenting and labeling sequence data." Proceedings of the 18th International Conference on Machine Learning (ICML).
Liu, X., Akinci, B., Bergés, M., and Garrett, J. H., Jr. (2013). "Domain-Specific Querying Formalisms for Retrieving Information about HVAC Systems." Journal of Computing in Civil Engineering, 28(1), 40-49.
Nadeau, D., and Sekine, S. (2007). “A survey of named entity recognition and
classification.” Lingvisticae Investigationes, 30(1), 3–26.
Oudah, M., and Shaalan, K. F. (2012). “A Pipeline Arabic Named Entity Recognition
using a Hybrid Approach.” COLING, 2159–2176.
Pereira, F. C., Rodrigues, F., and Ben-Akiva, M. (2013). "Text analysis in incident duration prediction." Transportation Research Part C: Emerging Technologies, 37, 177-192.
Pradhan, A., Akinci, B., and Haas, C. T. (2011). "Formalisms for query capture and data source identification to support data fusion for construction productivity monitoring." Automation in Construction, 20(4), 389-398.
Safran, N. (2013). "The Future Is Not Google Glass, It's Natural Language Processing." Conductor Blog, Oct. 10, 2014.
Sekine, S., Grishman, R., and Shinnou, H. (1998). “A decision tree method for
finding and classifying names in Japanese texts.” Proceedings of the Sixth
Workshop on Very Large Corpora.
Srivastava, S., Sanglikar, M., and Kothari, D. C. (2011). “Named entity recognition
system for Hindi language: a hybrid approach.” International Journal of
Computational Linguistics (IJCL), 2(1).
Zhang, J., and El-Gohary, N. M. (2013). "Semantic NLP-Based Information Extraction from Construction Regulatory Documents for Automated Compliance Checking." Journal of Computing in Civil Engineering.
Abstract
INTRODUCTION
RELATED WORKS
This section presents computer-assisted technology used for designing and planning
of temporary structures. As shown in Table 1, the capabilities of existing technology
were summarized according to the topics related to planning and analysis of
temporary structures. Many approaches exist that focus on expanding the benefits of
information technology. Existing industry and academic approaches were reviewed to
identify the problems that information technology can solve. Automated, manual, and
insufficient processes were marked as A, M, and I, respectively.
and Ahn (2011) automatically designs scaffolding systems around a building model.
Smart Scaffolder (2014) also generates pre-defined types of scaffolding systems
automatically around walls. While these two approaches assist users in generating scaffolding systems rapidly, they design scaffolding systems exclusively for walls and fail to distinguish scaffolding systems for different construction tasks. Scia scaffolding (2009) provides functions in its user interface that assist users in designing scaffolding systems manually. It also provides automated code-compliance and
structural stability checking for scaffolding systems. Sulankivi et al. (2010) and Kim
and Ahn (2011) incorporated safety features such as guardrails into the temporary
structure models. Lee et al. (2009) developed a tool that generates the formwork
layouts based on the prioritized design requirements. Zhang et al. (2013) presents a
BIM-based automation that identifies potential falling hazard locations and generates
fall protection systems such as guardrails and covers. Even though this approach
demonstrated its capability to analyze the building geometry to automatically identify
and design temporary structures needed for fall protection, many types of temporary
structures used by construction workers were not addressed in this research.
Several approaches incorporate the temporary structure models into the main
models and 4D construction simulations to analyze their impact on the project.
Jongeling et al. (2008) inserted temporary structure objects into building models to
simulate the distances between work crews. Akinci et al. (2002) specified the space
occupied by a scaffolding system in an attempt to analyze the spatial conflicts
between spaces occupied by construction activities and temporary structures.
However, these efforts rely on manually generated temporary structure plans.
building models. The current scope was limited to temporary stair towers used during
roof construction.
The placement rules are project-specific and they provided the basis for
developing planning algorithms in this research.
In this research, candidate stair tower locations for the roofing activity were assumed to be identical to the leading edges. Thus, geometric relationships between roofs and between roofs and walls were analyzed automatically. The steps and results are shown in Figure 1.
All the roof elements were selected manually (1. Roof selection) and fall edges were identified automatically. Each edge was divided into three-foot-long (0.9 m) segments. Then, each of the short edges was analyzed to determine whether there is a sudden change in elevation in front of the edge (2. Detect falling edges). Finally, edges in front of walls were removed from the list of possible locations (3. Remove edges in front of walls). The vertices of all the edges form a group of candidates for possible temporary stair tower locations.
Work locations
Work locations were identified by creating a grid and identifying grid points within ceiling objects. As shown in Figure 2, a grid was created and each cell in the grid was represented by its center point. By examining whether each center point lies within the boundary of the roofs, work locations were identified.
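A minimal Python sketch of this grid test is shown below, assuming the roof boundary is available as a 2D polygon; Shapely is used here for the point-in-polygon check, and the coordinates are illustrative.

from shapely.geometry import Point, Polygon

roof = Polygon([(0, 0), (12, 0), (12, 9), (0, 9)])   # roof boundary (m)
cell = 1.0                                           # grid spacing (m)

work_locations = []
minx, miny, maxx, maxy = roof.bounds
y = miny + cell / 2
while y < maxy:
    x = minx + cell / 2
    while x < maxx:
        if roof.contains(Point(x, y)):   # is the cell center on the roof?
            work_locations.append((x, y))
        x += cell
    y += cell

print(len(work_locations), "candidate work locations")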
If the entire roof is composed of several work packages and each work
package is linked to a schedule activity, work locations for each work package need
to be identified as shown in Figure 3.
REFERENCES
Jongeling, R., Kim, J., Fischer, M., Mourgues, C., Olofsson, T., (2008) “Quantitative
analysis of workflow, temporary structure usage, and productivity using 4D
models.” J. Aut. in Constr., 17 (6), 780-791.
Kim, J., Fischer, M., Kunz, J., Levitt, R. (2014). “Semiautomated Scaffolding
Planning: Development of the Feature Lexicon for Computer Application.” J.
Computing in Civil Engr.
Kim, H., Ahn, H. (2011). “Temporary facility planning of a construction project using
BIM (Building Information Modeling).” Proc., 2011 ASCE International
Workshop on Computing in Civil Engineering, ASCE, 627-634.
Kim, K., Teizer, J. (2014). “Automated design and planning of scaffolding systems
using building information modeling.” J. Adv. Engr. Informatics, 28, 66-80.
Lee, C., Ham, S., Lee, G. (2009). “The development of automatic module for
formwork layout using the BIM.” Proc., ICCEM/ICCPM, Vol. 3, 1266-1271.
Nemetschek (2009). “Scia Scaffolding: providing an accurate design and time-saving
workflow.” <http://www.scia-online.com/www/websiteUS.nsf/0/
e87baaf3b09f3439c125758a004a411a/$FILE/Scia-Scaffolding.pdf> (Nov.10,
2012).
Ratay, R. (1996). Handbook of Temporary Structures in Construction: Engineering
Standards, Designs, Practices and Procedures, McGraw-Hill, New York.
1 Dept. of Civil Engineering, National Taiwan Univ.,
No. 1, Roosevelt Rd., Sec. 4, Taipei City, 10617, Taiwan.
E-mail: [email protected]; [email protected]; [email protected]
Abstract
Technical documents are often generated during
Architecture/Engineering/Construction (A/E/C) projects and research.
Information Retrieval (IR) is a common technique to manage growing technical
document collections. However, due to the complexity of technical documents,
applying IR directly to technical documents often leads to unsatisfactory results.
Many semantic approaches, such as ontologies, are applied to enhance the performance of IR. Developing domain ontologies requires human effort and is thus a time-consuming task. Therefore, developing automated approaches for supporting ontology development has become an important requirement.
Reference collections developed to evaluate IR performance can be found in
many IR research studies. They are representative subsets from entire
document collections and are labeled by domain experts. In order to enhance IR
performance, the authors propose an automated approach to grow the base
domain ontology from a reference collection. The authors also validate the
proposed approach on an earthquake engineering reference collection, called
the NCREE collection. The results show that the proposed approach is effective
in enhancing IR performance. In addition, the workload of the proposed
approach is also affordable for domain experts and can become a possible
solution for the automation of ontology development.
INTRODUCTION
METHODOLOGY
Figure 1. The concept for the development of a domain ontology from a reference collection.
[Figure 2: Iterative concept expansion. An upper-level concept is used as a query term in the search engine system; important terms are extracted from each retrieved document; the iteration terminates when the termination requirement is matched.]
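A hedged Python sketch of the expansion loop conveyed by Figures 1 and 2 is given below; the search-engine and term-weighting functions are abstracted as parameters, and the structure is a reading of the figures rather than the authors' exact procedure.

def expand_concepts(query_term, search, top_terms, N, K, L):
    """Grow concept levels from a query term.

    search(term)      -> ranked documents (the top-N are used)
    top_terms(doc, K) -> K most important terms of a document
    """
    levels = [[query_term]]
    for depth in range(L):                    # stop at the depth upper bound L
        expanded = []
        for term in levels[-1]:
            for doc in search(term)[:N]:      # top-N retrieved documents
                expanded.extend(top_terms(doc, K))
        if not expanded:                      # termination requirement met
            break
        levels.append(sorted(set(expanded)))  # level-(depth+2) concepts
    return levels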
EXPERIMENTS
The results show that the strategy “ontology-based” achieves the highest IR
performance for most information requests. This indicates that the ontology
developed in this paper can not only perform successful query expansions, but
also replace human-defined definitions for IR tasks.
CONCLUSIONS
ontology from a part of the reference collection (111 documents) and applied it to
an entire document set (with 360 documents) for assisting with IR tasks. The
results showed that the proposed methodology can reorganize the existing knowledge into a better knowledge representation and also achieve satisfactory IR performance for the growing document collection.
REFERENCES
Gruber, T. R. (1995). "Toward principles for the design of ontologies used for
knowledge sharing." International Journal of Human-Computer Studies,
43(5–6), 907-928.
Hsieh, S. H., Lin, H. T., Chi, N. W., Chou, K. W., & Lin, K. Y. (2011). "Enabling
the development of base domain ontology through extraction of knowledge
from engineering domain handbooks." Advanced Engineering Informatics,
25(2), 288-296.
Lin, H.-T., Chi, N.-W., & Hsieh, S.-H. (2012). "A concept-based information
retrieval approach for engineering domain-specific technical documents."
Advanced Engineering Informatics, 26(2), 349-360.
Lin, K. Y., Hsieh, S. H., Tserng, H. P., Chou, K. W., Lin, H. T., Huang, C. P., &
Tzeng, K. F. (2008). "Enabling the creation of domain-specific reference
collections to support text-based information retrieval experiments in the
architecture, engineering and construction industries." Advanced Engineering
Informatic., 22(3), 350-361.
Manning, C. D., Raghavan, P., & Schütze, H. (2008). "Introduction to Information
Retrieval." Cambridge University Press.
Qiu, Y., & Frei, H.-P. (1993). "Concept based query expansion." in the
Proceedings of the 16th annual international ACM SIGIR conference on
Research and development in information retrieval, Jun. 27-Jul. 01,
Pittsburgh, Pennsylvania, USA.
Rezgui, Y. (2007). "Text-based domain ontology building using Tf-Idf and metric
clusters techniques." Knowledge Engineering Review, 22(4), 379-403.
Salton, G., Wong, A., & Yang, C. S. (1975). "A vector space model for automatic
indexing." Commun. ACM, 18(11), 613-620.
ABSTRACT
This research focuses on developing Building Information Modelling (BIM)
guidelines and libraries for Islamic Architecture (IA) and aims to enhance projects that use Islamic Architecture styles while reducing their time and cost. Our main objective is to create BIM-
driven objects and strategies for Islamic Architecture elements that are organized
chronologically according to the history of Islamic Architecture. Islamic architecture contains
a massive amount of information that has yet to be digitally classified according to eras and
styles. Categories in use include styles and characters, construction methods, structural
elements, and architectural components. This part centers on providing the fundamentals of
Islamic Architecture informatics by building the framework for BIM models and guidelines. It
provides schema and critical data such as identifying the different models, shapes, and forms
of construction, structure, and ornamentation of Islamic Architecture for digitalized parametric
building identity.
INTRODUCTION
folds by which an image is characterized and the image’s fundamental region and class.
Okamura et al. (2007) have similarly established semantic digital resources of Islamic historical
buildings, focusing on Islamic architecture in Isfahan, Iran. Okamura et al.'s research
demonstrated that a topic maps-based semantic model applied to collaborative metadata
management paradigms can be easily exploited as a tool to enhance traditional architectural
design and cross-disciplinary studies. Another example is the research effort conducted by
Djibril et al. (2008), which investigated geometrical patterns in IA and developed an indexing
and classification system using discrete symmetry groups. It is a general computational model
for the extraction of symmetry features from Islamic Geometrical Pattern (IGP) images. IGPs are
classified into three pattern-based categories: the first describes patterns generated by
translation along one direction; the second contains translational symmetry in two independent
directions; the third, called rosettes, describes patterns that begin at a central point and grow
radially outward.
These cited studies demonstrate recent efforts to classify certain geometric patterns in
Islamic Architecture into specific categories while ignoring the overall categorization of Islamic
structures into a general-purpose digital classification system. This study aims to develop a
digital library of IA by classifying, categorizing, and labeling all the elements, forms and data
of Islamic architecture in order to facilitate access to the information and applications. This
work strives to enrich the creativity and skills of designers working with Islamic architecture
and culture.
METHODOLOGY
Classification
There are myriad architectural styles in use throughout the Islamic regions. This work
seeks to develop a classification schema (Figure 1) of Islamic Architectural elements
in order to build a BIM digital library that clarifies IA styles and their appropriate eras.
The classification chart is arranged so that element types are identified and sorted
based on their historical setting.
The data used to generate this chart are extracted from authentic Islamic
Architectural references accredited and recommended primarily by Harvard and MIT
through the Aga Khan Program for Islamic Architecture (Islamic
architecture - Aga Khan Documentation Center, 2015). Other references include Art
of Islam by Titus Burckhardt (2009), Historical Atlas of the Islamic World by Malise
Ruthven and Azim Nanji (2004), and Atlas Tarikh Aleslam by Hussain Moanis (1987).
Figure 1 depicts style and building object type classifications that assist in
organizing the information according to a state's timeline. The hierarchical schema
shown in Figure 2 is developed and organized, first, to ease the extraction of
information and, second, to help identify the sequential flow of information for an IA
element or style based on its origin, period, style, and building. Furthermore, this will
aid consumers of the library when navigating the complex sets of architectural objects
included in the digital IA data sets. These preliminary charts represent the initial
schema for the Islamic Architecture database digital library.
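For illustration only, the origin-period-style-building hierarchy could be encoded as a nested data structure in the digital library; the Python sketch below uses hypothetical field names, with entries drawn from the classification excerpt that follows:

# Hypothetical fragment of the hierarchical IA schema
# (origin -> period -> style -> building); entries are illustrative.
ia_schema = {
    "origin": "Islamic Architecture",
    "periods": [
        {
            "name": "Abbasid",
            "era": "A.D. 750-1250 (H. 132-656)",
            "styles": [
                {
                    "name": "Abbasid style",
                    "buildings": ["Samarra Mosque", "Ibn Tulun Mosque"],
                    "elements": ["spiral minaret", "muqarnas", "plaster"],
                }
            ],
        }
    ],
}

def find_buildings(schema, period_name):
    """Return all buildings recorded under a given period."""
    for period in schema["periods"]:
        if period["name"] == period_name:
            return [b for s in period["styles"] for b in s["buildings"]]
    return []

print(find_buildings(ia_schema, "Abbasid"))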
Element categories (fragments from preceding table rows): ornamentations; decorations; Arabic pools (birka); quadripartite garden.

Abbasid style, A.D. 750-1250 (H. 132-656): (i) most of the buildings were influenced by the Persians and Iraqis; (ii) adobe was the main material for construction; (iii) Samarra Mosque; (iv) spiraling cone of minaret; (v) Ibn Tulun Mosque; (vi) Baghdad City. Distinctive elements: variety in minaret/spire shapes; more ornamented buildings; separated minbars; greater use of muqarnas; massive courtyards in the main mosques; plaster.
Any Islamic Architecture project requires many facts and references to educate
designers regarding the vocabulary, style history, and various other details related to a
particular architectural style. The digital classification will aid in providing key
information about IA that enhances and enriches design ideas and processes (Figure 3).
Examples of the use of the IA library include answering questions such as: What were the
most famous buildings in the same category? What were the story, concept, and
philosophy of the projects? What construction methods were used during those
projects?
(a) Era: Ottoman; Character: Hijazi style; Form: Maqad (a typical maqad for a mosque). (b) Era: Ottoman; Character: Hijazi style; Form: Doorway.
Figure 3: Examples of data provided by the BIM-driven Islamic Construction.
CONCLUSION
Islamic Architecture has a substantial amount of distinctive information about design
concepts, spatial features, forms, façades, and building functions that needs to be made available
to designers and engineers. BIM-driven Islamic Architecture is proposed in this paper as a
vehicle to produce a digital library of BIM models that include accurate details and descriptions
of Islamic construction and philosophy. The digital data classification phase of BIM-driven
Islamic construction is an essential step for the creation of a digital library that includes the
model data of the most iconic Islamic eras. The classification is based on historical chronology,
location, vocabulary and style and other related details. Information provided in this
classification aims to enlighten designers with a comprehensive overview of relevant data
regarding Islamic construction.
ACKNOWLEDGEMENT
The authors would like to thank King Abdul-Aziz University, College of Environmental Design,
Architecture Department, for sponsoring Mr. Ayad Almaimani's Ph.D. research at the University
of Florida.
REFERENCES
Albert, F., Valiente, J., & Gomis, J. (2005). A computational model for pattern and
tile designs classification using plane symmetry groups. CIARP'05 Proceedings of the
10th Iberoamerican Congress conference on Progress in Pattern Recognition, Image
Analysis and Applications, 11. Retrieved November 11, 2014, from
http://dl.acm.org/citation.cfm?id=2099369.2099458&coll=DL&dl=GUIDE
Burckhardt, T. (2009). Art of Islam, Language and Meaning. Bloomington, Indiana: World
Wisdom, Inc. Retrieved October 22, 2014.
Djibril, M., Hadi, Y., & Haj Thami, R. (2006). Fundamental region based indexing and
classification of Islamic star pattern images. ICIAR'06 Proceedings of the Third
international conference on Image Analysis and Recognition , Volume Part II, 11.
Retrieved October 28, 2014, from
http://dl.acm.org/citation.cfm?id=2110938.2111024&coll=DL&dl=GUIDE
Djibril, M., & Haj Thami, R. (2008). Islamic geometrical patterns indexing and classification
using discrete symmetry groups. Journal on Computing and Cultural Heritage
(JOCCH), 1(2), 14. doi:10.1145/1434763.1434767.
GRUBE, E. (1987). The Art of Islamic Pottery (Vol. 1). The Metropolitan Museum of Art &
JSTOR. Retrieved from The Metropolitan Museum of Art Bulletin.
Hussain Moanis. (1987). Atlas Tarikh Aleslam. Cairo: Alzahraa elaam alarabi. Retrieved
August 29, 2014.
Miller, S. G. (2005). Finding Order in the Moroccan City: The Hubus of the Great Mosque of
Tangier as an Agent of Urban Change. Muqarnas : An Annual on the Visual Culture of
the Islamic World, XXII, 265-283. Retrieved October 3, 2014, from Archnet:
http://archnet.org/sites/6353/publications/5427
Okamura, T., Fukami, N., Robert, C., & Andres, F. (2007, June). Digital Resource Semantic
Management of Islamic Buildings Case Study on Isfahan Islamic Architecture Digital
Collection. International Journal of Architectural Computing, 5(2).
doi:10.1260/1478-0771.5.2.356.
Peterson, A. (1996). Dictionary of Islamic Architecture (1st ed.). London: Routledge.
Rabbat, N. (2012). What is Islamic architecture anyway? Journal of Art
Historiography, (6), 15. Retrieved November 12, 2014, from
https://arthistoriography.files.wordpress.com/2012/05/rabbat1.pdf
Ruthven, M., & Nanji, A. (2004). Historical Atlas of Islam. Harvard University Press. Retrieved
November 05, 2014.
The Aga Khan Documentation Center at MIT Libraries. (n.d.). Islamic architecture - Aga Khan
Documentation Center. Retrieved February 16, 2015, from MIT Libraries:
http://libguides.mit.edu/islam-arch.
ABSTRACT
KEYWORDS
Building information modeling (BIM), Islamic Architecture (IA), BIM libraries, IA database,
and digital classification.
INTRODUCTION
The proposed approach will allow the project to have an overarching sense of unity.
This means the library will contain similar shapes, elements, constructions, structures, and more
importantly a single unified style language. The data provided will help influence design
decisions quickly because of testimonies provided in the application and due to its ease of
accessibility. Information about various Islamic architectural eras will be immediately available
to designers. For instance, a state’s history, the most famous buildings or, more importantly, the
architectural forms and elements of windows, domes, and spires will be readily accessible.
Details about windows, domes and spires will be ready to be taken to fabrication machines so
that physical models can be immediately produced. Every form will include data that enriches
a designer's structural and architectural understanding of Islamic Architecture.
DIGITAL LIBRARY
In the modern digital era, modeling has advanced significantly in recent decades,
particularly building information modeling (BIM), which is fundamentally changing the role
of computation in building design by creating a database of building objects that can be used
from the design phase to the actual construction of a building (Nawari et al. 2014). This research
aims to develop a BIM library for Islamic Architecture. The research study represents a critical
step towards Islamic Architecture informatics.
Using the digital classification presented in Part 1 (Almaimani and Nawari, 2015a), a BIM
library will be developed using Autodesk Revit software. This digital library is sorted by
historical chronology, with the architectural elements arranged by the period to
which they belong. For example, the BIM-Islamic Architecture library for the Ottoman period
will maintain all the architecture elements for that period from the time of its founding to its
collapse (Figure 1).
APPLICATIONS
The proposed BIM digital library for Islamic Architecture is intended to provide
enhanced guidance and intricate details regarding Islamic Architecture to its user. An organized
BIM library will make it easier for designers to learn and gain design ideas from an interactive
system that outlines the different Islamic construction eras. Furthermore, focusing on specific
Islamic architectural styles will guide the user to design a better building that will have a clear
architectural identity. For example, designing a mosque requires a great deal of three-dimensional
forms and elements to support the design vision. The Islamic Architecture Library, which would
be supported by the building information modeling concept, has all the forms and data that can
lead the architects toward their creative vision. The designer can choose the desired template
for their design from a list of templates provided by the BIM-driven Islamic construction. For
example, if the designer chooses the Ottoman era then all the architectural elements, characters,
calligraphy, ornaments, furniture, light fixtures and other elements will be limited to that era
(Figures 2 and 3).
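As a sketch of how such era filtering might look in code, the following illustrative Python fragment limits the object palette to the chosen era; the class, field names, and entries are hypothetical, and this is not the Revit API:

from dataclasses import dataclass

@dataclass
class LibraryObject:
    name: str
    era: str          # e.g., "Ottoman"
    category: str     # e.g., "ornament", "calligraphy", "light fixture"

def objects_for_template(library, era):
    """Limit the palette offered to the designer to the chosen era."""
    return [obj for obj in library if obj.era == era]

library = [
    LibraryObject("Iznik tile panel", "Ottoman", "ornament"),
    LibraryObject("Spiral minaret", "Abbasid", "element"),
]
print([o.name for o in objects_for_template(library, "Ottoman")])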
This proposed library also has extended benefits during the conceptual design phase.
This includes the ability to enhance the expression and meaning of architectural and structural
concepts by supplying the user with information and resources that are appended to the library.
This can aid the user in identifying and selecting additional resources that are appropriate to
their design vision (Figure 3).
Figure 4a depicts a plan of a mosque and illustrates some of the key elements that will
be designed with the help of the proposed digital library. Figure 4b is a 3D view showing
additional objects that are necessary for the design of a mosque. Figures 4c and 4d illustrate
how designers can choose the type of ornamentation and material from a specific era provided by
the proposed digital library. For the design of a Mehrab (Figure 4e), the BIM-driven library
provides the designer with assistance in developing the necessary geometric properties and type
character. The digital library also offers various options for columns and walls (Figures 4f and
4g). Examples of further options for the design of windows, doors, and entrances are given in
Figures 4h and 4k.
CONCLUSION
ACKNOWLEDGEMENTS
The authors would like to thank King Abdul-Aziz University, College of Environmental Design,
Architecture Department, for sponsoring Mr. Ayad Almaimani’s Ph.D. research at the
University of Florida.
REFERENCES
Almaimani, A. and Nawari, N.O. (2015). BIM-Driven Islamic Construction: Part 1-Digital
Classification. Proceeding of the 2015 ASCE International Workshop on Computing in
Civil Engineering, June 21st – 23rd, 2015, Austin, Texas.
Britannica, Encyclopædia. (2014). "Vitruvius." Retrieved July 22, 2014, from
http://www.britannica.com/EBchecked/topic/631310/Vitruvius.
Clarke, S., and Engelbach, R. (2014). Ancient Egyptian Construction and Architecture
(Dover Books on Architecture). Dover Publications. Retrieved November 11, 2014.
Kostof, S. (Ed.) (1977). The Architect: Chapters in the History of the Profession.
University of California Press. Retrieved October 20, 2014.
Nawari, O. N. and Kuenstle, M. (2015). Building Information Modeling: A Framework for
Structural Design. CRC Press, April 2015.
Peterson, A. (1996). Dictionary of Islamic Architecture (1st ed.). London: Routledge.
Integration of BIM and GIS: Highway Cut and Fill Earthwork Balancing
Hyunjoo Kim1; Zhenhua Chen2; Chung-Suk Cho3; Hyounseok Moon4; Kibum Ju4; and Wonsik Choi4

1Department of Engineering Technology and Construction Management, University of North Carolina at Charlotte, 9212 University City Blvd., Charlotte, NC 28223. E-mail: [email protected]
2Department of Engineering Technology and Construction Management, University of North Carolina at Charlotte, 9212 University City Blvd., Charlotte, NC 28223. E-mail: [email protected]
3Department of Civil Engineering, Khalifa University, Abu Dhabi, UAE. E-mail: [email protected]
4ICT Convergence and Integration Research Division, Korea Institute of Construction Technology.
Abstract
This paper provides a technical review of BIM and GIS and assesses the
respective strengths and weaknesses of each approach. It then presents a newly
developed data integration approach for facilitating the continuous work processes
of the two distinct environments. This research utilizes a BIM-based (IFC) system
to store road component data in highway construction and a GIS system to import
data such as land boundaries and topographic data. In order to retrieve and
integrate the two distinct types of data formats, this research uses the concept of
the semantic web, in RDF format, to provide semantic interoperability between
BIM and GIS operations.
INTRODUCTION
calculation (Kim et al., 2014; Yabuki and Shitani, 2005). However, not much
research has been conducted in utilizing GIS data for earthwork calculations. This
research proposes an integration system of BIM and GIS to perform a spatial data
analysis needed in earthwork calculations.
A case study is conducted on a highway construction project located in the
southern part of Korea, which consists of earthwork and road pavement
construction. The related project data is retrieved and integrated through a
semantic web approach from remote BIM and GIS files using the REST web
service protocol. A prototype system is implemented by first processing the
initially retrieved data and then analyzing it to perform the necessary cut and fill
simulations.
LITERATURE REVIEW
METHODOLOGY
The overall process for cut and fill calculation in this research includes the
necessary steps of BIM data input, GIS data input, semantic web integration, and
spatial data analysis for earthwork calculations, as shown in Figure 1. In the first
step of the cut and fill quantity simulation process, the Building Information
Modeling (BIM) model data file is extracted, describing infrastructure geometric
information such as road shape, centerline, cross section, elevation, curb, and
embankment. Secondly, raw project data (triangle coordinates and groups,
geographical reference, soil type, infrastructure data, terrain data, etc.) are
collected as GIS input information. An Industry Foundation Classes (IFC) file is
implemented for the BIM model, and a Geography Markup Language (GML) file
stores the GIS data, following ISO (International Organization for Standardization)
standards. Then, semantic web integration combines GIS and BIM in the same
platform and, lastly, spatial data analysis such as earthwork operations is
performed.
OwnerHistory, etc. Each attribute has its own data type. For example, the data type
of GlobalId is string, while the data type of OwnerHistory is an entity called
IfcOwnerHistory. Among the attributes of the IfcRoad entity, one of the most
important is IfcRoadComponent. It is inherited by different structural members
such as IfcRoadWay, IfcRoadBase, IfcRoadBed, IfcRoadCurb, and
IfcRoadDrainageSystem. The IfcRoad entity is also inherited by
IfcMunicipalRoad, IfcHighwayRoad, and other entities as subentities. The
geometry and location information of the infrastructure construction components
are stored as attributes of the entities.
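For illustration, the inheritance relationships described above (part of an extended road schema rather than the core IFC specification) can be mirrored with plain Python classes; everything beyond the entity and attribute names quoted in the text is assumed:

class IfcOwnerHistory:          # entity-typed attribute mentioned in the text
    pass

class IfcRoad:
    def __init__(self, global_id: str, owner_history: IfcOwnerHistory):
        self.GlobalId = global_id          # string-typed attribute
        self.OwnerHistory = owner_history  # entity-typed attribute

class IfcMunicipalRoad(IfcRoad): pass      # subentities of IfcRoad
class IfcHighwayRoad(IfcRoad): pass

class IfcRoadComponent: pass               # carries geometry/location data
class IfcRoadWay(IfcRoadComponent): pass   # structural members inheriting
class IfcRoadBase(IfcRoadComponent): pass  # from IfcRoadComponent
class IfcRoadBed(IfcRoadComponent): pass
class IfcRoadCurb(IfcRoadComponent): pass
class IfcRoadDrainageSystem(IfcRoadComponent): pass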
Figure 2. Example of an RDF file with project information, BIM and GIS links.
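The following minimal Python/rdflib sketch shows the kind of RDF record Figure 2 describes: project information plus links to the remote BIM (IFC) and GIS (GML) files. The namespace, property names, and URLs are hypothetical; only the linking pattern follows the paper.

from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/highway#")   # hypothetical vocabulary
g = Graph()
project = URIRef("http://example.org/projects/highway-001")

g.add((project, EX.projectName, Literal("Highway cut-and-fill case study")))
g.add((project, EX.bimFile, URIRef("http://example.org/bim/road.ifc")))    # BIM link
g.add((project, EX.gisFile, URIRef("http://example.org/gis/terrain.gml"))) # GIS link

print(g.serialize(format="turtle"))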
CASE STUDY
The case study was conducted and applied to a highway construction project
with earth excavation/embankment, and road pavement. The total length of the
project is 318 meters and the total construction cost was $17,182,637. The quantities
of cut and fill are output from the IFC model based on the information obtained from
the IFC entities. The general method of quantifying cuts and fills is based on the
existing topography elevation in GIS data by locating the IFC model at different
elevations.
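The elevation-comparison idea reduces to a grid subtraction between the existing terrain elevations and the road design elevations; below is a minimal numpy sketch with made-up numbers, not the prototype's algorithm:

import numpy as np

cell_area = 25.0  # m^2 per grid cell (hypothetical 5 m x 5 m grid)
terrain = np.array([[102.0, 103.5], [101.0, 104.0]])  # existing elevations (m)
design  = np.array([[102.5, 102.5], [102.5, 102.5]])  # design elevations (m)

diff = terrain - design                 # positive -> excess soil (cut)
cut  = np.sum(diff[diff > 0]) * cell_area
fill = -np.sum(diff[diff < 0]) * cell_area
print(f"cut = {cut:.1f} m^3, fill = {fill:.1f} m^3")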
CONCLUSIONS
In this paper, a highway data model in an IFC-based BIM data model was used to
store infrastructure information. Then, a Geographic Information System (GIS) file
was used to store geographic information such as land boundaries, environmentally
sensitive regions, and topographic data. In order to easily retrieve and integrate these
two types of data formats, a semantic web data schema was implemented to
exchange infrastructure (BIM) and geographic (GIS) information in both directions,
which greatly improved the interoperability of the current approach.
The developed data integration system was applied to earthwork calculations for a
highway construction project located in the southern part of Korea, consisting of
earthwork and road pavement infrastructure. The related project data, retrieved via a
semantic web file from remote BIM and GIS files, showed seamless data integration
without a laborious data schema process. A prototype system was implemented to
process the initially retrieved data, which is then analyzed by genetic algorithms to
perform multiple cut and fill simulations. An optimal construction equipment plan is
generated based on the simulation results.
REFERENCES
Irizarry, J., Karan, E., and Jalaei, F. (2013). "Integrating BIM and GIS to improve the
visual monitoring of construction supply chain management." Automation in
Construction, 31, 241-254.
Karan, E. and Irizarry, J. (2014) Developing a Spatial Data Framework for Facility
Management Supply Chains. Construction Research Congress 2014: pp. 2355-
2364.
Kim, E., Jha, M., Schonfeld, P., and Kim, H. (2007). ”Highway Alignment
Optimization Incorporating Bridges and Tunnels.” J. Transp. Eng., 133(2)
Kim, H., Orr, K., Shen, Z., Moon, H., Ju, K., and Choi, W. (2014). ”Highway
Alignment Construction Comparison Using Object-Oriented 3D Visualization
Modeling.” J. Constr. Eng. Manage., 140(10)
Nassar, K., Aly, E. A., and Osman, H. (2011). “Developing an efficient algorithm for
balancing mass-haul diagrams.” Autom. Constr., 20(8),1185–1192.
Shi, J. and Liu, P. (2014) An Agent-Based Evacuation Model to Support Fire Safety
Design Based on an Integrated 3D GIS and BIM Platform. Computing in Civil
and Building Engineering (2014): pp. 1893-1900.
Yabuki, N. and Shitani, T. (2005) A Management System for Cut and Fill Earthworks
Based on 4D CAD and EVMS. Computing in Civil Engineering (2005): pp. 1-
8.
ABSTRACT
Buildings and their systems are primarily designed based on several
assumptions about end-users’ requirements and needs, which in many cases are
incomplete and result in inefficiencies during operation phases of buildings. With
advancements in the fields of augmented and virtual reality, designers and engineers
now have the opportunity to collect information about end-users’ requirements, preferences,
and behaviors for more informed decision-making during the design phase. These
approaches allow for buildings to be designed around the users, with the goal that the
design will result in reduction of energy consumption and improved building
operations. The authors examine the effect of design features on occupants’ preferences
and performance within immersive virtual environments (IVEs). Specifically, this
paper presents an approach to understand end-users’ lighting preferences and collect
end-user performance data through the use of IVEs.
INTRODUCTION
Buildings’ energy use accounts for roughly 45 percent of the energy
consumption in the United States (EPA 2013). Buildings and their systems are
generally designed to operate based on code-defined occupant comfort ranges to ensure
satisfactory temperature, luminance, and ventilation, and standardized (recommended)
set-points to accommodate occupants’ needs, and comfort levels (Brandemuehl and
Braun 1999). Previous research has shown a weak correlation between these
codes and actual occupant-reported satisfactory ranges; many times these standard
set-points do not ensure comfort and satisfaction in buildings (Barlow and Fiala 2007;
Jazizadeh et al. 2012). Research has also suggested that by tailoring the design of a
building’s elements and systems around occupants, there is a potential to reduce the
energy consumption of buildings, as well as increase occupant satisfaction (Janda 2011;
Klein et al. 2012). User-centered design (UCD) has been shown to be an effective approach
indoor environments (Boyce et al. 1989; Romm 1994). For instance, Fisk and
Rosenfeld (1997) have shown that activities such as reading speed and comprehension
(Smith and Rea 1982; Veitch 1990), locating and identifying objects, and writing are
highly affected by luminance, the amount of glare, and the spectrum of light. This stream of
research suggests that designing environments based on people’s lighting preferences
has the potential to affect their productivity and performance. Prior research has also
shown that personal preferences have more of an effect on people’s choice of light
source than the available daylight when occupants use their offices for a short period
of time (Correia da Silva et al. 2013). Being able to design environments with
satisfactory lighting levels based on the occupants’ preferred settings not only could
improve user satisfaction but also could potentially reduce the total energy
consumption in buildings.
This paper presents an approach to collect data about end-users’ different
lighting preferences and performance in order to form profiles. The lighting preferences
(natural and artificial) of participants are measured within an IVE along with their
performance on office-related activities (reading and identifying objects) in the
participants' preferred lighting settings. The paper presents the research methodology,
the IVE system for data acquisition, initial pilot profile data, and discussion on the
proposed approach and planned future work.
METHODOLOGY
The profiles are based on end-users’ lighting preferences and performance in
their preferred light settings. These parameters are measured based on the choices end-
users make to adjust the lighting levels and their performance on a set of assigned visual
tasks.
In order to create user profiles, a number of participants were recruited to
measure their preferred light settings in an office environment in order to perform a set
of office related tasks. Once participants chose their most preferred environment based
on the different lighting levels, the 3D models’ settings (artificial light settings,
geographical location, time of day, etc.) were imported into simulation software in
order to collect the light maps and lux values of the entire office area. Along with the
lux values, their performance data (reading and comprehension) was collected.
Experiment Design. Although there are many design alternatives that could
affect the amount of lighting levels in an office space (e.g., number, type, and size of
windows, type of light bulbs, geometrical design of the room, reflective surfaces, etc.),
the authors designed their experimental environment to be similar to one of the office spaces
of an actual office building. As a result, a 150 square meter (10 m x 15 m x 3 m) office
space was modeled for this experiment. The designed office space consisted of three
windows (a set of blinds per window) and 12 light fixtures (three light bulbs per fixture)
with the possibility of having three artificial light settings (one light bulb on, two light
bulbs on, and three light bulbs on).
The modeled environment was used to measure the participants’ most preferred
light settings, as well as their performance on a set of assigned tasks (reading speed and
comprehension) in the same lighting environment. The participants were placed in a
dark room (Figure 2a) and were instructed to setup the room’s lighting levels based on
their most preferred settings (in terms of use of natural light, artificial light, or a
combination of both, as well as the amount of lighting). The participants had the option
to open/close each set of blinds to increase/decrease the availability of natural light and
turn the light switches on/off to control the artificial light levels in the room.
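Under one reading of this setup (each of the three sets of blinds is binary, and the artificial setting is a single global choice of zero to three bulbs per fixture), the space of selectable environments is small enough to enumerate; a minimal Python sketch under those assumptions:

from itertools import product

blind_states = list(product(["open", "closed"], repeat=3))  # 3 sets of blinds
bulb_settings = [0, 1, 2, 3]   # bulbs on per fixture (0 = lights off, assumed)

configurations = [(b, s) for b in blind_states for s in bulb_settings]
print(len(configurations))     # 8 blind combinations x 4 settings = 32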
texture) based on the participants’ interactions with the virtual office space. To increase
the sense of presence and allow participants to realistically interact with the IVE, the
Oculus Rift DK2 positional tracker was used to track the participants’ neck
displacement (3 Degrees-of-Freedom - DoF), the HMD was used to track the head
rotation (3 DoF), and the Xbox-360 controller was used to navigate through the room,
providing 6 DoF. Figure 1 illustrates the modeling steps and the apparatus.
(a) All blinds closed and all lights off; (b) all blinds open and all lights off; (c) all blinds closed and all lights on; (d) all blinds open and all lights on.
Figure 4 – Example of lux values and light maps based on participant profiles.
The light maps represent the room from the top view.
In addition to these light maps, the participants' reading speed and
comprehension were measured (Figure 4). The participants' reading comprehension
was measured by the number of questions they answered correctly; their reading
speed was measured in words read per second. Since there were only four
comprehension questions, the researchers chose 75 percent accuracy (at least three
out of four) as a reasonably good performance level. Figure 4 shows the profiles of
three different participants in terms of comprehension, reading speed, preferred light
setting, and preferred lighting map.
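Both performance measures are simple to compute; the Python sketch below uses hypothetical numbers and the 75 percent (three of four) threshold stated above:

def reading_profile(words_read, seconds, correct_answers, total_questions=4):
    """Reading speed in words per second plus a pass/fail comprehension flag."""
    speed = words_read / seconds
    comprehension = correct_answers / total_questions
    return {
        "speed_wps": round(speed, 2),
        "comprehension": comprehension,
        "good_performance": comprehension >= 0.75,  # at least 3 of 4 correct
    }

print(reading_profile(words_read=420, seconds=180, correct_answers=3))
# {'speed_wps': 2.33, 'comprehension': 0.75, 'good_performance': True}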
CONCLUSION, LIMITATIONS AND FUTURE WORK
The ability to design buildings around the needs and comfort levels of
occupants can result in better interactions between buildings and their occupants,
ABSTRACT
Since the Occupational Safety and Health Act of 1970 was established, which
places the responsibility of construction safety on the employer, various injury
prevention strategies have been developed and resulted in a significant improvement
of safety management in the construction industry. However, during the last decade,
construction safety improvement has decelerated and, due to the dynamic nature of
construction jobsites, most safety management activities have focused on safety
monitoring during the construction phase. Traditional safety planning approaches rely
mostly on static information, tacit knowledge, regulations, company safety policies,
and 2D drawings. As a result, site-specific dynamic information, temporal (e.g. when
and who will be exposed to potential hazards) and spatial (e.g. location of dangerous
zones) information are currently not specifically addressed. This paper presents a
formalized 4-dimensional (4D) construction safety planning process that addresses
site-specific temporal and spatial safety information. The safety data, which include
general safety knowledge as well as site-specific temporal and spatial information, will be
integrated from a project schedule and a 3D model, respectively.
planning approach is expected to provide safety personnel with a site-specific
proactive safety planning tool that can be used to better manage jobsite safety. In
addition, visual safety materials can also aid in training workers on safety and,
consequently, being able to identify site-specific hazards and respond to them more
effectively.
INTRODUCTION
to the average fatal work injury rate for all industries, which is 3.6. Of the 806
construction industry fatal injuries, 39.8% are construction vehicle and machinery-
related fatalities (Bureau of Labor Statistics, 2012).
Since the Occupational Safety and Health Act of 1970 was established, which
places the responsibility for construction safety on the employer, the fatality and disabling
injury rates in the construction industry have dramatically decreased. After this federal law
came into effect, various injury prevention strategies have been developed and
resulted in a significant improvement of safety management in the construction
industry (Esmaeili and Hallowell, 2012). However, during the last decade,
construction safety improvement in terms of fatality and disabling rate has
decelerated (Esmaeili and Hallowell, 2012), and the fatality rate in the construction
industry is still much higher than in other industries (Bureau of Labor Statistics, 2012).
Therefore, innovative injury prevention practices such as integration of project
schedules and information technology can be leveraged to significantly improve
current construction safety management practices.
Construction safety management activities are typically categorized into
safety planning and execution processes. Despite the interdependent relationship
between safety planning and execution processes, current safety planning practices
lack a systematic approach to effectively identify and manage hazards prior to
construction because of limited safety data and the dynamic nature of construction.
Due to ineffective safety planning processes, safety planning and execution processes
are generally segregated and, consequently, most safety execution processes rely on
ad-hoc safety activities during construction. Given that the majority of hazards are
generated from dynamic conditions and activities in construction work zones,
developing dynamic safety plans is fundamental in order to improve the safety
execution process and, consequently, site-specific safety management at the jobsite.
The main objective of this paper is to systematically formalize the
construction safety planning process through a 4-dimensional (4D) environment,
which integrates 3D and time, to address site-specific temporal and spatial safety
information.
BACKGROUND RESEARCH
Construction projects are dynamic (Bobick, 2004); unique factors include
frequent work team rotation, weather, changes in topography, and different
concurrent activities involving various combinations of workers and equipment
(Rozenfeld et al., 2010). Due to the dynamic characteristics of construction sites, the
construction schedule has gained attention when combined with safety planning. Two
main safety sources, safety regulations and risk data, have been integrated with a
project schedule. Kartam (1997) attempted to integrate the Occupational Safety and
Health Administration (OSHA) regulations into schedules. Saurin et al. (2004) and
Cagno et al. (2001) attempted to link construction activities and safety control
measures prepared by safety experts. The main objective of integrating risk data into
project schedules is to identify high risk work periods and minimize possible risks in
the identified periods prior to the start of the activities. Akinci et al. (2002) showed
the possibility of automatically detecting and avoiding hazardous situations by
integrating the project schedule into their time-space conflict analysis tool. Wang et al.
(2006) attempted to identify high risk periods by integrating activities and expected
injury cost data in a simulation-based model (SimSAFE). Yi and Langford (2006)
stated that hazardous situations vary with project progress and that the schedule
should be considered in the hazard identification process. To address this
issue, Yi and Langford (2006) attempted to predict when and where risky situations
would occur by combining historical accident sources and developed ‘safety resource
scheduling’. Navon and Kolton (2006 and 2007) introduced an automated safety
monitoring and control model to identify fall hazards and possible locations.
Hallowell et al. (2011) considered task interaction risks and suggested an integrated
model of safety risk data into project schedules based on Yi and Langford’s (2006)
model. Esmaeili and Hallowell (2013) developed an integration model by identifying
common highway construction work tasks and quantifying risks of tasks in the
schedule.
Previous studies emphasized the importance of safety and schedule integration
to address when and where hazards are expected using third party applications.
However, few studies address how safety knowledge is dynamically
updated as a project schedule is updated. Since a project schedule is frequently
revised, this paper focuses on a dynamic linkage of safety and schedule integration
that requires minimal effort to maintain.
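One simple way to realize such a dynamic linkage is to attach a risk score to each schedule activity and re-aggregate the scores over concurrent activities whenever the schedule changes; the sketch below uses illustrative values and is not the proposed framework itself:

from collections import defaultdict

# (activity, start_day, end_day, risk_score) -- illustrative values only
activities = [
    ("Excavation",      1,  5, 3.0),
    ("Crane erection",  4,  6, 4.5),
    ("Steel placement", 6, 10, 2.0),
]

daily_risk = defaultdict(float)
for name, start, end, risk in activities:
    for day in range(start, end + 1):
        daily_risk[day] += risk          # concurrent activities stack risk

THRESHOLD = 5.0
high_risk_days = sorted(d for d, r in daily_risk.items() if r > THRESHOLD)
print(high_risk_days)  # days where overlapping activities exceed the threshold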
hazardous activities during the 4D simulation. Another application of BIM to safety is
the use of safety rule checking systems to automatically detect hazards and generate
corresponding safety measures (Benjaoran and Bhokha, 2010; Zhang et al., 2013).
Previous studies showed the effectiveness of visualizing safety information
using BIM. However, few studies consider the safety impacts of concurrent
activities in a 4D environment; this paper will address that gap.
RESEARCH SCOPE
Even though all safety practices are important and interrelated, this paper will
focus on improving the macro level of construction safety planning practices, given
that subsequent safety practices can be significantly impacted by macro level plans.
This proposed safety tool will automatically address dynamic updates of construction
documents, especially project schedules, rather than dynamic changes of micro level
work situations such as uncertainties of workers, equipment, weather, or activity
delays which are not updated in a project schedule or 3D model. These kinds of micro
level uncertainties should be considered in the field of real-time jobsite monitoring
and may be integrated with the macro level safety planning process. In addition, the
proposed safety framework will identify risky work periods and risky work zones, but
specific risk controls will not be provided because it is believed that properly trained
safety experts should ultimately make risk control decisions.
RESEARCH METHODOLOGY
CONCLUSIONS
REFERENCES
Akinci, B., Fischen, M., Levitt, R., and Carlson, R. (2002). "Formalization and
Automation of Time-Space Conflict Analysis." Journal of Computing in Civil
Engineering, 16(2), 124-134.
Benjaoran, V., and Bhokha, S. (2010). "An integrated safety management with
construction management using 4D CAD model." Safety Science, 48(3), 395-
403.
Bobick, T. G. (2004). "Falls through Roof and Floor Openings and Surfaces,
Including Skylights: 1992-2000." Journal of Construction Engineering and
Management, 130(6), 895-907.
Bureau of Labor Statistics (2012). “Revisions to the 2012 Census of Fatal
Occupational Injuries (CFOI) counts.” United States Department of Labor,
Washington, DC.
Cagno, E., Giulio, A. D., and Trucco, P. (2001). "An algorithm for the
implementation of safety improvement programs." Safety Science, 37(2001),
59-75.
Chantawit, D., Hadikusumo, B. H. W., Charoenngam, C., and Rowlinson, S. (2005).
"4DCAD-Safety: visualizing project scheduling and safety planning."
Construction Innovation, 5(2), 99-114.
Esmaeili, B., and Hallowell, M. (2013). "Integration of safety risk data with highway
construction schedules." Construction Management and Economics, 31(6),
528-541.
Esmaeili, B., and Hallowell, M. (2012). "Diffusion of Safety Innovations in the
Construction Industry" Journal of Construction Engineering and Management,
138(8), 955-963.
Hallowell, M., Esmaeili, B., and Chinowsky, P. (2011). "Safety risk interactions
among highway construction work tasks." Construction Management and
Economics, 29(4), 417-429
Kartam, N. A. (1997). "Integrating Safety and Health Performance into Construction
CPM." Journal of Construction Engineering and Management, 123(2), 121-
126.
Leite, F., Akcamete, A., Akinci, B., Atasoy, G., Kiziltas, S. (2011) "Analysis of
modeling effort and impact of different levels of detail in building information
models”. Automation in Construction, 20(5), 601–609.
Mallasi, Z., and Dawood, N. (2004). "Workspace competition: assignment, and
quantification utilizing 4D visualization tools." Proc., Construction
Application of Virtual Reality, Lisbon, 13-22.
Navon, R., and Kolton, O. (2006). "Model for Automated Monitoring of Fall Hazards
in Building Construction." Journal of Construction Engineering and
Management, 132(7), 733-740.
Navon, R., and Kolton, O. (2007). "Algorithms for Automated Monitoring and
Control of Fall Hazards." Journal of Computing in Civil Engineering, 21(1),
21-28.
Rozenfeld, O., Sacks, R., and Rosenfeld, Y. (2009). "CHASTE: Construction hazard
assessment with spatial and temporal exposure." Construction Management
and Economics, 27(7), 625-638.
Rozenfeld, O., Sacks, R., Rosenfeld, Y., and Baum, H. (2010). "Construction Job
Safety Analysis." Safety Science, 48(2010), 491-498.
Sacks, R., Rozenfeld, O., and Rosenfeld, Y. (2009). "Spatial and Temporal Exposure
to Safety Hazards in Construction." Journal of Construction Engineering and
Management, 135(8), 726-736.
Saurin, T. A., Formoso, C. T., and Guimaraes, L. B. M. (2004). "Safety and
production: an integrated planning and control model." Construction
Management and Economics, 22(2), 159-169.
Wang, W.-C., Liu, J.-J., and Chou, S.-C. (2006). "Simulation-based safety evaluation
model integrated with network schedule." Automation in Construction,
15(2006), 341-354.
Yi, K.-J., and Langford, D. (2006). “Schedule-based risk estimation and safety
planning for construction projects.” Journal of Construction Engineering and
Management, 132(6), 626-635.
Zhang, S., Teizer, J., Lee, J.-K., Eastman, C. M., and Venugopal, M. (2013). "Building
Information Modeling (BIM) and Safety: Automatic Safety Checking of
Construction Models and Schedules." Automation in Construction, 29(2013),
183-195.
I. Flood1

1Rinker School of Construction Management, College of Design, Construction and Planning, University of Florida, Gainesville, FL 32611-5703. E-mail: [email protected]
Abstract
INTRODUCTION
A genealogy of the alternative tools that have been developed for modeling
construction processes (Flood et al. 2006) suggests that they can be grouped into three
main categories: the Critical Path Methods (CPM); the linear scheduling techniques;
and discrete-event simulation. Most other tools are either hybrids of these approaches
or variants with non-structural enhancements. For example, 4D-CAD and nD-CAD
planning methods (Issa et al. 2003; Koo & Fischer 2000) that include time as a
dimension are strictly CPM based modeling tools hybridized with 3D-CAD for
visualization purposes.
The goal in developing the new approach to modeling was to attain the
simplicity of CPM, the visual insight of linear scheduling, and the modeling versatility
of simulation. In addition, hierarchical structuring of a model (see, for example,
Huber et al. (1990); and Ceric (1994)) and interactive development of a model were
identified as requisite attributes of the new approach since they facilitate model
Figure 3. Foresight model with all constraints added. (The chart plots components (units) against time for the Type A and Type B components and marks the rebar supply delay and the limited-storage-induced delay; graphic not reproduced.)
e) A limit of 3 components within the curing room at any time (a high humidity
space designed to facilitate concrete hydration). This is implemented by
introducing a new attribute Curing Space Permits, assigning all fourth level work
units within Place Concrete and Cure Concrete a value of 1 in the Curing Space
Permits dimension, and setting the first level work unit for the system to a value
of 3 in this dimension. The impact of this limit can be seen in Figure 3, whereby
every 3rd component experiences a delay to Place Concrete (a small illustrative
sketch of this capacity effect is given after this list).
f) The final constraint is concerned with the delivery of rebar. This may be
constrained in another dimension, measuring, say, weight of steel, although for
convenience here it is measured in components. The constraint limits the start of
Cut & Fix Rebar and is shown in green in Figure 3. The impact of the scheduled
delivery is also indicated within this figure.
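A toy reproduction of the curing-space effect in item e), assuming one component becomes ready per time unit and each occupies a curing permit for a fixed cure time; this is an illustration only, not the Foresight engine:

# Toy illustration of the curing-space constraint: 3 permits, each component
# occupying a permit for CURE_TIME units after placement (assumed values).
CAPACITY, CURE_TIME, N_COMPONENTS = 3, 6, 9
release_times = []   # times at which an occupied curing slot frees up

t = 0
for comp in range(1, N_COMPONENTS + 1):
    release_times = [r for r in release_times if r > t]
    if len(release_times) >= CAPACITY:        # curing room full -> wait
        t = min(release_times)
        release_times = [r for r in release_times if r > t]
    print(f"component {comp}: place concrete at t={t}")
    release_times.append(t + CURE_TIME)
    t += 1                                    # next component ready 1 unit later

Running the sketch shows every third component waiting for a curing slot, mirroring the delays visible in Figure 3.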
Figure 4. CYCLONE diagram of the component fabrication process (graphic not reproduced; node labels include Set-Up Forms, Place Concrete, Cure Concrete, Remove Forms, and Cut & Fix Rebar, with queues starting with 6 components, and the Type B component fabrication cycle).
equivalent (that is the version without the second batch of Type A components and
the rebar delivery). This is similar to the findings made by Flood and Nowrouzian
(2014) where they made a direct comparison between Foresight and STROBOSCOPE
for construction operations and found that Foresight required around one third of the
number of terms to define a model. It was also shown that while STROBOSCOPE
may employ 25 or more modeling concepts for a relatively simple model, the number
of basic modeling concepts employed in Foresight will never exceed 5 (the work unit,
constraint, attribute, nesting, and repetition). This comparison is for deterministic
versions of both the CYCLONE and Foresight models; if stochastic factors were
considered then both models would require the input of additional information
describing the uncertainty. For CYCLONE these parameters would define uncertainty
in the activity durations, for Foresight they would define uncertainty in the value of a
constraint. This highlights another advantage of Foresight over CYCLONE:
uncertainty can be applied to any model parameter, not just activity duration, although
simulation in general is also capable of this.
By comparing the model representations of Figures 3 and 4, several additional
important differences between CYCLONE and Foresight can be understood. First,
note that CYCLONE requires the complete logic of the model (as represented by the
CYCLONE diagram of Figure 4) to be finalized before the system’s performance can
be predicted in a simulation run. The Foresight model, on the other hand, integrates
the structure and logic of the model and the estimated performance of the system
within a single format as represented by Figure 3. As a consequence, as elements are
added to the Foresight model and its parameters altered, the impact of these edits on
the performance of the system are seen immediately, thus aiding verification and
validation of the model. A second advantage is that the way in which a model’s logic
and structure impact performance is directly visible, which in turn assists in the
optimization of the design of the system. For example, inspecting Figure 3 it can be
seen visually that delays in production due to limited curing room space could be
removed by expanding this facility to enable storage of an additional 4 components.
CONCLUSION
The paper has proposed a new approach, Foresight, for modeling construction
processes built on concepts relevant to contemporary project planning, and
demonstrated its application to manufacturing systems. The principles upon which
Foresight is based provide it with the versatility necessary to model the broad
spectrum of construction systems that until now have required the use of several
different modeling tools. The resultant models are highly visual in form, representing
the progress of work within the model structure. This facilitates model verification
and validation, provides insight into how the design of a process will impact its
performance, and suggests ways of optimizing project performance. Foresight is also
simpler to use than conventional simulation, employing fewer modeling concepts and
allowing models to be defined using a fraction of the number of terms.
Research is on-going developing detailed models using this method for a
variety of project types. The objective of these studies is to determine the successes
and limitations of the proposed planning method in the real-world, and to determine
refinements that will increase its value as a modeling tool.
REFERENCES
Abstract
Tower cranes are widely used on construction jobsites for their efficiency.
However, tower cranes and construction workers suffer significant safety hazards
from the natural sway of payloads. Moreover, the external disturbance of wind
leads to additional sway and intensifies the oscillation amplitude of the crane load
on the construction site. Therefore, we propose a hybrid control mechanism that
combines electronic and mechanical gyroscopes to produce a balancing torque,
keeping the crane load stable. We model the crane load as an inverted pendulum
and simulate its oscillation under continuous wind. A hybrid control mechanism is
designed and developed, with the electronic gyroscope tracking the real-time
position and orientation of the payload and the mechanical gyroscope acting as an
actuator on the collected feedback to control the oscillation amplitudes. A wind
tunnel test has been conducted to validate the developed hybrid mechanism.
Keywords: Hybrid control mechanism; Tower crane; Oscillation amplitude;
Wind tunnel test
INTRODUCTION
Tall and flexible tower cranes are widely utilized on construction sites, but
natural sway of payloads is nearly unavoidable. Moreover, environmental wind is the
major external disturbance acting on crane loads, which leads to additional sway.
The supporting structures and hoisting lines of a tower crane sometimes cannot
withstand strong wind, excessive torque, and heavy loads. In particular,
prefabricated building components and materials require tower cranes to lift heavier
loads to more precise locations at specified heights. In addition, strong wind on a
construction site puts construction workers in danger; for safety reasons, tower crane
operation must be halted when the wind speed exceeds 20 m/s (Winn et al., 2005).
Therefore, any undesirable movement of the crane load will prolong the construction
schedule and increase construction risk.
Following the passive control of convey-cranes (Collado et al., 2000), the
oscillation of the crane load and hoisting system is modeled as an inverted pendulum.
The hoisting system is divided into two sections: the rigging connection between
trolley and hook, assumed flexible, and the cable link between hook and payload,
assumed rigid. Considering the piece-wise connection among trolley, hook, and
payload has a significant impact on stabilization.
In this manuscript, we explore the four major parameters (swing angular
velocity, swing angular acceleration, payload position velocity, and payload position
acceleration) and understand the relationship between crane load and external wind
through simulation. Based on the simulation model, a hybrid control mechanism with
electronic and mechanical gyroscopes is developed, and a wind tunnel test is
conducted to validate the hybrid control mechanism.
Without wind load, the velocity and acceleration of the sway angle are
periodic; with wind load, they rise gradually, so it is difficult to stabilize the payload
oscillation when considering environmental wind. In order to further compare the
simulation and the wind tunnel test, the simulation model mimics the scalar wind
speed and wind direction in Baker City in the US (Wilde, 2015). From the dynamic
simulation of the pendulum-type hoisting system, the angular velocity and angular
acceleration of the oscillation under the continuous wind speed can be calculated, as
shown in Table 1. The position and orientation of the payload can then be identified
through the state-space expression of the inverted pendulum system (Precup et al.,
2012). Therefore, the relationship between oscillation position and environmental
wind force in the pendulum-type hoisting system is established.
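Qualitatively, the pendulum-type behavior can be reproduced by integrating a damped pendulum driven by a wind term; the Python sketch below uses assumed parameter values and a toy gust profile, and is not the authors' simulation model:

import math

# Assumed parameters: cable length, gravity, light damping, toy wind forcing.
L, g, c = 10.0, 9.81, 0.05
dt, steps = 0.01, 4000

theta, omega = 0.05, 0.0   # initial swing angle (rad) and angular velocity
for i in range(steps):
    wind_accel = 0.02 * math.sin(0.5 * i * dt)   # simple gusting wind term
    alpha = -(g / L) * math.sin(theta) - c * omega + wind_accel
    omega += alpha * dt        # angular acceleration -> angular velocity
    theta += omega * dt        # angular velocity -> swing angle
print(f"theta after {steps * dt:.0f} s: {theta:.4f} rad")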
There are two mechanical configurations in the experiment: one with a single
gyroscope and the other with dual gyroscopes. With one mechanical gyroscope, the
device can stabilize the oscillation in one direction, along the fitted gimbal
direction, namely the roll direction. The dual-gyroscope configuration is a cross-shaped
layout along the roll and pitch directions; the hybrid mechanism with dual
gyroscopes can stabilize the oscillation in both the roll and pitch directions, as shown in
Figure 4.
position and environmental wind speed have a certain influence on the time needed to
stabilize the system; when the angular position of the mechanical gyroscope is 45
degrees, with a rotor spinning speed of 10,230 rpm, it has a dramatic effect on payload
stabilization. Therefore, the developed stabilizer can reduce the oscillatory amplitude
of the payload with the hybrid control mechanism, and the optimal installation angle
of the mechanical gyroscope is 45 degrees.
Then, the wind tunnel test verified the performance of the hybrid control
mechanism under different wind speed disturbances. The real-time environmental
wind data come from the Earth System Research Laboratory of the National Oceanic &
Atmospheric Administration. The sample environmental wind is based on the wind
on September 21, 2014, in Baker, US (Wilde, 2015). The environmental temperature
is 36.2°C, the pressure is 972.9 mb, and the relative humidity is 18.6%. The total
weight of the gyroscope is 345 g. The wind direction is 134° and lasts 35 s, with a
wind speed of 3.6 m/s at the beginning, 3.8 m/s at 5 seconds, 3 m/s at 10 seconds,
2.7 m/s at 15 seconds, and 1.5 m/s at 20 seconds. The swing angle of the payload is
stabilized 28 seconds later, as shown in Figure 6.
Figure 6. Result of wind tunnel test with different wind speed disturbances (swing angle, deg, versus time, s; graphic not reproduced).
CONCLUSION
It is extremely difficult to stabilize the hook and payload, due to their suspended
state, under environmental wind disturbance. In this research, the hoisting system of a
tower crane is treated as a pendulum, which allows oscillation to occur during crane
motion. A computer-based simulation is created in this paper to optimize the payload
position and orientation under environmental wind. The simulation shows that the
pendulum system can reduce the oscillatory amplitude of the payload with the
proposed hybrid control mechanism combining mechanical and electronic
gyroscopes. In the hybrid system, the electronic gyroscope tracks the real-time
amplitude of the payload, and the mechanical gyroscope is utilized as the actuator to
produce a balancing torque and keep the crane payload in a stable state. An experiment
has been conducted to validate the integrated device with wind tunnel tests. The
experimental results suggest that, compared to a traditional control system, our
proposed hybrid control mechanism is more efficient: it stabilizes the payload faster
and identifies the optimal installation angle of the mechanical gyroscope.
ACKNOWLEDGEMENT
This paper is based in part upon work supported by Construction Industry
Council of Hong Kong, National Natural Science Foundation of China (Grant No.
51205350), Zhejiang Provincial Research Program of Public Welfare Technology
Application of China (Grant No. 2013C31027), and Hong Kong Scholars Program of
China (Grant No. XJ2013015).
REFERENCES
Abe, A. (2011). “Anti-sway control for overhead cranes using neural networks.”
Journal of Innovative Computing, Information and Control, 7(7).
Chang, Y.-C., Hung, W.-H., Kang, S.-C. (2012). “A fast path planning method for
single and dual crane erections.” Automation in Construction, 22: 468-480.
Collado, J., Lozano, R., Fantoni, I. (2000). “Control of convey-crane based on
passivity.” American Control Conference, 2000. Proceedings of the 2000.
IEEE, pp. 1260-1264.
Fang, Y., Ma, B., Wang, P., Zhang, X. (2012). “A motion planning-based adaptive
control method for an underactuated crane system.” Control Systems
Technology, IEEE Transactions on, 20(1): 241-248.
Hong, K.-S., Ngo, Q.H. (2009). “Port Automation: modeling and control of container
cranes.” Inter. Conf. on Instrumentation, Control and Automation, pp. 19-26.
Inoue, F. et al. (1997). “A practical development of the suspender device that controls
load rotation by gyroscopic moments.” Proceedings of the 14th International
Symposium on Automation and Robotics, pp. 8.
Nasir, A.K., Roth, H. (2012). “Pose Estimation by Multisensor Data Fusion of Wheel
Encoders, Gyroscope, Accelerometer and Electronic Compass, Embedded
Systems.” Computational Intelligence and Telematics in Control, pp. 49-54.
Neitzel, R.L., Seixas, N.S., Ren, K.K. (2001). “A review of crane safety in the
construction industry.” Applied Occupational and Environmental Hygiene,
16(12): 1106-1117.
Precup, R. et al. (2012). “Signal processing in iterative improvement of inverted
pendulum crane mode control system performance.” Instrumentation and
Measurement Technology Conference (I2MTC), 2012 IEEE International.
IEEE, pp. 812-815.
Wilde, N. (2015). “Physical Sciences Division Profiler Data.” Earth System Research
Laboratory. <http://www.esrl.noaa.gov/psd/data/obs/datadisplay/.html> (Sep. 21,
2014).
Winn, R.C., Slane, J.H., Morris, S.L. (2005). “Aerodynamic Effects in the Milwaukee
Baseball Stadium Heavy-Lift Crane Collapse.” American Institute of
Aeronautics and Astronautics, 10-13.
Zhao, Y., Gao, H. (2012). “Fuzzy-model-based control of an overhead crane with
input delay and actuator saturation.” Fuzzy Systems, IEEE Transactions on,
20(1): 181-186.
Abstract
Keywords:
INTRODUCTION
by Tokyo University (Sobhaninejad et al. 2011) provides a more reasonable method
of seismic damage simulation for regional buildings, which adopts a multi-degree-of-
freedom (MDOF) model and nonlinear THA. However, this method significantly
increases the computational workload. Hence, only supercomputers have the ability
to perform the simulation with IES. In addition, Sobhaninejad's work focuses on
constructing a framework of IES, and the method used to determine the parameters
for building models is not provided.
To overcome those problems, a moderate-fidelity model, named the multi-story
concentrated-mass shear (MCS) model, and the associated parameter determination
approach based on the HAZUS performance database are proposed, with which the
seismic responses of each story can be predicted through nonlinear THA. A
computer program is developed to implement the simulation. With this program,
users can construct numerical models of buildings automatically, select a ground
motion of interest, perform nonlinear THA, and observe the simulation results of the
whole region. The Tsinghua campus in China has 619 buildings with various
structural types. The information of the buildings can be easily accessed. Therefore,
the Tsinghua campus is selected to be the case study area to demonstrate the
proposed seismic damage simulation. The case study shows that a 40 s nonlinear
THA of 619 buildings can be completed in 15 s on a desktop computer. The outcome
of this study provides a seismic damage simulation method for regional buildings
that balances accuracy and efficiency, which is very important for seismic damage
assessment and emergency management of an urban area.
COMPUTATIONAL MODELS
The inter-story mechanical behavior of the MCS model can be defined by the
backbone curve and the hysteretic curve. The tri-linear backbone curve is adopted as
the backbone curve of the MCS model. For the hysteretic curve, three different types
of models are adopted to describe the hysteretic behaviors of different types of
structures. Specifically, the bilinear elasto-plastic model is suitable for describing
steel moment frames (Miranda and Ruiz-Garcia 2002), while the modified-Clough
model (Mahin and Lin 1984) is generally used for reinforced concrete (RC) frames
for which the flexural failure is significant. For other structures that easily fail by
shear, the pinching model (Steelman and Hajjar 2009) is adopted to describe their
hysteretic behavior.
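As an illustration of how such a backbone curve can be evaluated in code, the sketch below implements a generic tri-linear story shear-drift relationship; the stiffness ratios and drift limits in the example are illustrative assumptions, not values from the MCS model or the HAZUS database.

    def trilinear_backbone(drift, k0, d_yield, d_peak, alpha1=0.1, alpha2=-0.05):
        """Story shear for a given inter-story drift (tri-linear backbone).
        k0: initial stiffness; d_yield, d_peak: drifts at yield and peak strength;
        alpha1, alpha2: post-yield (hardening) and post-peak (softening) ratios."""
        sign = 1.0 if drift >= 0.0 else -1.0
        d = abs(drift)
        if d <= d_yield:                      # elastic branch
            shear = k0 * d
        elif d <= d_peak:                     # hardening branch
            shear = k0 * d_yield + alpha1 * k0 * (d - d_yield)
        else:                                 # softening branch
            shear = (k0 * d_yield + alpha1 * k0 * (d_peak - d_yield)
                     + alpha2 * k0 * (d - d_peak))
        return sign * shear

    # Example: story with k0 = 2e5 kN/m, yielding at 15 mm and peaking at 60 mm drift
    print(trilinear_backbone(0.020, 2e5, 0.015, 0.060))   # 3100.0 (kN)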
Model Validation
A cyclic pushover analysis and a nonlinear THA for a six-story RC frame
building are performed to validate the MCS model (Xu et al. 2014). The analysis
results (e.g., the time histories of the lateral hysteretic behavior and the top
displacement) of the MCS model are compared with those of a refined FE model for
the same building, as shown in Figure 2 (Xu et al. 2014). The comparison shows a
good agreement between the MCS model and the refined FE model.
(a) Comparison of the lateral hysteretic properties between the refined FE model and
the MCS model on the bottom story
(b) Comparison of the top displacement versus time histories between MCS model
and the refined FE model
Figure 2. Validation of the MCS model (Xu et al. 2014)
textboxes of the basic building information are filled in, as shown in Figure 5, the
inter-story hysteretic parameters as well as the criteria of story drift ratio representing
different damage states will be automatically calculated by the program. These
automatically generated parameters may be modified manually. When the parameters
are changed, the corresponding hysteretic curve shown in the program dialogue
(Figure 5) will be updated accordingly, which clearly illustrates how the hysteretic
curve is influenced by these parameters. Furthermore, the dynamic properties of the
building (e.g., the free vibration frequencies and modes) are also calculated
automatically by the program to assist in determining whether the parameters of the
building are correct or not. For example, if the fundamental period of a three-story
masonry structure is greater than 1 s, there may be something wrong with the
parameters of the building because this type of building generally cannot be so
flexible.
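The dynamic-property check described above amounts to a small eigenvalue problem. The sketch below, with assumed story masses and stiffnesses rather than the program's data, computes the fundamental period of a three-story concentrated-mass shear model.

    import numpy as np
    from scipy.linalg import eigh

    m = np.diag([2.0e5, 2.0e5, 1.5e5])     # assumed story masses (kg)
    k = np.array([3.0e8, 2.5e8, 2.0e8])    # assumed story shear stiffnesses (N/m)

    # Tridiagonal stiffness matrix of a shear building
    K = np.zeros((3, 3))
    for i in range(3):
        K[i, i] = k[i] + (k[i + 1] if i < 2 else 0.0)
        if i < 2:
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]

    w2, modes = eigh(K, m)                 # solves K * phi = w^2 * M * phi
    T1 = 2.0 * np.pi / np.sqrt(w2[0])      # fundamental period (s)
    print(f"T1 = {T1:.2f} s")              # a 3-story masonry model with T1 > 1 s is suspect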
Figure 4. 3D map of the Tsinghua University campus (source: http://map.tsinghua.edu.cn/3d/)
Figure 5. The parameters of the MCS model for an individual building
Selection of Ground Motions
In the module for the selection of ground motions (Figure 6), several widely
used ground motion records (e.g., the El Centro record of Imperial Valley, California,
1940) are readily provided in the program. In addition, user-defined ground motions
can be added into the program by providing the file paths of the corresponding
ground motion records. The ground motions and their acceleration response spectra
can be drawn in different colors so that users can clearly select and compare different
ground motions.
Figure 6. Visual selector for seismic waves
CONCLUSIONS
To balance the efficiency and accuracy of the regional seismic damage
simulation for buildings, a moderate-fidelity numerical model for building structures
and the associated parameter determination approach based on the HAZUS
performance database are proposed. A numerical model for each building can be
easily constructed if five basic building properties are available. The seismic
response of buildings is simulated through nonlinear THA, and the displacement
time-history and damage states of each story can be observed in a visualization
module, which provides more detailed information than the results using SDOF
models. The case study shows that a 40 s seismic response simulation for the 619
buildings of the Tsinghua campus can be completed in 15 s on a desktop computer,
so this method can serve as a useful tool for the quick evaluation of regional seismic
damage.
ACKNOWLEDGEMENTS
The authors are grateful for the financial support received from the National
Key Technology R&D Program (No. 2013BAJ08B02) and the National Natural
Science Foundation of China (No. 51178249).
REFERENCES
Abstract
Discrete event simulation (DES) is widely regarded as an effective tool for
modeling, analyzing and establishing the correct design of construction operations,
including the proper quantity and sequencing of resources within the context of a
selected field construction method. However, a gap exists in current construction
operation studies with respect to human factors, specifically, workers’ physical
aspects. Construction workers, one of the most important resources in construction
operation, have limited physical capabilities due to fatigue depending on individual-
and task-related factors. However, little attention has been paid to fatigue in DES
studies in construction, despite its significant impacts on construction performance.
To understand dynamic impacts of workers’ physical constraints on construction
performance, we propose worker-oriented construction operation simulation
integrating a fatigue model into discrete event simulation. Specifically, workers’
activities during construction operations are modeled using DES at the elemental task
level. Then, physical demands from the operation are estimated through
biomechanical analysis on elemental tasks. Fatigue, which refers to diminished
physical capabilities under given physical demands, is predicted using a fatigue
model, and its impact is subsequently reflected in DES. As a preliminary study, we
tested the feasibility of the proposed approach on masonry work by varying crew
size. The results indicate that excessive physical demands beyond workers’
capabilities result in productivity losses. Ultimately, the proposed approach has the
potential to support decisions among alternatives for construction operation planning,
securing high productivity without compromising workers’ health.
INTRODUCTION
Discrete Event Simulation (DES) has been a useful technique for modeling
and analyzing construction operations, which helps to develop better project plans,
optimize resource usage, reduce costs and duration, or improve overall project
performance (Martinez and Ioannou 1999; AbouRizk 2010). For successful
applications of DES, it is essential to build accurate models which represent
construction operations (Shi and AbouRizk 1997). In particular, modeling of resources
and their state is one of the important elements for construction operation modeling
because resources are the predominant requirement for activities and activities also
affect the state of resources (Martinez and Ioannou 1999).
Resources in construction generally refer to materials, equipment and labor,
all of which have a set of constant attributes in DES, for example, the amount of
materials required for one cycle of an activity, or the working capacity of equipment or
labor. However, unlike other resources, there is significant variability in human
physical capabilities which are affected by workloads (Chaffin et al. 2006). For
example, the increased workload may result in reduced performance (e.g.,
productivity) due to fatigue (Keller 2002; Alvanchi et al. 2011). However, human
factors such as fatigue and their impact have been rarely addressed in DES (Perez et
al. 2014).
To address this issue, this paper proposes construction operation simulation
that reflects dynamic interactions between construction operations and human factors.
To represent these interactions, we combine a DES model with a biomechanical
model for estimating workloads from construction operations and a fatigue model for
estimating changes in workers’ physical capabilities, which in turn affect workers’
performance. This method is demonstrated through a pilot study on masonry work.
Based on the pilot study, we discuss the benefits of the proposed approach for
understanding how workers’ physical capabilities under given workloads affect
construction operations, and suggest the future direction of research.
modeling can be derived from a time and motion study (e.g. direct and continuous
observation of construction operations).
Once basic tasks are determined, a biomechanical model estimates physical
workloads by simulating the tasks. A biomechanical model estimates musculoskeletal
stresses required to perform a task (e.g., joint moments, muscle forces) as a function
of postures, external loads and anthropometric data (Chaffin et al. 2006).
Biomechanical models provide an effective means to understand physical workloads
during construction tasks (Seo et al. 2013; Seo et al. 2014). One example of
computerized biomechanical models is 3D Static Strength Prediction Program
(3DSSPP™) (Center for Ergonomics, University of Michigan 2011). The program is
applicable to worker motions in three dimensional space. For example, tasks can be
evaluated by breaking the activity down into a sequence of static postures and
analyzing each individual posture. Therefore, representative postures for each basic
task can be simulated in this program, estimating joint moments required to perform
the task. The results of biomechanical models for basic tasks are combined with the
sequence and duration data from the DES model, creating workloads required for
whole construction operations (e.g., work-rest time, repetitions) as shown in Figure 3.
F_cem(t) = MVC · exp( −(1/MVC) ∫₀ᵗ F_load(u) du )                (1)
• MVC: maximum voluntary contraction (maximum capacity of muscle)
• F_cem(t): current exertable maximum force (current muscle strength)
• F_load(t): force required for the task (e.g., workload)
• t: current time (seconds)
The equation indicates that the current capacity of muscle strength can be
determined by the negative exponential function of cumulative workloads.
Maximum capacity of muscle varies depending on individuals, and thus we assumed the
value of muscle strength for 50th percentile male population. However, it can be
adjusted based on workers’ strength tests.
However, one of the limitations of Ma et al. (2009)’s model is that the model
does not consider the recovery process during rest time. Thus, we suggest a recovery
model that estimates the amount of current exertable maximum force recovered
during non-working time based on the physiological recovery rate (See Eq. 2).
Previous studies on recovery rate of muscle strength identified that 5% to 20% of
muscle strength was recovered per one minute (Kuorinka 1988; Shin and Kim 2007).
We assumed 5% of recovery rate per one minute during non-working time.
F_cem(b) = min( MVC, F_cem(a) + R · MVC · (b − a) )              (2)
• F_cem(a): current exertable maximum force at start time a of non-working time
• F_cem(b): current exertable maximum force at finish time b of non-working time
• R: recovery rate of muscle strength (5% of MVC per minute assumed; b − a in minutes)
According to the workloads, the current exertable maximum forces are
calculated using the fatigue generation or recovery model. The workloads beyond the
current exertable maximum forces indicate that potential performance issues may
occur due to fatigue.
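The following sketch steps the fatigue and recovery relations above through a simple work-rest sequence. It adds an explicit fatigability constant k (the 1/min value used by Ma et al. (2009)); that constant and the task profile are illustrative assumptions rather than values from this paper.

    import math

    MVC = 100.0          # maximum voluntary contraction, in %MVC units
    k = 1.0 / 60.0       # assumed fatigability constant: 1/min (Ma et al. 2009), per second
    R = 0.05 / 60.0      # recovery rate: 5% of MVC per minute, expressed per second

    def step_capacity(f_cem, load, dt):
        """Advance the current exertable maximum force over one step dt (seconds)."""
        if load > 0.0:                                 # working: exponential fatigue decay
            return f_cem * math.exp(-k * load / MVC * dt)
        return min(MVC, f_cem + R * MVC * dt)          # resting: linear recovery toward MVC

    f_cem, dt = MVC, 1.0
    schedule = [(600, 40.0), (120, 0.0), (600, 40.0)]  # (duration in s, demand in %MVC)
    for duration, load in schedule:
        for _ in range(int(duration / dt)):
            f_cem = step_capacity(f_cem, load, dt)
    print(f"capacity after schedule: {f_cem:.1f} %MVC") # capacity below demand flags fatigue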
PILOT STUDY
A pilot study was performed for masonry work to demonstrate the feasibility
and potential of the proposed approach. The DES model was built based on a time
and motion study by field observations (Figure 4). Figure 4(b) shows basic tasks and
their sequence for block laying tasks. While three masons lay blocks on the wall
repeatedly, one helper moves mortar and blocks near the wall. It took about 54
minutes to build the concrete block wall with 7 courses. Durations for basic tasks are
determined by average observed durations during repetitions.
minutes of idle time were found from the observation, which was not considered in
the model. Without idle time, the model represented this masonry work well,
showing a 4% difference in working time.
Using the workloads in Table 1, physical capabilities are estimated using the
fatigue model as shown in Figure 6. The blue line means forces required for the task
(%MVC) while the red line indicates current exertable maximum forces (%MVC)
(i.e., current muscle strength). As workers perform the tasks, the current exertable
maximum forces decrease due to muscle fatigue. During non-working time, the
current exertable forces are recovered according to recovery rate. The result indicates
that the workloads of masons for this work are appropriate without any fatigue issues.
However, the helper may experience fatigue failure before finishing the work (about
at 25 minutes). In the short term, fatigue failure can directly result in performance
loss when the helper needs to take a rest to recover from fatigue. When fatigue
failure is repeated without enough recovery, it may lead to work-related
musculoskeletal disorders such as strain, myalgia, or tendonitis in the long term.
ACKNOWLEDGEMENT
The work presented in this paper was supported financially with a National
Science Foundation Award (No. CMMI-1161123). Any opinions, findings, and
conclusions or recommendations expressed in this paper are those of the authors and
do not necessarily reflect the views of the National Science Foundation.
REFERENCES
AbouRizk, S. (2010). Role of simulation in construction engineering and management.
Journal of Construction Engineering and Management, 136(10), 1140-1153.
Alvanchi, A., Lee, S., & AbouRizk, S. (2011). Dynamics of working hours in construction.
Journal of Construction Engineering and Management, 138(1), 66-77.
Armstrong, T. J., Buckle, P., Fine, L. J., Hagberg, M., Jonsson, B., Kilbom, A., ... & Viikari-
Juntura, E. R. (1993). A conceptual model for work-related neck and upper-limb
musculoskeletal disorders. Scandinavian journal of work, environment & health, 73-
84.
Center for Ergonomics, University of Michigan (2011). 3D Static Strength Prediction
Program: User’s Manual. University of Michigan, MI.
Chaffin, D. B., Andersson, G., & Martin, B. J. (2006). Occupational biomechanics. New York:
Wiley.
Cooper, R., Kuh, D., & Hardy, R. (2010). Objectively measured physical capability levels
and mortality: systematic review and meta-analysis. Bmj, 341.
Durand, M. J., Vézina, N., Baril, R., Loisel, P., Richard, M. C., & Ngomo, S. (2009). Margin
of manoeuvre indicators in the workplace during the rehabilitation process: a
qualitative analysis. Journal of occupational rehabilitation, 19(2), 194-202.
Everett, J. G., & Slocum, A. H. (1994). Automation and robotics opportunities: construction
versus manufacturing. Journal of construction engineering and management, 120(2),
443-452.
Halpin, D. W., & Woodhead, R. W. (1976). Design of construction and process operations.
John Wiley & Sons, Inc.
Keller, J. (2002). Human performance modeling for discrete-event simulation: workload. In
Simulation Conference, 2002. Proceedings of the Winter (Vol. 1, pp. 157-162). IEEE.
Kuorinka, I. (1988). Restitution of EMG spectrum after muscular fatigue. European journal
of applied physiology and occupational physiology, 57(3), 311-315.
Ma, L., Chablat, D., Bennis, F., & Zhang, W. (2009). A new simple dynamic muscle fatigue
model and its validation. International Journal of Industrial Ergonomics, 39(1), 211-
220.
Martinez, J. C. (2001, December). EZStrobe: general-purpose simulation system based on
activity cycle diagrams. In Proceedings of the 33rd conference on Winter simulation
(pp. 1556-1564). IEEE Computer Society.
Martinez, J. C., & Ioannou, P. G. (1999). General-purpose systems for effective construction
simulation. Journal of construction engineering and management, 125(4), 265-276.
McGill, S. M. (1997). The biomechanics of low back injury: implications on current practice
in industry and the clinic. Journal of biomechanics, 30(5), 465-475.
Perez, J., de Looze, M. P., Bosch, T., & Neumann, W. P. (2014). Discrete event simulation as
an ergonomic tool to predict workload exposures during systems design.
International Journal of Industrial Ergonomics, 44(2), 298-306.
Seo, J., Han, S., Lee, S., and Armstrong, T. J. (2013). “Motion-Data-Driven Unsafe Pose
Identification through Biomechanical Analysis.” Proceedings of 2013 ASCE
International Workshop on Computing in Civil Engineering, Los Angeles, CA.
Seo, J., Starbuck, R., Han, S., Lee, S., & Armstrong, T. J. (2014). Motion Data-Driven
Biomechanical Analysis during Construction Tasks on Sites. Journal of Computing in
Civil Engineering.
Shi, J., & AbouRizk, S. M. (1997). Resource-based modeling for construction simulation.
Journal of construction engineering and management, 123(1), 26-33.
Shin, H. J., & Kim, J. Y. (2007). Measurement of trunk muscle fatigue during dynamic lifting
and lowering as recovery time changes. International journal of industrial ergonomics,
37(6), 545-551.
Abstract
INTRODUCTION
The temperature increases that happened over the last three decades account
for two thirds of the change that occurred over the last 100 years (Committee on
America's Climate Choices 2011). Global warming has been observed as a trend.
Despite the dispute over the theory of global warming, 70% of Americans have
accepted it as a fact (Leiserowitz et al. 2011). Global warming could cause climate
changes, including temperature increases, climate pattern shifts, and extreme weather
events such as droughts, heavy rainfall, and excessive heat waves (Lu et al. 2007).
If global warming continues to develop at the currently observed magnitude, more
noticeable climate change would appear sooner. This will eventually affect human
activity, including industry productivity.
The construction industry is susceptible to the environment, since almost 50%
of the construction activities are subject to weather influences (Benjamin and
Greenwald 1973). Hot working environments have always been a concern for the
construction industry. Psychologically, unpleasant working conditions that arise from
hot environments can invoke workers’ apathy to work, and physiologically,
construction workers can suffer from heat stress or stroke (Koehn and Brown 1985).
The immediate effects of global warming are broad scales of temperature increases.
It is very likely that prolonged warmer seasons would appear. This implies that
construction workers will have more working days under hot environments, hence
decreased productivity. For contractors, productivity correlates with a project's
profitability, while for owners, productivity determines the final cost of a project.
Therefore, understanding the impact of global warming on construction labor
productivity is instrumental for anticipating the challenge that the industry will face.
To accomplish this objective, this study aims to develop a
framework that is able to assess the impact of global warming at a project level.
SCOPE OF RESEARCH
RESEARCH METHODS
Framework
The framework integrates BIM with unit rate productivity data, temperature
and humidity projection data, CPM schedules, and labor productivity models. Figure
1 describes the process and information flow of the developed framework.
Throughout this process, BIM shows its advantages in 1) rich information, 2)
versatility in interacting with a back-end database, 3) enabled automated-data
processing, and 4) 4D schedule analyses (not presented in paper due to page limits).
The first step in the framework is to assign the required workhour information
to individual steel members, including beam, column, bracing, and miscellaneous
steel in three weight classes (light, medium, and extra heavy). In this study, this
process was carried out in ConstructSimTM, one of Bentley's BIM software suites,
which cross-referenced the steel members’ basic attributes (such as size, function,
length, weight class) in the design model and unit rate productivity table through
structured query language (SQL) relational database tables to compute the workhour
information for individual steel members. The unit rate labor productivity was
obtained from Richardson's Construction Estimating Standards (Richardson 2013).
The labor productivity herein is defined as work hours per unit of installed quantity;
therefore a lower number is better. According to Richardson’s description of
environments where the unit rate productivity was collected, the productivity rate
should be deemed as the optimal baseline productivity without negative effects of
temperature and humidity.
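The cross-referencing step can be pictured as a relational join. The sketch below is a minimal stand-in for that logic (hypothetical table and column names and illustrative unit rates, not ConstructSim's actual schema):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE members (id INTEGER, function TEXT, weight_class TEXT, qty_tons REAL);
        CREATE TABLE unit_rates (function TEXT, weight_class TEXT, wh_per_ton REAL);
        INSERT INTO members VALUES (1, 'beam', 'light', 0.8), (2, 'column', 'medium', 1.5);
        INSERT INTO unit_rates VALUES ('beam', 'light', 9.5), ('column', 'medium', 7.2);
    """)
    rows = con.execute("""
        SELECT m.id, ROUND(m.qty_tons * r.wh_per_ton, 2) AS workhours
        FROM members AS m
        JOIN unit_rates AS r
          ON m.function = r.function AND m.weight_class = r.weight_class
    """).fetchall()
    print(rows)   # [(1, 7.6), (2, 10.8)]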
Second, a critical path method baseline schedule is developed. A CPM
schedule can be developed based on the logic sequence of activities and available
labor resources. The task units of the schedule for this study are work packages.
Since each work package has the information of total workhours and allocated labor
resources, the duration for each work package can be calculated. By referring to the
sequence of the activities, a baseline schedule can be created. The scheduling process
is an integrated BIM-based process.
Third, the temperature and humidity’s impact on productivity throughout the
course of construction is assessed through a simulation process. The detailed
description of the simulation process is documented in the author’s previous work
(Shan and Goodrum 2014). The simulation utilizes a model developed by Koehn and
Brown (1985) that describes the relationship between labor productivity and
temperature and humidity. The relationship between the productivity factor (see table
footnote for definition) and temperature and humidity is tabulated in Table 1. The
simulation interface developed in this study takes the computed productivity factor
and labor resources and adjusts the baseline schedule to reflect the productivity
impact due to temperature and humidity. Each simulation generates durations and
workhours required to build the project.
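At its core, each simulation run scales a work package's baseline duration by the productivity factor in effect. A minimal sketch of that adjustment, with illustrative numbers rather than the Koehn and Brown (1985) table values:

    def adjusted_duration(baseline_workhours, crew_size, productivity_factor,
                          hours_per_day=8.0):
        """Duration in days when only a fraction of baseline output is achieved."""
        effective_rate = crew_size * hours_per_day * productivity_factor
        return baseline_workhours / effective_rate

    # e.g., a 4,000-workhour package, 20 workers, 85% productivity in a hot month
    print(f"{adjusted_duration(4000, 20, 0.85):.1f} days")   # ~29.4 days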
Model Project
This study used a structural steel model of the first phase of the University of
Kentucky’s Albert B. Chandler Hospital Pavilion project as the prototypical model to
develop the described framework. The project's construction started in 2009 and
completed in spring 2011, with a total area of 1.2 million square feet and a total cost
of $532 million. The building has a five-story podium plus one story of basement
constructed with reinforced concrete and two eight-story towers built with structural
steel. The baseline workhours for the structural steel erection are 54,338 workhours.
that operates in a sustainable way but without extra climate initiatives, having lower
population growth, rapid economic structure change toward service and IT. The
detailed descriptions and corresponding parameters can be found in IPCC's Fourth
Assessment Report (AR4) (Pachauri and Reisinger 2007).
Climate Model Selection. To control the scope of this research, two out of the
26 climate models archived by the Coupled Model Intercomparison Project Phase 3
(CMIP3) from various climate research centers around the world were picked to
obtain the climate projection data. This study used National Oceanic and Atmospheric
Administration's (NOAA's) Climate Model CM2.1 and Hadley Center’s Climate
Prediction's Coupled Model Version 3 (HADCM3). Both models can be considered
as robust models and are widely cited by a great number of research studies in
different areas (Abolhasani et al. 2008; Kjellstrom et al. 2009; Lewis et al. 2009). In
addition, the use of two models instead of relying on one model allows the
researchers to perform comparisons and cross-validation of the results.
Selected Locations for Study. Since the developed framework is suitable for
project-level simulation, project-location-specific climate data are needed. The
research effort selected a list of major cities around the world. This selection gave
special consideration to the countries that are most vulnerable to global warming,
including countries from Africa and Asia (EPA 2012), and BRIC (Brazil, Russia,
India, and China) countries that will experience the largest economic growth in the
21st century (Global Sherpa 2013). Finally, we selected Khartoum, Sudan; Delhi,
India; Melbourne, Australia; Brasilia, Brazil; Chongqing, China; Moscow, Russia;
and we also included a domestic city, Houston, Texas.
Climate Data Collection. All the projection data for scenarios A1B, A2, and
B1 between years 2090 to 2099 used for this study were obtained from the World
Climate Research Program (WCRP 2014) CMIP3's data portal. Historical
temperature and humidity data for the selected cities from 2001 to 2010 were
collected from Weather Underground (WU 2014) and were used as current climate
scenarios. Monthly average temperature and humidity data were collected for both
current and future scenarios.
Simulation of Temperature and Humidity Impact on Schedule
The simulation interface described earlier was used to simulate the
temperature and humidity effect on labor productivity and automatically adjust the
CPM schedule. By inputting the temperature and humidity data and manpower
information and setting project start dates for simulation, the interface can generate
start and finish dates for each task as well as the duration, and the man-hours required
for each task and the entire model project. For this study, we set the simulation
starting dates on the first day of each quarter of a year, i.e. January 1st, April 1st, July
1st, and October 1st, for each project location. This allows the model project to have
an equal probability of exposure to different temperature and humidity scenarios due
to seasonal changes.
RESULTS
Table 2 illustrates the average workhours required to build the model project
under future and current climate scenarios across the seven cities using two climate
models and shows the mean difference in workhours and statistical significance.
Table 2. Average workhours to build the model project under future (A1B, A2, B1) and current climate scenarios, with mean differences and p-values

Model HADCM3
City        A1B      A2       B1       Current   A1B-Curr   p        A2-Curr   p        B1-Curr   p
Houston     60,349   60,456   59,052   56,196    4,153      0.001    4,259     0.001    2,855     0.001
Melbourne   54,338   54,338   54,338   54,504    (166)      0.001    (166)     0.001    (166)     0.001
Brasilia    57,043   56,942   56,359   54,634    2,409      0.001    2,308     0.001    1,725     0.001
Chongqing   55,275   55,221   55,231   55,615    (340)      0.025    (394)     0.014    (384)     0.001
Moscow      60,885   62,595   63,692   60,972    (86)       0.940a   1,623     0.272a   2,720     0.064a

Model NOAA CM2.1
Khartoum    60,638   62,324   58,709   57,852    2,786      0.001    4,473     0.001    858       0.063
Delhi       60,960   61,908   58,968   57,600    3,361      0.001    4,309     0.001    1,369     0.030
Houston     56,703   57,077   56,122   56,196    507        0.103a   881       0.010    (75)      0.779a
Melbourne   54,338   54,341   54,338   54,504    (166)      0.001    (163)     0.001    (166)     0.001
Brasilia    55,697   56,129   55,189   54,634    1,063      0.001    1,495     0.001    555       0.001
Chongqing   55,342   55,480   55,456   55,615    (273)      0.065a   (134)     0.309a   (158)     0.297a
Moscow      58,437   58,296   60,807   60,972    (2,535)    0.012    (2,676)   0.014    (165)     0.889a
a. Result is not statistically significant at the 95% confidence level
(a) Model HADCM3; (b) Model NOAA CM2.1
Figure 2. Mean Workhour Difference in Percentage between Future and
Current Conditions
growth in workhours required to build the model project. Melbourne and Chongqing
might not experience much difference. Climate model differences also accounted for
the discrepancy in simulation results. The largest discrepancy was observed for
Houston. With the projection data of the Model HADCM3, Houston would require
5.1 to 7.4% more man-hours to build the model project by the 2090s compared to
current conditions. As opposed to the case of using projections of Model NOAA
CM2.1, Houston could experience a change from -0.1 to 1.57% depending on the
emission scenario. The opposite trend was observed for Moscow as a result of using
two different climate models. For the HADCM3 model, on average the workhours
required to build the model project decreased by 0.1% under A1B scenario, and
increased by 2.7% under A2 and 4.5% under B1 scenario. However, those
differences are not statistically significant. Regarding the Model NOAA CM2.1,
Moscow could experience productivity gain, ranging from 0.3 to 4.4%; however, the
result with emission scenario B1 is not statistically significant at the 95% confidence
level.
CONCLUSIONS AND RECOMMENDATIONS
REFERENCES
Abolhasani, S., Frey, H. C., Kim, K., Rasdorf, W., Lewis, P., and Pang, S.-h. (2008).
"Real-world in-use activity, fuel use, and emissions for nonroad construction
vehicles: a case study for excavators." Journal of the Air & Waste
Management Association, 58(8), 1033-1046.
Benjamin, N. B. H., and Greenwald, T. W. (1973). "Simulating Effects of Weather on
Construction." Journal of the Construction Division, 99(1), 175-190.
Committee on America's Climate Choices (2011). America's Climate Choices,
National Academy Press.
Environmental Protection Agency (2012). "International Impacts & Adaptation:
Climate Change." <http://www.epa.gov/climatechange/impacts-
adaptation/international.html>. (Mar. 22, 2013).
Global Sherpa (2013). "BRIC Countries – Background, Latest News, Statistics and
Original Articles." <http://www.globalsherpa.org/bric-countries-brics>. (April
8th, 2013).
Kjellstrom, T., Kovats, R. S., Lloyd, S. J., Holt, T., and Tol, R. S. (2009). "The direct
impact of climate change on regional labor productivity." Archives of
Environmental & Occupational Health, 64(4), 217-227.
Koehn, E., and Brown, G. (1985). "Climatic Effects on Construction." Journal of
Construction Engineering and Management, 111(2), 129-137.
Leiserowitz, A., Maibach, E., Roser-Renouf, C., and Smith, N. (2011). "Climate
change in the American mind: Americans’ global warming beliefs and
attitudes in May 2011." Yale University and George Mason University. New
Haven, CT: Yale Project on Climate Change Communication. Accessed
December, 9, 2012.
Lewis, P., Rasdorf, W., Frey, H. C., Pang, S.-H., and Kim, K. (2009). "Requirements
and incentives for reducing construction vehicle emissions and comparison of
nonroad diesel engine emissions data sources." Journal of Construction
Engineering and management, 135(5), 341-351.
Lu, J., Vecchi, G. A., and Reichler, T. (2007). "Expansion of the Hadley cell under
global warming." Geophysical Research Letters, 34(6), L06805.
National Electrical Contractors Association (1974). "The Effect of Temperature on
Productivity." National Electrical Contractors Association, Inc., Washington,
D.C., 1974.
Pachauri, R., and Reisinger, A. (2007). "IPCC fourth assessment report." IPCC
Fourth Assessment Report.
Richardson (2013). "Richardson Construction Estimating Standards."
<http://www.costdataonline.com/Richardson.htm>. (Mar. 2nd, 2013).
Shan, Y., and Goodrum, P. (2014). "Integration of Building Information Modeling
and Critical Path Method Schedules to Simulate the Impact of Temperature
and Humidity at the Project Level." Buildings, 4(3), 295-319.
World Climate Research Program (2014). "WCRP CMIP Multi-Model Data,"
<https://esgcet.llnl.gov:8443/index.jsp>. (Dec. 8, 2014).
Weather Underground (2014). "About Our Data."
<http://www.wunderground.com/about/data.asp>. (Dec. 6, 2014).
Abstract
INTRODUCTION
BACKGROUND
Design Progression
Construction Options
Design Process
The technological challenge has been overcome as BIM has been adopted in
the AEC industry. This technology, as one of the integrated design methods, relies on
a data-rich, object-oriented modeling technique and allows building product
information to be easily shared among project stakeholders, supporting more reliable
decisions and enhancing the project value delivered (Smith, 2007). It also opened the
opportunity for automated, rule-based constructability checking of a design model to
push construction information towards early design thinking (Jiang and Leicht, 2014).
A preliminary experimental review of an educational facility design has shown that
the structural design model can be automatically evaluated against significant
construction constraints, such as construction method and formwork system selection.
Building on previous efforts, the current work examines the interdependencies
between product design information and construction information by aligning
construction information with associated design attributes to support the automated
constructability checking.
METHODOLOGY
other structural materials, CIP concrete is often delivered to the construction site in an
unfinished state and subsequently formed into the desired shape. Thus, concrete
designers and contractors have ultimate control in determining how a structure is
formed and built. Considering that formwork cost accounts for 40%-60% of the
structural frame cost (Hanna, 1998), the constructability of a CIP concrete building
design is fairly important.
In the case study, an on-going CIP concrete building project was selected as
the case project, enabling easier access to project resources. Both the structural
engineer for record and the project manager for concrete construction were
interviewed, to capture the structural design process with information progression and
the process of narrowing down the construction options based on design
configurations and construction constraints. The hierarchical model of product
information for the case project is then presented and aligned with the formwork
options for product realization, indicating opportunities of using associated modeling
information to perform an automated constructability review.
CASE STUDY
Case Study Project. The case project is a new educational facility at Coppin
State University in Baltimore, MD. It is a four-story building plus a basement and a
penthouse, with an approximate total area of 135,000 ft2. Typical spread footings and
foundation walls are structurally designed to support the four-story CIP concrete
superstructure. The penthouse instead uses a braced structural steel frame with metal
decking. The contractor was awarded a GMP (guaranteed maximum
price) contract to construct the building, indicating the importance of constructability
and construction planning in this project. BIM was leveraged for both structural
engineering and construction management.
information of structural components and elements was developed into more detail
as the design moved to later stages (Figure 2).
Figure 2. Narrowing down construction options by aligning design information with formwork decision-making
The lower part of the diagram shows the formwork options to build the project
(Figure 2). Compared with the expanding pattern of design information, the
construction options are reduced based on available design information. For example,
as the type of floor system is determined, the initial ten available horizontal formwork
options would be narrowed to four feasible options; formwork systems, which are
used for other system types such as one-way joist slab and waffle slab, are ruled out
at this point (Figure 2). As the system is designed with more detailed component
information, such as slab depth and column spacing at DD phase, contractors can
calculate and plan the formwork layout, labor and equipment use, and construction
cost accordingly (Hurd, 2005), in order to determine the more cost-efficient approach.
In this case a conventional aluminum formwork system, which consists of plywood
panels, aluminum beams, and a scaffolding-type shoring system, was selected for
the case project (Figure 2). In this way, the construction-related design information is
identified and aligned with the decision-making of construction means and methods.
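Rule-based narrowing of this kind is straightforward to automate once design attributes are machine-readable. The sketch below (illustrative system names and applicability rules, not the case project's catalogue) filters horizontal formwork options by the designed floor system:

    FORMWORK_OPTIONS = [
        {"name": "Conventional metal system (aluminum)", "floor_systems": {"two-way flat slab"}},
        {"name": "Conventional metal system (steel)",    "floor_systems": {"two-way flat slab"}},
        {"name": "Wood forming",                         "floor_systems": {"two-way flat slab"}},
        {"name": "Flying form",                          "floor_systems": {"two-way flat slab"}},
        {"name": "Pan/dome forming",                     "floor_systems": {"one-way joist", "waffle slab"}},
        {"name": "Joist-slab forming",                   "floor_systems": {"one-way joist"}},
    ]

    def feasible_options(floor_system):
        """Rule out formwork systems that do not apply to the designed floor system."""
        return [o["name"] for o in FORMWORK_OPTIONS if floor_system in o["floor_systems"]]

    print(feasible_options("two-way flat slab"))   # four of the six options remain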
DISCUSSION
designers refine the system configurations and improve the constructability of the
design. The mapping of the interdependencies between design and construction
information (Figure 2) helps to understand the integrated design constraints, leading
to thinking not only about “what” to design but also about “how” to realize it
throughout the design and construction process, and facilitating decision-making on a
“best for project” basis ("Integrated Project Delivery", 2007).
CONCLUSION
REFERENCES
Abstract
INTRODUCTION
product (GDP) (Chong et al., 2003). In the United States, the Federal Highway
Administration estimates the value of U.S. transportation infrastructure to be on the
order of trillions of dollars (FHWA 2013). Visual inspection is the predominant
method for evaluating the structural integrity of almost all infrastructure systems, and
often results in subjective and highly variable inspection reports (Phares et al. 2004).
Making the inspection process more effective with regard to the type and quality of
data collected is a major research need.
Modern remote sensing technologies have enabled the generation of accurate
3D geometric models that can be used to augment and improve the inspection process
by reconstructing the inspection environment as a photorealistic virtual model. While
there are many methods for creating such models, terrestrial laser scanner (TLS) is
the most widely used. Photogrammetric reconstruction via the Dense Structure-from-
Motion (DSfM) algorithm, a process that converts 2D images into 3D point clouds, is
often considered a low cost alternative to TLS. In conventional DSfM, a naïve
reconstruction is performed in which all 2D images are processed simultaneously.
Many variables affect DSfM model accuracy, including camera imaging parameters,
camera placement, scene complexity, surface texture, environmental conditions and
scene size. Another limitation of DSfM is the relatively high amount of noise along
poorly textured and flat surfaces (Musialski et al. 2013).
The field of research on the use of these technologies for infrastructure
inspection is growing rapidly. Jáuregui et al. (2006) evaluated the feasibility of
photogrammetric techniques and determined that these methods provided more
accurate measurements of bridge geometry in comparison with traditional hand
measurements. In Riveiro et al. (2012), a photogrammetric methodology for
calculating the minimum bridge vertical underclearance was developed. Researchers
have also compared the capabilities of photogrammetric and TLS methods for 3D
reconstruction of civil infrastructure (Golparvar-Fard et al. 2011; Dai et al, 2012;
Riveiro et al. 2013; Klein et al. 2012). In these studies, TLS methods provided denser
and more accurate point clouds in comparison with conventional DSfM methods.
To date, neither TLS nor conventional DSfM has been shown to be capable of
producing point clouds accurate enough to resolve structural flaws on the order of
0.1mm, the minimum dimension of a hairline crack, while simultaneously capturing
the overall geometry and scale of a structure. The main objective of this study was to
develop a hierarchical, multi-stage DSfM reconstruction approach that improved
upon conventional DSfM methodologies to produce point clouds with the necessary
resolution for fine-scale infrastructure inspection. The accuracy, adaptability and
feasibility of the developed method were compared to both TLS and conventional
DSfM methods.
The following comparative criteria were used: (i) localized point cloud density,
(ii) model discrepancies via Nearest Neighbor point cloud comparison, and (iii) the
rendering of pre-defined and fine-scale inspection targets. Experimental findings
indicate that the developed hierarchical technique produces point clouds capable of
resolving 0.1mm details, an order of magnitude improvement over existing methods,
while also generating denser point clouds than either TLS or conventional DSfM.
METHODOLOGY
Imaging Network
Initial Dense Point Cloud: Initially, images from both the Geometry and
Intermediate networks are used to generate an initial dense point cloud. This point
cloud is not designed for representing high-resolution information. Therefore, image
resolution can be reduced to minimize the computational cost of dense point cloud
generation. A 25% reduction is the suggested maximum reduction. The resulting
cloud captures the overall geometry of the structure.
Individual Denser Point Clouds: High-resolution point clouds are then
generated for each individual High-fidelity network using the maximum resolution of
the images. For example in this study, the localized point cloud density of each of the
individually generated point clouds was at least 4 times larger than the initial dense
point cloud.
Global Registration and merging: After all point clouds are generated, the
individual clouds are merged using a combination of the Iterative Closest Point (ICP)
algorithm (Besl and McKay 1992) and Generalized Procrustes Analysis (GPA).
Embedding GPA in an ICP framework efficiently minimizes the alignment error and
provides a robust approach (Toldo et al. 2010). The result is a unique final point
cloud that represents the overall scale of the structure while maintaining high-fidelity
representations of fine details.
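As a rough illustration of the registration-and-merge step, the sketch below uses point-to-point ICP from the open-source Open3D library; it omits the GPA refinement that the authors embed in the ICP framework, and the file names are hypothetical.

    import numpy as np
    import open3d as o3d

    global_cloud = o3d.io.read_point_cloud("initial_dense_cloud.ply")
    detail_cloud = o3d.io.read_point_cloud("high_fidelity_patch.ply")

    result = o3d.pipelines.registration.registration_icp(
        detail_cloud, global_cloud,
        max_correspondence_distance=0.05,   # assumed 5 cm correspondence search radius
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

    detail_cloud.transform(result.transformation)   # align the patch to the global frame
    merged = global_cloud + detail_cloud            # keep high-fidelity detail in place
    o3d.io.write_point_cloud("merged.ply", merged)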
By utilizing this hierarchical reconstruction approach, the proposed method is
able to orient and match large datasets of images in the dense reconstruction
HPCG
Table 2 summarizes the results for the conducted experiments using TLS and
DSfM, respectively. Localized point cloud density was measured by computing the
number of points inside a 30 mm sphere centered on each point.
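This density metric can be reproduced with a k-d tree; a minimal sketch (random points standing in for a real cloud, coordinates in metres):

    import numpy as np
    from scipy.spatial import cKDTree

    points = np.random.rand(100_000, 3)    # placeholder point cloud
    tree = cKDTree(points)
    counts = tree.query_ball_point(points, r=0.030, return_length=True)
    print(f"mean local density: {counts.mean():.1f} points per 30 mm sphere")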
The comparative results illustrate the improvement in generating point clouds
using the HPCG method, even on datasets with challenging features. In the
Engineering Building experiment, possibly due to the non-conventional image
capture strategy, the PMVS method failed to reconstruct the corners of the façade. In
addition, the 3D point clouds based on TLS and PMVS contained excessive noise
near surfaces such as windows and steel framed doors due to object reflectivity or a
lack of surface texture. However, HPCG successfully captured the entire west façade
including building elements such as windows, doors, and façade corners.
Table 2. Details of resulting point clouds

                                          PMVS          TLS           HPCG
Engineering School Building (west façade)
  No. of images/scans                     232           5             232
  No. of points                           964,131       32,304,139    74,097,645
  Avg. local point density                6x10^4        1x10^6        1.7x10^6
Timber Bridge
  No. of images/scans                     646           7             646
  No. of points                           13,124,457    52,527,209    111,610,341
  Avg. local point density                2.1x10^6      5.4x10^6      7.2x10^6
Steel Sculpture
  No. of images/scans                     656           6             656
  No. of points                           16,939,806    1,755,472     123,072,551
  Avg. local point density                2.7x10^7      1.7x10^6      4.5x10^8
With regards to the timber bridge, a lack of adequate point density near
critical details such as fungus decay, crushing, delamination, and loose connections
was observed in the TLS and PMVS based point clouds. These details were resolved
in HPCG point clouds. Similar results were observed for the steel sculpture.
Localized point cloud density for the Engineering building and timber bridge
are compared in Figure 1 and Figure 2. Figure 1 illustrates the significantly higher
point cloud density available through the HPCG approach. For the timber bridge, TLS
provided a more homogeneous point cloud density, but provided an overall less dense
cloud. The HPCG method created a model with high-fidelity details and local point
density up to 10^7 points near critical locations.
Figure 1 Local point density for Engineering Building. Low to high density ranges from
blue to red (from left to right: PMVS, TLS and HPCG).
Figure 2 Local point density for timber bridge. Low to high density ranges from blue to
red (from left to right: PMVS, TLS and HPCG).
In order to compare the HPCG method to conventional DSfM algorithms, the
deviations between image-based point clouds and TLS-based models were calculated.
While TLS data is not an absolute reference, it provided a standardized reference for
relative comparisons. The highly dense point clouds allowed deviations from the TLS
models to be measured using Nearest Neighbor Distance (NND). Figures 3 and 4
show the cloud to cloud distances for the Engineering Building and timber bridge
tests. The results for the steel sculpture were similar. Overall, the HPCG method
indicated a higher level of agreement with the TLS models when compared against
the PMVS algorithm.
Figure 3 Comparison between the TLS and photogrammetric point clouds. Low to high
deviations range from blue to red (left: PMVS, right: HPCG)
Figure 4 Comparison between the TLS and photogrammetric point clouds. Low to high
deviations range from blue to red (left: PMVS, right: HPCG)
Figure 5 Snapshot of the rendered point cloud at the location of the crack gage placed
on the steel sculpture (left: PMVS, right: HPCG)
CONCLUSIONS
ACKNOWLEDGEMENTS
This material is based upon the work supported by the National Science
Foundation under Grant No. CMMI-1433765. The authors would also like to
acknowledge the support made by NVIDIA Corporation with the donation of a Tesla
K40 GPU used in this research. Any opinions, findings, and conclusions or
recommendations expressed in this publication are those of the authors and do not
necessarily reflect the views of the NSF or NVIDIA.
REFERENCES
Besl, Paul J.; N.D. McKay (1992). "A Method for Registration of 3-D Shapes". IEEE
Trans. on Pattern Analysis and Machine Intelligence (Los Alamitos, CA, USA:
IEEE Computer Society) 14 (2): 239–256.
Chong, K. P., Carino, N. J., & Washer, G. (2003). “Health monitoring of civil
infrastructures.” Smart Materials and structures, 12(3), 483.
Dai, F., Rashidi, A., Brilakis, I., & Vela, P. (2012). “Comparison of image-based and
time-of-flight-based technologies for three-dimensional reconstruction of
Abstract
The AEC/FM industry has benefited from the innovative integration of information
technologies and industry-wide processes in different lifecycle stages of facilities. Building
Information Modeling (BIM), as one of these innovations, is fast becoming a key approach to
virtually integrate the required information for facility design, construction, and management. So
far, applications and benefits of using BIM tools and processes in building design and
construction have been documented in research. However, landscape design and construction
practice is underrated in current BIM developments and in integrated design-construction
practices, and it has not benefited from the advantages BIM provides to the industry at different
scales. This could result in a critical challenge, as BIM implementation and information
modeling are becoming mandatory in many projects in public and private sectors, and the gap
still exists in the processes of collaboration and information exchange between the landscape
design and construction practice and other disciplines. As an early step to mitigate this challenge,
this study shows that recent advances in BIM, COBie, information-exchange schemas (e.g. IFC),
and taxonomies such as OmniClass have shortcomings in addressing landscape and hardscape
elements and attributes. This challenge limits asset-management capabilities, and leads the
practice to inefficient operations, more manual processes, and costly knowledge development
and exchange. These findings have important implications for revising and updating existing
taxonomies to support more automated information development and exchange processes.
INTRODUCTION
The AEC industry has benefited from innovations in the integration of information
technologies and design/construction processes to mitigate the industry-wide challenges,
including low productivity, huge waste, and inefficiencies in facilities operation (Smith and
Tardif, 2009). Building Information Modeling (BIM), as one of these innovations, is fast
becoming a key approach to virtually integrate the required information for design, construction,
operation, and management of facilities (Aouad et al., 2011; Eastman et al., 2011). So far, many
researchers have documented applications and benefits of BIM tools and processes in building
design and construction (Eastman et al., 2011; Giel and Issa, 2013). However, landscape design
practice is underrated in current BIM implementations, and it does not benefit from the
advantages BIM has provided to the industry (Ahmad and Aliyu, 2012; Flohr, 2011; Nessel,
2013; Zajíčková and Achten, 2013). Nessel (2013) indicated that the landscape architecture
profession has acted passively towards information modeling, and it has a slow movement to
develop, improve, and adopt information modeling tools. Lu and Wang (2013) confirmed that
the industry has rarely used BIM for Landscape Information Modeling (LIM). Furthermore,
Ahmad and Aliyu (2012) highlighted that these challenges may “remove landscape architects
from the supply chain.” This could be an extensive gap in the industry-wide practice and
information exchange, as BIM implementation is becoming mandatory in many countries.
Moreover, Zajíčková and Achten (2013) reported that LIM would play an indispensable role in
the practice as it facilitates development of comprehensive information models, and integration
of BIM models, urban models, and landscape models for an inclusive built environment
simulation. Hence, the extension of BIM concepts to the landscape and hardscape design practice
is critical. Similarly as building design and construction practices, this could facilitate smooth
information exchange through information models, and enhance value-adding collaboration
among project participants from different disciplines and trades (Lu and Wang, 2013; Zajíčková
and Achten, 2013). However, Flohr (2011) stated that existing BIM platforms are incompatible
with the workflow of landscape architects, and they have drawbacks in parametric landscape
modeling. Consequently, there is a strong need in the industry for new developments in BIM-
related standards and platforms as well as laws and regulation to resolve the current challenges.
Although some research has brought up the need for LIM platforms in the AEC/FM
industry, there has been little theoretical or empirical research on developing comprehensive
schemas for defining required objects, information, and attributes for LIM. To fill this gap, this
paper intends to take a fundamental step to highlight the areas where existing industry taxonomy
systems and BIM interoperability standards (e.g. Industry Foundation Classes-IFC) have
shortcomings in supporting landscape and hardscape elements. In the second step, through case
study research, this paper reveals how best practices in capital facilities management deal with
these challenges in exchanging landscaping information. The findings will clarify the current
status of advances/shortcomings in landscape information modeling. The research presented in
this paper is expected to set the stage for improvements in LIM in both conceptual and
technological aspects.
The evolution in CAD tools and information technologies has resulted in two important
advances in the industry, including (1) parametric modeling and (2) object-based information
modeling (Hubers, 2010). The parametric aspect of BIM addresses quantifiable characteristics of
graphical entities in a model to control physical behaviors and relationships between
components. Model generation using algorithms and parameters, real-time auto-adjust features of
model elements, intelligent generation of views and documents, and defining and maintaining
physical (e.g. hosting) or virtual relationships are several parametric modeling capabilities
(Autodesk, 2005; Eastman et al., 2011). The object-based aspect of BIM goes beyond intelligent
objects, as it deals with attributes and information required for design, construction, and
management of facilities. Hence, the major concern in BIM implementation is to create and
exchange this information among different trades at different stages (Hubers, 2010).
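To make the parametric idea concrete, the following minimal sketch (illustrative only; the Wall and HostedWindow classes are invented and belong to no BIM platform) shows a hosted element whose position is derived from its host, so it auto-adjusts when the host changes:

```python
# Illustrative sketch (not from the paper): a hypothetical parametric
# relationship in which a hosted window keeps its offset ratio when the
# host wall is resized, mimicking the BIM auto-adjust behavior above.
from dataclasses import dataclass


@dataclass
class Wall:
    length: float  # meters


@dataclass
class HostedWindow:
    host: Wall
    offset_ratio: float  # position along the wall, 0.0-1.0

    @property
    def position(self) -> float:
        # The absolute position is derived, not stored, so it updates
        # automatically whenever the host wall changes.
        return self.host.length * self.offset_ratio


wall = Wall(length=8.0)
window = HostedWindow(host=wall, offset_ratio=0.25)
print(window.position)  # 2.0
wall.length = 12.0      # resize the host
print(window.position)  # 3.0 -- relationship maintained parametrically
```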
Industry Foundation Classes (IFC) is a schema developed to facilitate data exchange of
building models with smooth interoperability among BIM tools. By using a neutral exchange
format, IFC supports the concept of “Platform-to-Tool” exchange as the most promising type of
interoperability (Eastman et al., 2011). IFC consists of predefined concepts of object classes,
types, and information entities (properties). This information is not limited to geometric
properties; it addresses data required for design, analysis, construction, operation, and
management of the building components (Eastman et al., 2011; Jianping et al., 2014). Accuracy,
sufficiency, and reliability of information exchange are essential because poor-quality models
would negatively impact all project processes and facility operation stages (Crotty, 2012;
Eastman et al., 2011). Therefore, it is essential to study whether IFC, as a standard for
information exchange in the industry, supports landscape and hardscape elements or not.
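As a concrete illustration of such a check, the hedged sketch below assumes the open-source ifcopenshell library and a hypothetical IFC4 file named "campus.ifc"; it simply counts instances of the site-level entity types available for landscape content:

```python
# A minimal sketch, assuming ifcopenshell and a hypothetical IFC4 file:
# count how many instances of landscape-related entity types a model
# actually carries.
import ifcopenshell

model = ifcopenshell.open("campus.ifc")

# IfcSite is the terrain/site container; IfcGeographicElement (new in
# IFC4) is the main entity for physical geographic features such as trees.
for entity_type in ("IfcSite", "IfcGeographicElement"):
    instances = model.by_type(entity_type)
    print(f"{entity_type}: {len(instances)} instance(s)")
```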
Zajíčková and Achten (2013) indicated that a landscape information model should consist
of (1) information about the site, and (2) information about landscape objects. In landscape
architecture, information on climate, land and soil, water, plant species, animal species,
topography, landscape character, and species assemblages builds up the required data for
landscape planning (Özyavuz, 2012; Waterman, 2009). In addition to natural characteristics of
the context, landscape information modeling has to address the “hardscape,” which includes land
uses, exterior spaces, and built landscape function areas such as street and roads, parking spaces,
pedestrian paths, infrastructure/irrigation systems, and green/water/roof areas (Gómez et al.,
2013). Landscape objects and exterior furnishings include any equipment that facilitates exterior
activities of people and transportation systems. These categories of objects are essential for
information modeling but they should be systematically developed for information exchange.
As prior research suggests, the first step in developing knowledge sharing platforms is to
create an ontology consisting of standardized and formalized taxonomy of concepts, their
attributes, and the definition of the relationships or associations between them (Corcho et al.,
2007). National BIM Standard (NBIMS) has addressed this requirement in attempts to identify
required information and exchanges for IFC implementation (Hietanen and Final, 2006). Thus,
OmniClass, one of the most commonly used taxonomy systems, is encouraged for BIM-based
information exchange, as it provides highly detailed classifications of the elements and their
properties in the AEC industry (National Institute of Building Sciences, 2007).
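The sketch below illustrates the three ontology ingredients named above (concepts, their attributes, and the relationships between them) as plain data structures; the codes and names are invented for illustration and are not actual OmniClass entries:

```python
# Illustrative sketch only: a taxonomy of concepts, their attributes, and
# "is-a" relationships expressed as plain data. Codes/names are invented.
from dataclasses import dataclass, field


@dataclass
class Concept:
    code: str                  # an OmniClass-style number (hypothetical)
    name: str
    attributes: list = field(default_factory=list)
    children: list = field(default_factory=list)  # "is-a" relationships


seating = Concept("23-xx 10", "Exterior Seating",
                  attributes=["Material", "Condition"])
bench = Concept("23-xx 10 11", "Bench", attributes=["Memorial (yes/no)"])
seating.children.append(bench)  # a bench is-a kind of exterior seating


def flatten(concept, depth=0):
    # Walk the taxonomy tree and print each concept with its attributes.
    print("  " * depth + f"{concept.code} {concept.name} {concept.attributes}")
    for child in concept.children:
        flatten(child, depth + 1)


flatten(seating)
```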
RESEARCH METHODOLOGY
This paper intends to investigate the existing taxonomy and information exchange schemas
with regard to their level of support for landscape information modeling. First, the authors review
the landscape architecture literature to identify technical taxonomies for landscaping
elements/information. Then, the OmniClass classifications and the IFC schema are analyzed to show
how they address landscape-related taxonomies and information, and to demonstrate the gaps in
their developments. In the second step, through case study research, the authors investigate
how current practitioners deal with landscape and hardscape elements in information exchanges.
OmniClass Table 23 lists landscaping-related products and elements under the “Site
Products” classification (OCCS Development Committee Secretariat, 2012). This table shows
that OmniClass classifications in the landscape domain are not as detailed as classifications of
building components. In this paper, due to page limits, only a few categories are presented for
clarification. For instance, in the case of windows, OmniClass Table 23 provides a long list of
different window types, including fixed windows, single hung, double hung, triple hung,
awning, casement, etc. However, for landscaping elements such as seating and table furniture,
OmniClass only provides the headings of ‘Exterior Seating’ and ‘Exterior Tables’ without any
detailed classification. This is in contrast to the classifications the landscape design literature
suggests. Different landscape spaces and zones require different types of exterior seating and
furniture, as they could impact the landscape functions. The ideal classification in this specific
example should be more detailed (e.g., functionally), and it may include benches, seat walls,
movable/attached chairs, eating tables, low tables, etc., as Main and Hannah (2010) suggest.
structures or systems” (UGA, 2013). Additionally, it lists some landscape and hardscape
elements for tracking/assigning in Construction Operations Building information exchange
(COBie), including “Area Wells/Grating, Equipment Curbs, Building Pads, Planting, Sidewalks,
Parking Stripes, Roads, Property lines, and Topography.”
The ideal and productive workflow for developing COBie information is to create data in a
physical file format supported by IFC, and to extract (export) the required information in a
spreadsheet format to use at the operation level (National Institute of Building Sciences).
However, in many cases, the spreadsheet must be filled in manually due to the aforementioned
shortcomings in the taxonomies and to technology issues. This raises the question of how
project participants meet such requirements when the existing standards and developments have
shortcomings in addressing landscape and hardscape elements.
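A minimal sketch of this IFC-to-spreadsheet workflow is shown below, assuming the ifcopenshell library and hypothetical file names; a real COBie export involves many more worksheets and columns than this single component list:

```python
# A minimal sketch of exporting component data from an IFC file to a
# spreadsheet-friendly CSV, assuming ifcopenshell; file names invented.
import csv

import ifcopenshell

model = ifcopenshell.open("site_model.ifc")
with open("cobie_components.csv", "w", newline="") as handle:
    writer = csv.writer(handle)
    writer.writerow(["GlobalId", "EntityType", "Name"])
    # Site furnishings and geographic features are plausible homes for
    # landscape elements; anything the schema cannot represent would be
    # missing here and would have to be entered in the spreadsheet by hand.
    for entity_type in ("IfcFurnishingElement", "IfcGeographicElement"):
        for element in model.by_type(entity_type):
            writer.writerow([element.GlobalId, element.is_a(), element.Name])
```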
building systems. However, as of late 2014, the information for managing campus landscape and
hardscape elements is still manually collected, formatted, and added to a GIS platform. The GIS
platform represents a 2D top-view layout of the campus in which landscape and hardscape
elements, their locations, and attributes are traceable. As a result, no 3D model is available for
site elements to inform users about geometric characteristics of these elements, although a photo
snapshot is linked to each element in the GIS system. The taxonomy UW uses for asset
classification in the CMMS does not support landscape and hardscape elements, although a few
MEP elements including light poles, manholes, fuel tanks, and trash compactors on the site are
supported. For COBie implementation, OmniClass has been used as the basis for classifying and
coding elements, and linking information to the CMMS. However, information on landscape and
hardscape elements is not requested in the COBie handover process, and the campus engineering and
operations offices still manually collect data on these elements. Although the baseline taxonomy
for these elements is OmniClass, as we discovered through Step 1 and interviews, the taxonomy
has shortcomings in addressing site elements. The attributes for each element are mostly defined
by the in-house team, and they do not follow any pre-defined taxonomy. Table 2 presents the
landscape elements UW manages through its GIS platform. This table shows that some elements
and space types suggested by Table 1 are not supported by this system.
Table 2: Landscape and Hardscape Elements and Attributes in the GIS Platform
Benches: Memorial (yes, no); Art (yes, no); Material (undefined, concrete, metal, stone, wood, other); Condition (undefined, excellent, good, fair, poor)
Bike Racks: Building Served (name of building); Floor (first, second, ...); Capacity; Surface (undefined, asphalt, brick, concrete, dirt, gravel); Type (undefined, bike room, custom, DeroSS, DS-BB, hanger)
Bollards: Ownership; Maintained By; Removable (yes, no); Fixed in ground (yes, no); Maintenance area; Sleeve condition (undefined, excellent, good, fair, poor); Required Maintenance (undefined, paint, replace, straighten); Condition (undefined, excellent, good, fair, poor); Type (aluminum, concrete, log, PVC, rock, steel, tree, wood); Location (general description of location)
Trees: Diameter at breast height (inch); Diameter at breast height of multi-branch tree (inch); Number of stems; Height of the tree (ft); Tree Type; Deciduous or Coniferous; Species Type; Memorial Tree (yes, no); Monetary value of tree; Condition (score between 0 and 100); Location; Grow Space (building, path, street, tree grate, unrestricted); Land Use (container, lawn, parking lot, patio/deck, planting, roof garden, street tree, unmanaged); Inoculated (yes, no)
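Owner-defined attribute domains such as those in Table 2 can be captured as a validation schema; the sketch below (illustrative only, covering a subset of the table's domains) flags GIS records whose values fall outside the defined lists:

```python
# Illustrative sketch: a subset of Table 2's attribute domains captured as
# a schema so incoming GIS records can be validated automatically.
SCHEMA = {
    "Bench": {
        "Memorial": {"yes", "no"},
        "Material": {"undefined", "concrete", "metal", "stone", "wood", "other"},
        "Condition": {"undefined", "excellent", "good", "fair", "poor"},
    },
    "Bollard": {
        "Removable": {"yes", "no"},
        "Type": {"aluminum", "concrete", "log", "PVC", "rock", "steel", "tree", "wood"},
    },
}


def validate(element_type: str, record: dict) -> list:
    """Return a list of attribute errors for one GIS record."""
    errors = []
    domains = SCHEMA.get(element_type, {})
    for attribute, value in record.items():
        allowed = domains.get(attribute)
        if allowed is not None and value not in allowed:
            errors.append(f"{attribute}={value!r} not in {sorted(allowed)}")
    return errors


print(validate("Bench", {"Memorial": "yes", "Material": "plastic"}))
```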
The present study was designed to clarify the challenges of addressing landscape and hardscape
elements in information exchange, facility operations, and asset management. This study found
that systematic collection of information on landscape, hardscape, and site elements for object-
based asset-management is relatively new and growing in the AEC/FM industry (UGA and UW
cases). Capital facilities owners have attempted to improve their asset-management practice
through these knowledge development processes. However, a few challenges are currently
dominant. First, the existing taxonomies, such as OmniClass, have shortcomings in addressing
landscape elements, their types, and attributes. As a result, owner organizations have to custom-
define classifications of elements and attributes through time-consuming internal processes.
Second, automated information exchange is still dependent on these taxonomies (e.g. IFC,
OmniClass), and for elements/attributes that are not supported by them, data formatting and
exchange requires cumbersome manual processes. Third, a comparison between UGA
requirements (cited in Step 1) and UW BIM and COBie implementation (Step 2) shows that there
is no consensus among the owners on the information type and exchange format (models,
spreadsheet, etc.) required for landscape and hardscape design, construction, and operation. UGA
requires some landscape and hardscape elements modeled in BIM; in contrast, UW does not
have such a requirement and non-modeled information in GIS satisfies UW requirements.
These findings corroborate the fact that landscape and hardscape information modeling is
underrated in technology developments (Ahmad and Aliyu, 2012; Flohr, 2011; Nessel, 2013;
Zajíčková and Achten, 2013). We have used a different approach from previous research to show
that recent advances in BIM, COBie, information exchange schemas (e.g. IFC), and taxonomies
have shortcomings in addressing landscape and hardscape design, construction, operation and
management. This limits asset-management capabilities, and leads the practice to inefficient
operations, more manual processes, and costly knowledge development and exchange. Further
research should therefore be conducted to develop inclusive taxonomies for classifying
landscape and hardscape elements and their attributes. The existing taxonomies, which are used
as the basis for information exchange, should be revised and updated to support more automated
information exchange processes (e.g. to insert, extract, update, modify, and observe data).
ACKNOWLEDGEMENTS
REFERENCES
Ahmad, A. M., and Aliyu, A. A. (2012). The Need for Landscape Information Modelling (LIM)
in Landscape Architecture. Paper presented at the Digital Landscape Architecture Conference,
Bernburg, Germany.
Aouad, G., Wu, S., and Lee, A. (2011). Architecture Engineering and Construction. Florence,
KY, USA: Routledge.
Autodesk. (2005). Parametric Building Modeling: BIM's Foundation. (Accessed Sep 18, 2014)
buildingSMART International. (2013a). IfcGeographicElement type for physical feature of 3D
terrain. Retrieved Sep 27, 2014, from http://jira.buildingsmart.org/browse/IFR-891
buildingSMART International. (2013b). Industry Foundation Classes Release 4 (IFC4).
Retrieved Sep 21, 2014, from http://www.buildingsmart-
tech.org/ifc/IFC4/final/html/index.htm
City Futures Research Centre. (2010). A Note on Cadastre; UrbanIT Research Project. AU: FBE,
UNSW.
Corcho, O., Fernández-López, M., and Gómez-Pérez, A. (2007). Ontological Engineering: What
are Ontologies and How Can We Build Them? In C. Jorge (Ed.), Semantic Web Services:
Theory, Tools and Applications (pp. 44-70). Hershey, PA, USA: IGI Global.
1 Associate Professor, Construction Management, Southern Polytechnic State University, 1100 South Marietta Parkway, Marietta, GA 30060. E-mail: [email protected]
2 Associate Professor, Building Construction Program, Georgia Institute of Technology, 280 Ferst Drive, 1st Floor, Atlanta, GA 30332. E-mail: [email protected]
Abstract
Challenges encountered by facility managers include developing mental visual models to relate
two dimensional (2D) as-built drawings to real world three dimensional (3D) elements, and
searching for and validating information. These challenges can be addressed by using a
Building Information Model (BIM) in the O&M phase. Accessing information using BIM is a
two-step process: the first step is the identification and selection of the appropriate 3D element
from the digital model, and the second step is the retrieval of the information. Currently, the
first step is accomplished by navigating the model manually, which becomes tedious depending
upon the size and complexity of the model. This identification and selection process can be
automated by integrating Quick Response (QR) code technology with BIM. This paper
discusses the feasibility of developing an integrated BIM and QR code environment, as well as
the pilot study conducted to deploy this environment.
Keywords: BIM; 3D; QR code; O&M
INTRODUCTION
Traditionally, information such as 2D as-built drawings, submittals, operation
manuals, and warranties transferred to owners for the operation and maintenance
(O&M) phase is unlinked and exists as independent entities. Some of the challenges
encountered by facility managers during the O&M process include the interpretation
of two dimensional (2D) drawings and information retrieval. Typically, a facility
manager interprets the 2D as-built drawings and develops mental visual models to
relate the 2D elements to the real world three dimensional (3D) elements. Once the
elements are identified and validated, more 2D as-built drawings and multiple
documents must be searched to retrieve the required location and other information
about the elements. In this process, most of the facility manager's time is spent on
non-value-added tasks such as developing visual models and searching for and
validating information. This unlinked information reduces the facility manager's
ability to address the needs of the facility, which in turn increases O&M cost and
time. Additionally, the possibility of misinterpretation of 2D
as-built drawings can also cause delays in retrieving the information. The
interpretation and model development processing time during the O&M phase can be
reduced by using 3D models instead of 2D as-built drawings. However, information
retrieval time cannot be reduced by just using 3D models, since the retrieval time is
dependent on information management practices. The interpretation and information
retrieval challenges faced by facility managers can be addressed by providing access
to linked information through 3D models. This can be accomplished by using
Building Information Model (BIM) in the O&M phase. Accessing the information
using BIM is a two-step process. The first step includes the identification and
selection of the appropriate 3D element from the digital model and second step
includes the retrieval of the information. Currently, the first step is accomplished by
navigating the model manually. This step becomes tedious depending upon the size
and complexity of the model. Additionally, manual errors in element selection
negatively affect the information retrieval process. Due to this manual selection
process, the effectiveness of BIM is compromised, and it is not being used to its full
potential. This identification and selection process can be automated by integrating
Quick Response (QR) code technology with BIM. Previous research work
demonstrated the applications of one dimensional barcode and radio frequency
identification (RFID) in various areas of the construction industry (Navon & Berkovich,
2005; Bell & McCullouch, 1988; Li & Becerik-Gerber, 2011). With the increased
usage of smart devices, the potential of QR code technology in the construction
industry is being explored. For example, QR codes have been used for developing an
automated facility management system (Lin et al. 2014); for developing a BIM-based
shop drawing automated system (Su et al. 2013); and for tracking and control of
engineering drawings, reports, and specifications (Shehab and Moselhi, 2005). Saeed
et al. (2010) illustrated the use of QR code technology to give pedestrians access to
information about buildings and other artifacts. This paper augments potential
applications of QR code technology in the operation and maintenance phase by
integrating it with BIM. The goal of the study is to develop a BIM+QR environment
that provides a seamless flow of information from real world objects to BIM. The
following sections discuss the BIM+QR environment components and the automated
information flow among them.
BIM+QR ENVIRONMENT
In this environment, when the user reads a QR code with a smart device, the
respective 3D element is highlighted in BIM automatically, providing access to
retrieve the information. The environment provides a dynamic linkage between the
real world object and BIM by synchronizing with the user input. The integrated QR
code and BIM environment provides a seamless flow of information, reducing
manual errors and improving information retrieval efficiency. The components of the
BIM+QR environment include: the BIM repository, object hyperlinking, and the
interactive display unit.
BIM Repository
This section discusses the methodology adopted for the development of a repository
through BIM. The two steps in the development process include: (a) three
dimensional (3D) model development and (b) integration of information into the 3D
model elements. The ease of integration depends on the availability and type of
parameters in the BIM software (Goedert & Meadati, 2008). The information
associated with the 3D model elements can be retrieved through parameters of the
elements. These parameters establish the links between respective files and elements
in digital format. The information needed can be collected through paper format and
digital format from various sources. Since BIM needs the information in digital
format, the paper-based information has to be converted into digital format by
scanning.
Object Hyperlinking
The process of extending the internet to real world objects is called object
hyperlinking (Wikimedia, 2014). This can be achieved by attaching a tag containing a
URL to the real world object; a QR code with a URL can be used as such a tag. QR
code stands for Quick Response code. Invented by Denso Wave in 1994 (Wikimedia,
2014), it is a two dimensional bar code that can be read using a smartphone, touchpad,
or computer camera. QR codes are used to store text, URLs, and contact information.
When the user reads a QR code, depending upon the type of information stored, it
may display text, open a URL, or save the contact information to the address book.
QR codes are now used for a wide range of applications such as commercial tracking,
product marketing, product labeling, and storing organizational and personal
information. A QR code can be static or dynamic: in a static QR code the initially
created information cannot be changed, whereas in a dynamic QR code the
information can be edited after creating the code. With the advent of smartphones,
QR codes became popular, as any smartphone can read a QR code with an
appropriate QR reader app. An object hyperlinking system involves the four
components shown in Figure 1 and listed below.
• QR code tagged object: a QR code with a URL is tagged to the object.
• Smart device: a device which has the means to read the QR code and display the information.
• Open wireless network: an open wireless network such as a 3G or 4G network for communication between the smart device and the server containing the information linked to the tagged object.
• Server: a server to store the information related to the real world object.
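As an illustration of creating such URL-bearing tags, the sketch below assumes the open-source Python qrcode package (pip install qrcode[pil]); the server URL and element identifier are hypothetical:

```python
# A minimal sketch of generating a URL-bearing QR tag, assuming the
# open-source "qrcode" package; the URL and element ID are invented.
import qrcode

element_id = "door-101"
url = f"http://example-server.edu/objects/{element_id}.html"

# A static QR code: once printed and tagged to the object, the encoded
# URL cannot be changed (a dynamic code would instead redirect through a
# server-side link that can be re-pointed later).
image = qrcode.make(url)
image.save(f"{element_id}.png")
```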
Interactive Display Unit
The display system can be a desktop display unit, a laptop, or a smart device display
unit. A desktop display unit includes a monitor and can be made interactive by using
a mouse and keyboard. A laptop unit includes either a non-interactive or an
interactive screen; a non-interactive screen can be made interactive by using a mouse
and keyboard, and a smart screen can also serve as an interactive screen for the
laptop. Smart device display units include touchpad, tablet, or smartphone screens.
PILOT STUDY
A pilot study was conducted by deploying the proposed BIM+QR environment for
O&M of the Construction Management Department's classrooms at Southern
Polytechnic State University. The integrated BIM+QR environment framework was
developed using QR codes and Autodesk's Revit 2014. The study includes three
steps: (1) hyperlinking real world objects; (2) BIM repository development; and (3)
automation of information exchange. The objective of the study is to synchronize the
user's QR code input and provide access to the O&M information through
highlighting the component in BIM. An overview of the steps involved in the process
is presented below.
Figure. Information flow among the real world object, the server (URL, ASP, C# application), the 3D model and information database, and the BIM repository.
Hyperlinking Real World Objects
In this step, webpages (as shown in Figure 3a) for the different real world objects
were developed using HTML and hosted on a server. Static QR codes containing the
URLs of the different objects were created and tagged to the real world objects as
shown in Figure 3b.
The BIM repository was developed using Revit 2014. The steps included in the
repository development process were 3D model development and association of the
information to the 3D model elements. A 3D model of the Construction Management
Department’s class rooms was developed using existing families and creating new
families as shown in Figure 4.
This format is useful for establishing the link between the respective files and
components. The association of information to the model components is
accomplished by assigning the file paths of the information to the parameters. This
link between the documents through the path stored in the parameter allows easy
access to the required information.
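The parameter-to-document linkage can be pictured as a simple lookup from element identifiers to stored file paths; in the pilot study this information lives in Revit parameters, so the following is only an illustrative analogue with invented identifiers and paths:

```python
# Illustrative sketch only: the parameter-to-file-path linkage described
# above, reduced to a lookup table. IDs and paths are invented.
REPOSITORY = {
    "door-101": {
        "O&M Manual": r"\\server\docs\door-101\om_manual.pdf",
        "Specification": r"\\server\docs\door-101\spec.pdf",
        "Performance Test Video": r"\\server\docs\door-101\test.mp4",
    },
}


def documents_for(element_id: str) -> dict:
    # Retrieving the stored paths is all that is needed to open the
    # documents linked to a highlighted element.
    return REPOSITORY.get(element_id, {})


print(documents_for("door-101")["O&M Manual"])
```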
Automation of Information Exchange
In this step, the communication among the real world object, the server, and the BIM
repository was established. The tagged real world object establishes communication
with the server through the smartphone using the wireless network. When the user
scans the QR code, an HTML webpage opens and prompts the user to click the
image. Once the user clicks, an event is triggered to store the identification code of
the element in a text file on the server. This data storage was triggered by using an
ASP file. The communication between Revit Architecture and the server was
established by using the Revit APIs through the C# programming language. Once the
communication is established, the data from the text file are retrieved, and the
corresponding 3D element is selected and highlighted on the interactive smart display
unit. This element in turn facilitates querying the required information from the BIM
repository. Figure 5 shows the steps followed by the user to access the information
using the BIM+QR environment, along with screenshots of the retrieved O&M
manual, specifications, and performance test videos of the door accessed from the
BIM.
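The paper implements the server-side trigger as an ASP file; the following is a hedged Python analogue (using Flask, with an invented route and file name) of the same hand-off: the scanned webpage posts the element identifier, which is written to a text file for the Revit-side C# application to read:

```python
# A hedged Python analogue of the server-side hand-off described above
# (the pilot study used an ASP file); route and file name are invented.
from flask import Flask

app = Flask(__name__)


@app.route("/selected/<element_id>", methods=["POST"])
def store_selection(element_id):
    # Overwrite the hand-off file polled by the C# Revit API application.
    with open("selected_element.txt", "w") as handle:
        handle.write(element_id)
    return "stored", 200


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```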
Figure 5. Steps to access information in the BIM+QR environment: (a) the object tagged with a QR code; Step 1: the user scans the QR code using a smart device; Step 2: the 3D element is highlighted and the user accesses the information.
CONCLUSION
The BIM+QR environment automates the element identification and selection
process in BIM. The BIM and QR code integration provides a seamless flow of
information between real world objects and BIM elements. This automation reduces
the identification and selection time and reduces manual errors. The BIM+QR
environment synchronizes with dynamic user input and increases information
retrieval efficiency. As the number of smart device users rises, the BIM+QR
environment has the potential to make a paradigm shift in O&M. The methodology
discussed in this paper serves as an initial step towards developing an integrated
BIM+QR environment for effective O&M applications.
REFERENCES
Bell, L.C. and McCullouch, B.G. (1988). “Bar code applications in construction,” J.
of Constr. Engrg and Mgmt, 114(2), 263-278.
East, E. W. (2007). “Construction Operations Building Information Exchange
(COBIE).” <http://www.wbdg.org/pdfs/erdc_cerl_tr0730.pdf> (March 5,
2011).
Goedert, J.D., and Meadati, P. (2008). “Integration of construction process
documentation into Building Information Modeling.” J. of Constr. Engrg and
Mgmt, 134(7), 509-516.
Li, N. and Becerik-Gerber, B. (2011). “Performance-based evaluation of RFID-based
indoor location sensing solutions for the built environment.” Advanced
Engineering Informatics, 25(3), 535–546.
Lin, Y. C., Su, Y. C. and Chen Y. P. (2014). “Developing Mobile BIM/2D Barcode-
Based Automated Facility Management System.” The Scientific World
Journal, 2014.
Navon, R. and Berkovich, O. (2005). “Development and on-site evaluation of an
automated materials management and control model,” J. of Constr. Engrg and
Mgmt, 131(12), 1328-1336.
Saeed, G., Brown, A., Knight, M., and Winchester, M. (2010). “Delivery of
pedestrian real time location and routing information to mobile architectural
guide.” Autom. Constr, 19 (4), 502 - 517.
Shehab, T. and Moselhi, O. (2005). “An Automated Barcode System for Tracking
and Control of Engineering Deliverables.” Proceeding of Construction
Research Congress, April 5 - 7, San Diego, CA
Su, Y. C., Hsieh, Y. C., Lee, M.C., Li, C.Y. Lin, Y. (2013). “Developing BIM-Based
shop drawing automated system integrated with 2D barcode in construction.”
Proceedings of the Thirteenth East Asia-Pacific Conference on Structural
Engineering and Construction (EASEC-13), September 11-13, 2013, Sapporo,
Japan.
Wikimedia (2014). “Object hyperlinking.”
<http://en.wikipedia.org/wiki/Object_hyperlinking > (April 15, 2014).
Abstract
INTRODUCTION
RESEARCH DESIGN
In order to extend from theory to practice, the experiment in this study was
conducted in four steps: experiment design, data collection, data analysis, and
domain experts' evaluation, elaborated as follows.
alternatives were similar in terms of geometry and window position, but the texture
and color of materials were varied. The participants were provided with graphic
representations of the facade from different views of the building (North, South, East,
and West). Four image slideshows were considered in this experiment, each image
containing four different designs (Figure 2). The images were counterbalanced to
control for order effects (although this cannot be seen here).
Data Analysis: The data collected through eye tracking and questionnaires
were analyzed statistically to compare user satisfaction with a design during the
design phase, as well as to test the hypothesis that design alternatives with a high
level of user satisfaction attract more visual attention.
PILOT STUDY
The number of fixations and the fixation duration spent by each participant on
each design alternative were compared with the data provided by the questionnaire.
The results demonstrated that the average percentages of time spent by the
participants on designs A to D were 28%, 30%, 19%, and 23%, respectively. Based
on the questionnaire results, design A was selected as the best design alternative with
a relative average score of 39% (average score/sum of the average scores), followed
by design D (24%), C (22%), and B (15%). The results of this comparison, which is
discussed later in this section, failed to support the hypothesis that participants spend
more time looking at designs with higher scores. There are two possible reasons for
these results. First, not only do alternatives with a high level of participants'
satisfaction (e.g., design A) attract more attention, but alternatives with a low level of
participants' satisfaction (e.g., design B) also attract attention. Second, attention does
not necessarily equate to satisfaction. Although design alternatives with warm colors
(e.g., designs A and B) attract more attention, the combination of the building's
geometry and texture can be an important matter for the participants. In order to
address this issue, wireframe design alternatives (without surfaces and color) were
used for the second question about the fixation durations spent on the parts that the
participants found interesting.
To analyze the end-user satisfaction during the design phase, a randomized
complete block design was employed in which the data collected from the first step
were sorted into homogeneous design alternatives, or blocks, and eye movement and
questionnaire data were then assigned within the blocks. Visual attention is
measured based on the fixation duration spent on each design alternative. The
summary of the analysis of variance is presented in Table 1.
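For readers who wish to reproduce this kind of analysis, the sketch below (with invented data, not the study's measurements) fits a randomized complete block model in which participants are blocks and design alternatives are treatments, using pandas and statsmodels:

```python
# A minimal sketch, with invented data, of a randomized complete block
# analysis: participants are blocks, design alternatives are treatments,
# and fixation duration is the response.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "participant": ["p1", "p1", "p1", "p1", "p2", "p2", "p2", "p2"],
    "design":      ["A",  "B",  "C",  "D",  "A",  "B",  "C",  "D"],
    "fixation_s":  [12.1, 13.0, 8.2, 9.9, 11.4, 12.2, 7.9, 10.3],
})

# Treatment effect (design) with participant as the blocking factor.
model = ols("fixation_s ~ C(design) + C(participant)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # F statistic and p-value per factor
```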
The research model was established based on the idea that there is a relationship
between user satisfaction and visual attention. There were no statistically
significant differences between participants' responses and visual attention (fixation
durations spent on each design recorded by the eye tracker), as determined by the
analysis of variance (p = 0.119). The residual plots, including the normal probability
plot, histogram, residuals versus fits, and residuals versus order, are shown in
Figure 4. The normal probability plot shows a fairly linear pattern, which is an
indication of population normality (i.e., the distribution of residuals is normal).
An additional interesting result was the high fixation values for the parts that the
participants found interesting. While the average area of these parts comprised
8.9% of the total area, the participants spent 19.8% of their time on these areas of
interest. As shown in Table 2, the results of a paired t-test between the means of the
“most interesting” areas and the fixation durations spent on those areas show that the
participants paid considerably higher visual attention to the parts they consider
attractive. Given the p-value (t-value = 4.804; p < 0.001), we can reject the null
hypothesis and accept the alternative hypothesis that there is a statistically significant
difference between the means. Thus, it is concluded that users' satisfaction with
design variations (e.g., building components) is related to their visual attention.
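The reported paired t-test can be reproduced along the following lines; the per-participant values below are invented, chosen only so that their means roughly match the reported 8.9% and 19.8%:

```python
# A hedged sketch of the paired t-test described above, using scipy with
# invented per-participant values (means roughly match 8.9% and 19.8%).
from scipy import stats

area_share     = [0.08, 0.10, 0.09, 0.07, 0.105, 0.09]   # "most interesting" area
fixation_share = [0.18, 0.22, 0.19, 0.17, 0.230, 0.20]   # time spent on that area

t_value, p_value = stats.ttest_rel(fixation_share, area_share)
print(f"t = {t_value:.3f}, p = {p_value:.4f}")
```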
In this study eye-tracking technology was utilized during the design phase of a
building to compare the recorded eye movement data from the eye-tracker device
with the data collected through the questionnaire. The null hypothesis states that there is
no difference between the participants’ satisfaction and the time spent on each design
alternative. We failed to reject the null hypothesis at the 5% level. Also, the
participants were asked to determine the part of the design that is most appealing to
them. The analysis of fixation durations shows that the participants spent about 20%
of their time on these areas of interest; however, these parts cover less than 9% of the
total area. Therefore, we can conclude that higher visual attention is paid to the parts
the participants consider attractive.
REFERENCES
Abstract
INTRODUCTION
associated with a spatial program and visibility are efficiently managed throughout
the design and construction phases.
Manually collecting diverse requirements and applying them to a design process is
generally burdensome for an architect. To satisfy a great number of requirements, the
predesign phase has traditionally relied heavily on the competence and dexterity
of an architect. However, because of the massive number of criteria, manually
applying these requirements to a building design is tedious for an architect.
Unfortunately, as a result of this situation, the architect may neither properly address
nor adhere to the requirements until late in the construction process, which leads to
additional time and expense. Another major problem is that iterative exchanges of a
building design among project participants throughout the design and construction
phases can unintentionally alter the design, which may compromise conformity to
requirements (Solihin et al. 2014). Thus, when sending and receiving a building
design, domain professionals need to validate it iteratively according to the
predefined requirements (Lee et al. 2014). These repetitive evaluations, however, are
laborious, particularly in complex building projects. For example, a hospital design
may specify thirty or more requirements for only a single patient room. Thus,
satisfying all of the requirements of a proposed building design is significantly
demanding for project participants.
descriptions of the possible requirements for accessibility and visibility that should be
managed in the design phase. Accessibility checking can consist of three scenarios:
direct access, indirect access, and passing through between spaces. These
accessibility conditions should be satisfied if a hospital or clinic design is to maintain
circulation and division between secure spaces such as restricted areas. In addition,
visibility checking helps an architect design a well-organized layout; for example, a
hospital layout should ensure that a nurses’ station affords a clear view of patient
rooms because nurses must maintain constant vigilance of patient conditions.
Guaranteeing visibility between a nurses’ station and patient rooms improves the
efficiency of patient monitoring and reduces workplace stress (Seo et al. 2010).
Users can define diverse types of rule sets on the SMC application using rule
templates and checking parameters that can increase the flexibility of rule definitions.
The scope of values that the parameters support, however, is currently limited for defining
rules such as those for a spatial program and visibility (Ding et al. 2004). For instance,
even though the SMC provides a variety of checking templates for users, it lacks
templates and required parameters for the validation of accessibility and visibility.
Within the programmed checking framework, which offers only vendor-defined
parameters and prohibits users from generating checking rules by using the existing
SMC libraries, users are limited when implementing various evaluations of a BIM
model for their own use. Thus, to analyze the domain-specific knowledge of a BIM
model that the SMC does not support, we extended the BERA language to address a
spatial program and visibility rule checking using the SMC libraries and references.
In particular, we added necessary parameters, such as selecting a door or a window as
start and end points, and designed algorithms to develop the suggested checking
approach. Figure 2 illustrates the extended checking algorithm for the validation of
accessibility and visibility. This checking algorithm analyzes the number of turns
required to reach a targeted space from a base space using predefined start and end
points. If the number of turns required between two spaces is zero, the checking
result shows that the two spaces are directly accessible and visibly secured.
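The turn-counting idea can be sketched independently of the BERA implementation: given a walkable path as a sequence of waypoints, count the changes of direction, with zero turns between the start and end points corresponding to direct access or visibility in the sense used above. The following is an illustrative sketch only:

```python
# Illustrative sketch only (not the BERA implementation): count the turns
# along a walkable path given as (x, y) waypoints.
def sign(d):
    return (d > 0) - (d < 0)  # -1, 0, or +1


def count_turns(waypoints):
    turns = 0
    previous = None
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        direction = (sign(x1 - x0), sign(y1 - y0))
        if previous is not None and direction != previous:
            turns += 1
        previous = direction
    return turns


straight = [(0, 0), (5, 0), (9, 0)]  # e.g., nurse station -> patient room
bent = [(0, 0), (5, 0), (5, 4)]      # path bends around a corner
print(count_turns(straight))  # 0 -> directly accessible/visible
print(count_turns(bent))      # 1 -> indirect; visibility fails
```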
Figure 3. Framework overview: requirements of the spatial program and visibility in a project are mapped to BERA rule sets; the IFC geometric model is translated into the BERA object model (BOM); and the BERA executor runs the rule sets using Solibri libraries, within the Solibri Model Checker and the BERA language plug-in.
A CASE STUDY
Using the extended BERA language for the accessibility checking algorithm, users
can define commands in the BERA editor to execute the validation. Table 1 shows
two commands that implement direct and indirect accessibility checking, as shown in
Figure 4 (a) and (b). These commands retrieve the BOM objects translated from an
IFC model.
Figure 5 illustrates the results of visibility checking implemented in the SMC and the
BERA environment. Figure 6 shows textual reports for visibility checking generated
in the BERA console viewer. Similar to the accessibility validation, visibility is
assessed by computing the number of turns required to reach the end spot from the
start point. Table 2 includes the command for visibility, which users can define in the
BERA editor. Figure 6 (a) indicates that the nurse station in the [117] corridor is
directly visible to the [101] patient area. Figure 6 (b) shows the textual report that the
[103] patient area is not visible from the nurse station in the [117] corridor, because
the number of turns required to reach the [103] patient area from the nurse station is
one. The walkable path, which requires one more turn, prevents physical observation
of patients from the nurse station. Thus, this checking process helps an architect
ensure satisfactory visibility among spaces in the building design during the
predesign phase.
Figure 5. Visualization of the visibility relationship between a nurse station and patient rooms
Figure 6. Textual reports for visibility checking: (b) report for visibility passed; (c) report for visibility failed.
Throughout the design and construction phases, these checking algorithms for
accessibility and visibility can be iteratively used to evaluate the conformity to the
predefined requirements of a building design and to maintain the quality of the design.
In addition, the BERA language can be extended to help architects and
nonprogrammers easily develop their own rules for specific assessment such as safety
and fire code exit path checking, sunlight and glare percentage analysis, and space
allocation. These possible rule languages can be defined and executed through the
extension of the BOM in the BERA language and the checking libraries of the SMC
application. This opportunity will offer greater regulatory predictability and
consistency, which reduces human errors throughout the design phase.
CONCLUSION
Project participants constantly face the challenging situation in which a building
model must be validated iteratively against regulations and complex requirements.
One of the primary benefits of BIM is automated rule-based checking (Eastman et
al. 2008). This research addresses the rule sets and the checking algorithms of
accessibility and visibility and proves the effectiveness of automated rule-based
checking. In particular, through the extension of the BERA language, this research
primarily deals with the program requirements of a building design organized by
rooms or spaces where the rules apply and also on relationships between spaces. The
extended BERA language can be applied at the level of back-end extensibility: the
BERA language can be targeted and reused in other modeling platforms such as a
BIM authoring tool or a conceptual modeling application to evaluate its native model
according to the user-defined rule sets. In addition, the possible extensions of
dynamic BOM in the BERA language can help validate programming requirements,
space layout and circulation, the fire and safety code, equipment placement, and
diverse evaluations during the design phase. In the long term, these efforts in rule-
based checking will reduce the errors of a building design, minimize the risk of
professional liability, and facilitate the design process and its data exchange. In
addition, using rule-checking features, building professionals can ensure document
submission for final approval by iteratively confirming potential conflicts with
predefined regulatory requirements. The testing rule sets and algorithms also can be
re-used as a checking library throughout a building project because the fundamental
test-bed setup is kept. This automated validation process aims to encourage domain
experts using a 2D design to expedite the enhancement and the customization of BIM
applications to comply with the IFC schema. The ultimate goal of the rule-checking
process, however, is not only to put in place an automated checking system, but also
to offer the impetus to gear up the architecture, engineering, and construction
industries towards greater interoperability through the secured deployment of IFC-
based BIM applications. To accomplish the concrete validation process for addressing
various design requirements, the development of a rule translator and a stand-alone
checking application that employ the BERA language will be performed.
REFERENCES
Ding, L., Drogemuller, R., Jupp, J., Rosenman, M. A., and Gero, J. S. (2004)
“Automated Code Checking.” CRC for Construction Innovation, Clients
Driving Innovation International Conference, Surfers Paradise, Qld.
Eastman, C. M., Teicholz, P., Sacks, R., Liston, K. (2008). “BIM Handbook: A Guide
to Building Information Modeling for Owners, Managers, Designers,
Engineers and Contractors.” John Wiley & Sons, Inc., New Jersey.
Eastman, C. M., Lee, J. M., Jeong, Y. S., Lee, J. K. (2009). “Automatic rule-based
checking of building designs.” Automation in Construction 18, 1011-1033.
Lee, J. K., Eastman, C. M., Lee Y. C. (2014). “Implementation of a BIM Domain-
specific Language for the Building Environment Rule and Analysis.” Journal
of Intelligent & Robotic Systems, 1-16.
Lee, Y. C., Eastman, C. M., Lee, J. K. (2014). “Validations for Ensuring the
Interoperability of Data Exchange of a Building Information Model.”
Automation in Construction, under review.
Solibri Inc., Solibri Model Checker. http://www.solibri.com/solibri-model-
checker.html (accessed 7.11.14).
Seo, H. B., Choi, Y. S., and Zimring, C. (2010). “Impact of Hospital Unit Design for
Patient-Centered Care on Nurses' Behavior.” Environment and Behavior, 43(4),
443-468.
Solihin, W., Eastman, C. M., Lee, Y. C. (2014). “Toward Robust and Quantifiable
Automated IFC Quality Validation.” Advanced Engineering Informatics.
Under Review.
Abstract
INTRODUCTION
Buildings and transportation are essential parts of all reports on energy
consumption trends, projections of energy demand, and other energy-related
topics. This is not surprising, because the building and transportation sectors account for
approximately 75% of greenhouse gas (GHG) emissions. Energy use in the building
sector is defined as the energy consumed in residential and commercial buildings,
where people reside, work, or buy services. The transportation sector represents
energy use for moving people, materials and goods by different transportation modes
(e.g. highway, rail, and pipeline) (EIA 2013). Given the magnitude of this statistic,
many studies have been directed towards the issues of energy use and carbon
emissions of the built environment. Most of these studies, focused only on either
buildings or transportation systems. To analyze the dynamics of energy use associated
with buildings and transportation systems, it is essential to explore the interaction
between these two sectors in a single comprehensive model.
Previous studies have shown that understanding the importance of occupant
behavior is a crucial factor in the long-term success of energy efficiency measures in
different households (Lindén et al. 2006). However, building occupants are the group
that is sometimes neglected in energy efficiency design. This is probably the main
reason why the actual energy consumption in buildings is higher than those calculated
or projected. According to the results of an experimental study conducted over 3
years in multi-family buildings in Switzerland, Branco et al. (2004) found that the
real energy use was 50% higher than the estimated energy use due mainly to the real
conditions of utilization and actual weather conditions.
Transportation systems are usually modeled as a network of nodes (e.g.
buildings) which are connected by edges (e.g. highways). These nodes are locations
and facilities that have the capacity to generate traffic flow, and edges are trails, roads
and other types of transportation modes along which energy is consumed. The
transportation network is developed by knowing the number or frequency of trips in
each origin-destination (O-D) pair and used to understand the relationships between
the various socio-economic and demographic factors, as well as other characteristics
of the built environment, and GHG emissions (Jia et al. 2009).
There are limited studies on the transportation energy expense of buildings.
Recently, Wilson and Navaro (2007) have written an article entitled “Driving to
Green Buildings: The Transportation Energy Intensity of Buildings” which refers to
“the amount of energy associated with getting people to and from that building.” The
building’s transportation energy intensity has been used in this paper as a metric to
measure building performance. The authors claimed that for an average new office
building, transportation accounts for more than twice as much energy use as building
operation. According to a report prepared by U.S. Department of Transportation,
transportation energy use for average office building is 381 kWh/m3 year, while
operating energy use for average office building is 293 kWh/m3 year (Davis et al.
2007). These reasons justify extensive literature on the relationships between the built
environment and transportation-related energy use and GHG emissions, along with
implications for factors such as economic growth and quality of life (Porter et al.
2013).
OBJECTIVE / SIGNIFICANCE
The uniqueness of this study rests on the notion of the built environment
model that integrates energy use in buildings and transportation infrastructures,
occupant behavior, and travel behavior into a dynamic network. In this network,
buildings (nodes) are connected by highways and roads. Geographic Information
Systems (GIS) is used as the platform for data acquisition and implementation, and
dynamic linkage between spatial (e.g. location of buildings) and attribute (e.g.
building’s carbon footprint) data. Moreover, the study is of particular significance
since it accurately reflects household and individual interactions with daily activity-
travel patterns. Due to the lack of information regarding the dynamics of activities and
travel behavior of households, it is not possible to evaluate non-capital improvement
strategies associated with transportation energy. The objective of this study is to
bridge this gap through the application of activity-based modeling of travel demand,
which will enable decision-makers to understand the interactions between travel
choices and schedule of activities in terms of time and space. Based on the unit of
analysis, travel models can be categorized into trip-based and activity-based models.
In trip-based travel models, the travel demands are derived from the individual travel
behavior (e.g. individual person trip), but activity-based models use the need and
desire to participate in activities as the fundamental units of analysis.
The proposed model is developed using GIS and Building Information
Modeling (BIM) to identify the current trends in energy use associated with people
behavior and infrastructure. BIM is used as a life cycle inventory to model and collect
building-related information and material quantities, and GIS is used to define geo-
referenced locations, storing attribute data, and displaying data on maps. The
integration of BIM and GIS has been used successfully to solve spatial-related
problems in construction management. Examples of these applications include digital
modeling of building and landscape-level components (Karan et al. 2014),
construction site layout planning (Sebt et al. 2008), and facility management supply
chain (Karan and Irizarry 2014). This integration makes it possible to integrate
physical (e.g. buildings and roads) and social (e.g. people activities) features of the
built environment to determine building and transportation-related energy use. By
linking individual activity patterns and transportation infrastructure, this study will
promote sustainable energy policies to reduce GHG emissions from buildings and
transportation.
The research methodology is divided into three main steps: data collection,
development and calibration of the built environment model, and evaluation. A time-
use survey is conducted in the first step to gather travel information from individuals
and their activities. In the second step, an integrated BIM-GIS model is developed to
integrate physical (e.g. buildings and roads) and social (e.g. activities) features of the
built environment. Finally, the interaction between building and transportation
systems is explored in the third step.
possible to measure the distance between any pair of origin-destination nodes for
different travel modes.
The building module of energy simulation considered both the building’s
occupant behavior and the physical components of the building (e.g. HVAC systems,
doors and openings, insulation, etc.) for evaluation of the energy use patterns.
Knowing the energy use category of each participant (i.e. austerity, average standard,
or high energy consumer) and the occupant schedules from the data collection step,
the dynamic model evaluates energy use patterns associated with the occupant
behavior.
BIM is used as a life cycle inventory to model and collect building-related
information and material quantities. eQuest was selected as the energy simulation tool
because of its robust and highly respected simulation engine and its parametric and
graphical reports. Figure 1 shows an example of the energy consumption map for
both building and transportation systems. The vehicle information (e.g. mile per
gallon), origin and destination locations, and the trip duration are the basic inputs of
the GIS model. The average speed is then calculated using the shortest distance
between the origin and the destination and the travel time for the trip. The energy (or
fuel) consumption is dependent on the vehicle information and the average speed (e.g.
the fuel consumption increases below or above the optimal speed). By applying the
fuel-efficiency rates provided by the U.S. Department of Energy, various fuel types
and transportation modes are taken into consideration and consequently fuel
consumption and energy use associated with the transportation system are calculated
within the built environment model. The GHG emissions of the building and
transportation sectors are combined into one integrated GIS map. The results can be
further refined to reflect the individual's activities.
Figure 1. Spatial visual presentation of the energy use data associated with the case
study building and transportation systems.
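The trip-level calculation described above can be sketched as follows; the speed-penalty shape is invented for illustration, and the CO2 factor (~8.89 kg per gallon of gasoline) is a commonly used EPA figure:

```python
# A minimal sketch, under simplified assumptions, of the trip-level
# calculation described above: average speed from shortest-path distance
# and trip duration, a speed-dependent fuel-economy penalty (penalty
# shape invented for illustration), and CO2 from fuel burned.
def trip_co2_kg(distance_mi, duration_hr, rated_mpg, optimal_mph=55.0):
    average_mph = distance_mi / duration_hr
    # Fuel economy degrades below or above the optimal speed.
    penalty = 1.0 + 0.006 * abs(average_mph - optimal_mph)
    effective_mpg = rated_mpg / penalty
    gallons = distance_mi / effective_mpg
    return gallons * 8.89  # ~8.89 kg CO2 per gallon of gasoline (EPA)


# A 12-mile commute taking 20 minutes in a 30-mpg car:
print(round(trip_co2_kg(12.0, 20 / 60, 30.0), 2), "kg CO2")
```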
Figure 3 shows the percentage of CO2 emission reduction for each scenario
based on benchmark values. To make comparisons among the different CO2 emission
results, units are specified as percentages of the roof area (for skylights and the PV
system), total power density (for equipment power), total number of cars (for electric
cars), and total days in a week (for telecommuting). For instance, installing PV panels
on 10% of the roof produces approximately 11,500 kWh of electricity annually, about
1.8% of the energy consumption of the benchmark (implying a benchmark
consumption of roughly 640,000 kWh per year). According to the energy model,
equipment such as computers and tools in classrooms and labs (e.g., wood production,
metal, and computer labs) accounts for about 46% of the energy used in the case
study, which puts this component at the top of the list of building components with
high potential to save energy.
Since electric cars use electricity as their primary fuel, replacing a fraction of
total vehicles with electric cars is identified as one of the scenarios with the greatest
potential for CO2 emission reduction. However, the source of electricity and scope of
electricity supply should be taken into consideration. For instance, if we replace 10%
of all vehicles in the study area (electricity source and supply: Pennsylvania) with
electric cars, we can reduce CO2 emissions from transportation by about 5.7%. It
should be noted that the assessment of the time and cost associated with each scenario
is not included in the analysis.
CONCLUSION
The proposed model is one of the first attempts to combine the energy
consumption of buildings and transportation to identify current and future
trends in energy use and CO2 emissions. The model incorporates the linkage among
activities and travel for each individual. The real benefit of this research will be
realized in large-scale applications, for example, the energy planning of a university campus.
The results of the case study show the current trend in energy use. In order to explore
future trends in energy use and carbon emissions associated with buildings and
transportation systems, population dynamics should be incorporated in the model. In
addition, the system does not consider the time or cost required to implement energy
saving scenarios. Future work should include an analysis with accurate costs and
relevant schedule specific to the study area.
REFERENCES
Boarnet, M., and Crane, R. (2001). "The influence of land use on travel behavior:
specification and estimation strategies." Transportation Research Part A:
Policy and Practice, 35(9), 823-845.
Branco, G., Lachal, B., Gallinelli, P., and Weber, W. (2004). "Predicted versus
observed heat consumption of a low energy multifamily complex in
Switzerland based on long-term experimental data." Energy and Buildings,
36(6), 543-555.
Davis, S., Diegel, S., and Boundy, R. (2007). "Transportation Energy Data Book:
Edition 26. Oak Ridge National Laboratory." ORNL, 6978.
EIA (2013). "International Energy Outlook 2013 With Projections to 2040." U.S.
Energy Information Administration.
Ewing, R. H., and Anderson, G. (2008). Growing cooler: the evidence on urban
development and climate change, ULI Washington, DC.
Frank, L. D. (2000). "Land use and transportation interaction implications on public
health and quality of life." Journal of Planning Education and Research,
20(1), 6-22.
Jia, S., Peng, H., Liu, S., and Zhang, X. (2009). "Review of transportation and
energy consumption related research." Journal of Transportation Systems
Engineering and Information Technology, 9(3), 6-16.
Karan, E., Sivakumar, R., Irizarry, J., and Guhathakurta, S. (2014). "Digital Modeling
of Construction Site Terrain using Remotely Sensed Data and Geographic
Information Systems Analyses." Journal of Construction Engineering and
Management, 140(3), 04013067.
Karan, E. P., and Irizarry, J. (2014). "Developing a Spatial Data Framework for
Facility Management Supply Chains." Construction Research Congress 2014,
D. Castro-Lacouture, J. Irizarry, and B. Ashuri, eds., American Society of
Civil Engineers, Atlanta, GA, 2355-2364.
Lindén, A.-L., Carlsson-Kanyama, A., and Eriksson, B. (2006). "Efficient and
inefficient aspects of residential energy behaviour: What are the policy
instruments for change?" Energy policy, 34(14), 1918-1927.
Polzin, S., and Chu, X. (2007). "Exploring long-range US travel demand: A model for
forecasting state level person miles and vehicle miles of travel for 2035 and
2055." Center for Urban Transportation Research, University of South
Florida.
Porter, C. D., Brown, A., Vimmerstedt, L., and Dunphy, R. T. (2013). "Effects of the
Built Environment on Transportation: Energy Use, Greenhouse Gas
Emissions, and Other Factors."
Sebt, M. H., Karan, E. P., and Delavar, M. R. (2008). "Potential Application of GIS to
Layout of Construction Temporary Facilities." International Journal of Civil
Engineering, 6(4), 235-245.
Wilson, A., and Navaro, R. (2007). "Driving to green buildings: the transportation
energy intensity of buildings." Environmental Building News, 16(9).
Abstract
INTRODUCTION
Walls and Smith, 1998). However, the existing LCCA approaches (e.g., Zhang et al.,
2013) have certain limitations for network-level cost analysis. First, they assume that
the timing, type and amount of future costs are deterministic, fixed values. However,
infrastructure networks include dynamic and uncertain interactions between the
environmental conditions, availability of funding, deterioration of assets, user
behaviors, and agency’s decision processes and priorities, all of which affect the
likelihood, timing, and amount of future costs (Batouli and Mostafavi, 2014). Second,
in the existing optimization-based cost analysis methodologies, costs are only taken
into consideration if they occur within the planning horizon. In reality, however, this
assumption is inconsistent with the continuous nature of service in infrastructure
networks; hence, using the existing optimization-based methods will lead to shifting
cost burdens beyond the planning horizon, defying the principles of sustainability
(Batouli and Zhu, 2014). For instance, in the example shown in Figure 1, preservation
activity AM4 is scheduled to be implemented during the final years of the planning
horizon. If the preservation activity is deferred, it will lead to cost reduction over the
planning horizon. However, this practice is not consistent with the principles of
sustainability. In order to resolve the cost-deferring tendency of the existing
optimization-based approaches, all life cycle costs of individual assets, even those
that fall beyond the planning horizon, should be taken into consideration. To this end,
an appropriate methodology for cost analysis in networks of infrastructure should be
capable of modeling the long-term costs beyond the planning horizon by considering
the dynamic interactions and uncertainties. This study proposes a simulation
framework for this purpose.
Legend: cost of reconstruction of asset A (B); cost of maintenance/rehabilitation.
Figure 2. Overview of the proposed simulation framework, from Step 1 (simulating user/asset/agency interactions under budget, demand/pressure, service, and sociotechnical/environmental conditions across the life cycles of assets 1 to n) to Step 4 (aggregation of cost annuities, using probability distributions of the input variables to obtain the distribution of annuity costs).
Third, the cash flows related to individual asset costs are converted into their
equivalent annual worth (i.e., annuity). This is because individual assets have
different life cycles from each other and from the planning horizon. Hence, using
annual worth conversion, the annual equivalent costs of each asset are computed and
aggregated to determine the network-level annual equivalent costs over the planning
horizon (Newman 2004). Fourth, the variables and parameters affecting the
agent-network-user interactions are inherently uncertain. For example, the
uncertainties related to the level of funding, the deterioration of assets, and future
preservation costs affect the uncertainty in the cost cash flows and, hence, the annual
network-level costs. In step 4, Monte Carlo simulation is used to determine the mean
and variance of the network-level costs. This enables selecting strategies that lead to
the lowest network costs with the greatest likelihood.
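A minimal sketch of the annual-worth conversion and Monte Carlo aggregation described above, using standard engineering-economics formulas; the example asset, cost figures, and uniform uncertainty are illustrative assumptions.

```python
import random

def present_worth(cash_flows, rate):
    """Discount (year, cost) pairs to present worth."""
    return sum(cost / (1.0 + rate) ** year for year, cost in cash_flows)

def annual_worth(cash_flows, rate, life_years):
    """Equivalent uniform annual cost of one asset over its own life cycle."""
    crf = rate * (1 + rate) ** life_years / ((1 + rate) ** life_years - 1)
    return present_worth(cash_flows, rate) * crf

def simulate_network(asset_samplers, rate, runs=10_000):
    """Monte Carlo estimate of the mean and variance of the network-level
    annual equivalent cost; each sampler draws (cash_flows, life_years)."""
    totals = []
    for _ in range(runs):
        total = 0.0
        for sample in asset_samplers:
            cash_flows, life = sample()
            total += annual_worth(cash_flows, rate, life)
        totals.append(total)
    mean = sum(totals) / runs
    variance = sum((x - mean) ** 2 for x in totals) / (runs - 1)
    return mean, variance

# Example asset: construction now, an uncertain rehabilitation cost in year 12.
asset_a = lambda: ([(0, 250_000), (12, random.uniform(80_000, 120_000))], 25)
print(simulate_network([asset_a], rate=0.04))
```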
NUMERICAL EXAMPLE
numerical case study is limited to the costs incurred by the agency; hence, the user
costs and the influencing user behaviors are excluded from the analysis in this case
study.
Table 1. Characteristics of the Case Network.
PSR = PSR0 − a (CESAL)^b (STR)^c (A.F.)^d (1)
In Eq. 1, PSR0 denotes the initial value of PSR for a given link right after
construction. This value is 4.5 according to Chootinan et al. (2006) and Lee et al.
(1993). Cumulative Equivalent Single Axle Loads per day (CESAL) and STR
(existing structure of pavement) capture the impact of traffic load and structural
design of the pavement, respectively. An adjustment factor (A.F.) was used to capture
the effect of climate conditions. Finally, a, b, c, and d are empirically based coefficients
whose values depend on the type of pavement (Lee et al. 1993).
The performance of pavement assets is also affected by the M&R activities.
Four types of M&R activities were considered in this case study: routine maintenance,
surface treatment, overlay, and rehabilitation. Each of these activities leads to a
certain level of improvement in performance depending on the age of the pavement
(Chootinan et al., 2006). The timing and type of M&R activities depend upon the
decision-making processes of the administrative agency, which are modeled using
agent-based modeling. The main variables in the agent-based model include the
performance conditions of assets and the level of funding. The decision rules of the
administrative agency follow a "worst-first" strategy in which the roads with the
lowest performance are prioritized for the allocation of M&R funding. A maintenance
and rehabilitation (M&R) activity is implemented if it can restore the pavement to an
excellent condition; if adequate funding is not available for the required M&R, repair
activities are deferred to the next period. The details related to the
agent-based modeling of the agency decision processes and user behaviors can be
found in Batouli and Mostafavi (2014). The outcomes of this simulation model
determine the performance conditions of pavement assets, the service life of each
asset, and the type and timing of M&R activities. Figure 3 depicts the simulated
performance condition of the pavement assets in the network. The service lives of
pavement assets are determined based on threshold values of PSR that indicate the
need for reconstruction. These threshold values were considered to be 2.2 and 2
for urban and rural roads, respectively (Elkins et al. 2013). Once a road reaches this
threshold PSR value, it is considered to be irremediable by maintenance activities,
and hence, it should be reconstructed.
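The decision rules just described can be sketched as follows; the data shapes, cost figure, and asset records are illustrative, not the implementation of Batouli and Mostafavi (2014).

```python
def allocate_mr_funding(assets, budget, mr_cost, psr_threshold):
    """Worst-first sketch: fund M&R for the lowest-PSR roads first, flag
    roads at or below the PSR threshold for reconstruction, and defer the
    rest when funding runs out (illustrative data shapes and cost)."""
    for asset in sorted(assets, key=lambda a: a["psr"]):   # worst first
        if asset["psr"] <= psr_threshold:
            asset["action"] = "reconstruct"   # beyond help of maintenance
        elif budget >= mr_cost:
            asset["action"] = "M&R"
            budget -= mr_cost
        else:
            asset["action"] = "defer"         # revisit in the next period
    return assets

roads = [{"id": "R1", "psr": 2.6}, {"id": "R2", "psr": 1.9}, {"id": "R3", "psr": 3.4}]
print(allocate_mr_funding(roads, budget=1_000_000, mr_cost=800_000, psr_threshold=2.0))
```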
Figure 3. Simulated performance condition (PSR) of the pavement assets over the analysis years.
leads to an average performance of 3.37 at the network level over the 40-year
planning horizon.
CONCLUSION
REFERENCES
Batouli, M., and Y. Zhu. (2014). Using Accrual Accounting Life Cycle Assessment as
an Indicator of Urban Sustainability. In Computing in Civil and Building Engineering (2014), ASCE.
ABSTRACT
INTRODUCTION
analyzing and communicating progress deviations, it will be useful to have each image
referenced against and compared with BIM and one another.
To aid image-vs.-BIM comparison, several methods have been introduced that
tie all images together in 3D using standard Structure-from-Motion (SfM)
procedures. The generated point clouds are then superimposed with the 4D BIM,
resulting in the back-projection of BIM onto all images that were successfully
registered through the SfM procedure. However, site images often exhibit wide
baselines and thus are not successfully registered with BIM. To address current
limitations, this paper
builds on Karsch et al. (2014) and presents a method that leverages BIM as a priori
information to initiate the SfM procedure.
By interactively guiding BIM into one or a few images that have significant
overlap with the rest, the proposed BIM-assisted SfM procedure results in more
accurate and complete point clouds and also generates more accurate BIM overlays
on site images. In the following, the state-of-the-art methods in BIM-vs.-image
registration and their limitations are introduced.
(Legend: build differences are color-coded as behind schedule or ahead of schedule.)
Figure 1. A collection of site images, BIM, and back-projection of BIM into one of
the non-referenced images.
RELATED WORK
Over the last few years, several techniques have been introduced that register
site images with BIM. A dominant method involves collecting time-lapse images from
fixed camera viewpoints to document the work-in-progress. These images are either
compared with one another (Abeid et al. 2003; Bohn and Teizer 2009) or against a 4D
BIM (Golparvar-Fard and Peña-Mora 2007; Golparvar-Fard et al. 2009a; Ibrahim et al.
2009; Kim and Kano 2008; Rebolj et al. 2008; Zhang et al. 2009) which represents the
expected state of construction progress. To highlight deviations in construction
progress, several visualization methods are also proposed that color code construction
elements based on the metaphor of traffic light colors (Golparvar-Fard et al. 2007).
Figure 1 illustrates an example where 4D BIM is superimposed on a time-lapse photo
for progress monitoring purposes. Based on the metaphor of traffic light colors, the
elements behind or ahead of schedule are color-coded red and green,
respectively.
Another line of work automatically generates 3D as-built point cloud models of
the ongoing construction using SfM procedures and then compares the documented as-
built model to underlying 4D BIM. Golparvar-Fard (2009b, 2010, 2011) and Brilakis
et al. (2011) conduct research on SfM-based 3D as-built documentation. Golparvar-
Fard et al. (2012) improve the density of these 3D as-built point clouds by adopting a
pipeline of Multi-View Stereo and voxel coloring algorithms, and present a method for
superimposing point cloud models with BIM through a set of corresponding feature
points between the as-built point cloud and BIM. While these methods have produced
promising results, they still exhibit several limitations.
While BIM can provide strong a priori information for initiating the SfM
procedure and allows for a scalable solution for BIM-vs.-image registration, its
application has remained largely unexplored. In a recent study, Karsch et al. (2014)
leverage BIM as a priori information and present a constraint-based procedure to
improve image-based 3D reconstruction. Their results show that the accuracy and
density of image-based 3D
reconstruction and back-projection of 3D BIM on unordered and un-calibrated site
images can be improved compared to the state-of-the-art. The following provides an
overview of the method and its functionalities.
METHOD OVERVIEW
interact with and explore the rich temporal data from the collection of the images and
the underlying BIM. The following presents an overview of each part of the method.
Registration of BIM into the anchor camera. To begin the registration process,
the user chooses an anchor camera from the collection (an image that has significant
overlap with the majority of the images) and then selects 2D locations in the image
and corresponding 3D points on the mesh model (see Figure 2). The developed
interface facilitates this selection by allowing the users to quickly navigate around
the BIM. Given at least four corresponding points, the six extrinsic parameters of the
camera – three rotation (R) and three translation (T) parameters – are derived in a
Perspective-n-Point (PnP) problem setting where the re-projection error is minimized
using the Levenberg-Marquardt algorithm. Here, the intrinsic parameters are fixed to
have no radial distortion, and the focal length is obtained from the EXIF data of the
anchor camera.
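As a concrete illustration of this registration step, the sketch below solves an analogous PnP problem with OpenCV; the correspondences and intrinsic values are placeholders, and the authors' own interface and solver details may differ.

```python
import numpy as np
import cv2

# Four user-picked correspondences on a (planar) facade of the BIM mesh,
# in model coordinates, and their 2D pixel locations in the anchor image.
# All values here are placeholders.
bim_points = np.array([[0.0, 0.0, 0.0], [4.2, 0.0, 0.0],
                       [4.2, 3.0, 0.0], [0.0, 3.0, 0.0]], dtype=np.float64)
image_points = np.array([[412.0, 288.0], [980.0, 301.0],
                         [965.0, 640.0], [405.0, 610.0]], dtype=np.float64)

# Intrinsics: focal length from the anchor image's EXIF data, principal point
# at the image center, and zero radial distortion, per the assumptions above.
f, cx, cy = 1200.0, 640.0, 480.0
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)

# Solve Perspective-n-Point; OpenCV's iterative solver refines R and T by
# minimizing the re-projection error with Levenberg-Marquardt.
ok, rvec, tvec = cv2.solvePnP(bim_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)        # 3x3 rotation from the rotation vector
print(ok, "\nR =\n", R, "\nT =", tvec.ravel())
```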
argmin_{ℙ \ ℙa, X} Σ_{(i,j) ∈ A} ‖project(ℙa, Xi(tj)) − uij‖² + Σ_{(i,j) ∈ V \ A} ‖project(ℙj, Xi) − uij‖²
where ℙa denotes the fixed anchor camera, A the observations in the anchor image(s), and V the full set of image observations.
This formulation provides better estimates since the model is constrained by
accurate camera parameters. Because it has fewer parameters to optimize over, the
efficiency of the optimization is improved and the variance in the estimates is reduced,
respectively. Figure 3 shows examples from the outcome of running the constrained
Bundle Adjustment for registration of all images in the collection using the anchor
cameras shown in Figure 2.
Reasoning about Occlusions. The method also reasons about occlusions and
re-adjusts the image overlay by comparing BIM and point cloud geometry from the
camera viewpoint (Figure 6a and b). By projecting the point cloud points against the
BIM geometry, it is predicted whether each point lies in front of the model. Because
the back-projected point clouds are typically sparse, the binary occlusion predictions
are flood-filled over superpixels and finally smoothed using a cross-bilateral filter.
Figure 6c shows the result of superimposing the occluded area over the BIM overlay.
Figure 6. Depth map vs. BIM overlay and reasoning about occlusion.
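A simplified sketch of the per-pixel depth test behind this occlusion reasoning (the superpixel flood-fill and cross-bilateral smoothing steps are omitted, and the array shapes are assumptions):

```python
import numpy as np

def occlusion_mask(points_cam, bim_depth, K):
    """Mark pixels where as-built points lie in front of the BIM surface.
    points_cam: Nx3 point cloud in camera coordinates; bim_depth: HxW depth
    map rendered from the BIM at the same view; K: 3x3 intrinsics."""
    h, w = bim_depth.shape
    mask = np.zeros((h, w), dtype=bool)
    uvw = (K @ points_cam.T).T                   # project points into the image
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)  # pixel coordinates
    for (u, v), z in zip(uv, points_cam[:, 2]):
        if 0 <= u < w and 0 <= v < h and z < bim_depth[v, u]:
            mask[v, u] = True                    # the point occludes the model
    return mask  # sparse; flood-fill over superpixels and smooth afterwards
```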
Figure 7 shows an example of how the constraints provided through the anchor
camera registration help prevent drift (left) (Karsch et al. 2014). They also result in
more accurate and more complete point clouds (right), due to a higher rate of image
registration.
This paper presents a method, together with experimental results, that leverages
BIM as a priori information to initiate the SfM procedure. BIM-assisted SfM begins
by registering the BIM with the images through an anchor camera that is manually
registered with the BIM, which drives the remaining 3D reconstruction and
registration processes. This step is carried out throughout the project duration,
generating as-built models at different times. The outcomes are visualized in a
photorealistic, time-lapse-like environment where a user can navigate in 4D. The
user can see part of, or the whole, building in different states of progress.
The open research challenges that still need to be addressed are as follows: 1)
choosing an anchor camera with insufficient overlap with the remaining images may
lead to failure; 2) the computational time needs to be analyzed and improved; and 3)
a client-server architecture needs to be studied to make this system scalable to
smartphones for onsite personnel.
ACKNOWLEDGEMENT
This material is in part based upon work supported by the National Science
Foundation under Grant CMMI-1360562. Any opinions, findings, and conclusions or
recommendations expressed in this material are those of the authors and do not
necessarily reflect the views of the National Science Foundation.
REFERENCES
Abeid, J., Allouche, E., Arditi, D., and Hayman, M. (2003). “PHOTO-NET II: a
computer-based monitoring system applied to project management.” Automation
in construction, Elsevier, 12(5), 603–616.
Bae, H., Golparvar-Fard, M., and White, J. (2014). “Image-Based Localization and
Content Authoring in Structure-from-Motion Point Cloud Models for Real-Time
Field Reporting Applications.” J. of Comp. in Civ. Eng. 637–644.
Bohn, J. S., and Teizer, J. (2009). “Benefits and barriers of construction project
monitoring using high-resolution automated cameras.” Journal of Construction
Engineering and Management, ASCE, 136(6), 632–640.
Brilakis, I., Fathi, H., and Rashidi, A. (2011). “Progressive 3D reconstruction of
infrastructure with videogrammetry.” Auto. in Const., Elsevier, 20(7), 884–895.
Golparvar-Fard, M., and Peña-Mora, F. (2007). “Application of visualization
techniques for construction progress monitoring.” Comp. in Civ. Engin. (2007),
216–223.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2009a). “Monitoring of
Construction Performance Using Daily Progress Photograph Logs and 4D As-
Planned Models.” Comp. in Civ. Engin., 53–63.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2009b). “Interactive Visual
Construction Progress Monitoring with D4AR --4D Augmented Reality --Models.”
Construction Research Congress 2009, 41–50.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2010). “D4AR--4 Dimensional
augmented reality-tools for automated remote progress tracking and support of
decision-enabling tasks in the AEC/FM industry.” Proc., the 6th Int. Conf. on
Innovations in AEC.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2011). “Integrated Sequential
As-Built and As-Planned Representation with Tools in Support of Decision-
Making Tasks in the AEC/FM Industry.” J. of Const. Engin. and Mgmt.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2012). “Automated Progress
Monitoring Using Unordered Daily Construction Photographs and IFC-Based
Building Information Models.” J. of Comp. in Civ. Eng, 147–165.
Golparvar-Fard, M., Sridharan, A., Lee, S., and Peña-Mora, F. (2007). “Visual
representation of construction progress monitoring metrics on time-lapse
photographs.” 25th Anniversary of Const. Mgmt and Economics, 2007, 239–256.
Han, K., and Golparvar-Fard, M. (2014). “Automated Monitoring of Operation-level
Construction Progress Using 4D BIM and Daily Site Photologs.” CRC 2014.
Ibrahim, Y. M., Lukins, T. C., Zhang, X., Trucco, E., and Kaka, A. P. (2009). “Towards
automated progress assessment of workpackage components in construction
projects using computer vision.” Adv. Engin. Informatics, Elsevier, 23(1), 93–103.
Karsch, K., Golparvar-Fard, M., and Forsyth, D. (2014). “ConstructAide: analyzing
and visualizing construction sites through photographs and building models.”
ACM Transactions on Graphics (TOG), ACM, 33(6), 176.
Kim, H., and Kano, N. (2008). “Comparison of construction photograph and VR image
in construction progress.” Automation in Construction, 17(2), 137–143.
Rebolj, D., Babič, N. Č., Magdič, A., Podbreznik, P., and Pšunder, M. (2008).
“Automated construction activity monitoring system.” Adv. Engin. Informatics,
Elsevier, 22(4), 493–503.
Wu, C. (2013). “Towards linear-time incremental structure from motion.” Proceedings
- 2013 Int Conference on 3D Vision, 3DV 2013, 127–134.
Zhang, X., Bakis, N., Lukins, T. C., Ibrahim, Y. M., Wu, S., Kagioglou, M., Aouad,
G., Kaka, A. P., and Trucco, E. (2009). “Automating progress measurement of
construction projects.” Automation in Construction, Elsevier, 18(3), 294–301.
Abstract
The combined use of Building Information Modeling (BIM) technology and lean
philosophy provides gains in the application of lean principles. This paper presents
the results of the application of an integration framework for the use of BIM with the
Last Planner System™. These results were obtained from the real-world use of BIM
technology in two residential projects in the city of Curitiba (Brazil). First, a
diagnostic of the planning and control processes was carried out; after this, the LPS
was implemented and the BIM models were developed; and finally, the integration
framework of BIM with the Last Planner System™ was applied. The use of the BIM
model in production management assisted decision-making in short- and
medium-term planning and control. Electronic communication (online or offline)
was implemented for the visualization of work package status and the 4D
visualization of construction schedules. The implementation of BIM and LPS in
these two projects demonstrates some specific interactions between BIM and LPS
proposed in the literature.
INTRODUCTION
BIM
Sacks et al. (2010) list studies that show a strong synergistic interaction
between BIM and lean construction. The use of BIM brings the need to revise the
organizational processes and the definition of deliverables that lean construction
seeks. The same authors note that BIM brings the opportunity to define and revise
the core processes; BIM features can therefore be used to achieve lean principles and
even to pull workflows. The same authors proposed a matrix of 56 synergistic
interactions between BIM and lean construction.
Sacks et al. (2013) developed a system called KanBIM, which uses BIM and
visual management of work packages to pull flows within the Last Planner System;
lean principles are achieved through the pulled flow.
Bhatla and Leite (2012) propose a theoretical framework of interaction
between BIM and the Last Planner System™, where BIM features are used to remove
constraints during the Lookahead phase and to coordinate meetings for MEP clash
detection.
In the Lookahead plan (the CAN phase of LPS), the BIM model suggests
demands for information (RFIs). These RFIs become constraints and are forwarded to
the members of the management team. Data collected on site provide production
feedback, which may or may not trigger a planning update.
In the weekly meeting (the WILL phase of LPS), the management team divides
the constraint-free activities into smaller tasks, sequences and discusses them, setting
the weekly schedule. All demands for information are systematically registered in the
BIM model, under their respective WBS codes. After the production period, it is
necessary to manage the tasks performed and those not concluded. Quality
management is applied, and the information on performed tasks is fed back into the
BIM model.
METHODOLOGY
Two case studies were carried out in this work on residential construction sites
in Curitiba (Brazil). The data were collected through participant observation, direct
observation, semi-structured interviews, and documentary analysis. In both projects,
the procedure adopted by the researchers is the one presented in Figure 2.
Before the weekly meetings for LPS planning, the researchers took a period
to collect data on the ongoing activities, aiming to understand and diagnose the
planning and control processes and establish an action plan. In the first weekly
meetings, the researchers conducted one-week short-term plans with the
management team and the Last Planners (three foremen). After this, they also began
the medium-term plans, looking six weeks ahead (Lookahead). In the last phase, after
the routine use of LPS, the BIM model was introduced for both short- and
medium-term discussions. All data were kept in a database with online access at the
construction site and updated after the weekly meetings.
The first project consists of four residential building towers with nine floors
each, a basement, and a boulevard, on reinforced concrete structures. The fourth
tower also has a second basement level and commercial rooms.
At the beginning, the researchers collected the project WBS and CAD designs
from the organization to develop the BIM model. The master plan was produced in
meetings with engineers, foremen, supervisors, and trainees. The master plan in this
case study was developed without the use of 4D BIM; it was displayed on a
whiteboard in the site office.
The work packages were pulled to the Lookahead plan, with a "view window"
of six weeks forward, and their constraints were identified and communicated to the
management team. The Lookahead plan was held in spreadsheets, without the use of
the BIM model. However, the 3D visualization was used to level the communication
in the meetings, because it made it easier to understand which work packages were
in the Lookahead.
The researchers followed the constraints in short meetings with the
management team. As work packages reached zero constraints, their status was
changed in the BIM model (color number 2, see Figure 5). These packages were in
the production stock, to be discussed at the weekly meeting. After the weekly
meeting, work packages that had been completed assumed the status "Check quality"
(color number 4, see Figure 5), so the trainee could check quality.
In this project, the constraints, the sequencing, and the planning updates were
not managed in the BIM model. Because the project was in the final stage of work,
the BIM model assumed a role only in the Will and Did phases of the LPS. Thus, the
work packages only took the statuses "Check quality", "Not finished", and "Rework".
There were many problems with the completion of all the rework. Due to the reduced
time to completion, it was necessary to know exactly which activities remained for
completion and which rework had to be done. A specific production team was
mobilized only to solve these problems.
The BIM model managed the delivery of work packages with quality and
completion issues, whether neglected in previous months or currently in progress.
The status of the work packages in the fourth tower is shown in Figure 3 below (see
the color convention in Figure 5). It is possible to see completion problems
distributed throughout the tower (yellow status).
CASE STUDY: FIFTEEN FLOORS RESIDENTIAL BUILDING PROJECT
Figure 4 – Status of each work package by apartments: from "restriction" to
"finished".
RESULTS
These real-world applications demonstrate how to use BIM technology for
electronic communication (both online and offline), allowing the visualization of
work package control status and the 4D visualization of construction schedules. The
BIM functionalities help the management team implement some lean principles, such as,
reduce production cycle durations, reduce inventory of work packages, and pull the
production.
A summary of the applied procedure, using the statuses and colors of Figure 5,
is as follows: while the work package is in the Lookahead plan, its constraints and
demands for information are managed by the management team and its status is
"Restriction" (color 1). When managers remove all the constraints, the work package
can be scheduled, assuming the status "No Restriction" (color 2). If it is scheduled for
the current week, its status changes to "Weekly plan" (color 3). When it is concluded,
its status changes to "Check quality" (color 4), demanding quality verification. The
person responsible for this verification decides on the situation of the work package:
order completion (status "Not finished", color 5); order rework (status "Rework",
color 6); or approve and close the package (status "Finished", color 7). At the time of
labour payments, the work packages with status "Finished" can be paid, and after
this, their status changes to "Paid" (color 8).
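This status workflow can be summarized as a small state machine; the transition set below, including returning reworked packages to quality checking, is one reading of the procedure rather than code from the case studies.

```python
from enum import Enum

class Status(Enum):
    RESTRICTION = 1      # constraints pending in the Lookahead plan
    NO_RESTRICTION = 2   # all constraints removed; ready to schedule
    WEEKLY_PLAN = 3      # scheduled for the current week
    CHECK_QUALITY = 4    # concluded; awaiting quality verification
    NOT_FINISHED = 5     # verifier ordered completion
    REWORK = 6           # verifier ordered rework
    FINISHED = 7         # approved and closed
    PAID = 8             # labour payment processed

# Allowed transitions; completed/reworked packages are assumed to return
# to quality checking before being approved.
TRANSITIONS = {
    Status.RESTRICTION: {Status.NO_RESTRICTION},
    Status.NO_RESTRICTION: {Status.WEEKLY_PLAN},
    Status.WEEKLY_PLAN: {Status.CHECK_QUALITY},
    Status.CHECK_QUALITY: {Status.NOT_FINISHED, Status.REWORK, Status.FINISHED},
    Status.NOT_FINISHED: {Status.CHECK_QUALITY},
    Status.REWORK: {Status.CHECK_QUALITY},
    Status.FINISHED: {Status.PAID},
    Status.PAID: set(),
}

def advance(current: Status, new: Status) -> Status:
    """Validate and apply a status change for a work package."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition {current.name} -> {new.name}")
    return new
```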
In both case studies, the team of researchers spent nearly four months on the
diagnostic of the planning and control processes; after this, four months on the LPS
implementation, during which the BIM model was also developed; and in the final
phase, four months attending the weekly meetings using the BIM model. Only after
the process diagnostic and the LPS implementation was it possible to use the
integration framework.
CONCLUSION
find out the real status of each package. The BIM model was the centralizer of
reliable information. To make this possible, the entire management team needed to be
involved: the foremen (Last Planners), the supervisors responsible for quality and for
payment measurements, the manager (engineer), and the chief engineer on site.
The results in this paper show that the following features of BIM stated by
Sacks et al. (2010) were performed: single information source, automated clash
checking, visualization of process status, and online communication of product and
process information. These features supported the lean principles of reducing
variability (flow variability), reducing production cycle durations, reducing the
inventory of tasks, reducing batch sizes, using a pull system, and using visual
management through technology.
REFERENCES
Ballard, G. (2000). “The Last Planner System of Production Control.” Thesis (Doctor
of Philosophy) – School of Civil Engineering, Faculty of Engineering.
University of Birmingham, Birmingham.
Bhatla, A.; Leite, F. (2012). “Integration Framework of BIM with the Last Planner
System.” In: Annual Conference of the International Group for Lean
Construction, 12, Proceedings.... San Diego, United States.
Dave, B., Boddy, S., & Koskela, L. (2011). “Visilean: designing a production
management system with lean and BIM.” In: Annual Conference of the
International Group for Lean Construction, 11, Proceedings.... Lima, Peru.
Eastman, C.; Teicholz, P.; Sacks, R.; Liston, K. (2011) "BIM Handbook: A guide to
Building Information Modeling for Owners, Managers, Designers, Engineers
and Contractors.” 2nd ed. John Wiley & Sons, Inc.
Formoso, C. T. (Org.). (2001). “Planejamento e controle da produção em empresas de
construção.” Núcleo Orientado para a Inovação da Edificação. Universidade
Federal do Rio Grande do Sul. Porto Alegre, Brazil.
Mendes Junior, R.; Scheer, S.; Garrido, M. C.; Campestrini, T. F.; (2014).
“Integração da modelagem da informação da construção (BIM) com o
planejamento e controle da produção.” In: Encontro Nacional de Tecnologia
do Ambiente Construído, 15, Proceedings... Maceió, Brazil.
Sacks, R.; Koskela, L.; Dave, B. A.; Owen, R. (2010). “The interaction of lean and
building information modeling in construction.” Journal of Construction
Engineering and Management, ASCE, p. 1307-1315.
Sacks, R.; Barak, R.; Belaciano, B.; Gurevich, U.; Pikas, E. (2013). “KanBIM
workflow management system: prototype implementation and field testing.”
Lean Construction Journal, p. 19-35.
Smith, D. K.; Tardif, M. (2009). "Building Information Modeling: a strategic
implementation guide for architects, engineers, constructors and real estate
asset managers." 1st ed. John Wiley & Sons, Inc.
Wang, W.; Weng, S.; Wang, S.; Chen, C. (2014). "Integrating building information
models with construction process simulations for project scheduling support.”
Automation in Construction, Elsevier B.V. v. 37, n. 5, p. 68-80.
Abstract
INTRODUCTION
Background. Fire safety design in buildings in many countries around the world is
shifting from prescriptive-based codes to performance-based codes. A performance-
based code allows the use of any design which satisfies compliance with the safety
goals of the code. Those goals are explicitly spelled out in the code, as are the means
that can be used to demonstrate compliance. The performance-based design approach
improves design flexibility by establishing clear code goals and leaving the means of
achieving these goals to the designer. As a result, the codes are more functional, less
complex, and easier to apply. Another advantage of performance-based design is that
it permits the incorporation and adoption of the latest building and fire research,
data, and models. These models are used as the tools for measuring the
performance of design alternatives against the established safety levels. The optimum
design can then be achieved while meeting the code safety objectives.
For performance-based designs, the engineer is also tasked with deciding if
the building is designed with enough protection to allow the occupants to evacuate
before incapacitation occurs. In a constantly evolving building environment, fire
safety engineering depends greatly on a better understanding of the fire phenomena
and of the behavior and response of the building occupants to the fire. As the
complexity and innovation in modern building design increase, the historical
methods to determine the evacuation performance features in safety design are
becoming insufficient. There are normally two potential ways to solve this problem:
one is to conduct fire drills or fire emergency evacuation experiments to get more
accurate information; the other is to perform computer evacuation modeling to
evaluate the evacuation performance. Fire drills or evacuation experiments tend to be
carried out very rarely due to the cost involved in planning them and the loss of work
hours in conducting them. Moreover, an evacuation drill concentrates on a specific
scenario and does not provide a comprehensive evacuation performance covering all
the possible routes from different starting positions and different possible fire
locations. While evacuation modeling is increasingly becoming a part of
performance-based analyses to assess the level of life safety provided in buildings, it
should be applied conservatively because this type of validation may not actually
simulate occupant behavior in a real situation. Some of the behavioral models
perform a qualitative analysis of the behaviors of the population; however, this is
problematic since occupant behaviors are difficult to capture in fire drills.
The core problem in performance-based design for fire safety is the human
behavior factors, since large amounts of real human evacuation data are needed for
developing the evacuation simulation model. Therefore, the proposed solution is a
building information modeling (BIM)-based Immersive Serious Gaming (BIM-ISG)
environment, which provides entertaining immersive emergency gaming
environments to collect and contemporaneously store the behavior decisions made by
the players while they are playing the game.
BACKGROUND
Evacuation Models. There are many reasons for performing evacuation simulations
for a building. Depending upon when the fire protection engineer is brought into the
project, evacuation models can be used during different stages of the design phase of
the building. Evacuation models are key in allowing the engineers and designers to
answer “what if” questions about the building under design. If the model is used early
enough in the design phase, it can aid in identifying possible solutions to heavy
congestion points inside the building, thereby improving safety performance. It is
most likely, however, that an engineer is brought into a project when the design is
near completion and a problem has been identified. If the project has reached the
detailed design phase, adding new stairs, exits, or extending means of egress may be
an impossibility. In this case, the models can be used to make small, but important
changes to the building, and assess the results of such changes.
For performance-based designs, the engineer is also tasked with deciding if
the building is designed with enough protection to allow the occupants to escape
before incapacitation occurs. The engineer can use evacuation models to simulate
several different egress scenarios in order to evaluate the evacuation results for a
certain building. Input variables for egress scenarios include building characteristics,
such as the number of floors and floor layouts, and occupant characteristics, such as
the number of occupants, the location of the occupants, speed, and body size.
Bounding evacuation results is important because many different fire scenarios can
cause different results, and human behavior in different fire situations is difficult to
predict. Through bounding, the designer attempts to anticipate different types of
emergencies and check if the building and occupants will reach the targeted safety
level in a reasonable amount of time. The egress results are then compared with fire
modeling results for the building in order to establish whether or not the occupants
have a sufficient amount of time to escape before they are faced with hazardous
conditions, such as toxic products from smoke. In particular, mapping the human
factors onto computer models is a challenge, because each person's singular behavior
is based on individual decisions and parameters and is not deterministic like the
spread of fire and smoke, which can be modeled and simulated based on natural
principles. According to Santos and Aguirre (2004), for an evacuation simulation
three analytical dimensions need to be considered: the built environment (physical
location); the management of this environment (signage, escape routes); and the
social psychological and social organizational characteristics of the occupants.
Tavares and Galea (2009) noted that an evacuation simulation model must consider
four interactions: occupants–structure; occupants–occupants; occupants–fire (in case
of fire events); and fire–structure.
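In performance-based terms, this comparison is often expressed as available versus required safe egress time; a trivial sketch follows, with an illustrative safety factor and times.

```python
def egress_check(aset_s: float, rset_s: float, safety_factor: float = 1.5) -> bool:
    """True if the available safe egress time (from fire modeling) covers the
    required safe egress time (from evacuation modeling) with a margin."""
    return aset_s >= safety_factor * rset_s

print(egress_check(aset_s=480.0, rset_s=270.0))  # True: 480 s >= 1.5 * 270 s
```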
defined rules in the Agent-Based Model (ABM) considered individual agent behavior,
interactions among agents, and group behaviors (Pan et al. 2007). Rüppel and Schatz
(2011) provided a comprehensive literature review of ABM evacuation simulation.
Serious Game. Serious games refer to video games which are focused on supporting
activities such as education, training, health, advertising, or social change. To train
for improving personal fire safety skills, a few studies have tried to use serious game
technology to develop an immersive emergency environment, where the goal of the
game is to survive the fire and the player's survival is strictly dependent on choosing
the right actions in evacuating the building and taking as little time as possible to
complete the evacuation (Ribeiro and Almeida 2013). To succeed and progress in the
game, users need to improve their decision making in fire situations by learning to
avoid common occupants' errors. Rüppel and Schatz (2011) introduced the concept
of a BIM-based serious human rescue game. Their work combined BIM and serious
gaming technology in a building safety application and made it possible to extend
BIM applications to integrate human factors. Rüppel and Schatz believed that their
work could bridge technology between the real and virtual world in the game (Tizani
and Mawdesley 2011).
PROPOSED APPROACH
The proposed approach consists of integrating BIM for serious game design with
cloud computing technology, and prototyping a game application.
BIM Module Input. Games are designed with the BIM module of the designed
building as an input. The BIM module serves the purpose of providing geometric and
non-geometric information. The architectural software package uses BIM technology
to enable information exchange with other applications or plug-ins. Such information
may include the size, type, material, location, and fire rating of floors, doors, walls,
and ceilings, for example. BIM provides the physical geometry information about the
building for occupants' escape path options, and the building structure material
properties to allow for an estimate of the fire propagation rate. The prototype
described here uses Autodesk® Revit® Architecture as the tool for constructing the
BIM module, although other similar BIM software packages could be used as well.
Cloud-based Game. Cloud computing technology is adopted for the game design. It
provides the entire game experience to the users remotely from a data center. The
player is no longer dependent on a specific type or quality of gaming hardware, but
can use common devices. The end device only needs a broadband internet connection
and the ability to display High Definition (HD) video.
System Architecture Deployment. A user first logs into the system via a web
browser portal server, which is built with the BIM of the designed building. As shown
in Figure 1, the users' behavior and attributes are collected by the web server from the
terminal devices. The attributes collected from the players include their age, gender,
height, and weight; this information is collected from the data that players enter to
create the character which represents them; otherwise, these attributes are set to
statistical averages by default. While the user is playing the game, their reactions to
the emergency situation are recorded by the human behavior data collector and sent
to the cloud.
Data Definition and Collection. In the BIM-ISG environment, game scenarios are
pre-set by an administrator before game playing is started. The building designer or
safety engineer can be the administrator, who is in charge of the BIM model and sets
the fire load and fire location (or sets them as randomly distributed) if immersive fire
scenarios are needed. Human factors are pre-set by default using existing data
from other human libraries, or statistical data. Game players are encouraged to set
their human factors (e.g. travel speed, body size, gender and age) if available before
playing the game.
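One possible shape for the collected records is sketched below; the field names and default values are illustrative assumptions, not the prototype's actual schema.

```python
from dataclasses import dataclass, field
from time import time
from typing import List, Tuple

@dataclass
class PlayerAttributes:
    """Self-reported character attributes; the defaults stand in for the
    statistical averages used when a player enters nothing."""
    age: int = 35
    gender: str = "unspecified"
    height_m: float = 1.70
    weight_kg: float = 70.0
    travel_speed_mps: float = 1.2   # typical unimpeded walking speed

@dataclass
class BehaviorEvent:
    """One decision made by the player during the evacuation game."""
    timestamp: float
    location: Tuple[float, float, int]   # (x, y, floor) in BIM coordinates
    action: str                          # e.g., "chose_exit_B", "ignored_alarm"

@dataclass
class GameSession:
    player: PlayerAttributes
    scenario_id: str                     # fire load/location preset by the admin
    events: List[BehaviorEvent] = field(default_factory=list)

    def record(self, location, action):
        # In the proposed architecture this record is pushed to the cloud store.
        self.events.append(BehaviorEvent(time(), location, action))
```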
DISCUSSION
Limitation and Future Studies. This paper argues that it is feasible to implement an
online immersive gaming environment for the collection of building emergency
evacuation performance data. The framework does not evaluate the game quality and
performance of the cloud-based online gaming environment or the server
requirements for massive access to the gaming server; during the implementation
phase, these issues should be considered in more detail. In addition, the player's
experience in an immersive gaming environment, such as possible discomfort in an
immersive game and ways to help end users adjust to uncomfortable game situations,
has not yet been explored. Even though immersive gaming technology has been
shown to provide a good feeling of presence, the players' feelings about the proposed
game have not yet been validated; feedback from pilot-study players about the
gaming environment will be necessary. Finally, to make the game fun to play, an
effective reward system should be developed as an incentive for players.
CONCLUSION
This paper discussed a new approach to collect human behavior and performance data
in emergency evacuations from players in a serious game. A framework was
proposed to use current technology integrating cloud gaming, immersive gaming, and
BIM to solve the problem of capturing real human behavior in emergency
situations. BIM is used in the game design phases to create the building environment
and evacuation routes information. Cloud computing is proposed to solve the problem
of accessibility by players and make it possible to have vast numbers of connected
game devices for human behavior collection. An immersive game is proposed to help
solicit the gamers’ real behavior in the gaming environment. The human egress data
collected from the game is expected to greatly benefit future emergency evacuation
simulation studies and facilitate performance-based fire safety design for building
evacuation.
REFERENCES
Burstedde, C., Klauck, K., Schadschneider, A., and Zittartz, J. (2001). "Simulation of
pedestrian dynamics using a two-dimensional cellular automaton." Physica A:
Statistical Mechanics and its Applications, 295(3), 507-525.
Du, J., and El-Gafy, M. (2012). "Virtual Organizational Imitation for Construction
Enterprises: Agent-Based Simulation Framework for Exploring Human and
Organizational Implications in Construction Management." Journal of
Computing in Civil Engineering, 26(3), 282-297.
Eubanks, J. J., Hill, F., and Casteel, A. (1998). "Pedestrian Accident Reconstruction
and Litigation. Lawyers & Judges Publishing Company." Inc.
Gwynne, S. (2011). "Employing Human Egress Data." Pedestrian and Evacuation
Dynamics, Springer, 47-57.
Liu, R., Du, J., & Issa, R. R. (2014). "Human Library for Emergency Evaluation in
BIM-based Serious Game Environment." In Proceedings ICCBE/ASCE/CIBW078
2014 International Conference on Computing in Civil and Building Engineering.
Liu, R., Du, J., & Issa, R. R. (2014). "Cloud-based deep immersive game for human
egress data collection: a framework." Journal of Information Technology in
Construction, 336-349.
Pan, X. (2006). "Computational modeling of human and social behaviors for
emergency egress analysis." Stanford University.
Pan, X., Han, C. S., Dauber, K., and Law, K. H. (2007). "A multi-agent based
framework for the simulation of human and social behaviors during
emergency evacuations." Ai & Society, 22(2), 113-132.
Ribeiro, J., Almeida, J. E., Rossetti, R. J., Coelho, A., and Coelho, A. L. (2013).
"Towards a serious games evacuation simulator." arXiv preprint
arXiv:1303.3827.
Rueppel, U., and Stuebbe, K. M. (2008). "BIM-based indoor-emergency-navigation-
system for complex buildings." Tsinghua Science & Technology, 13, 362-367.
Rüppel, U., and Schatz, K. (2011). "Designing a BIM-based serious game for fire
safety evacuation simulations." Advanced Engineering Informatics, 25(4),
600-611.
Santos G, Aguirre B.E. (2004). "A critical review of emergency evacuation
simulation models." In: Proceedings of the workshop on building occupant
movement during fire emergencies, Gaithersburg, Maryland, 27–52.
Shi, L., Xie, Q., Cheng, X., Chen, L., Zhou, Y., & Zhang, R. (2009). "Developing a
database for emergency evacuation model." Building and Environment, 44(8),
1724-1729
Smith, S. P., and Trenholme, D. (2009). "Rapid prototyping a virtual fire drill
environment using computer game technology." Fire Safety Journal, 44(4),
559-569.
Tavares, R. M. and Galea, E. R. (2009). "Evacuation modelling analysis within the
operational research context: A combined approach for improving enclosure
designs." Building and Environment, 44(5), 1005-1016.
Tizani, W., and Mawdesley, M. J. (2011). "Advances and challenges in computing in
civil and building engineering." Advanced Engineering Informatics, 25(4),
569-572.
Van den Berg, J., Lin, M., and Manocha, D.(2008) "Reciprocal velocity obstacles for
real-time multi-agent navigation." Proc., Robotics and Automation, 2008.
ICRA 2008. IEEE International Conference on, IEEE, 1928-1935.
Varas, A., Cornejo, M. D., Mainemer, D., Toledo, B., Rogan, J., Muñoz, V., and
Valdivia, J. A. (2007). "Cellular automaton model for evacuation process with
obstacles." Physica A: Statistical Mechanics and its Applications, 382(2),
631-642.
Wang, Y., Zhang, L., Ma, J., Liu, L., You, D., and Zhang, L. (2011). "Combining
building and behavior models for evacuation planning." Computer Graphics
and Applications, IEEE, 31(3), 42-55.
Zhang, L., Wang, Y., Shi, H., and Zhang, L. (2012). "Modeling and analyzing 3D
complex building interiors for effective evacuation simulations." Fire Safety
Journal, 53, 1-12.
Abstract
INTRODUCTION
LITERATURE REVIEW
PROPOSED FRAMEWORK
A framework has been created in this study to pair and match EBS and WBS
to facilitate the automation of construction scheduling. Figure 1 illustrates the
framework and its workflow. The framework consists of four phases: A) Extract the
object information from the BIM models, B) define the WBS lists and relations of
each element type, C) pair the defined WBS lists and objects of the BIM model, and
D) generate the schedule.
This framework has three prerequisites. First, the designed BIM model should
follow a common model hierarchy, such as the story-based model hierarchy of
Industry Foundation Classes (IFC) and Graphisoft ArchiCAD, which is critical to the
whole process of this method. For example, if the IFC model is used for this process,
the user should design the BIM model based on the IFC model hierarchy; thus, the
wall elements in the IFC model should be drawn for each story and use the IfcWall
entity. Second, materials should be assigned to each object, which is critical because
the names of the assigned materials are later used to identify each object when
pairing it with the WBS lists. Third, the WBS directory and sequence rules should be
defined by the user. Many construction companies and government agencies have
already established and use their own manuals for the scheduling and management of
construction projects. For instance, the New Jersey Department of Transportation
(NJDOT) provides a construction scheduling manual for designers and contractors
(NJDOT 2013).
A) Extract the Object Information from the BIM Models. For the
automatic schedule generation method, the most important task is to extract relevant
information from objects in the BIM model. The BIM model is built from basic
elements, such as walls, columns, beams, and slabs, using current commercial BIM
authoring software (e.g., Autodesk Revit or Graphisoft ArchiCAD) in accordance
with the aforementioned prerequisites. Therefore, if the user uses Autodesk Revit to
design a BIM model, object information from the BIM model can be extracted for
this method, as shown in Figure 2.
Figure 2 presents an example of data extraction from a column object in a
BIM model that is a 40-cubic-foot concrete column and located in the Level 1 area.
Similarly, data from other objects can also be extracted and grouped through a
repetition of this step, and the extracted data in this step will be used in section C
(pair the defined WBS lists and objects of the BIM model).
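For the IFC route named above, the extraction step can be sketched with the open-source ifcopenshell toolkit; the file name is hypothetical, and only directly assigned IfcMaterial associations are handled in this sketch.

```python
import ifcopenshell  # open-source IFC toolkit

model = ifcopenshell.open("two_story_frame.ifc")   # hypothetical file

def material_names(element):
    """Names of directly associated IfcMaterial entities (layered material
    sets are ignored in this sketch)."""
    names = []
    for assoc in element.HasAssociations:
        if assoc.is_a("IfcRelAssociatesMaterial"):
            mat = assoc.RelatingMaterial
            if mat.is_a("IfcMaterial"):
                names.append(mat.Name)
    return names

objects = []
for storey in model.by_type("IfcBuildingStorey"):
    # Elements hang off the storey via IfcRelContainedInSpatialStructure.
    for rel in storey.ContainsElements:
        for element in rel.RelatedElements:
            if element.is_a() in ("IfcWall", "IfcColumn", "IfcBeam", "IfcSlab"):
                objects.append({
                    "type": element.is_a(),        # e.g., "IfcWall"
                    "level": storey.Name,          # e.g., "Level 1"
                    "materials": material_names(element),
                })
```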
B) Define the WBS Lists and Relations of each Element Type. This task
defines WBS lists and the relations between each element type as basic rules for the
tasks in sections C and D. Once all the basic rules are defined, the user is able to use
the defined rules consistently for the generation of construction project schedules
based on BIM. For example, if the user carries out this task for reinforced concrete
work and uses Autodesk Revit to draw the BIM model, the final output of this step
can be described as shown in Table 1.
Table 1. Example of Defined WBS Lists and Relations of each Element Type.
Element Type | Built-In Category | Material | Level | Placement Level | Related WBS Lists (Relations)
Wall | Walls | Concrete | n | n | 01 Formwork; 02 Reinforcement (01 FS); 03 Pouring Concrete (02 FS); 04 Formwork Removal (03 FS)
Column | Columns | Concrete | n | n | 01 Formwork; 02 Reinforcement (01 FS); 03 Pouring Concrete (02 FS); 04 Formwork Removal (03 FS)
Beam | Structural Framing | Concrete | n | n+1 | 01 Formwork; 02 Reinforcement (01 FS); 03 Pouring Concrete (02 FS); 04 Formwork Removal (03 FS)
Floor | Floors | Concrete | n | n+1 | 01 Formwork; 02 Reinforcement (01 FS); 03 Pouring Concrete (02 FS); 04 Formwork Removal (03 FS)
used. In addition, if the user uses Autodesk Revit for their BIM tasks, the beam and
floor elements should be placed in the n+1th floor level if the user draws an nth floor
structure. On the other hand, if the user uses Graphisoft ArchiCAD as BIM authoring
software or other software packages, these rules can vary in accordance with the
element hierarchy of the BIM software used. Therefore, there are many possible ways
to define relations regarding the same tasks depending on the BIM authoring software
used and the BIM model hierarchy.
C) Pair the Defined WBS Lists and Objects of the BIM Model. The aim of
this step is to pair each BIM model object and WBS list to generate a schedule with
scheduling software. This process has several steps, as shown in Figure 3. Most
construction tasks are repeated on each floor; therefore, the total number of levels in
a BIM model is detected as a first step. This is one more than the total number of
stories in the actual building. For instance, if the user draws a two-story building, the
ceiling of the second story should be placed in the third story using a floor object;
thus, the detected total number of levels is reduced by one. In the next step, the WBS
lists defined in section B are duplicated with the name of each level and paired with
the BIM model object groups. For example, the formwork activity in Level 1 can be
paired with the walls and columns assigned concrete material and placed in Level 1,
and the beams and floors assigned concrete material and placed in Level 2. Once
these pairing tasks are completed, everything is ready to export the output to the
scheduling software. Assuming that the user has designed a two-story reinforced
concrete structure BIM model using Autodesk Revit, the final output of this step can
be described as shown in Table 2 (see also the pairing sketch after the table).
Following this procedure, although the BIM model contains three levels, the total
number of levels in the BIM model has been categorized into two levels: Level 1 and
Level 2. In addition, the reinforced concrete work activities have been duplicated,
and the relevant BIM model object groups have been paired.
Figure 3. Procedure during Pairing WBS Lists and BIM Model Objects.
Table 2. Example of Pairing Defined WBS and BIM Model Object Groups.
Level | WBS Lists | BIM Object Group (Placement Level) | Relations
Level 1 | 01 Level 1 Formwork | Wall, Column (Level 1); Beam, Floor (Level 2) | -
Level 1 | 02 Level 1 Reinforcement | Wall, Column (Level 1); Beam, Floor (Level 2) | 01 FS
Level 1 | 03 Level 1 Pouring Concrete | Wall, Column (Level 1); Beam, Floor (Level 2) | 02 FS
Level 1 | 04 Level 1 Formwork Removal | Wall, Column (Level 1); Beam, Floor (Level 2) | 03 FS
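A minimal sketch of this pairing step, assuming the object records of the earlier extraction sketch and the story-based placement rules of Table 1:

```python
WBS = ["01 Formwork", "02 Reinforcement", "03 Pouring Concrete", "04 Formwork Removal"]
VERTICAL, HORIZONTAL = ("IfcWall", "IfcColumn"), ("IfcBeam", "IfcSlab")

def pair_wbs(objects, total_model_levels):
    """Duplicate the WBS lists per level and pair each activity with the
    object groups of Table 2 (walls/columns on level n, beams/floors on n+1)."""
    levels = total_model_levels - 1        # top level holds only the ceiling slab
    schedule = []
    for n in range(1, levels + 1):
        for i, activity in enumerate(WBS):
            code, label = activity.split(" ", 1)
            group = [o for o in objects
                     if "Concrete" in " ".join(o["materials"])
                     and ((o["type"] in VERTICAL and o["level"] == f"Level {n}")
                          or (o["type"] in HORIZONTAL and o["level"] == f"Level {n + 1}"))]
            schedule.append({
                "name": f"{code} Level {n} {label}",   # e.g., "01 Level 1 Formwork"
                "objects": group,
                "predecessor": f"{WBS[i - 1][:2]} FS" if i else None,
            })
    return schedule
```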
D) Generate the Schedule. After all the tasks are completed, the output can
be produced. The generated preliminary schedule is exported in the format of the
commercial scheduling software (e.g., Microsoft Project). The activities are grouped
and defined by story, and each activity list includes the story level, a default 1-day
duration (assuming a 5-day, 40-hour workweek), the start date, the end date, and the
predecessors. By exporting the output to commercial scheduling software, the user
can utilize other functions in the scheduling software and develop the project
schedule easily.
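The export step can be as simple as writing a flat task list for the scheduling software to import; the CSV layout below is a simplification (a real Microsoft Project import maps predecessors by task ID).

```python
import csv

def export_schedule(schedule, path="preliminary_schedule.csv"):
    """Write generated activities with the default 1-day duration; predecessor
    strings such as '01 FS' would be mapped to task IDs on import."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["ID", "Name", "Duration", "Predecessors"])
        for i, task in enumerate(schedule, start=1):
            writer.writerow([i, task["name"], "1 day", task["predecessor"] or ""])
```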
In order to test the proposed framework, a simple BIM model was created
using Autodesk Revit, and a prototype system was developed using the Autodesk
Revit API. The BIM model in the case study consists of two levels, and each level
includes six columns, six beams, two walls, and one floor, as shown in Figure 4. The
materials of all the objects were assigned the "concrete, cast-in-place gray" type. The
schedule generation was carried out for reinforced concrete work, as in the previous
examples. Tables 3 and 4 present the results of the generated schedule and the paired
WBS lists and BIM model objects. Eleven tasks (eight summary tasks and three
subtasks) were generated, and the default duration of each generated activity is 1 day.
Thirty objects were paired with the defined WBS lists. Therefore, it can be concluded
that the proposed method has been validated.
ABSTRACT
INTRODUCTION
efforts have been made over four decades to improve the steps needed for proper DES modeling in the Architecture, Engineering, Construction and Facility Management (AEC/FM) industry. In an effort to facilitate simulation study and application, the Visualization, Information Modeling, and Simulation (VIMS) committee (one of the technical committees under the ASCE Technical Council on Computing and Information Technology) identified and discussed three challenging areas in computer simulation (Lee et al. 2013): 1) realistic simulation modeling; 2) applicability of simulation models to the industry; and 3) academic and educational obstacles. The focus of this paper is on addressing the first two challenges by proposing a framework that formalizes the DES modeling process to support rapid generation of data-driven DES models based on ACDs.
Thanks to the latest sensor technology advancements, several researchers have recently introduced the concept of near real-time simulation, which uses real-time data from tracking technologies such as global positioning systems (GPS), radio frequency identification (RFID), and ultra wide-band (UWB) to update model parameters (mainly activity durations) if and when necessary (Song and Eldin 2012; Akhavian and Behzadan 2013; Vahdatikhaki and Hammad 2014). However, these previous studies on near real-time simulation require an a priori model that is not discovered from the collected data but defined by modelers. Simulation model discovery based upon real operation data is essential for achieving realistic simulation modeling, improving the applicability of simulation models to the industry, and gaining more credibility for simulation studies.
Therefore, the objective of this study is to formalize a framework (Figure 1) for an autonomous data-driven DES model generation process that transforms sensor-based real-world data into DES models from scratch. Such a framework requires designing a classification schema (i.e., taxonomy) of the level of model detail that categorizes individual activities with respect to the type of resources and the scope of the intended analysis, and modeling it in such a way that a computer system can leverage the real operation data to infer the status of activities correctly. An activity ontology hierarchy is designed to specify various activities and their relations at different abstraction levels, and to formally define activity types and hierarchies for a given construction operation. Using the ontology-based framework, we can systemize the procedure of simulation modeling, which entails selecting the abstraction level of the simulation model, identifying the list of activities that need to be inferred from data to create activity logs, and determining the type and resolution of data to be collected. The proposed framework is demonstrated using an earthmoving operation example.
METHODOLOGY
Activity log. The proposed framework uses the idea of process mining to discover real processes (i.e., not assumed processes) by extracting knowledge from activity logs (Table 1), which are lists of sequences of activities per cycle in terms of resources. The discovered processes are then represented as ACDs at the intended abstraction levels. Since ACDs are a natural means for representing three-phase activity scanning (AS) simulation models (Martinez and Ioannou 1999), we can use the ACD as a blueprint
for construction simulation systems such as CYCLONE (Halpin and Riggs 1992) and
STROBOSCOPE (Martinez 1996).
An ACD consists of alternating circles and rectangles connected with links. The rectangles are called activities and represent tasks performed by one or more resources. The circles are called queues and represent inactive resources in specific states. However, the traditional description of the ACD does not provide a formal rationale for the sequential process of developing models that computer systems can use to connect the elements into a model. Hence, an in-depth definition of the ACD elements is essential to represent the role of the activity log in a computer simulation system.
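For illustration, an activity log of the kind listed in Table 1 can be encoded as per-resource lists of per-cycle activity sequences; the structure and activity names below are assumptions based on the text, not the authors' data format.

# Illustrative encoding of an activity log: for each resource, a list of
# per-cycle activity sequences.
activity_log = {
    "Truck-1": [
        ["Load", "Haul", "Dump", "Return"],
        ["Load", "Haul", "Dump", "Return"],
    ],
    "Excavator-1": [
        ["Excavate", "SwingLoaded", "DumpSoilToTruck", "SwingEmpty"],
    ],
}

# The activity set per resource becomes the rectangles (activities) of the ACD.
activities = {res: sorted({a for cycle in log for a in cycle})
              for res, log in activity_log.items()}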
real-world operation data into an ACD model that can replicate the underlying activity logs.
Identify data needs. The goal of activity recognition from data is to answer state identification questions derived from activity class attributes (Table 2). The answers can
be delivered by determining what to measure and how to collect data. The proposed framework uses active resource (e.g., equipment) class attributes and methods (Table 3) to systematize the choice of data collection methods. For example, a simulation model at the highest abstraction level in Figure 4 only requires identifying the start and end states of the activity Load. In finding the answer to the state identification question for each status in Table 2, such as "how do we know when the truck is empty or fully loaded?", we can decide which data collection system is needed. Data is a finite set of records of static or dynamic attributes of the resources listed in Table 3. The type of sensor can be selected via the equipment class methods. For example, to check whether the weight of the truck has changed using the method isWeightChanged, load cell data with timestamps can fully answer the given question.
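As a sketch of this idea, the following Python generator answers the state identification question from timestamped load-cell records; the record layout and threshold values are illustrative assumptions, not the paper's specification.

def load_states(records, empty_kg=500, full_kg=20000):
    """records: iterable of (timestamp, payload_kg) load-cell readings.
    Yields (timestamp, state) whenever the truck is empty or fully loaded."""
    for ts, kg in records:
        if kg <= empty_kg:
            yield ts, "empty"          # the Load activity may start
        elif kg >= full_kg:
            yield ts, "fully loaded"   # the Load activity has ended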
REFERENCES
ABSTRACT
INTRODUCTION
language based on ACDs can express the logic of complex simulation models very effectively (Martinez and Ioannou 1999). However, even though computer simulation has been broadly researched and practiced over the past four decades, simulation models are still often expensive and time-consuming to develop. Challenges to the successful completion of a simulation modeling effort (Law and Kelton 1991) include: (1) an inappropriate level of model detail; (2) failure to have people with knowledge of simulation methodology and statistics on the modeling team; and (3) failure to collect good system data. Such challenges have been a bottleneck in achieving realistic simulation models and improving the applicability of simulation models to the industry (Kandil et al. 2013).
With the recent advent of cheap, reliable remote sensing technologies, researchers in the construction engineering domain have explored automated data collection platforms, including global positioning systems (GPS), radio frequency identification (RFID), and ultra wide-band (UWB), and processing techniques to infer activities by using rule-based inference systems and K-means clustering (Song and Eldin 2012; Akhavian and Behzadan 2013; Vahdatikhaki and Hammad 2014). These recent studies aim at Near Real-Time Simulation (NRTS) to generate realistic simulation models that are responsive to changes in the real construction operation system during the execution phase. However, the current NRTS-based studies require the initial structure of a simulation model, which already defines and fixes the level of model abstraction. Thus, adaptive model updating and refinement can only be made on model parameters such as activity durations, not on the model structures themselves.
To cope with these limitations, this paper proposes an approach to learning the simulation model structure from data by using process mining techniques, as illustrated in Figure 1. We introduce process mining techniques for discovering workflow models at various abstraction levels from activity log data, which are time-ordered records of all the activities performed by different types of machines (i.e., resources) during a given construction operation. Thus, the activity logs can be used to construct a process specification which adequately models a corresponding activity cycle diagram (ACD). In this study, we employ a refined α-algorithm to extract a process model from such log data and represent it as an ACD-based DES model. This paper demonstrates the proposed method in the context of earthmoving operations and shows that it can successfully mine the workflow process of the earthmoving operations represented by an ACD.
Figure 1. Process mining: ACD-based DES model discovery from activity log
METHODOLOGY
ordering relations (Definition 2) which can be derived from the log: $>_L$, $\rightarrow_L$, $\#_L$, and $\parallel_L$.
If $a \rightarrow_{L_1} c$ and $b \rightarrow_{L_2} c$ in two different activity logs (i.e., $L_1$ and $L_2$ from two different resource-type members, e.g., $L_{Truck}$ and $L_{Excavator}$), then it appears that $c$ needs to synchronize $a$ and $b$ (AND-join pattern). Figure 2(c) describes this pattern, i.e., $EnterLdArea \rightarrow_{L_{Truck}} DumpSoilToTruck$ and $SwingLoaded \rightarrow_{L_{Excavator}} DumpSoilToTruck$. The logical counterpart of the AND-join pattern is the AND-split pattern shown in Figure 2(d). If $a \rightarrow_{L_1} b$ and $a \rightarrow_{L_2} c$ in two different activity logs (i.e., $L_1$ and $L_2$ from two different resource-type members, e.g., $L_{Truck}$ and $L_{Excavator}$), then the logs suggest that after the occurrence of $a$, both $b$ and $c$ should occur.
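To make these footprint relations concrete, here is a minimal Python sketch (an illustration, not the authors' implementation) that derives the four log-based ordering relations from a set of traces:

def footprint(traces):
    """traces: list of activity sequences. Returns the relation matrix with
    '->' (causality), '<-' (reversed causality), '||' (parallel), '#' (choice)."""
    follows = {(t[i], t[i + 1]) for t in traces for i in range(len(t) - 1)}
    acts = {a for t in traces for a in t}
    rel = {}
    for a in acts:
        for b in acts:
            ab, ba = (a, b) in follows, (b, a) in follows
            rel[a, b] = ("->" if ab and not ba else
                         "<-" if ba and not ab else
                         "||" if ab else "#")
    return rel

# Example: in a truck log, Load is always directly followed by Haul.
print(footprint([["Load", "Haul", "Dump", "Return"]])["Load", "Haul"])  # '->'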
(5) $Y_L^R = \{ (A, B) \in X_L^R \mid \forall (A', B') \in X_L^R : A \subseteq A' \wedge B \subseteq B' \Rightarrow (A, B) = (A', B') \}$
(6) $Q_L^R = \{ q_{(A,B)} \mid (A, B) \in Y_L^R \} \cup \{ iq_L^R, oq_L^R \}$
(7) $T_L^R = \{ (q_{(A,B)}, b) \mid (A, B) \in Y_L^R \wedge b \in B \} \cup \{ (iq_L^R, a) \mid a \in A_I^R \}$
(8) $F_L^R = \{ (a, q_{(A,B)}) \mid (A, B) \in Y_L^R \wedge a \in A \} \cup \{ (a, oq_L^R) \mid a \in A_O^R \}$
(9) $\alpha(L) = (Q_R, A_R, T_R, F_R)$
Figure 2. Typical process patterns in ACD and the footprints they leave in the
activity log
$L_1 = [\langle Load \rangle^{1000}]$
$L_2 = [\langle Load, Haul, Dump, Return \rangle^{945}, \langle Load, Haul, Dump, Return, Stop, Return \rangle^{50}, \langle Load, Haul, Dump, Return, Repair, Return \rangle^{5}]$
$A_{L_1} = A_I^1 = A_O^1 = \{ Load \}$, $X_{L_1} = Y_{L_1} = \emptyset$, $Q_{L_1} = \{ iq_{L_1}, oq_{L_1} \}$,
$T_{L_1} = \{ (iq_{L_1}, Load) \}$, $F_{L_1} = \{ (Load, oq_{L_1}) \}$,
$\alpha(L_1) = (Q_{L_1}, A_{L_1}, T_{L_1}, F_{L_1})$
This paper has introduced the refined α-algorithm together with the definition of the ACD structure. This algorithm is an extended α-algorithm, modified for ACD-based DES model generation. However, the refined α-algorithm has problems with noise, infrequent/incomplete behavior, and complex routing constructs. Nevertheless, it provides a good introduction to automated data-driven DES model generation. The α-algorithm is simple, and many of its ideas have been embedded in more complex and robust techniques. We will use the algorithm as a baseline for discussing the challenges related to process discovery for automated data-driven simulation model generation and for introducing more practical algorithms in the future.
REFERENCES
Abstract
INTRODUCTION
relevant data from multiple dispersed sources. These all contribute to occasional
inadequate inspection of a bridge (FHWA, 2001).
Extracting correct and relevant information, on the other hand, is not a straightforward task. It requires a formal, semantic-rich representation of condition information, alongside integrating that information with spatial contextual information from a bridge information model. Such an integrated model, when maintained over multiple inspections, also offers an opportunity to reason temporally with condition data. This in turn supports a critical aspect of bridge condition assessment, i.e., understanding the evolution of defects over time (Jahanshahi & Masri, 2013). Maintaining the model, of course, requires storing condition information from successive inspections in the same model and creating relationships between them.
In this paper, we identify ways to leverage an IFC-based model to integrate condition information, and we specifically analyze the extent to which IFC-Bridge can represent inspection-related information and how it can be extended to support condition assessment in bridge inspections. The authors also describe a case study of a steel beam concrete bridge in support of the discussions in this paper.
BACKGROUND STUDY
CASE STUDY
The case study that we focus on in this research is a steel beam bridge with a
suspended deck passing over other roadways in Boston. One of the goals of this case
study was to collect necessary data to investigate the extent to which current
standards support condition assessment of bridges. This bridge was known to have
many locations that suffer from deterioration and it also contains instances of
different defect types, such as section loss, concrete spalling, exposed reinforcement,
etc. Therefore, this bridge is an appropriate candidate for our preliminary study.
As part of the case study exercise, we gathered information sources
pertaining to the case-study bridge such as bridge as-built drawings, bridge inspection
reports, point clouds using a ground laser scanner, bridge condition photos and hand
measurements of various bridge elements and defects present on them. Using as-built
drawings, we built a preliminary 3D model of the bridge using Autodesk Revit. On the basis of observations made using the collected point cloud information and bridge condition photos, we realized that the as-built drawings had missing details and that some member additions had not been updated in the drawings. Thus, they do not reflect the actual current conditions of the bridge. Therefore, we used the point cloud data to perform deviation analysis on the 3D model and updated it. Then, we corroborated
the changes with available photos and hand measurements. In the inspection reports, spanning several pages, we found information about the bridge location and meta information, site inspection details, condition ratings, subjective condition descriptions, defect sketches, and photos of various elements of the bridge. Thereafter, we embedded this information into the 3D model to create an integrated information model that contains condition information and semantic relationships between different spatial and physical elements of the bridge.
Typically, an inspector has to look for information similar to what was embedded in the integrated information model in several 2D artifacts and mentally integrate details, such as the measured extents of the defects, previous condition ratings, and previous repair actions, to make an assessment of existing conditions. Overall, condition assessment for a typical bridge involves comparing complete condition information from preceding inspections to be able to assess changes over that duration. Completeness of the condition information means integrating it with bridge spatial and meta information, on-site inspection, repair action, spatial component, and physical element information (see grey shaded box in Figure 1).
INTEGRATION APPROACH
Figure 2. Representation of bridge spatial and meta information
On-site inspection information
On-site inspection can be represented in the IFC-Bridge schema by creating an IfcTask object. As IfcTask is derived from IfcObject, all its related information, such as inspection date, total inspection hours, current deck rating, and current superstructure
(AASHTO, 2010, FHWA, 2001 and Jahanshahi & Masri, 2013), and interactions with bridge inspectors in Boston and Pittsburgh, an important aspect of condition assessment is to understand how a defect has evolved over time. Hence, there is a need to formally specify the temporal context of defects to be able to capture the evolution of defects over successive inspections. The IfcRepresentationContext class has a provision to allow an IfcObject to have multiple representations in different contexts through the attribute ContextType (Akinci & Boukamp, 2003). In the ContextType attribute, a unique inspection-specific reference such as an inspection date could be used by casting it to an IfcLabel. The inspection date is unique, as it is unlikely that the same type of inspection will be performed on the same bridge on the same day. Therefore, by using ContextType, we can distinguish geometric representations of various defects in a temporal sense, which can be used for the purpose of understanding defect evolution during condition assessment. Other geometric properties that are static with respect to different inspections, such as the location origin of the defect, or the presence of a defect on a fracture-critical member, can be related to the IfcObject using the IfcRelDefinesByProperties relationship (see Figure 6).
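As a plain-Python illustration of this idea (not IFC toolkit code; the class and field names are invented for clarity), defect representations can be keyed by an inspection-date context so that successive geometries of the same defect can be compared:

class DefectRepresentation:
    """Stand-in for an IfcObject representation in a dated context."""
    def __init__(self, context_type, geometry):
        self.context_type = context_type  # e.g., "2014-11-26" cast to an IfcLabel
        self.geometry = geometry          # geometric representation at that date

defect_reps = [
    DefectRepresentation("2013-05-14", "spall_polygon_v1"),
    DefectRepresentation("2014-11-26", "spall_polygon_v2"),
]

# Sorting by ContextType yields the defect's evolution across inspections.
evolution = [r.geometry for r in sorted(defect_reps, key=lambda r: r.context_type)]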
CONCLUSION
ACKNOWLEDGEMENTS
The project is funded by a grant from the National Science Foundation (NSF),
#1328930. NSF's support is gratefully acknowledged. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF.
REFERENCES
Abstract
INTRODUCTION
level. Resource-time tradeoff options are generally considered at this level. With the available resource supply, the execution mode of individual activities is selected among the available options in order to shorten activity and project durations. Each option is associated with one unique combination of activity duration and resource demand as per the construction method. As such, the schedule for field operations is formulated in consideration of time-dependent resource availability constraints and resource-time tradeoff options. Ideally, the simultaneous minimization of three criteria (project completion time, activity completion times, and resource supply) is desirable for scheduling and resource management at both the project and workface levels.
[Figure: scheduling at the project level and the workface level - resource supply and resource usage plotted against project time (0 to T, with periods tp), with resource-time tradeoff options, under the objective of minimizing project completion time.]
The software system Primavera P6 serves as the primary scheduling tool in current practice. Primavera P6 adopts the critical path method (CPM) to formulate schedules. To facilitate both project and workface level planning, three levels of detailed schedules are formulated (Siu, Lu and AbouRizk 2014). At the project level, a summary schedule shows major milestones and work packages. At the workface level, detailed working schedules provide the fine granularity of weekly and daily activities, and the crew resources available in each period are defined. Although Primavera P6 allows for varied resource availability limits for different time periods (Harris 2013), we found it does not provide functionalities to address the critical decision processes for project and workface planning, as elaborated below:
(i) The determination of resource supply quantities for particular time periods is
largely dependent on the experience of project managers and schedulers,
LITERATURE REVIEW
simulation model is case-dependent, tedious to build and update, and not oriented
towards reaching theoretical optimums.
In short, the aforementioned previous research largely assumed that resource availability remains unchanged throughout the project period. However, it is not always economical or feasible to allocate the maximum quantity of available resources to execute the project. Developing an analytical technique to formulate and visualize the optimum resource-loaded schedule, which features the best selection of activity resource-time tradeoff options, the shortest project duration, and the leanest resource supply under time-dependent resource constraints, is therefore desirable. This motivates us to propose a crew-job allocation methodology consisting of mathematical formulations and a crew-job interaction table to represent the scheduling results. The resulting optimum solution provides decision support in terms of (i) fixing the resource supply to meet resource demand over particular project time periods; (ii) selecting activity execution modes; and (iii) planning individual resources' workflows.
MATHEMATICAL MODELING
$\text{minimize } f = \sum_{n}\sum_{m}\sum_{t=0}^{T} t\,x_{n,m}^{t} + \sum_{t=0}^{T} t\,x_{e}^{t} + \sum_{r}\sum_{tp} R_{r}^{tp}$  (1)

$\sum_{m}\sum_{t=0}^{T} t\,x_{n,m,i}^{t} \le \sum_{m}\sum_{t=0}^{T} (t - d_{s,m})\,x_{n,m,j}^{t}, \quad \{i, j\} \in PS$  (2)

$\sum_{m}\sum_{t=0}^{T} x_{n,m}^{t} = \sum_{t=0}^{T} x_{e}^{t} = 1$  (3)

$\sum_{n}\sum_{m}\sum_{t'=t}^{t+d_{a,m}} r_{r,n,m}^{t'}\,x_{n,m}^{t'} \le R_{r}^{tp}, \quad t \in T$  (4)
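As a rough illustration of how such a time-indexed model can be set up and solved, the sketch below uses the PuLP library (an assumption; the paper does not name a solver), with two activities, invented durations and resource demands, and binary variables marking activity start times; it mirrors Eqs. (1)-(4) only loosely.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum

T = range(10)                            # planning horizon
modes = {"A": {1: (2, 1), 2: (1, 2)},    # mode: (duration, demand of Resource A)
         "B": {1: (3, 1)}}               # Activity B must follow Activity A

prob = LpProblem("crew_job_allocation", LpMinimize)
x = {(n, m, t): LpVariable(f"x_{n}_{m}_{t}", cat="Binary")
     for n in modes for m in modes[n] for t in T}
R = LpVariable("supply_A", lowBound=0)   # Resource A supply level

prob += lpSum(t * v for (n, m, t), v in x.items()) + 5 * R  # cf. Eq. (1), weighted
for n in modes:                          # cf. Eq. (3): each activity starts exactly once
    prob += lpSum(x[n, m, t] for m in modes[n] for t in T) == 1
for t in T:                              # cf. Eq. (4): usage within supply at every t
    prob += lpSum(dem * x[n, m, s]
                  for n in modes for m, (dur, dem) in modes[n].items()
                  for s in T if s <= t < s + dur) <= R
# cf. Eq. (2): B starts no earlier than A finishes (finish-to-start precedence)
prob += (lpSum(t * x["B", m, t] for m in modes["B"] for t in T)
         >= lpSum((t + modes["A"][m][0]) * x["A", m, t] for m in modes["A"] for t in T))
prob.solve()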
A small case adapted from a classic textbook example (Ahuja et al. 1994) is
given to illustrate the application of the proposed mathematical model. The example
project consists of nine activities and involves two types of resources (Resource A
and Resource B). At the project level, the project can be completed within 30 time
units. The resource supply is considered with respect to three time periods. The
resource supply is expressed as a range of lower and upper bounds (Table 1). At the
workface level, the technological constraints are observed according to the activity
precedence relationships, which are indicated by the arrows depicted in the
activity/work-package network (Figure 3). Resource-time tradeoff options are
available for executing particular activities. The resource demand as per each
resource-time tradeoff option is summarized in Table 2. For example, Activity A can
be executed in Mode 1 or Mode 2. It can be completed in 2 time units by assigning 1
unit of Resource A, or completed in 1 time unit by allocating 2 units of Resource A;
Activity C can be executed by use of either Resource A or Resource B, where
Resource B acts as a substitute resource for Resource A.
As such, the shortest project duration is 20 time units. The optimum supply of
resources is determined as: (i) 3 units of Resource A and 2 units of Resource B
should be allocated from Time 0 to Time 6; (ii) 2 units of Resource A and 2 units of
Resource B should be allocated from Time 6 to Time 10; (iii) 2 units of Resource A
and 3 units of Resource B should be allocated from Time 10 to Time 20. Table 2
(bolded values) shows the selected resource-time tradeoff options for site operations.
Figure 4 depicts the formulated optimum schedule with the corresponding resource
usage histogram. The results assist the site superintendents/schedulers in determining
the resource supply over the project period. As such, under-supply or over-supply of
the resources can be avoided.
study. The horizontal axis denotes project progress time, and the vertical axis denotes each individual resource of the crew. This crew-job interaction table helps the superintendents/foremen present the detailed individual resource workflows.
CONCLUSION
REFERENCES
Abstract
INTRODUCTION
METHODOLOGY
through the combination of intervals when more precise information related to errors
is not available.
(Equation 1)
In the work presented in this paper, the model of interest is the damage state model (Table 1), while each measurement (predicted value) is an image of an RC column surface. From this image, the damage parameters that are necessary to determine the damage state are extracted. Table 2 shows these damage parameters and their associated measurement and modelling errors. Each measurement error is directly related to the accuracy of the respective machine vision method for retrieving the damage parameter. The modelling error accounts for the conservative nature of the damage state model. These are user-introduced systematic errors that are intended to be typical of how experts have dealt with modelling errors in the past. For example, when confronted with an inequality such as those contained in Table 1, experts round calculations up by as much as 10% in order to provide a safe classification. This is equivalent to a systematic error, which is represented in the last column of Table 2.
The IMS is established by varying each of the damage parameters across its potential values and combining the six values, which yields 4400 possible combinations of the damage parameters. These 4400 combinations represent the images (predicted values) in the model falsification methodology.
Using the errors presented in Table 2, thresholds are defined for each individual damage parameter, since each parameter is a measurement type. The model instances that fall within the bounds defined by the errors are temporarily considered candidate models; all others are falsified (Figure 1). Finally, each model instance that is a candidate model on all six plots (for the six damage parameters) is taken to be a final candidate model, and the damage state index is determined for each of these model instances (Figure 2).
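A minimal Python sketch of this procedure is given below (illustrative parameter domains and error bounds, not the paper's values): it enumerates an initial model set over discretized damage parameters and falsifies the instances that fall outside the thresholds.

from itertools import product

# Two of the six damage parameters, discretized; domains are assumptions.
domains = {"crack_width": [0.0, 0.5, 1.0, 2.0],
           "spall_depth": [0, 10, 20]}
ims = [dict(zip(domains, v)) for v in product(*domains.values())]

def falsify(ims, measured, bounds):
    """bounds[p] = (low, high) offsets around the measured value of parameter p."""
    return [m for m in ims
            if all(measured[p] + bounds[p][0] <= m[p] <= measured[p] + bounds[p][1]
                   for p in m)]

cms = falsify(ims, {"crack_width": 0.5, "spall_depth": 10},
              {"crack_width": (-0.3, 0.3), "spall_depth": (-5, 5)})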
RESULTS
The uncertainty in the damage state model was evaluated by carrying out the error-domain model falsification approach described in the previous section for each of the 4400 images (predicted values). Consider the example outlined in Figures 1 and 2. The RC column surface image has the following predicted values: the column face pictured is the flexural face; both longitudinal and flexural cracks exist; shear cracks exist in the lower third of the column but do not exceed the limit defined in Table 1; and spalling is present in the lower portion of the column, with longitudinal reinforcement exposed but not buckled.
Based on the thresholds (in red), which are calculated from the measurement and modelling errors (Table 2), several other values are equally possible (candidate models, marked in blue in Figure 1). Candidates which appear in all six plots are candidate models; those that appear in five or fewer are falsified. The damage state
for each of the candidate model sets is then determined using each model in the CMS.
Figure 2 displays all candidate models for this specific input image. The automated
machine-vision based method would have estimated the column’s damage state as
DS6. However, the uncertainty associated with each of the parameters shows that the
damage state could actually be DS6, DS7, DS8, DS9 or DS10.
CONCLUSION
REFERENCES
ABSTRACT
KEYWORDS
RESEARCH BACKGROUND
lives. It is reported that a utility line in the United States is hit every minute during construction activities (Su et al. 2013).
Several underground utility management systems have been proposed to achieve better management of utility information. For example, Du et al. (2006) proposed a system that integrates AEC software and a database management system to generate 3D utility lines from 2D drawings. Mendez et al. (2008) used a similar approach to build 3D models from 2D drawings with the help of height information. Cypas et al. (2006) proposed a system to store utility information in three dimensions. However, these efforts focused only on the generation of 3D geometry from 2D drawings. None of them considered the updating of the geometry and semantic information of underground utilities, making the proposed systems inflexible against changes. In addition, those systems did not support design and constructability checking for users from the architecture, engineering and construction (AEC) industry.
In this paper, a framework that integrates Building Information Modeling (BIM) with Geographic Information System (GIS) is proposed to help government authorities better manage utility information in 3D format. BIM provides flexible model creation, model updating and semantic information updating, while GIS provides the platform for visualization, model regeneration from CAD files and clash detection. A data schema for storing underground utility information (UUI) was proposed and implemented in the BIM-GIS integration in order to achieve seamless data translation. The proposed framework supports model regeneration from 2D files, model updating using BIM, and automatic design and constructability checking using clash detection in GIS.
METHODOLOGY
Figure 1. Overview of the proposed BIM-GIS underground utility management framework
BIM and GIS data formats. BIM aims to store and manage the data about a building or a facility throughout its lifecycle. Most BIM software packages, such as Autodesk Revit, have powerful editing features that allow users to easily modify both geometry and semantic information in models. Information from BIM also covers a wider range than that in a GIS model. In the proposed BIM-GIS underground utility management framework, Industry Foundation Classes (IFC) is used as the data format to carry data from BIM into the BIM-GIS integration.
GIS is a system to store, manipulate and analyze data in a geographic context. Nowadays, 3D GIS models are widely used in many applications. Commonly used GIS platforms, such as ArcGIS, provide functions such as 3D model visualization and multi-patch intersection analysis that can be further developed into a clash detection module. CityGML is used in our framework to carry data from GIS in the integration process.
UUI Data Schema. The schema design of the UUI system considers the information available from both the BIM and GIS environments. The information represented in the underground utility management system is divided into two categories: geometry information and semantic information. The geometry information contains both 2D lines from CAD raw data and 3D multi-patches generated by the system. The 3D geometry is mapped to the 2D drawings in order to ensure an accurate record. The semantic information is derived from UUI documentation, possibly generated from BIM or from on-site inspections and surveys. The detailed UUI schema used in the proposed framework is shown in Figure 2.
Notably, the UUI schema provides a guideline for generating models and inspection records. For example, Inspection Records contain Heights, which records the height of the utility at certain locations, and Line Geometry Type, which specifies whether the line being inspected is straight or curved. When surveyors carry out on-site surveying using tools such as ground-penetrating radar (GPR), they have to record the inspection location and the height information together. Once the data are compatible with the UUI schema, they can be imported into the framework and used for further analysis.
[Figure 2: UML class diagram of the UUI schema. Underground Utility Information comprises, on the semantic side, UUI Documentation (SourceID, RecordDate, Documentation Type; linked to SourceID, Owner, Inspection Date, Utility Type) and Inspection Records (SurveyDate, Surveyor, LineGeometryType, Heights as (location, height) pairs, Width); and, on the geometry side, UUI Geometry (UUI Creation Date, UUI Created by) with UUI CAD Raw Data (CAD Creation Date, CAD Created by, CAD Geometry: Polyline), UUI 3D Geometry, and UUI 3D Multipatch (X, Y, Z, Cosine).]
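As an illustration, an inspection record conforming to this schema could be represented as follows; this is a sketch in which the field names follow Figure 2, while the types and example values are assumptions.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class InspectionRecord:
    survey_date: str
    surveyor: str
    line_geometry_type: str                 # "Line" or "Curve"
    heights: List[Tuple[float, float]] = field(default_factory=list)  # (location, height)

rec = InspectionRecord("2014-08-01", "GPR crew", "Curve",
                       heights=[(0.0, -1.2), (12.5, -1.5)])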
Figure 3. The process of regenerating 3D models from 2D drawings and surveying records (import data, get polylines, read height data, iterate, and reject wrong data)
Figure 4. An illustration of hard clash and soft clash: a hard clash, where the new (vertical) line hits an existing one, and a soft clash, where the new (top) line enters the buffer zone (in white) and is still unsafe
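The two clash types can be sketched in 2D plan view with the Shapely library; this is an illustrative stand-in, since the paper performs the checks on 3D multipatches in ArcGIS.

from shapely.geometry import LineString

existing = LineString([(0, 0), (10, 0)])      # existing utility line
new_line = LineString([(5, -3), (5, 3)])      # newly designed line
clearance = 1.0                               # required buffer distance (assumed)

hard_clash = new_line.intersects(existing)                    # geometries touch
soft_clash = new_line.intersects(existing.buffer(clearance))  # enters buffer zone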
Model Updating. BIM provides a rich information pool for the lifecycle of buildings and facilities, including construction documentation, as-built 3D models and inspection records. Moreover, while the model editing features in GIS software are weak, BIM software is often dedicated to sophisticated design processes and has powerful editing functions. This gap in information richness and editing capability between BIM software and GIS software motivated a model updating module for utilities using BIM-GIS integration. This module aims to enrich and modify 3D utility models from the GIS environment using BIM software and to update the models in the background. When the model updating process is initiated, the 3D utility models are exported as CityGML files along with a spreadsheet that maps some semantic
information to objects. The CityGML file goes through the BIM-GIS integration engine and is converted to IFC files that can be directly imported into most BIM software. Users can make modifications to the IFC models. The modified BIM file can then go back through the BIM-GIS integration engine and be converted to CityGML, which can be viewed in the ArcGIS environment.
Two processes are critical in the model updating module. First, data conversion between GIS models (CityGML) and BIM files (IFC) is performed seamlessly using the BIM-GIS integration engine. The engine was developed based on the techniques from our previous BIM-GIS integration frameworks (Cheng et al. 2013; Cheng et al. 2015) and achieves a bi-directional mapping between CityGML and IFC files with no information loss. Second, the semantic information from BIM is mapped to the UUI schema, as shown in Figure 5. The information in these entities is mapped to items in the UUI schema so that it can be stored in the GIS environment for 3D underground utility management.
[Figure 5: Mapping between IFC entities and the UUI schema. IfcOwnerHistory (OwningApplication, State, ChangeAction, LastModifiedDate, LastModifyingUser, LastModifyingApplication, CreationDate), IfcElement (Tag), IfcFlowSegment (OverallHeight, OverallWidth, Name, Description) and IfcShapeModel (RepresentationIdentifier, RepresentationType) are mapped to Underground Utility Information, UUI Documentation, UUI Geometry and UUI 3D Geometry.]
The framework was implemented using ArcGIS as the GIS environment and
Autodesk Revit as the BIM environment. The functions in the framework were
developed using Visual Basic in ArcGIS with ArcObjects and C# in Revit with Revit
API. The BIM-GIS integration engine was developed using Java and packaged as a
stand-alone converter.
3D Model Regeneration. Figure 6 demonstrates the model regeneration process using a CAD drawing and one set of on-site surveying records. Note that the survey records must comply with the UUI schema so that the correct data can be imported into the GIS environment. The CAD drawings and the inspection records were verified by the regeneration module using a proximity check, and then the 3D models were generated, along with some semantic information from the inspection records. In the generating process, curved lines were also generated using the curve reconstruction function.
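A minimal sketch of the height-assignment idea behind the regeneration module is shown below; linear interpolation between survey points is an assumption about how the module behaves.

import bisect

def to_3d(polyline_xy, chainages, surveys):
    """polyline_xy: 2D vertices; chainages: distance along the line per vertex;
    surveys: sorted list of (chainage, height). Returns (x, y, z) vertices."""
    ch = [c for c, _ in surveys]
    hz = [h for _, h in surveys]
    pts = []
    for (x, y), c in zip(polyline_xy, chainages):
        i = min(bisect.bisect_left(ch, c), len(ch) - 1)
        j = max(i - 1, 0)
        # Linear interpolation (or nearest survey height at the ends).
        z = hz[i] if ch[i] == ch[j] else \
            hz[j] + (hz[i] - hz[j]) * (c - ch[j]) / (ch[i] - ch[j])
        pts.append((x, y, z))
    return pts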
Design Checking and Constructability Checking. The design checking module was also tested in ArcGIS using newly designed utility lines against the models generated from the regeneration process. In the test, the 3D models of the new lines were regenerated and tested against existing lines with buffer zones. The results show that there are 10 collisions between the planned new lines and the existing lines (see Figure 7). The locations of the collisions can also be retrieved. Design checking in 3D is automatic and more accurate than manual checking using 2D drawings.
Figure 6. Regeneration of the 3D utility model from a CAD drawing and surveying records, with inspection records attached to the 3D model as attributes
CONCLUSIONS
The UUI data schema will be tested to assess its coverage and effectiveness in communicating utility information.
REFERENCES
Candice, W. Y.-h. (2011). "Human factor domains utility safety in Hong Kong
construction industry. How should we contribute for the enhancement?"
Proc., the 2nd ICUMAS Conference, Hong Kong.
Cheng, J. C. P., Deng, Y., and Anumba, C. (2015). "Mapping BIM schema and 3D
GIS schema semi-automatically utilizing linguistic and text mining
techniques." Journal of Information Technology in Construction (ITcon), 20,
193-212.
Cheng, J. C. P., Deng, Y., and Du, Q. (2013). "Mapping between BIM models and 3D GIS city models of different levels of detail." Proc., the 13th International Conference on Construction Applications of Virtual Reality, London, United Kingdom.
Cypas, K., Parseliunas, E., and Aksamitauskas, C. (2006). "Storage of underground
utilities data in three-dimensional geoinformation system." Geodetski vestnik,
50(3), 481-491.
Du, Y., Zlatanova, S., and Liu, X. (2006). "Management and 3D visualisation of
pipeline networks using DBMS and AEC software." Proc., the ISPRS
Commission IV Symposium on Geospatial Databases for Sustainable
Development, Goa, India.
Mendez, E., Schall, G., Havemann, S., Fellner, D., Schmalstieg, D., and Junghanns,
S. (2008). "Generating semantic 3D models of underground infrastructure."
Computer Graphics and Applications, IEEE, 28(3), 48-57.
Su, X., Talmaki, S., Cai, H., and Kamat, V. R. (2013). "Uncertainty-aware
visualization and proximity monitoring in urban excavation: a geospatial
augmented reality approach." Visualization in Engineering, 1(1), 1-13.
The Benefits of BIM Integration with Facilities Management: A Preliminary Case Study
Abstract
There is an increasing recognition that, to maximize the benefits of Building Information
Modeling (BIM), it should be deployed in such a way that it remains useful beyond the design
and construction phase of projects. This means that it should be utilized in the facilities
management phase of a constructed facility. This paper focuses on the identification of benefits
gained from the effective integration of BIM in Facilities Management. Drawing on literature
from previous research, it argues for the adoption of strategic approaches towards holistic
consideration of benefits. The paper also presents the findings of a preliminary case study based
on exploratory interviews with key personnel on a major university building project, and review
of project documents. Qualitative descriptions of benefits were obtained from these interviews
and are summarized in the paper. The paper concludes with a discussion of the benefits of
BIM/FM integration, with a view to a comprehensive determination of project gains within the
lifecycle of a project. The paper offers some conclusions that are aimed at the holistic
consideration of benefits through the project phases preceding handover and including those in
the lifecycle stage for a more wholesome assessment of the effectiveness of BIM implementation
in FM.
INTRODUCTION
The integration of BIM and Facilities Management (FM) is relatively new and offers huge
potential for cost savings and improved processes. The application of BIM in many facets of
building Operations and Maintenance (O&M) can result in more sustainable, efficient and well-
managed buildings (Becerik-Gerber et al., 2012). Areas identified for the application of BIM in
FM include improved energy and sustainability management; enhanced, real-time emergency
and space management; and visualization. The cost savings and efficiency improvements
associated with BIM/FM integration are based on increased accuracy of FM data and models, and automated generation of FM data, resulting in procedural expediency for daily operational requirements, change-management proficiencies, and long-range planning (Sabol, 2013; Becerik-Gerber et al., 2012; Arayici et al., 2012). BIM/FM integration is, however, plagued by numerous challenges. Talebi (2014) succinctly grouped these into procedural, social, technical and cost facets. Table 1 below outlines the main challenges as summarized by Talebi (2014).
Table 1. BIM Adaptation and Implementation Challenges (Adapted from Talebi, 2014)
The most crucial challenge is the issue of interoperability (Sabol, 2013), with origins in the initial handover process from construction to operations in one instance, or in the migration from traditional FM data management software to BIM. It is also taken for granted that proper recording of equipment and assets is undertaken during construction. This may prove critical if the FM team is not brought in early enough in the project conception stages to define their requirements and intended end uses, and thus collaborate towards a strategy for achieving them. For existing buildings, mapping out facility components may well be a herculean task, especially in large establishments with a collection of facilities. This may be further complicated by outdated or unavailable as-built records and facility data. BIM proponents are therefore hard-pressed to prove the validity of their assertions on the advantages of BIM adoption in FM (Stowe et al., 2014). The financial investment in BIM/FM integration may be substantial. Inherent in this is the substantive need for procedural overhaul and its resulting impact on organizational productivity and efficiency during the transition process. Thus, there is the need for a clear justification of returns.
Becerik-Gerber & Rice (2010) highlighted the difficulties in determining the tangible benefits of
BIM in practice. Although this is crucial to financial decision-making and a valuable tool for
marketing, few investigations have actually captured the tangible benefits of BIM; those that
have attempted to do so have focused only on a narrow scope.
Case study approaches to investigating the benefits of BIM capture the most detail, but are limited by sample sizes and a lack of quantification and objectivity (Taylor et al., 2010). Country-specific studies and industry-wide surveys have been undertaken, with many experienced organizations using BIM developing varying methods for the identification of returns. While this is good, the overall approach to the determination of returns in BIM is plagued by a lack of quantification and objectivity. Stowe et al. (2014) argue that more effort should be given to developing a structured approach to measurement for the eventual standardization of key performance metrics for future
benchmarking. Becerik-Gerber & Rice (2010) shared this opinion by further reasoning that BIM
benefits should be investigated conjointly with an added dimension of a narrower focus on
disciplines in addition to a broader, global focus with consistency geared towards comparison
and maturity over time.
Sulankivi (2004) investigated the benefits of BIM in multi-partner projects applying the concept of Concurrent Engineering (CE). Her study comprised a holistic dimension which ran parallel with those of Becerik-Gerber & Rice (2010) and Lakka et al. (2001), where benefits were classified in one of three ways: quantifiable in monetary terms, quantifiable in 'other ways', or more qualitative. Becerik-Gerber & Rice (2010) described them as tangible (monetary), semi-tangible and intangible benefits, 'intangible' implying a more qualitative identification of rewards. Sulankivi (2004) used this approach in three case studies where the categorizations were linked in a matrix, resulting in a step-by-step approach to detailing. The qualitative benefits were each elaborated upon to produce quantitative descriptions, which were in turn given monetary value following detailed analysis. The most frequently recurring benefits were then extracted and recorded - useful for future benchmarking assessment as suggested by Stowe et al. (2014). The impact and importance of each category was investigated, in addition to the customization of each benefit according to user group (designers, engineers, etc.).
Numerous studies and surveys have focused on various aspects of BIM benefits following investment. A summary of indices may thus be obtained by extracting a hybrid of quantitative considerations. A decrease in RFIs and Change Orders leading to budget and schedule conformance (Giel & Issa, 2013) constitutes the most common indicator, paired with a variety of other semi-tangible considerations such as productivity, contingency provisions and response latency (Gilligan & Kunz, 2007). Young et al. (2009) included improved employee productivity and communication, advantages inherent in prefabrication (Khanzode et al., 2008), positive marketing, and improved information distribution resulting in fewer delays and disputes with fewer design and field errors (Sulankivi, 2004), as well as higher levels of managerial performance (Vaughan et al., 2013). Suermann & Issa (2009) presented six KPIs, comprising safety, cost, cost per unit, units per man-hour, on-time completion, and quality control/rework, to participants for ranking. Dehlin & Olofsson (2008) weighed their groupings of capital, operational and indirect costs against quantified project and procedural benefits for an establishment of financial returns.
The above considerations are primarily geared towards BIM investment in the project phases preceding operation and maintenance, with little focus on the lifecycle phase. Becerik-Gerber & Kensek (2009) highlighted the dire need for the development of core data sets geared towards facilities management with a holistic embodiment of common elements. Teicholz (2013) adapted ROI variables in a progressive manner, from qualitative descriptions to financially quantitative ones, in a sample calculation of ROI for BIM-integrated FM. However, there is a need for a more detailed investigation of the benefits to be gained at the FM stage of a building's lifecycle. As a first step towards this, a preliminary case study of a sports facility at a major university has been undertaken to identify some of the benefits of BIM/FM integration.
The case-study approach, while limited in size and broad objectivity, was adopted in this research with a narrow focus on disciplines towards the extraction of qualitative descriptions of benefits. This follows the step-by-step approach utilized by Sulankivi (2004) and Teicholz (2013) to capture intrinsic considerations for further categorization into semi-tangible benefits and eventual tangible determination. Whilst larger-scale studies would capture a wider range of benefits, the broad sampling of considerations would be subject to generalizations, with unique project details not comprehensively captured.
A preliminary case study was undertaken to establish the critical success factors for BIM
integration in FM, as part of a larger study on BIM-integrated FM. The case study project was
provided by Penn State University’s Office of Physical Plant (OPP). The project was OPP’s first
BIM-FM integrated project which was based on Integrated Project Delivery (IPD). The facility
comprises a 6,014-seat, 200,000-square foot multi-purpose arena which replaced the 1,350-seat
Ice Pavilion on the Penn State campus, with some of the best facilities provided for Division I
hockey. It was novel in its approach, being the first IPD project to be carried out in the
institution’s history. All processes were collaborative from conception through to handover and
involved OPP staff from the early stages, working with the entire project team.
Four of the OPP participants were selected for exploratory interviews, based on the phases and
extents of their involvement within the project timeline (Table 1). Three sets of interviews were
conducted with key personnel of the Office of Physical Plant (OPP) who were most involved in
the project. A number of project documents (such as the BIM Plan, BIM Implementation
Exchanges, BIM Contract Addendum and Technical Reports) were also reviewed. The purpose
of the interview sessions was to extract qualitative descriptions of benefits reaped from the
integration of BIM in FM in the context of the IPD project as observed by each party. Figure 1
below summarizes the main steps of the research.
Figure 1. Main steps of the research: literature review, selection of interview participants, interview sessions, review of project documents, and analysis of results
The descriptions outlined in Table 2 represent a cross-section of BIM benefits with a particular
focus on FM integration. These characterize a first step towards the extraction of critical success
factors, which could be further expounded upon to deduce quantitative descriptions that can be
assessed for monetary value. The categorization of benefits according to project phase and type
of involvement presents an opportunity to comprehensively capture benefits.
The benefits listed above show increased advantage with the progression of project phases, indicating that benefits become more pronounced within the lifecycle of a facility. Being a sports facility with a higher magnitude of optimized operations, safety and emergency-preparedness priorities, it is natural that a Facility Performance Focus (Chotipanich & Lertariyanun, 2011) is the main strategy for its management. This is evident in the Facility Manager's responses, which detail response time, data accessibility and retrieval, and proactive maintenance, all geared towards optimal performance and quick responses. This strategy varies widely from a 'Value for Money' facility operating with economies of scale and a lean operation focus (Chotipanich & Lertariyanun, 2011).
A goal-driven approach can be adopted, whilst focusing on the various disciplines involved in the project in a bid to extract a broader range of considerations. The facility manager listed the most benefits, attesting to the positive impact that the integration of BIM can make on FM. Exploring further, one could speculate on whether open-ended statements of advantage such as "more proactive maintenance" imply longer equipment life and better maintainability. Having "more accurate data" from the early project phases yielded long-term benefits in the operational phase, expanding into "shorter response time, improved efficiency and productivity", to name a few. This attests to the fact that applying BIM with an FM focus from earlier project stages yields increasing benefits in the long term. The application of a structured approach can thus yield an array of critical success factors which may or may not have been considered in top-down planning, but which open a multitude of possibilities for exploration and the determination of future key performance indicators. In addition, the Project Manager's description of "detailed and extensive design reviews" is evidence of benefits gained from amendments which might otherwise have led to expensive change orders following later FM inspections. As an example, an observation made by FM personnel following a 3D visualization session in the CAVE (Cave Automatic Virtual Environment) laboratory prompted preconstruction changes that might have been costly following construction.
Literature from previous research was analyzed, uncovering the challenges of BIM/FM integration and exposing the lack of a detailed body of knowledge on which to base a framework for the determination of benefits in BIM/FM-integrated projects. It was noted that the bulk of existing research in this area focuses more on the conception-through-construction phases, with less attention paid to the broader lifecycle phase. The process for the determination of benefits in the integration of BIM and FM thus lacks quantification and objectivity. The case study approach comprised exploratory interviews with key FM personnel who participated in the case study project, followed by a review of project documents. These highlighted a list of benefits reaped from the integration of BIM with FM from the early stages of the project lifecycle. Qualitative descriptions of benefits were obtained from three viewpoints: the Facilities Project Manager, the BIM Group, and FM personnel. Probing benefits by discipline allowed for a more holistic extraction of opinions to capture every facet of the integration from project inception through to present-day FM. The facility manager noted that there was a definite improvement to FM processes with the new implementation.
Based on the preliminary case study presented, this paper has identified the benefits of the
integration of BIM in FM. These include:
1. Noted improvements to collaboration
2. More detailed strategic planning with holistic considerations
3. More detailed and extensive design reviews towards more seamless lifecycle integration
The above-listed benefits are evidence of the effectiveness of the application of BIM from project conception to facilities management, with the potential for tangible returns on investment. The paper argued for detailed analysis of the advantages gained from BIM/FM integration, which is crucial for owner buy-in to the substantial investment of time and effort needed. Strategic approaches towards holistic consideration of benefits were explored with due consideration of varied purpose, by research focus (top-down, goal-driven or process-driven) and by approach (case studies and surveys), of which the case study approach was adopted for this research with the aim of obtaining qualitative descriptions of factors necessary for success. The extraction of benefits from the integration of BIM in FM from project conception to the lifecycle phase has yielded a rich description of factors deemed critical for the success of projects. It has been found that much is to be gained from the adoption of a lifecycle focus from project conception, with the benefits of this registering continual increase in the long term. The facilities management phase yielded a longer list of benefits with potential for increase over time. It has also been found necessary to include considerations of benefits to be gained from different project phases in order to embrace a more holistic consideration of benefits.
Future research should be geared towards identifying critical success factors derived from a study of benefits, and towards developing a structured model for investigating Return on Investment (ROI) across the lifecycle phases of projects. It is expected that this will yield a more standardized and holistic set of key performance metrics for determining the gains from the integration of BIM in FM.
ABSTRACT
The potential for automated construction quality inspection, construction
progress tracking and post-earthquake damage assessment drives research in
interpretation of remote sensing data and compilation of semantic models of
buildings in different states. However, research efforts are often hampered by a lack
of full-scale datasets. This is particularly the case for earthquake damage assessment
research, where acquisition of scans is restricted by scarcity of access to
post-earthquake sites. To solve this problem, we have developed a procedure for compiling digital specimens in both pre- and post-event states and for generating synthetic data equivalent to that which would result from laser scanning in the field. The
procedure is validated by comparing the physical and synthetic scans of a damaged
beam. Interpretation of the beam damage from the synthetic data demonstrates the
feasibility of using this procedure to replace physical specimens with digital models
for experimentation and for other civil engineering applications.
Keywords: Computational procedure; Laser scanning; BIM; Damage assessment;
Change detection.
INTRODUCTION
If ‘as-designed', 'as-built' and 'as-is' BIM (Building Information Modeling) models
can be compiled to represent the design, construction and maintenance phases of a
building project, then comparison of the models can serve different use-cases, such
as:
- construction progress tracking and quality checking can be abstracted as
change detection between 'as-designed' and 'as-built' states of the building,
and
and Yankelevsky 2008). Figure 1 also shows an 'as-damaged' BIM model that was
prepared to model the beam.
Figure 1. (a) Photo of a damaged beam prepared at NBRI, (b) 'as-damaged'
BIM model of the beam.
Figure 2. (a) Field scan, (b) synthetic scan of the damaged beam.
Figure 3. Range difference image of the synthetic and the physical scan
Figure 4. Histogram of range difference between the synthetic and real scan
(background points in either scan are excluded)
The difference between the synthetic scan and the physical scan data is depicted in
the range difference image shown in Figure 3. The blue pixels reflect background in
both PCD (point cloud data) sets. The range differences between the two datasets are represented in
grey scale. Perfectly matched areas are represented in black pixels, while areas with
extreme differences are shown in white pixels. The extreme difference (white pixels)
in the right side of Figure 4 results from modeling inaccuracy, because in the
'as-damaged' model, the beam was broken into only three parts, whereas in reality the
right part is progressively bent and could be divided into more pieces. The more
detailed the model is, the more similar the model and physical specimen are.
Nevertheless, the resulting PCD are perfectly matched in the other parts, which demonstrates the validity of the computational procedure. From the histogram,
shown in Figure 4, it is estimated that 95% of the points are well matched
(differences are less than 0.5mm, and background points are not taken into account).
Additional tests have been performed with models of two full-scale buildings,
yielding very satisfactory results (Ma et al. 2014).
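This comparison is straightforward to script. The sketch below is a Python illustration, not the authors' implementation; it computes the range difference image and the matched-point fraction from two registered range images, assuming both scans have been rasterized to equal-size arrays in metres with background pixels stored as NaN, and all names are ours.

    import numpy as np

    def range_difference_stats(synthetic, physical, threshold=0.0005):
        """Compare two registered range images (metres); NaN marks background."""
        # Exclude pixels that are background in either scan.
        foreground = ~np.isnan(synthetic) & ~np.isnan(physical)
        diff = np.full(synthetic.shape, np.nan)
        diff[foreground] = np.abs(synthetic[foreground] - physical[foreground])
        # Fraction of foreground points whose ranges differ by less than the
        # threshold, e.g. 0.0005 m for the 0.5 mm criterion reported above.
        matched = np.count_nonzero(diff[foreground] < threshold)
        return diff, matched / np.count_nonzero(foreground)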
By analyzing the model changes, the beam deflection can be derived as the projected
distance from the lowest point in the 'as-damaged' model to the bottom surface of the
'as-built' model. In this case the deflection was 182 mm. A dimensional comparison of the system-generated and user-prepared 'as-damaged' models is shown in Table 1, which indicates that using the heuristic algorithm for reconstruction of the 'as-damaged' beam model is very promising but still needs improvement.
Table 1. Comparison of the system-generated and user-prepared 'as-damaged' models (all dimensions in mm).

Segment    System generated model          User prepared model
ID         Length    Width    Height       Length    Width    Height
1          1,378     234      292          1,099     240      300
Abstract
INTRODUCTION
over the value chain. Consequently, decisions made are often less than optimal. Another is the inherent waste occurring in construction delivery processes.
Meanwhile, insights gained from the transformation that has taken place in the
manufacturing, logistics, retailing, and banking industries suggest that change arises through the adoption and continuous development of new technology (information technology) and processes. Of the latter, for example, lean principles have resulted in staggering improvements in productivity for some companies in the automobile sector
(Holweg 2007).
Thus, while BIM has gained increasing acceptance in the AEC industry, its adoption must be accompanied by a transformation of the processes surrounding this technology. BIM can provide the technology to facilitate the sharing of information and to support collaboration across organisations and phases. Still, in many places it is perceived mainly as a design tool, or at best a visualisation tool in 3D or 4D. In manufacturing, though, the digital environment of product design has
moved beyond design to enable analysis, manufacturability, modularisation and
production and has also encompassed its supply chain. To realise the full potential of
the digital environment of an AEC project in the same way, the industry must
embrace the notion of Virtual Design and Construction (VDC) (Kam and Fischer
2004). In simple terms, it is a concept or approach to build, visualise, analyse, and
evaluate project performance virtually and early before a large expenditure of time
and resources is made.
The objective of this paper is to begin with the essence of VDC, and from
there set out the potential of VDC in delivering project value. The science of VDC is
described in the form of Intelligent Project Realization (IPR), whereby the principles
of achieving VDC are formalized. At the same time, the paper briefly discusses concepts of lean thinking through Multiple Domain Matrices (MDM), insofar as they are relevant to achieving project transformation.
Process Reengineering. The second perspective follows from the first, the
process viewpoint. This is where the process for IPR has to take advantage of the
potential that BIM technology provides, moving away from the mere use of BIM for modelling and visualizing aspects of the project that warrant attention from various
stakeholders. The approach is to move towards integration where a single project
model is built by different disciplines and the relevant information extracted and
analyzed to evaluate the outcomes against project objectives or goals.
Consequently, the design of the process flow in IPR is governed by the availability and use of information. It is no longer necessary to do things the traditional way; the workflow has to take advantage of the information available. For example, with a large developer of public housing, a typical
design workflow takes over six weeks to develop the mass model and then manually
evaluate the model against a set of design criteria. Any change at this stage will mean
another equally long rework cycle. The approach adopted in our research was to
develop an intelligent design analyzer which facilitates rapid massing design with
information-embedded objects so that model performance can be incrementally
checked as design progresses. The objective is to significantly shorten the cycle and
eliminate rework. With availability of the right information, it is possible to evaluate
the model further upstream and stop deficient design from cascading.
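To make the idea of incremental checking concrete, the sketch below re-evaluates a massing model against design criteria every time an information-embedded block is added, so a deficient design is flagged immediately rather than at the end of a six-week cycle. The paper does not describe the analyzer's internals; the classes, criteria, and limit values here are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class MassingBlock:
        name: str
        footprint_m2: float
        storeys: int
        attributes: dict = field(default_factory=dict)  # embedded information

    @dataclass
    class MassingModel:
        site_area_m2: float
        blocks: list = field(default_factory=list)

        def gross_floor_area(self):
            return sum(b.footprint_m2 * b.storeys for b in self.blocks)

    # Hypothetical criteria: each returns (criterion name, passed?).
    def check_plot_ratio(model, limit=3.5):
        return "plot ratio", model.gross_floor_area() / model.site_area_m2 <= limit

    def check_site_coverage(model, limit=0.4):
        built = sum(b.footprint_m2 for b in model.blocks)
        return "site coverage", built / model.site_area_m2 <= limit

    CRITERIA = [check_plot_ratio, check_site_coverage]

    def add_block_and_check(model, block):
        """Add a block, then re-evaluate every criterion incrementally."""
        model.blocks.append(block)
        failures = [name for name, ok in (c(model) for c in CRITERIA) if not ok]
        if failures:
            print(f"After adding {block.name}, failing: {', '.join(failures)}")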
Another critical element is the set of metrics governing the performance of the models. These metrics provide the measures needed to compare alternative designs. Examples of such metrics that have been devised include the buildability score, constructability score, and GreenMark™. The metrics determine the information that
will be required, and this in turn governs the workflow needed to support model performance evaluation. An overriding principle in the design of the workflow, including its check, quality, and decision nodes, is the implementation of a lean approach which serves to reduce waste.
Use of DSM and MDM in BIM. DSM and MDM have been used to analyze
various aspects of BIM in recent literature. Gu and London (2010) used DSM to
analyze the readiness of the Australian AEC industry for BIM adoption, and identified several obstacles relating to tool adoption, functional needs, and strategic issues during the adoption process.
[Figure: Structure of a Multiple Domain Matrix. The diagonal blocks are the Product, Organization, and Process DSMs; the off-diagonal DMM blocks capture the Product-Organization, Product-Process, and Organization-Process inter-dependencies/constraints.]
Hickethier et al. (2011) and Jacob and Varghese (2012) both employed MDM
and DSM to study the BIM development process improvement problem. They also
demonstrated some aspects of the dimensions of Product-Process-People, which
correspond to the POP elements of VDC within their papers. Hickethier et al. (2011)
focused on identifying discrepancies in the BIM Model through a visual analysis of
the “Should” and “As is” perspective of communication flows between the modelers.
Jacob and Varghese (2012) characterized the interactions between the process and
product domains during the BIM Execution Phase, using IFC data as the interface
between the domains.
Formalizing the Pillars of IPR from MDM Analysis Techniques. The BIM
development process within VDC exposes many dependencies between the
components of the BIM model, which were hidden in the traditional CAD process.
The consequence is that the VDC process creates a complex system which is often
tightly coupled, leading to multiple potential iterations and feedback loops. The three
pillars of IPR provide the mechanism for systematically decoupling the system.
The analysis techniques associated with MDM include: 1. Clustering
(Identifying groups of elements, particularly within DMMs), 2. Partitioning
(Sequencing of elements according to their logical ordering, particularly within
DSMs), 3. Tearing (Identifying interactions for temporary removal, after which the
DSM or DMM is re-clustered or re-sequenced). The following table demonstrates
how some of the aforementioned techniques are carried out to support the pillars of
IPR.
Process Re-engineering (designing the “Big Room”): clustering of Organization-Dependency/Organization-Process DMMs to identify optimal organizational configurations to implement “Big Room” design strategies.
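Of the techniques above, partitioning is the easiest to illustrate. The sketch below is a simplified textbook-style routine, not tooling from the paper: it re-sequences a small boolean DSM so that feed-forward tasks come first and the remaining coupled block is exposed as a candidate for tearing.

    def partition_dsm(dsm, names):
        """Re-sequence a boolean DSM; dsm[i][j] means task i needs input from j."""
        remaining = set(range(len(names)))
        order = []
        while remaining:
            # A task is ready once none of its inputs is still unscheduled.
            ready = [i for i in remaining
                     if not any(dsm[i][j] for j in remaining if j != i)]
            if not ready:                       # only feedback loops are left
                order.append(sorted(remaining))  # report the coupled block
                break                            # (a full routine would tear and continue)
            for i in sorted(ready):
                order.append(i)
                remaining.remove(i)
        return order

    # Toy example: B needs A; C needs B and D; D needs C.
    names = ["A", "B", "C", "D"]
    dsm = [[False, False, False, False],
           [True,  False, False, False],
           [False, True,  False, True ],
           [False, False, True,  False]]
    print(partition_dsm(dsm, names))  # [0, 1, [2, 3]]: A, B, then coupled {C, D}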
This paper has set out to explain reasons why the promise of BIM has not
fully materialised for the AEC industry. Consequently, we underscore the importance
of VDC, and reiterate that the industry should take steps towards embracing this
concept.
While VDC provides the methodology needed by the industry, it does not provide adequate drivers to guide its adoption. To this end, we introduced two further dimensions, Performance Models and Inter-dependencies, to POP, to provide a
holistic perspective of the VDC process, called IPR. Moreover, the IPR framework
provides the industry with the science required to analyze and replicate successful
implementation of VDC, through the pillars of: Information Management, Process
Reengineering and Intelligence & Automation.
REFERENCES
Aram, S., Eastman, C., and Sacks, R. (2013). "Requirements for BIM platforms in the
concrete reinforcement supply chain." Automation in Construction, 35(0), 1-
17.
Browning, T. R. (2001). "Applying the design structure matrix to system
decomposition and integration problems: a review and new directions."
Engineering Management, IEEE Transactions on, 48(3), 292-306.
Chua, D. K. H., Nguyen, T. Q., and Yeoh, K. W. (2013). "Automated construction
sequencing and scheduling from functional requirements." Automation in
Construction, 35(0), 79-88.
Danilovic, M., and Browning, T. R. (2007). "Managing complex product
development projects with design structure matrices and domain mapping
matrices." International Journal of Project Management, 25(3), 300-314.
Gu, N., and London, K. (2010). "Understanding and facilitating BIM adoption in the
AEC industry." Automation in Construction, 19(8), 988-999.
Hickethier, G., Tommelein, I. D., Hofmann, M., Lostuvali, B., and Gehbauer, F. (2011). "MDM as a Tool for Process Improvement in Building Modeling." DSM 2011: Proceedings of the 13th International DSM Conference, S. D. Eppinger, M. Maurer, K. Eben, and U. Lindemann, eds., Cambridge, Massachusetts, USA, 349-362.
Holweg, M. (2007). "The genealogy of lean production." Journal of Operations
Management, 25(2), 420-437.
Howard, R., and Björk, B.-C. (2008). "Building information modelling – Experts’
views on standardisation and industry deployment." Advanced Engineering
Informatics, 22(2), 271-280.
Jacob, J., and Varghese, K. (2012). "A Model for Product-Process Integration in the
Building Industry Using Industry Foundation Classes and Design Structure
Matrix." Construction Research Congress 2012, 582-590.
Kam, C., and Fischer, M. (2004). "Capitalizing on early project decision-making
opportunities to improve facility design, construction, and life-cycle
performance—POP, PM4D, and decision dashboard approaches." Automation
in Construction, 13(1), 53-65.
Song, Y., and Chua, D. K. H. (2006). "Modeling of Functional Construction
Requirements for Constructability Analysis." Journal of Construction
Engineering and Management, 132(12), 1314-1326.
Crane Load Positioning and Sway Monitoring Using an Inertial Measurement Unit
Abstract
Crane operation is one of the most essential activities on construction sites. However, operating a
crane is a sophisticated job which requires the operator to have extensive skills and experience,
and most importantly a comprehensive understanding of crane motions. Besides typical crane
motions such as boom slew, hoist, and extension, monitoring and controlling the position of the
load is extremely important to avoid struck-by accidents caused by crane load, especially when
the load swings as a result of wind and inertia. Although typical motions can be captured by
some existing techniques, a reliable approach to position the load and monitor the load sway
remains missing. This study proposes an orientation-based approach for tracking crane load
position and monitoring load sway in daily lifting activities. This approach adopts an off-the-
shelf inertial measurement unit (IMU) module for measuring load orientation, and an efficient
algorithm for converting orientation measurements to load positions. The proposed approach was
tested in two load sway scenarios in a controlled lab environment. Test results indicate that the
proposed approach correctly converted orientation measurements to accurate load positions and
reconstructed the load sway trajectory in both linear and circular sway motions. By enabling continuous monitoring of crane load motion, this approach augments the crane motion information obtained by typical crane motion capturing systems. The findings of this research will advance safety practices in crane lifting activities.
INTRODUCTION
Cranes are among the most extensively used pieces of heavy equipment in the construction industry. More
than 125,000 cranes are in operation every day on construction sites, responsible for lifting and
transporting materials, equipment, and personnel. However, due to their huge size and mass, any
mistake in crane operation could potentially result in catastrophic consequences, such as injuries
and fatalities. A report (McCann and Gittleman 2009) from the Center for Construction Research
and Training (CPWR) reveals that crane-related incidents in the US construction industry
between 1992 and 2006 led to 632 deaths in 610 cases, an average of 43 deaths per year. This
report further states that, in crane-related accidents, the second leading cause of death after
electrocutions (25%) was being struck by crane loads (21%). Many struck-by accidents are due to the crane operator's failure to monitor and control the crane load. Crane maneuvers, such as high-speed moves and sudden stops, generate undesirable load sway as a result of inertia. In
addition, severe wind also contributes to load sway during lifting jobs. Crane operators must be
highly skilled and experienced to accurately assess the distance from the load to nearby
obstructions and to successfully maintain adequate clearance from them. However, sitting in the
cabin, operators have a limited field of view of the load and surrounding objects. In blind lift
scenarios, operators in most cases have no direct line of sight to the load at all (Neitzel et al. 2001). Therefore, even skilled and experienced operators cannot completely eliminate load sway when they have little information about the load position. The most obvious consequence of load sway is
colliding with surrounding objects which could lead to accidents involving equipment damage,
injury, and even fatality. Besides relying on human skill and experience, load sway-related
accidents can be reduced through the assistance of technology such as sensor systems that can
monitor crane load position, and detect and control load sway. The first step in this direction is to automatically monitor and detect load sway in real time. To this end, this study proposes an orientation-based approach for continuously monitoring crane load sway
and tracking load position. Load orientation is measured using an inertial measurement unit
(IMU) sensor and the estimated position and trajectory of the load are calculated by converting
the Euler angle measurements to Cartesian coordinates. The following sections introduce the
work related to this approach, the method used for converting the Euler angle measurements to
Cartesian coordinates, and the preliminary results in a controlled lab environment.
RELATED WORK
Much research has focused on acquiring information about crane motions, and different approaches and technologies have been investigated to achieve this goal. Zhang et al. (2011)
employed Ultra-wide band (UWB) technology to estimate the crane pose in near real-time. In
this system, UWB readers are deployed around the lifting scene and UWB tags are mounted on different spots of the crane boom and the lifted object. However, the system fails to reliably track the
load position because the tags on the crane load cannot be detected continuously due to serious
signal loss which is a common limitation of UWB technology. Lee et al. (2009) developed a
laser technology based robotic crane system to improve the crane lifting productivity. The laser
sensor, installed at the tip of the luffing boom, measures the vertical distance of a lifted object
using a laser beam reflected from the reflection board installed on the hook block. Nevertheless,
it was pointed out that the elevation measurement might not be accurate or even detectable due to
load sway. As another attempt in this direction, Lee et al. (2012) introduced an anti-
collision system for tower cranes which employs video camera and other sensors to monitor
crane motions and check for potential collisions with surrounding objects. For load positioning in particular, they added an encoder sensor, although less accurate, to compensate for the unreliable measurement performance of the laser sensor. However, both systems suffer from low reliability, and they can only measure the vertical distance from the boom to the load but fail to measure the three-dimensional position of the load. Using cameras is another popular solution for
increasing the crane operators’ visibility and situational awareness. Shapira et al. (2008)
developed a vision system for tower cranes to increase the operators’ visibility to the load during
loading and unloading. This system consists of a high-resolution video camera, a wireless
communication module, and a central computer and monitor installed in the crane cabin. The
camera is mounted on the trolley and directed downward facing the load or the loading area with
the hook constantly located at the center of the image. It is claimed that this system is able to
increase the productivity in crane lifting tasks and subsequently save costs. However, the
capability of enhancing safety was not validated in the evaluation of the system. Although the
camera provides an alternative perspective for load monitoring, the accuracy of the position
estimation is still subject to the operator's perception of depth. Furthermore, crane camera systems are sensitive to lighting conditions and obstructions. In the commercial market, some companies provide integrated solutions that adopt different kinds of sensors to keep track of the crane motion and load status. For instance, they use angular and linear sensors for inclination and length measurement, and weight and pressure sensors for overload detection. Often the system contains a display placed in the crane cabin to show the operator critical measurements such as load moment and boom angle/length and, if necessary, to alert them to dangerous situations such as overload. However, such systems do not track the load position and provide no assistance or warning if the load swings or comes into proximity with surrounding objects. This load motion information is particularly important in blind lifting jobs where the operator does not have a direct line of sight to the load.
METHODOLOGY
An Inertial Measurement Unit (IMU) is an electronic device that measures velocity, orientation, and
gravitational forces, using a combination of accelerometers and gyroscopes, and sometimes magnetometers. IMU sensors were originally developed for maneuvering aircraft and spacecraft, such as unmanned aerial vehicles (UAVs) and satellites, by reporting inertial measurements to the pilot. More recently, IMU sensors have been widely used as orientation sensors in human motion capture for sports technique training and animation applications. A typical IMU sensor contains angular and linear accelerometers to keep track of changes in position, and gyroscopes to maintain an absolute angular reference. Generally, an IMU sensor has at least one accelerometer and one gyroscope for each of the three axes: pitch (nose up and down), yaw (nose left and right), and roll (clockwise or counter-clockwise). When rigidly mounted to an object, the IMU sensor measures the linear and angular acceleration and automatically calculates the orientation of the attached object. In the particular case of load sway, it is assumed that the cable length is known and the cable is rigid. Therefore, the load sway motion can be simplified to a typical 3-dimensional (3D) pendulum motion (Figure 1a). Given the measured angular
orientation on each axis (Figure 1b) and the cable length, the estimated position of the load
relative to the fixed point can be calculated by converting the Euler angle measurements to
Cartesian coordinates in the local coordinate system (Figure 1c).
[Figure 1. (a) Load sway simplified as a 3D pendulum; (b) angular orientation (yaw, pitch, roll) measured on each axis; (c) load position in the local Cartesian coordinate system.]
Therefore, a single rotation matrix can be formed by multiplying the yaw, pitch, and roll
rotation matrices as below.
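The matrix itself did not survive extraction here. Assuming the conventional ZYX composition implied by the axis definitions above (yaw ψ about z, pitch θ about y, roll φ about x), it would read, in LaTeX:

    R = R_z(\psi)\, R_y(\theta)\, R_x(\phi)
      = \begin{bmatrix}
          \cos\psi\cos\theta & \cos\psi\sin\theta\sin\phi - \sin\psi\cos\phi & \cos\psi\sin\theta\cos\phi + \sin\psi\sin\phi \\
          \sin\psi\cos\theta & \sin\psi\sin\theta\sin\phi + \cos\psi\cos\phi & \sin\psi\sin\theta\cos\phi - \cos\psi\sin\phi \\
          -\sin\theta & \cos\theta\sin\phi & \cos\theta\cos\phi
        \end{bmatrix}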
Since the load sway motion is a simple 3-dimensional pendulum motion, the load trajectory actually lies on the internal surface of a sphere whose radius is the cable length. Hence, the unit vector on the local z-axis always points to the center of the sphere. Therefore, converting the Euler angle measurements to Cartesian coordinates is simplified to converting this unit vector on the local z-axis to a vector in the global coordinate system according to the single rotation matrix containing the three elemental rotations. Thus, the load position can be estimated by multiplying the rotation matrix with the unit vector (0, 0, 1) and the cable length L.
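A minimal sketch of this conversion, assuming the ZYX rotation above; the function name and the sample cable length are ours, not the paper's:

    import numpy as np

    def euler_to_position(yaw, pitch, roll, cable_length):
        """Convert Euler angles (radians) to a load position (metres)."""
        cy, sy = np.cos(yaw), np.sin(yaw)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cr, sr = np.cos(roll), np.sin(roll)
        rotation = np.array([
            [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
            [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
            [-sp,     cp * sr,                cp * cr],
        ])
        # The load hangs on a rigid cable, so its position is the rotated
        # local z unit vector scaled by the cable length.
        return cable_length * rotation @ np.array([0.0, 0.0, 1.0])

    # At rest (all angles zero) the load sits at (0, 0, L) in the local frame.
    print(euler_to_position(0.0, 0.0, 0.0, cable_length=1.2))  # [0. 0. 1.2]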
PILOT EXPERIMENT AND RESULTS
The goal of this experiment is to test whether the proposed approach is capable of monitoring crane load sway and tracking load position in a controlled lab environment. The first task is to simulate the condition of crane load sway. As shown in Figure 2, a tripod was used to establish a stationary point above the load. Fixed to the stationary point, a nylon cord suspended a cylindrical load, and the connection was secured evenly around the perimeter of the load (Figure 2). In this experiment, an IMU sensor, the MPU-6000 from InvenSense Inc., was employed to measure load angular data. This sensor contains a 3-axis accelerometer and a 3-axis gyroscope, and it is encapsulated in an Arduino-compatible system, the APM 2.6 Set. The onboard microprocessor is capable of automatically converting angular velocity measurements from the gyroscope to orientation values. The IMU sensor was rigidly attached to the center of the upper load surface and connected to a computer via a USB cable for data transmission and power supply.
With the purpose of testing the performance of the coordinate conversion method, two typical sway scenarios were simulated in the lab environment. The first sway scenario simulates the most typical crane load sway motion, a linear pendulum motion. In the experiment, the load was released 0.4 meters off its resting equilibrium position. Consequently, the load swung back and forth in one direction. The sway continued for roughly 30 seconds and was terminated manually. During the sway, 313 load orientation measurements were taken by the IMU at a rate of 10Hz. After the experiment the data was downloaded to a laptop for post-processing. The conversion method introduced in the methodology section was implemented in Microsoft Excel. Figure 3 shows the converted load positions and the estimated sway trajectory. The results are presented in the XY plane, XZ plane, and YZ plane, respectively. The red dots indicate load positions converted from orientation measurements, and the blue lines show the estimated trajectory obtained by linking the load positions in sequence. In the XY plane view, the load sway trajectory is approximately a straight line and is visually symmetric about the origin point. The XZ and YZ plane views show a regular arc trajectory that fits very well with the actual linear pendulum sway trajectory.
Figure 3: Converted load positions (red dots) and estimated sway trajectory (blue lines) in the linear sway scenario.
The second sway scenario simulates a more complex circular pendulum motion. In reality, this type of sway motion usually happens during lifting jobs involving extreme maneuvers and/or under strong wind. In the second sway scenario, the load was released 0.3 meters off its resting equilibrium position with a random lateral force. As a result, the load started a circular sway motion. The sway continued for roughly 80 seconds and was terminated manually. In total, 815 load orientation measurements were taken by the IMU at a rate of 10Hz. The results are shown in the XY, XZ, and YZ planes in Figure 4. It was observed that the load swung in an irregular oval-shaped trajectory.
[Figure 4: Converted load positions and estimated sway trajectory in the circular sway scenario, shown in the XY, XZ, and YZ planes.]
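The Excel post-processing described above can equally be scripted. A sketch reusing euler_to_position() from the methodology section, and assuming the IMU log is a comma-separated file of yaw, pitch, roll samples in radians (the file name and column layout are our assumptions):

    import numpy as np
    import matplotlib.pyplot as plt

    angles = np.loadtxt("sway_log.csv", delimiter=",")  # columns: yaw, pitch, roll
    points = np.array([euler_to_position(y, p, r, cable_length=1.2)
                       for y, p, r in angles])

    # Converted positions (red dots) and linked trajectory in three planes.
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    for ax, (i, j) in zip(axes, [(0, 1), (0, 2), (1, 2)]):
        ax.plot(points[:, i], points[:, j], "b-", lw=0.8)
        ax.plot(points[:, i], points[:, j], "r.", ms=3)
        ax.set_xlabel("XYZ"[i])
        ax.set_ylabel("XYZ"[j])
    plt.tight_layout()
    plt.show()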
DISCUSSION
Compared to existing techniques for crane motion tracking, the proposed approach has several advantages. First and foremost, this approach accurately monitors and visualizes the three-dimensional position of the load. Secondly, it continuously and reliably collects load position data regardless of weather and visibility. Lastly, this approach is comparatively easy to implement in daily crane activity and requires little maintenance effort. As the first phase of this research, this paper reports the preliminary test results and indicates the feasibility and great potential of the proposed approach. It should be noted that a controlled lab environment might not truly reflect real crane lifting characteristics. Several issues in field operation might undermine the performance of the proposed approach. It is assumed in this approach that the crane cable is rigid and inelastic, but in reality the cable might bend when the load is very light or when an extreme maneuver is applied. Additionally, different rigging methods (e.g., two-legged and four-legged bridle) and load shapes (e.g., flat plate, long rebar) might result in slightly different sway motions, which could affect the accuracy of the results.
REFERENCES
Hwang, S. (2012). Ultra-wide band technology experiments for real-time prevention of tower
crane collisions. Automation in Construction, 22, 545-553.
Lee, G., Kim, H. H., Lee, C. J., Ham, S. I., Yun, S. H., Cho, H., & Kim, K. (2009). A laser-
technology-based lifting-path tracking system for a robotic tower crane. Automation in
Construction, 18(7), 865-874.
Lee, G., Cho, J., Ham, S., Lee, T., Lee, G., Yun, S. H., & Yang, H. J. (2012). A BIM-and sensor-
based tower crane navigation system for blind lifts. Automation in construction, 26, 1-10.
Neitzel, R. L., Seixas, N. S., & Ren, K. K. (2001). A review of crane safety in the construction
industry. Applied Occupational and Environmental Hygiene, 16(12), 1106-1117.
Shapira, A., Rosenfeld, Y., & Mizrahi, I. (2008). Vision system for tower cranes. Journal of
Construction Engineering and Management, 134(5), 320-332.
Zhang, C., Hammad, A., & Rodriguez, S. (2011). Crane pose estimation using UWB real-time
location system. Journal of Computing in Civil Engineering, 26(5), 625-637.
Abstract
SUSTAINABLE CONSTRUCTION
Since the first definition of sustainability was put forward, different attempts at its implementation have been developed all over the world. As is the case in other sectors, the construction industry, which is currently one of the biggest economic engines, with great influence on citizens' quality of life and tremendous impacts on resource consumption worldwide, is also trying to achieve this aim (Berry & McCarthy, 2011). Nevertheless, adopting sustainability in the construction industry
will require enormous effort and should be undertaken by all the stakeholders
involved.
Assessing a construction’s sustainable performance requires a systematic and
objective method which covers the whole life cycle of the facility and at the same
time analyzes environmental, social and economic factors (Burdová & Vilceková,
2012). Furthermore, it should be taken into account that defining overall performance
is a complicated process due to the involvement of different stakeholders in different
processes and because of the particular features of the construction industry itself
(Haapio & Viitaniemi, 2008). The fragmented nature of construction processes often
hinders the implementation of sustainable principles and highlights the importance of
an interdisciplinary working platform (Dowsett & Harty, 2014).
There is a pressing need for design tools which can integrate all the
information to be evaluated in order to achieve a sustainable performance. These
tools should render it possible to compare various alternatives when making decisions.
Thus the amount of information required is in itself one of the difficulties that
decision-making methodologies have to deal with.
All in all, new methodologies based on integrative work and synergy
generation are required to achieve sustainable construction. Moreover, management
systems have to be improved and decision-making has to be based on long-term
thinking. Due to its special features, Building Information Modeling (BIM) is able to
fulfill such requirements and can be highlighted as one of the most suitable
methodologies for steering the construction industry towards a more sustainable
performance.
The task of extracting building information from the BIM model and
integrating it into the LCA tool in order to perform environmental assessment is a
time-consuming and complicated process. Therefore a more efficient alternative has
to be developed. It should be mentioned at this point that several attempts have
already been made to integrate BIM and LCA tools, some of which consider the
whole construction in the assessment.
One example of such an approach is LCADesign, which is based on extracting data (building quantity data) from the BIM model by means of the IFC file format and using them for an environmental evaluation. LCADesign combines extracted data
from the BIM model with life cycle inventory data, thus obtaining different
environmental indicators and creating the opportunity of comparing various design
alternatives (Tucker, et al., 2003). When using this approach, environmental
evaluation is performed with software which is external to the BIM platform and
therefore automatic feedback into the BIM platform is no longer possible.
In November 2013 Tally was released. This new application is compatible
with the BIM software Revit. It enables environmental evaluation of the different
construction materials and estimation of the various environmental impacts of the
whole building. In addition, it compares different design alternatives. The procedure
is based on establishing relationships between different BIM elements included in the
model and materials included in the LCA database of Tally. In this way, it becomes
possible to quantify the environmental impacts arising in the different categories.
Thus the process of developing an assessment of the whole construction has been
simplified. However, the relation between model materials and Tally database has to
be set by the user. Moreover, it has to be indicated that it is only available for Revit
users (KT Innovations, 2014).
The attempts highlighted above aim at assessing the whole construction. In
this study, the starting point is a perspective based on the building’s materials and
components. It will seek a way of including environmental information in BIM
platforms in order to serve as a predesign assessment tool. Taking into account what
has been presented so far, the aim is to profit from the information derived from environmental assessments from the early design phases onward and to use it as support in
decision-making. For this purpose, a link between information contained in the LCA
databases and BIM methodology is required (Álvarez & Díaz, 2014).
Figure 1. Sought link between LCA databases and BIM methodology and tools.
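One minimal realisation of such a link is to store per-unit environmental indicators with each BIM library element and aggregate them over a model's quantity take-off. The sketch below is purely conceptual; the element names and indicator values are placeholders, not data from any LCA database or BIM platform.

    from dataclasses import dataclass

    @dataclass
    class EnvironmentalProfile:
        """Per-unit LCA indicators stored with a BIM library element."""
        gwp_kg_co2e: float   # global warming potential per unit
        energy_mj: float     # embodied primary energy per unit
        unit: str            # e.g. "m3"

    # Hypothetical library entries (placeholder values).
    LIBRARY = {
        "concrete_wall_C25": EnvironmentalProfile(300.0, 1800.0, "m3"),
        "timber_beam_GL24": EnvironmentalProfile(-600.0, 8000.0, "m3"),
    }

    def assess(model_quantities):
        """Aggregate indicators over a model's quantity take-off."""
        totals = {"GWP [kg CO2e]": 0.0, "Energy [MJ]": 0.0}
        for element, qty in model_quantities.items():
            profile = LIBRARY[element]
            totals["GWP [kg CO2e]"] += qty * profile.gwp_kg_co2e
            totals["Energy [MJ]"] += qty * profile.energy_mj
        return totals

    # Compare two design alternatives directly from their model quantities.
    print(assess({"concrete_wall_C25": 42.0}))
    print(assess({"timber_beam_GL24": 12.5, "concrete_wall_C25": 30.0}))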
Internal Assessment
Strengths:
- It enables automatic access to the environmental information of the
element when selecting it. Users do not have to refer to other platforms
to check this information; they can access it directly from the BIM tool.
- It helps to increase social awareness of environmental aspects.
- Once the initial effort of calculating different environmental indicators
of elements in the library has been made, there is no need to repeat the
task for every single project.
Weaknesses:
- The presented information may be too complex to be interpreted by
designers, taking into account the limited time available for deciding
External assessment
Opportunities:
- Manufacturers could offer a complete evaluation and all the required
information about their products.
- Governments all over the world are starting to ask for environmental
information.
- It is integrated in the BIM platform.
- BIM library use is widespread among engineers and designers.
Threats:
- There is a lack of training regarding environmental issues among
stakeholders in the construction industry.
- Effort, time and investment are required to ensure that environmental
criteria are taken into account when making decisions.
- There are difficulties attached to evaluating environmental criteria
together with other criteria in order to reach a solution based on a
balance between the various factors.
The presented solution has positive aspects and strengths, as shown in the
assessment, but there are also other aspects which reduce the effectiveness of this
option; the main ones are summarized in the following paragraphs.
On the one hand, a detailed environmental assessment of the different
elements included in BIM libraries should be developed using an external LCA tool
and then this information has to be inserted into the object's information. Such a process requires time and effort as well as a change in the manufacturer's way of
thinking. Nevertheless, once the object includes the environmental information, it can
be used whenever it is needed without repeating the time-consuming task of
performing the environmental assessment.
On the other hand, due to the fact that each construction project is unique and
has its own location, it is quite difficult to include the distance from manufacturer to
construction site as a general factor in the environmental assessment. Therefore, some
assumptions have to be made, which means that the accuracy of the assessment is
reduced. Moreover, understanding the environmental results is one of the main
CONCLUSIONS
REFERENCES
Álvarez Antón, L. & Díaz, J. (2014). “Integration of Life Cycle Assessment in a BIM
Environment”, Procedia Engineering, 85, pp. 26 – 32.
American Institute of Architects (2010). “AIA Guide to Building Life Cycle
Assessment in Practice”. American Institute of Architects. Washington DC.
Berry, C. & McCarthy, S. (2011) Guide to sustainable procurement in construction,
Ciria, London.
Burdová, E. K. & Vilceková, S. (2012). “Building Environmental Assessment of
Construction and Building Materials”. Journal of Frontiers in Construction
Engineering, 1(1), pp. 1-7.
de Vries, B., Allameh, E. & Heidari Jozam, M. (2012). “Smart-BIM (Building Information Modeling).” Proceedings of the 29th International Symposium on Automation and Robotics in Construction, ISARC. Eindhoven, Netherlands, 26th June 2012.
Díaz, J., Álvarez Antón, L., Anis, K. & Drogemuller, R. (2014). “Synergies for
sustainable construction based on BIM and LCA.” Tagungsband BIM
Kongress 2014. pp. 108 – 125, GRIN Verlag GmbH.
Dowsett, R. M. & Harty, C. F. (2014). “Evaluating the benefits of BIM for sustainable design – A review.” Proceedings of the 29th Annual Association of Researchers in Construction Management Conference, ARCOM 2013, pp. 13 – 23.
Haapio, A. & Viitaniemi, P. (2008). “A critical review of building environmental
assessment tools.” Environmental Impact Assessment Review, 28(7), pp. 469-
482.
Heidari, M., Allameh, E., de Vries, B., Timmermans, H., Jessurun, J. & Mozaffar, F. (2014). “Smart-BIM virtual prototype implementation.” Automation in Construction, 39, pp. 134 – 144.
HM Government (2008). Strategy for sustainable construction. Department for
Business, Enterprise & Regulatory Reform, London.
Khasreen, M. M., Banfill, P. F., & Menzies, G. F. (2009). “Life-Cycle Assessment
and the Environmental Impact of Buildings: A Review.” Sustainability, 1(3),
pp. 674 - 701.
KT Innovations (2014). “Tally. Overview.” http://www.choosetally.com/overview/ (Dec. 2, 2014).
McGraw Hill Construction (2010). SmartMarket Report. Green BIM. McGraw-Hill
Construction. Bedford.
Ortiz, O., Castells, F. & Sonnemann, G. (2009). “Sustainability in the construction industry: A review of recent developments based on LCA.” Construction and Building Materials, 23(1), pp. 28 – 39.
Shennan, R. (2014). “BIM advances sustainability.” Mott MacDonald, https://www.mottmac.com/views/bim-advances-sustainability (Dec. 4, 2014).
Tucker, S. N., Ambrose, M.D. & Johnston, D.R. (2003). “Integrating eco-efficiency
assessment of commercial buildings into the design process: LCADesign”.
International Conference on Smart and Sustainable Built Environment,
November 2003, Brisbane.