
COMPUTING IN CIVIL

ENGINEERING 2015
PROCEEDINGS OF THE 2015 INTERNATIONAL WORKSHOP ON
COMPUTING IN CIVIL ENGINEERING

June 21–23, 2015


Austin, Texas

SPONSORED BY
Computing and Information Technology Division
of the American Society of Civil Engineers

EDITED BY
William J. O'Brien, Ph.D., P.E.
Simone Ponticelli, Ph.D.

Published by the American Society of Civil Engineers


Published by American Society of Civil Engineers
1801 Alexander Bell Drive
Reston, Virginia, 20191-4382
www.asce.org/publications | ascelibrary.org

Any statements expressed in these materials are those of the individual authors and do not necessarily
represent the views of ASCE, which takes no responsibility for any statement made herein. No reference
made in this publication to any specific method, product, process, or service constitutes or implies an
endorsement, recommendation, or warranty thereof by ASCE. The materials are for general information
only and do not represent a standard of ASCE, nor are they intended as a reference in purchase
specifications, contracts, regulations, statutes, or any other legal document. ASCE makes no
representation or warranty of any kind, whether express or implied, concerning the accuracy,
completeness, suitability, or utility of any information, apparatus, product, or process discussed in this
publication, and assumes no liability therefor. The information contained in these materials should not be
used without first securing competent advice with respect to its suitability for any general or specific
application. Anyone utilizing such information assumes all liability arising from such use, including but
not limited to infringement of any patent or patents.

ASCE and American Society of Civil Engineers—Registered in U.S. Patent and Trademark Office.

Photocopies and permissions. Permission to photocopy or reproduce material from ASCE publications
can be requested by sending an e-mail to [email protected] or by locating a title in ASCE's Civil
Engineering Database (http://cedb.asce.org) or ASCE Library (http://ascelibrary.org) and using the
“Permissions” link.

Errata: Errata, if any, can be found at http://dx.doi.org/10.1061/9780784479247


Copyright © 2015 by the American Society of Civil Engineers.
All Rights Reserved.
ISBN 978-0-7844-7924-7 (PDF)
Manufactured in the United States of America.

Preface
The 2015 ASCE International Workshop on Computing in Civil Engineering (IWCCE) was held
in Austin from June 21-23, 2015. The workshop was hosted by The University of Texas at
Austin under the auspices of the ASCE Technical Council on Computer and Information
Technology (TCCIT). The IWCCE has become an international platform for researchers to share
and debate topical issues across the various aspects of computing research in Civil Engineering.
This edition of the workshop aimed to continue this successful debate by attracting a strong,
active audience through keynote sessions, dedicated topic tracks, and committee meetings.

These proceedings include 88 papers from 14 countries around the globe, organized in three
main categories: Data, Sensing, and Analysis; Visualization, Information Modeling, and
Simulation; and Education. From a total of 168 received abstracts, the final set of articles was
selected through a rigorous peer-review process, which involved the collection of at least two
blinded reviews per article. The review process was applied to both abstracts and full papers,
ensuring that only the best contributions were selected. Finally, authors had the chance to
incorporate reviewers’ comments into the final version. We are very pleased with the high
quality of the selected articles – which represent contemporary and cutting-edge research on
Computing in Civil Engineering – and we wish to thank both authors and reviewers for their
efforts.

Organizing this workshop has been possible only with the support of many. We are particularly
grateful to the Department of Civil, Architectural, and Environmental Engineering at The
University of Texas at Austin for their support and infrastructure. The TCCIT committees
supported the review process and provided extensive organizational assistance. Special thanks
are also extended to Simone Ponticelli and Kris Powledge for their hard work in assisting the
committee.

We hope that you enjoyed the technical sessions during the workshop and that you had a
memorable stay in Austin!

William J. O’Brien, PhD, P.E.


Conference Chair
2015 ASCE International Workshop on Computing in Civil Engineering


Acknowledgments
Special thanks are due to the members of the Local Organizing Committee for their continuous
and tireless support throughout the organization of the workshop:

Name Institution
Dr. Carlos Caldas The University of Texas at Austin
Dr. Fernanda Leite The University of Texas at Austin
Dr. William J. O’Brien The University of Texas at Austin
Dr. Simone Ponticelli The University of Texas at Austin

The editor would also like to thank the following reviewers for their dedication, which ensured
the high value of the paper selection process:

Name Institution
Mani Golparvar-Fard University of Illinois at Urbana-Champaign
Nora El-Gohary University of Illinois at Urbana-Champaign
Yong Cho Georgia Institute of Technology
Ivan Mutis University of Florida
Carol Menassa University of Michigan
SangHyun Lee University of Michigan
Fernanda Leite The University of Texas at Austin
Hazar Dib Purdue University
Changbum Ahn University of Nebraska-Lincoln
Eric Marks University of Alabama
Pingbo Tang Arizona State University
SangUk Han University of Alberta
Abbas Rashidi The University of Tennessee, Knoxville
Ken-Yu Lin University of Washington
Zhenhua Zhu Concordia University
Renate Fruchter Stanford University
JoonOh Seo University of Michigan
Ali Mostafavi Florida International University
Chao Wang Georgia Institute of Technology
Reza Akhavian University of Central Florida
Omar El-Anwar Cairo University
Frederic Bosche Heriot-Watt University


Timo Hartmann University of Twente


Fei Dai West Virginia University
Mohammad Jahanshahi Purdue University
Man-Woo Park Myongji University
Yelda Turkan Iowa State University
Svetlana Olbina University of Florida
Ming Lu University of Alberta
Jie Gong Rutgers University
Steven Ayer Arizona State University
Simone Ponticelli University of Texas at Austin
Amir Behzadan University of Central Florida


Contents

Data, Sensing, and Analysis

4D Full-Scale Finite Element Analysis in Bridge Engineering .......... 1
Yi-Xin Li and Jian-Guo Nie

Agent Based Model for Resource Constrained Scheduling of Construction Projects .......... 8
A. B. Senouci, K. Abdel Warith, and N. N. Eldin

Comparison between Nominal and Actual Hourly Production of Crawler-Type Dozer: A Case Study .......... 17
Hamed Nabizadeh Rafsanjani

Automatic Satellite-Based Vessel Detection Method for Offshore Pipeline Safety .......... 25
M. Poshtiban, L. Song, and N. Eldin

Skeleton-Based Registration of 3D Laser Scans for Automated Quality Assurance of Industrial Facilities .......... 33
M. Nahangi, L. Chaudhary, J. Yeung, C. T. Haas, and S. Walbridge

Mobile Proximity Sensing Technologies for Personnel and Equipment Safety in Work Zones .......... 41
J. Park, E. Marks, Y. K. Cho, and W. Suryanto

Feature Extraction and Classification of Household Refrigerator Operation Cycles: A Study of Undisturbed Refrigerator Scenario .......... 49
Zeyu Wang and Ravi S. Srinivasan

Automated Tolerance Analysis of Curvilinear Components Using 3D Point Clouds for Adaptive Construction Quality Control .......... 57
Vamsi Sai Kalasapudi and Pingbo Tang

A Spatial Analysis of Occupant Energy Efficiency by Discrete Signal Processing on Graphs .......... 66
Rishee K. Jain, Rimas Gulbinas, José M. F. Moura, and John E. Taylor

Pavement Surface Permanent Deformation Detection and Assessment Based on Digital Aerial Triangulation .......... 74
Su Zhang, Susan M. Bogus, and Christopher D. Lippitt


A Case Study of Construction Equipment Recognition from Time-Lapse Site Videos under Low Ambient Illuminations .......... 82
Xiaoning Ren, Zhenhua Zhu, Chantale Germain, Bryan Dean, and Zhi Chen

A BIM-Enabled Platform for Power Consumption Data Collection and Analysis .......... 90
C. T. Chiang, T. W. Ho, and C. C. Chou

Wavelet Transform on Multi-GPU for Real-Time Pavement Distress Detection .......... 99
K. Georgieva, C. Koch, and M. König

Real-Time Location Data for Automated Learning Curve Analysis of Linear Repetitive Construction Activities .......... 107
H. Ergun and N. Pradhananga

An Empirical Comparison of Internal and External Load Profile Codebook Coverage of Building Occupant Energy-Use Behavior .......... 115
Ardalan Khosrowpour, Rimas Gulbinas, and John E. Taylor

Challenges in Data Management When Monitoring Confined Spaces Using BIM and Wireless Sensor Technology .......... 123
Z. Riaz, M. Arslan, and F. Peña-Mora

GIS-Based Decision Support System for Smart Project Location .......... 131
A. Komeily and R. Srinivasan

Thermal Comfort and Occupant Satisfaction of a Mosque in a Hot and Humid Climate .......... 139
Gulben Calis, Berna Alt, and Merve Kuru

Threshold-Based Approach to Detect Near-Miss Falls of Iron Workers Using Inertial Measurement Units .......... 148
K. Yang, H. Jebelli, C. R. Ahn, and M. C. Vuran

A Framework for Model-Driven Acquisition and Analytics of Visual Data Using UAVs for Automated Construction Progress Monitoring .......... 156
Jacob J. Lin, Kevin K. Han, and Mani Golparvar-Fard

Semantic Annotation for Context-Aware Information Retrieval for Supporting the Environmental Review of Transportation Projects .......... 165
Xuan Lv and Nora M. El-Gohary

Automated Extraction of Information from Building Information Models into a Semantic Logic-Based Representation .......... 173
J. Zhang and N. M. El-Gohary


Electrical Contractors’ Safety Risk Management: An Attribute-Based Analysis .......... 181
Pouya Gholizadeh and Behzad Esmaeili

Ontology-Based Information Extraction from Environmental Regulations for Supporting Environmental Compliance Checking .......... 190
Peng Zhou and Nora El-Gohary

Laser Scanning Intensity Analysis for Automated Building Wind Damage Detection .......... 199
A. G. Kashani, M. J. Olsen, and A. J. Graettinger

Recognition and 3D Localization of Traffic Signs via Image-Based Point Cloud Models .......... 206
Vahid Balali and Mani Golparvar-Fard

Automated Cycle Time Measurement and Analysis of Excavator’s Loading Operation Using Smart Phone-Embedded IMU Sensors .......... 215
N. Mathur, S. S. Aria, T. Adams, C. R. Ahn, and S. Lee

Deviation Analysis of the Design Intent and Implemented Controls of HVAC Systems Using Sensor Data: A Case Study .......... 223
R. Sunnam, S. Ergan, and B. Akinci

Implications of Micro-Management in Construction Projects: An Agent Based Modeling Approach .......... 232
Jing Du and Mohamed El-Gafy

A Data Quality-Driven Framework for Asset Condition Assessment Using LiDAR and Image Data .......... 240
Pedram Oskouie, Burcin Becerik-Gerber, and Lucio Soibelman

Efficient Management of Big Datasets Using HDF and SQLite: A Comparative Study Based on Building Simulation Data .......... 249
Wael M. Elhaddad and Bulent N. Alemdar

Trip Characteristics Study through Social Media Data .......... 257
Chuan-Heng Lin and Albert Y. Chen

A Non-Invasive Sensing System for Decoding Occupancy Behaviors Affecting Building Energy Performance .......... 263
Triana S. Carmenate, Diana Leante, Sebastian A. Zanlongo, Leonardo Bobadilla, and Ali Mostafavi


Visual Complexity Analysis of Sparse Imageries for Automatic Laser Scan Planning in Dynamic Environments .......... 271
Cheng Zhang and Pingbo Tang

Automated Post-Production Quality Control for Prefabricated Pipe-Spools .......... 280
Mahdi Safa, Arash Shahi, Mohammad Nahangi, Carl Haas, and Majeed Safa

Characterizing Travel Time Distributions in Earthmoving Operations Using GPS Data .......... 288
Sanghyung Ahn, Jiwon Kim, Phillip S. Dunston, Amr Kandil, and Julio C. Martinez

Education

A Qualitative Study on the Impact of Digital Technologies on Building Engineering Design Offices .......... 296
Graham Hayne, Bimal Kumar, and Billy Hare

Learning about Performance of Building Systems and Facility Operations through a Capstone Project Course .......... 305
Miguel Mora, Semiha Ergan, Hanzhi Chen, Hengfang Deng, An-Lei Huang, Jared Maurer, and Nan Wang

Integrated Target Value Approach Engaging Project Teams in an Iterative Process of Exploration and Decision Making to Provide Clients with the Highest Value .......... 313
Renate Fruchter, Flavia Grey, Norayr Badasyan, Sarah Russell-Smith, and Fernando Castillo

Enhancing Spatial and Temporal Cognitive Ability in Construction Education through the Effect of Artificial Visualizations .......... 322
Ivan Mutis

Visualization, Information Modeling, and Simulation

A Real-Time Building HVAC Model Implemented as a Plug-In for SketchUp .......... 330
Zaker A. Syed and Thomas H. Bradley

Visualization of Indoor Thermal Conditions Using Augmented Reality for Improving Thermal Environment .......... 339
Nobuyoshi Yabuki, Masahiro Hosokawa, Tomohiro Fukuda, and Takashi Michikawa


A Conceptual Framework for Modeling Critical Infrastructure Interdependency: Using a Multilayer Directed Network Model and Targeted Attack-Based Resilience Analysis .......... 347
Chen Zhao, Nan Li, and Dongping Fang

An Integrated Framework for the Assessment of the Impacts of Uncertainty in Construction Projects Using Dynamic Network Simulation .......... 355
Jin Zhu and Ali Mostafavi

Information Representation Schema for Tower Crane Planning in Building Construction Projects .......... 363
Yuanshen Ji and Fernanda Leite

An eeBIM-Based Platform Integrating Carbon Cost Evaluation for Sustainable Building Design .......... 371
Qiuchen Lu and Sanghoon Lee

Optimizing Disaster Recovery Strategies Using Agent-Based Simulation .......... 379
Mohamed S. Eid and Islam H. El-Adaway

An Analytical Approach to Evaluating Effectiveness of Lift Configuration .......... 387
Seokyon Hwang

A Case Study of BIM-Based Model Adaptation for Healthcare Facility Management—Information Needs Analysis .......... 395
Zhulin Wang, Tanyel Bulbul, and Jason Lucas

Estimating Fuel Consumption from Highway-Rehabilitation Program Implementation on Transportation Networks .......... 403
Charinee Limsawasd, Wallied Orabi, and Sitthapon Pumpichet

GPU-Powered High-Performance Computing for the Analysis of Large-Scale Structures Based on OpenSees .......... 411
Yuan Tian, Linlin Xie, Zhen Xu, and Xinzheng Lu

Automated Underground Utility Mapping and Compliance Checking Using NLP-Aided Spatial Reasoning .......... 419
Shuai Li and Hubo Cai

Information Extraction for Freight-Related Natural Language Queries .......... 427
Dan P. K. Seedah and Fernanda Leite

BIM-Based Planning of Temporary Structures for Construction Safety .......... 436
Kyungki Kim and Yong Cho


Applying a Reference Collection to Develop a Domain Ontology for Supporting Information Retrieval .......... 445
Nai-Wen Chi, Yu-Huei Jin, and Shang-Hsien Hsieh

BIM-Driven Islamic Construction: Part 1—Digital Classification .......... 453
A. M. Almaimani and N. O. Nawari

BIM-Driven Islamic Construction: Part 2—Digital Libraries .......... 460
A. M. Almaimani and N. O. Nawari

Integration of BIM and GIS: Highway Cut and Fill Earthwork Balancing .......... 468
Hyunjoo Kim, Zhenhua Chen, Chung-Suk Cho, Hyounseok Moon, Kibum Ju, and Wonsik Choi

Towards Understanding End-User Lighting Preferences in Office Spaces by Using Immersive Virtual Environments .......... 475
Arsalan Heydarian, Evangelos Pantazis, Joao P. Carneiro, David Gerber, and Burcin Becerik-Gerber

Temporal and Spatial Information Integration for Construction Safety Planning .......... 483
Sooyoung Choe and Fernanda Leite

Modeling Construction Processes: A Structured Graphical Approach Compared to Construction Simulation .......... 491
I. Flood

A Hybrid Control Mechanism for Stabilizing a Crane Load under Environmental Wind on a Construction Site .......... 499
Bin Ren, A. Y. T. Leung, Jiayu Chen, and Xiaowei Luo

Regional Seismic Damage Simulation of Buildings: A Case Study of the Tsinghua Campus in China .......... 507
Xiang Zeng, Zhen Xu, and Xinzheng Lu

Construction Operation Simulation Reflecting Workers’ Muscle Fatigue .......... 515
J. Seo, M. Moon, and S. Lee

Construction Productivity Impacts of Forecasted Global Warming Trends Utilizing an Integrated Information Modeling Approach .......... 523
Yongwei Shan, Paul M. Goodrum, and M. Phil Lewis

Towards Automated Constructability Checking: A Case Study of Aligning Design Information with Formwork Decisions .......... 531
Li Jiang, Robert M. Leicht, and John I. Messner


A Hierarchical Computer Vision Approach to Infrastructure Inspection .......... 540
Ali Khaloo and David Lattanzi

Advancing in Object-Based Landscape Information Modeling: Challenges and Future Needs .......... 548
Hamid Abdirad and Ken-Yu Lin

BIM and QR Code for Operation and Maintenance .......... 556
Pavan Meadati and Javier Irizarry

Measuring End-User Satisfaction in the Design of Building Projects Using Eye-Tracking Technology .......... 564
Atefeh Mohammadpour, Ebrahim Karan, Somayeh Asadi, and Ling Rothrock

Automated Rule-Based Checking for the Validation of Accessibility and Visibility of a Building Information Model .......... 572
Y. C. Lee, C. M. Eastman, and J. K. Lee

Using Building Energy Simulation and Geospatial Analysis to Determine Building and Transportation Related Energy Use .......... 580
Ebrahim Karan, Somayeh Asadi, Atefeh Mohammadpour, Mehrzad V. Yousefi, and David Riley

A Simulation Framework for Network Level Cost Analysis in Infrastructure Systems .......... 588
M. Batouli, O. A. Swei, J. Zhu, J. Gregory, R. Kirchain, and A. Mostafavi

BIM-Assisted Structure-from-Motion for Analyzing and Visualizing Construction Progress Deviations through Daily Site Images and BIM .......... 596
Kevin K. Han and Mani Golparvar-Fard

Using BIM for Last Planner System: Case Studies in Brazil .......... 604
M. C. Garrido, R. Mendes Jr., S. Scheer, and T. F. Campestrini

Collecting Fire Evacuation Performance Data Using BIM-Based Immersive Serious Games for Performance-Based Fire Safety Design .......... 612
Jun Zhang and Raja R. A. Issa

Automatic Construction Schedule Generation Method through BIM Model Creation .......... 620
Jaehyun Park and Hubo Cai

Data-Driven Analysis Framework for Activity Cycle Diagram-Based Simulation Modeling of Construction Operations .......... 628
Sanghyung Ahn, Phillip S. Dunston, Amr Kandil, and Julio C. Martinez


Process Mining Technique for Automated Simulation Model Generation Using Activity Log Data .......... 636
Sanghyung Ahn, Phillip S. Dunston, Amr Kandil, and Julio C. Martinez

Towards the Integration of Inspection Data with Bridge Information Models to Support Visual Condition Assessment .......... 644
V. Kasireddy and B. Akinci

Methodology for Crew-Job Allocation Optimization in Project and Workface Scheduling .......... 652
Ming-Fung Francis Siu, Ming Lu, and Simaan AbouRizk

Error Assessment of Machine Vision Techniques for Object Detection and Evaluation .......... 660
S. German Paal and I. F. C. Smith

An Integrated BIM-GIS Framework for Utility Information Management and Analyses .......... 667
Jack C. P. Cheng and Yichuan Deng

The Benefits of BIM Integration with Facilities Management: A Preliminary Case Study .......... 675
S. Terreno, C. J. Anumba, E. Gannon, and C. Dubler

A Computational Procedure for Generating Specimens of BIM and Point Cloud Data for Building Change Detection .......... 684
Ling Ma, Rafael Sacks, Reem Zeibak-Shini, and Sagi Filin

Understanding the Science of Virtual Design and Construction: What It Takes to Go beyond Building Information Modeling .......... 692
David K. H. Chua and Justin K. W. Yeoh

Crane Load Positioning and Sway Monitoring Using an Inertial Measurement Unit .......... 700
Yihai Fang and Yong K. Cho

Sustainable Construction Enhanced through Building Information Modeling .......... 708
L. Alvarez-Anton, J. Díaz, and D. Castro-Fresno


4D Full-Scale Finite Element Analysis in Bridge Engineering

Yi-Xin Li1 and Jian-Guo Nie2


1 Dept. of Civil Engineering, Tsinghua University, Beijing, China 100084. E-mail: [email protected]
2 Dept. of Civil Engineering, Tsinghua University, Beijing, China 100084. E-mail: [email protected]

Abstract

In bridge engineering, besides the design in the 3D space scale, construction in the time scale is also an important aspect that influences the performance of a bridge in its use stage. Two aspects of work are contained in this paper. First, based on a three-span continuous composite box girder bridge (88 m + 156 m + 88 m), a full-scale finite element model was constructed in the program Midas FEA. Specifically, the box girder consists of corrugated steel webs and concrete flanges. More than 300 thousand elements were used in this full-scale model to simulate the actual construction sequence, shrinkage and creep of concrete, the effect of pre-stressing strands, the shear lag effect, and the slip effect. Thus, the full-scale model may indicate the real mechanical behavior of the whole structure in both the construction and use stages. Second, focusing on the connection style, two types of connectors, headed studs and ‘perfobond strip’ connectors, were applied in the composite box girder. Furthermore, to reduce the self-weight, some parts of the concrete box in the central span were replaced by steel plates. The performance of these three schemes in both construction and use was compared to approach an optimized design. With the simulation of the 4D full-scale finite element model, the results were reasonable and accurate. The comparison indicated that the scheme with stud connectors and a partly replaced box girder had the best performance in the key indices. The 4D full-scale finite element analysis method can provide reference and experience for further study.

Keywords

4D full-scale; Composite bridge; Finite element analysis; Connection style; Optimization design

INTRODUCTION

During the past decades, corrugated steel webs were introduced to replace the
stiffened steel plates of box girders for bridges. Generally, beams and girders with
corrugated webs are more economical and improve the aesthetics of the structure
(Sayed-Ahmed 2001). In 1982, the advantages of using trapezoid corrugated steel
webs along with external pre-stressing for box or I-girder composite systems in
bridge construction were recognized by Campenon from France (Cheyrezy and
Combault 1990). French research started in 1983 and led to the building of four bridges between 1986 and 1994. In Japan, similar research led to the construction of three
bridges with corrugated webs between 1993 and 1998 (Naito and Hattori 1994;
Metwally and Loov 2003). One of the most critical issues in design of composite
girders is the connection reliability between steel girder and concrete slab. Currently
two types of connectors are widely used to realize compatible deformation of steel
and concrete, i.e. the headed stud and the ‘Perfobond strip’ (PBL connector) (Kraus
and Wurzer 1997; Saari et al. 2004; Shim et al. 2004; Lee et al. 2005). Both types
can transfer shear force and prevent separation between two parts of composite
girders, i.e., anti-shearing and anti-uplift.
With the development of computers, finite element programs have been widely applied in the analysis and design of composite bridges. At a small scale, the mechanism of a composite girder can be simulated accurately by fine finite element analysis. At a large scale, however, such as long-span bridges, the finite element model usually has to be simplified to save computing time (Mabsout et al. 1997; Sebastian and McConnel 2000). In this study, a full-scale finite element model of a composite box girder bridge with corrugated webs was constructed in the program Midas FEA. By also considering the time scale, this 4D full-scale finite element analysis was conducted to simulate the actual construction sequence, shrinkage and creep of concrete, the effect of pre-stressing strands, and the slip effect. Furthermore, to compare the performance of headed stud and PBL connectors, two models with different connectors and construction were compared to approach an optimized design.

FULL-SCALE FINITE ELEMENT MODEL

Project background

The analysis model was based on the construction project of the Xin’an Bridge across the Dongbao River in Shenzhen, China. It is a three-span (88 m + 156 m + 88 m) composite box girder bridge. In the design, the girders of this bridge are composite box girders using corrugated steel webs.
Fig.1 shows the design of the Xin’an Bridge. The box girders consist of concrete slabs and corrugated steel webs. The height of the box girders varies along the bridge between 3.5 m and 8.3 m.


Fig.1 Design of Xin’an Bridge

(a) Model of the whole bridge (b) Pre-stressed steel rebar


Fig.2 Full-scale finite element model in Midas FEA

Model construction

The finite element model was constructed in the program Midas FEA. In this model, solid elements were adopted to simulate the concrete slab and shell elements were used for the corrugated steel webs. The pre-stressed steel rebar was simulated by 2D link elements. The compressive strength of the concrete was set as 60 MPa, and the yield strength of the steel was set as 345 MPa. Fig.2 shows the model constructed in Midas FEA.
Two models, i.e., Model A and Model B, were constructed with different arrangements of connectors. In Model A, only PBL connectors were adopted. As the stiffness of the PBL connectors is high, slip between the concrete slab and the corrugated steel web was not considered.
In Model B, the PBL connectors in some regions were replaced by studs. Fig.3 shows the replaced regions and the arrangement of studs. As the stiffness of the studs was lower than that of the PBL connectors, slip deformation between the concrete slab and the corrugated steel web was considered. Spring elements were used to simulate the studs.


(a) Regions using studs (b) Studs on top flange (c) Studs on bottom flange
Fig.3 Arrangement of studs in Model B

This bridge was built in 16 stages, starting from the middle. Fig.4 presents the construction process.

(a) Stage 1 (b) Stage 5 (c) Stage 10 (d) Stage 16
Fig.4 Construction stages of Xin’an Bridge

4D FINITE ELEMENT ANALYSIS

In order to concentrate on the time influence and the difference between Model A and Model B, the same load effect was applied and the material performance of the concrete was also time-dependent. As page space is limited, three representative stages of the finite element simulation are presented and the difference between the two models is compared.


Construction stage 8

Before construction stage 8, there was no difference between the two models, as the change of connectors started from construction stage 9. At stage 8, under self-weight, the performance is shown in Fig.5. The maximum vertical deformation was 8.1 mm and occurred in the middle of the bridge. The maximum tensile stress in the concrete was 1.36 MPa and the maximum compressive stress in the concrete was 22.5 MPa. The maximum Von Mises stress of the corrugated steel webs was 172 MPa, occurring in the top flange of section 3, where the parameters of the concrete slab change. These results indicate that the simulation of the full-scale finite element model was reasonable and acceptable.

(a) Vertical deformation (b) Stress of concrete (c) Von Mises stress of steel webs
Fig.5 Simulated results of stage 8
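For reference, the equivalent stress reported for the shell elements of the corrugated webs is presumably the standard plane-stress Von Mises stress (the paper does not state the exact formulation):

\sigma_{vM} = \sqrt{\sigma_x^{2} - \sigma_x \sigma_y + \sigma_y^{2} + 3\tau_{xy}^{2}}

so the reported maximum of 172 MPa at this stage remains well below the 345 MPa yield strength assigned to the steel.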

(a) Vertical deformation (b) Stress of concrete (c) Von Mises stress of steel webs
Fig.6 Simulated results of Model A at stage 16

Table 1 Targets of two models at stage 16

Main Targets                                       Model A              Model B
Maximum vertical deformation (mm)                  54.1 (1/2883 span)   46.6 (1/3348 span)
Maximum tensile stress in concrete (MPa)           1.87                 1.85
Maximum compressive stress in concrete (MPa)       18.4                 18.2
Maximum Von Mises stress of steel (MPa)            295                  231

Construction stage 16

After stage 16, construction of the bridge was complete. Fig.6 shows the results of Model A, and Table 1 presents the key targets of Model A and Model B. Also, the Von Mises stresses of 98.7% of the corrugated steel webs in Model A and 99.2% in Model B were below 200 MPa. By comparison, Model B had a smaller vertical deformation and a lower stress level. As the stiffness of the studs was smaller than that of the PBL connectors, slip deformation was permitted in Model B and the stress could be homogenized.

Use stage

At the use stage, self-weight, traffic load, and temperature effects were applied, as Fig.7 and Table 2 present. The Von Mises stresses of 85.0% of the corrugated steel webs in Model A and 88.0% in Model B were below 200 MPa. The deformation and stress of the two models were similar. Because the stiffness of Model A was larger, the vertical deformation of Model A was smaller. However, Model B had a lower stress level because of the slip behavior between the concrete slab and the steel webs.

(a) Vertical deformation (b) Stress of concrete (c) Von Mises stress of steel webs
Fig.7 Simulated results of Model A at use stage

Table 2 Targets of two models at use stage

Main Targets                                       Model A              Model B
Maximum vertical deformation (mm)                  194.0 (1/804 span)   197.0 (1/792 span)
Maximum tensile stress in concrete (MPa)           3.04                 2.90
Maximum compressive stress in concrete (MPa)       21.2                 19.6
Maximum Von Mises stress of steel (MPa)            325                  318
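The span ratios quoted in Tables 1 and 2 appear to be taken with respect to the 156 m central span; for the use-stage deflection of Model A, for example,

\frac{L}{\delta_{\max}} = \frac{156\,000\ \text{mm}}{194\ \text{mm}} \approx 804,

which matches the reported value of 1/804 of the span.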

Based on the 4D finite element analysis of the two full-scale models and the comparison of the results, the performance of the two types of connectors used in the whole bridge could be studied. The model applying PBL connectors had a larger stiffness than the one using studs, while the latter had a better stress condition.

CONCLUSION

The 4D full-scale finite element analysis of long-span composite box girder


bridges has been carried out in this study. The following observations and
conclusions are made based on the study in this paper:
(1) With the finite element program Midas FEA, 4D full-scale finite element analysis may be carried out to cover the whole construction of a long-span bridge. The full-scale finite element models in this study could simulate the actual construction sequence, shrinkage and creep of concrete, the effect of pre-stressing strands, and the slip effect.
(2) From the comparison of the two types of connectors, the model applying PBL connectors had a larger stiffness than the one using studs, while the latter had a better stress condition. In the model using studs, as the stiffness of the studs was smaller than that of the PBL connectors, slip deformation was permitted and the stress could be homogenized.
Research in this paper may provide foundations for 4D full-scale finite element analysis and further study of the performance of different connectors.

ACKNOWLEDGEMENT

The writers gratefully acknowledge the financial support provided by the


National Science Fund of China (51138007, 51222810), the National Science &
Technology Support Program of China (2011BAJ09B02), and Huang Ting-
fang/Xing-he Talents Training Fund.

REFERENCES

Cheyrezy M, Combault J. (1990). “Composite bridges with corrugated steel webs-


Achievements and prospects.” IABSE Symposium Brussels 1990: Mixed
Structures, including New Materials, IABSE Reports.
El Metwally A, Loov R E. (2003). “Corrugated steel webs for prestressed concrete
girders.” Mat. and Struct. 36(2), 127-134.
Kraus D, Wurzer O. (1997). “Nonlinear finite-element analysis of concrete
dowels.” Comp. & Struct. 64(5), 1271-1279.
Lee P G, Shim C S, Chang S P. (2005). “Static and fatigue behavior of large stud
shear connectors for steel-concrete composite bridges.” J. Constr. Steel
Res. 61(9), 1270-1285.
Mabsout M E, Tarhini K M, Frederick G R, et al. (1997). “Finite-element analysis of
steel girder highway bridges.” J. Bridge Eng. 2(3), 83-87.
Naito T, Hattori M. (1994). “Prestressed concrete bridge using corrugated steel
webs—Shinkai Bridge.” Proceedings of the Xll FIP congress.
Saari W K, Hajjar J F, Schultz A E, et al. (2004). “Behavior of shear studs in steel
frames with reinforced concrete infill walls.” J. Constr. Steel Res. 60(10),
1453-1480.
Sayed-Ahmed E Y. (2001). “Behaviour of steel and (or) composite girders with
corrugated steel webs.” Canadian J. Civ. Eng. 28(4), 656-672.
Sebastian W M, McConnel R E. (2000). “Nonlinear FE analysis of steel-concrete
composite structures.” J. Struct. Eng. 126(6), 662-674.
Shim C S, Lee P G, Yoon T Y. (2004). “Static behavior of large stud shear
connectors.” Eng. Struct. 26(12), 1853-1860.


Agent Based Model for Resource Constrained Scheduling of Construction Projects

A. B. Senouci1; K. Abdel Warith2; and N. N. Eldin3

1 Department of Construction Management, University of Houston, 300 Technology Building Rm#111, Houston, TX 77204-4020. E-mail: [email protected]
2 Department of Civil and Architectural Engineering, Qatar University, P. O. Box 2713, Doha, Qatar. E-mail: [email protected]
3 Department of Construction Management, University of Houston, Technology Annex Building-T1-110G. E-mail: [email protected]
Abstract
This paper presents an Agent Based Model (ABM) for resource constrained
scheduling of construction projects. The model considers typical scheduling
characteristics such as the four precedence relationships. It also allows for activities
to be interrupted, if needed. Moreover, the model considers the impact of the finish
quality of a given activity on its successors. An illustrative example was presented to
demonstrate the performance of the proposed model. Overall, the model was
successful in minimizing project durations by utilizing different priority rules. ABM
concepts were proved to be applicable in resource constrained scheduling. The model
provided added versatility to activity characteristics.

INTRODUCTION
Techniques such as Branch and bound methods (Brucker, et al., 2003), mathematical
programming (Winston, et al., 2002), genetic algorithms (Senouci, et al., 2004), Ant
Colony Optimization (Christodoulou, 2009), neural networks (Senouci, et al., 2001),
and particle swarm optimization (Zhang, et al., 2006), were used to solve Resource-
Constrained Project Scheduling Problems (RCPSP). This paper develops an Agent
Based Model (ABM) for resource constrained scheduling of construction projects.
ABM can be a promising alternative technique for solving RCPSP problems (Knotts
et al., 2000). It has several advantages, including flexibility in defining activities and
resources. The ABM model includes two features, namely, handling of activity
interruptions and incorporating the impact of predecessor quality on successor
duration. These features were not addressed in previous RCPSP models. The ABM
model was validated using an illustrative example from the literature.

PROBLEM IMPLEMENTATION
Model Architecture
The model follows a modular architecture and object oriented programming. This
will allow other researchers to build on this work with minimum effort. An Agent
Based Model is considered herein.
Definition of an Agent
Agents are autonomous objects that have the ability of satisfying internal goals. They
have a complex underlying functional architecture such as the belief-desire-intention
(BDI) architecture (Rao et. al., 1992).
Types of Agents


The agents used in this model range from relatively simple agents to extremely
complicated ones. The common agent types are reactive, adaptive, goal oriented,
learning, and intelligent (Sycara, et al., 1996).
Model Components
Critical Path Module
It is important to note that the model computes Critical Path Method (CPM) values
with no resource constraints before the resource constrained simulation starts. These
values include Early Start, Early Finish, Late Start, Late Finish, and Total Float.
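As a minimal illustration of the unconstrained CPM pass that seeds the simulation, the sketch below computes these five values for a toy network with finish-to-start links only; the function and activity data are illustrative assumptions, not the authors' code.

# Minimal CPM forward/backward pass (finish-to-start links only); illustrative only.
def cpm(durations, predecessors):
    activities = list(durations)                      # assumed topologically ordered
    successors = {a: [] for a in activities}
    for a in activities:
        for p in predecessors[a]:
            successors[p].append(a)

    early_start, early_finish = {}, {}
    for a in activities:                              # forward pass
        early_start[a] = max((early_finish[p] for p in predecessors[a]), default=0)
        early_finish[a] = early_start[a] + durations[a]

    horizon = max(early_finish.values())
    late_start, late_finish = {}, {}
    for a in reversed(activities):                    # backward pass
        late_finish[a] = min((late_start[s] for s in successors[a]), default=horizon)
        late_start[a] = late_finish[a] - durations[a]

    total_float = {a: late_start[a] - early_start[a] for a in activities}
    return early_start, early_finish, late_start, late_finish, total_float

# Toy data (not from the paper): activity "C" follows "A" and "B".
print(cpm({"A": 5, "B": 7, "C": 2}, {"A": [], "B": [], "C": ["A", "B"]}))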
The Activity Agent
This agent is the main agent in the proposed Agent Based Model. Thus, agents are the
main drivers of the simulation. Activities are not intelligent agents (i.e., they do not
learn). They can be considered goal oriented reactive agents. The activity agent’s goal
is to be completed. This is done by starting, performing certain tasks for a given
duration then concluding. In some cases, interruptible activities may be interrupted.
The life of an activity can be translated to a state chart for coding purposes. All
activities start in a “NotReady” state. Each activity then assesses whether its
predecessors are complete or not. If they are complete then the activity becomes
“Ready”. Once resources for this activity are available then the activity can start and
become in an “InProgress” state. If the activity is interrupted for any reason, it moves
to an “Interrupted” state. Once the activity is finished, it is transformed to a “Complete”
state.
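A compact sketch of this life cycle is shown below. The state names follow the paper; the class layout and method names are assumptions made only for illustration.

from enum import Enum, auto

class State(Enum):
    NOT_READY = auto()     # "NotReady"
    READY = auto()
    IN_PROGRESS = auto()
    INTERRUPTED = auto()
    COMPLETE = auto()

class ActivityAgent:
    def __init__(self, name, duration, interruptible=False, predecessors=()):
        self.name = name
        self.remaining = duration
        self.interruptible = interruptible
        self.predecessors = list(predecessors)
        self.state = State.NOT_READY

    def update_readiness(self):
        # NotReady -> Ready once every predecessor is Complete
        if self.state is State.NOT_READY and all(
                p.state is State.COMPLETE for p in self.predecessors):
            self.state = State.READY

    def start(self):
        # Ready (or Interrupted) -> InProgress once resources have been booked
        if self.state in (State.READY, State.INTERRUPTED):
            self.state = State.IN_PROGRESS

    def interrupt(self):
        # InProgress -> Interrupted, allowed only for interruptible activities
        if self.state is State.IN_PROGRESS and self.interruptible:
            self.state = State.INTERRUPTED

    def work_one_period(self):
        # InProgress -> Complete when the remaining duration runs out
        if self.state is State.IN_PROGRESS:
            self.remaining -= 1
            if self.remaining <= 0:
                self.state = State.COMPLETE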
The activities that are in “Ready” state compete for the resources available through
preset priority rules. Three Priority rules are currently coded into the model, namely,
shortest remaining float, earliest early start, and latest late finish. The user chooses
which priority rule to apply. The activity that is in “Ready” state and has priority will
then check if there are enough resources available for it. If there are enough
resources, the activity will commence otherwise the activity will remain in a “Ready”
state. Another important aspect is interruption. During the priority check, the
activities that can be interrupted may be interrupted in favor of more critical
activities. In this case, the state of the activity will be converted from “InProgress” to
“Interrupted”.
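The three priority rules can be read as simple sort keys over the activities in the “Ready” state, as sketched below; the attribute names are assumptions and ties are broken arbitrarily here.

# Illustrative sort keys for the three priority rules named above (assumed attribute names).
PRIORITY_RULES = {
    "shortest_remaining_float": lambda act: act.total_float,
    "earliest_early_start":     lambda act: act.early_start,
    "latest_late_finish":       lambda act: -act.late_finish,
}

def next_activity(ready_activities, rule):
    """Return the Ready activity that should be offered resources first."""
    return min(ready_activities, key=PRIORITY_RULES[rule])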
In addition to states, the activity agent includes the following parameters.
Predecessors, Duration, and Resources: These are input parameters that must be
included in each activity.
Early Start, Early Finish, Late Start, Late Finish, Total Float: These parameters are
calculated in the CPM module before the simulation, then assigned to each activity.
These parameters are used in priority calculations.
Actual Start, Actual Finish: These are the start and finish times calculated through the
resource constrained ABM simulation.
Interruptability: This is an input parameter that reflects whether the activity can be
interrupted or not. This parameter is usually assigned by the user.
Quality: This parameter reflects the finish quality of the activity. This parameter is
usually assigned by the user.
Dependency on Predecessor Quality: This parameter is an input parameter that
reflects whether the duration of this activity is affected by the finish quality of a given
predecessor.


Duration Object
Duration was modeled as a separate object to allow for activity duration to be
manipulated as an aggregate. For instance, the user may decide to apply a certain
distribution that would calculate the duration given certain limits. This can be used
easily when durations are treated as a separate object.
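One simple reading of this design is a Duration object that owns its own sampling rule, as in the sketch below; the triangular distribution and the parameter values are assumptions chosen only for illustration.

import random

class Duration:
    """Duration kept as a separate object so its calculation rule can be swapped."""
    def __init__(self, low, mode, high):
        self.low, self.mode, self.high = low, mode, high

    def sample(self):
        # Example rule only: draw from a triangular distribution within the given limits.
        return random.triangular(self.low, self.high, self.mode)

print(round(Duration(low=4, mode=5, high=8).sample(), 1))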
Resource Pool
The resource pool is the source of resources. The model handles only one type of
resources. The resource pool is constrained to a user predefined number of resources.
Activities book these resources and the remainder resources remain in the resource
pool.
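A minimal single-type resource pool consistent with this description might look as follows; the method names and booking logic are assumptions for illustration (the example later in the paper uses a pool of 8 resources).

class ResourcePool:
    """Single resource type; activities book units and release them when they finish."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.available = capacity

    def book(self, amount):
        # Grant the request only if it can be satisfied in full.
        if amount <= self.available:
            self.available -= amount
            return True
        return False

    def release(self, amount):
        self.available = min(self.capacity, self.available + amount)

pool = ResourcePool(capacity=8)
print(pool.book(6), pool.book(4), pool.available)   # True False 2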
Concept Flow Chart
Figure 1 highlights the overall process being implemented in the code.

Figure 1 Concept Flow Chart

ILLUSTRATIVE EXAMPLE
An example project (after Maroto and Tormos (1994)) was used to illustrate the different aspects of the proposed model. Table 1 summarizes the project activity durations, resources, and preceding activities.
Priority Rules
Three different priority rules were implemented in the developed program. The
priority is given to the activity that has the earliest start date, the latest late finish, or
the shortest float period, given that there are sufficient resources for the activity to
start. Three different runs were performed. Each run used one of the priority rules
(PR) as the BDI of the agents. The results are summarized in Table 2.
The difference between the priority rules is apparent. However, this does not necessarily suggest that using the earliest start priority rule will always yield the best results. Each priority rule will produce a better solution in different situations. The three priority rules can be used as a preliminary step towards finding an optimum solution.


Table 1 Project Inputs


Activity ID Duration Resources Preceding activities
1 5 8 ----
2 7 5 ----
3 2 8 1,2
4 14 6 3
5 1 4 3
6 5 6 3
7 3 8 3
8 8 4 3
9 20 2 4,5,6,7,8
10 3 7 9
11 4 5 10
12 7 6 10
13 6 4 10
14 25 2 11
15 4 8 11
16 3 8 12
17 8 4 12
18 12 3 13
19 5 5 13
20 5 4 13
21 8 7 11
22 6 4 14,15,21
23 4 8 16,17,18
24 3 5 19,20
25 5 3 22
26 5 2 23,24
27 3 4 25
28 3 8 27
29 20 4 26
30 5 4 26
31 11 4 28,29

For the sake of simplicity, only finish to start relationships were considered herein.

Table 2 Comparison between Different Priorities


Activity | CPM Calculations (ES, EF, LS, LF) | Resource Constrained Schedule* under the Late Finish PR (AS, AF), Early Start PR (AS, AF), and Total Float PR (AS, AF) | Is the Activity Critical?
0 0 0 0 0 0 0 0 0 0 0 Y
1 0 5 2 7 0 5 0 5 7 12 N
2 0 7 0 7 5 12 5 12 0 7 Y


3 7 9 7 9 12 14 12 14 12 14 Y
4 9 23 9 23 14 28 14 28 14 28 Y
5 9 10 22 23 28 29 28 29 31 32 N
6 9 14 18 23 29 34 29 34 39 44 N
7 9 12 12 15 34 37 34 37 28 31 N
8 12 20 15 23 37 45 37 45 31 39 N
9 23 43 23 43 45 65 45 65 44 64 Y
10 43 46 43 46 65 68 65 68 64 67 Y
11 46 50 47 51 97 101 68 72 73 77 N
12 46 53 49 56 68 75 72 79 87 94 N
13 46 52 46 52 78 84 79 85 67 73 Y
14 50 75 51 76 101 126 97 122 94 119 N
15 50 54 72 76 126 130 122 126 150 154 N
16 53 56 61 64 75 78 134 137 127 130 N
17 53 61 56 64 78 86 79 87 119 127 N
18 52 64 52 64 86 98 85 97 73 85 Y
19 52 57 60 65 89 94 87 92 77 82 N
20 52 57 60 65 84 89 92 97 82 87 N
21 50 58 68 76 130 138 126 134 137 145 N
22 75 81 76 82 138 144 144 150 154 160 N
23 64 68 64 68 155 159 140 144 130 134 Y
24 57 60 65 68 94 97 137 140 134 137 N
25 81 86 82 87 144 149 155 160 160 165 N
26 68 73 68 73 159 164 144 149 145 150 Y
27 86 89 87 90 149 152 160 163 165 168 N
28 89 92 90 93 152 155 169 172 174 177 N
29 73 93 73 93 164 184 149 169 154 174 Y
30 73 78 99 104 164 169 150 155 168 173 N
31 93 104 93 104 184 195 172 183 177 188 Y
*Resource pool set at 8 resources
It is worth noting that these results are specific to the resource constraints mentioned
earlier. If the resource constraints are changed, the total duration will change and the
ranking of the priority rules may change as well. Figure 2 illustrates the current
resource profile of the above results.
Interruption
Another aspect of this model is the ability to allow activity interruption. The activity agents in the model allow the interruptibility option to be highly selective and flexible. For instance, some of the activities are allowed to be interrupted while others are not. Table 3 illustrates an example where all the activities are allowed to be interrupted.


Figure 2. Resource Profile: (a) Late Finish Priority; (b) Early Start Priority; (c) Total Float Priority


Table 3 Interruption Results


Activity | Not Interrupted (AS, AF) | Interrupted (AS, AF) | Day of Interruption | Resumed on day | Interruption Duration | Is the Interrupted Activity Critical?
0 0 0 0 0
1 0 5 0 5
2 5 12 5 12
3 12 14 12 14
4 14 28 14 28
5 28 29 28 29
6 29 34 29 34
7 34 37 34 37
8 37 45 37 45
9 45 65 45 65
10 65 68 65 68
11 97 101 97 105 98 102 4 Days N
12 68 75 68 75
13 78 84 78 84
14 101 126 105 130
15 126 130 130 134
16 75 78 75 78
17 78 86 78 91 84 89 5 Days N
18 86 98 84 98 89 91 2 Days Y
19 89 94 84 89
20 84 89 89 94
21 130 138 134 142
22 138 144 142 153 144 149 5 Days N
23 155 159 98 102
24 94 97 94 97
25 144 149 153 158
26 159 164 102 144 105 142 37 Days Y
27 149 152 158 161
28 152 155 161 164
29 164 184 144 167 161 164 3 Days Y
30 164 169 144 149
31 184 195 167 178
The highlighted values represent the total duration of the project. Two activities were
interrupted. The interruption resulted in shortening the project by 17 days only. In
reality such interruption may cause delays such that the total savings are less than 17
days. This is because of the hidden wasted time of moving resources from one
location to another and back. Time can also be wasted on setting up the works.


Impact of Quality
As mentioned earlier, the impact of the quality of predecessor on successors can be
implemented in this model. This impact is translated into extra time that is needed by
the successor activity. As an illustration, different activities were chosen to have
lower quality and the impact on the entire project duration was observed. Figure 3
summarizes the result of this exercise. It can be observed in Figure 3 that as the
percentage of predecessors of lower quality increases the total project duration
increases. The relation is expected to be non-linear and unique to a given set of
activities.
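The paper does not give the penalty function itself; one simple way to encode the “extra time needed by the successor” is a multiplicative penalty triggered by low predecessor finish quality, as in the sketch below, where the threshold and penalty values are purely illustrative assumptions.

def adjusted_duration(base_duration, predecessor_qualities,
                      quality_threshold=0.9, penalty_per_poor_predecessor=0.10):
    # Each predecessor finished below the threshold inflates the successor's duration
    # by a fixed fraction; numbers are placeholders, not the authors' calibration.
    poor = sum(1 for q in predecessor_qualities if q < quality_threshold)
    return base_duration * (1 + penalty_per_poor_predecessor * poor)

print(adjusted_duration(10, [0.95, 0.80]))   # 11.0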

CONCLUSION
The agent based model developed herein was capable of calculating total project
durations using three different priority rules. The model performed decisions on
whether or not to interrupt activities, provided the user allowed interruption, to obtain
a shorter schedule. The model was also able to incorporate the impact of poor
predecessor finish quality on successor’s duration. Agent based modeling proved to
be applicable and useful in handling resource constrained schedules. The model
developed illustrated the flexibility that can be added to standard techniques that
solve resource constrained problems. Future work includes adding more attributes to
activities and resources, such as trade, skill level, learning and complexity of task.
Future models are expected to link these attributes and predict individual activity durations stochastically.

Figure 3 Impact of Reduction in Quality


REFERENCES
Brucker, P., & Knust, S. (2003). “Lower bounds for resource-constrained project
scheduling problems.” Eur. J. Oper. Res, 149(2), 302-313.
Christodoulou, S. (2009). “Scheduling resource-constrained projects with ant colony
optimization artificial agents.” J. Comput. Civ. Eng., ASCE, 24.1, 45-55.
Knotts, G., Dror, M., & Hartman, B. C. (2000). “Agent-based project scheduling.”
IIE Transactions, 32(5), 387-401.
Maroto, C., & Tormos, P. (1994). “Project management: an evaluation of software
quality.” International Transactions in Operational Research, 1(2), 209-221.

Rao, A. S., & Georgeff, M. P. (1992). “An abstract architecture for rational agents.”
KR, 92, 439-449.
Senouci, A. B., & Adeli, H. (2001). “Resource scheduling using neural dynamics
model of Adeli and Park.” J. Constr. Eng. Manage.,ASCE, 127(1), 28-34.
Senouci, A. B., & Eldin, N. N. (2004). “Use of genetic algorithms in resource
scheduling of construction projects.” J. Constr. Eng. Manage., ASCE, 130(6), 869-
877.
Sycara, K., Pannu, A., Willamson, M., Zeng, D., & Decker, K. (1996). “Distributed
intelligent agents.” IEEE expert, 11(6), 36-46.
Winston, W. L., & Venkataramanan, M. (2002). Introduction to mathematical
programming (4th ed.). Thomson-Brooks/Cole, Pacific Grove, Calif.
Zhang, H., Li, H., & Tam, C. (2006). “Permutation-based particle swarm optimization
for resource-constrained project scheduling.” J. Comput. Civ. Eng., 20(2), 141-149.


Comparison between Nominal and Actual Hourly Production of Crawler-Type Dozer: A Case Study

Hamed Nabizadeh Rafsanjani, S.M.ASCE

Durham School of Architectural Engineering and Construction, University of Nebraska-Lincoln, 113 NH, Lincoln, NE 68588. E-mail: [email protected]

Abstract

Understanding the discrepancies between the nominal and actual production rates of construction machinery is important in construction project execution. There are many site factors affecting the actual production rate. This paper presents a case study to understand the discrepancies between the nominal and actual production rates of crawler-type dozers and the main parameters affecting this production. The machinery considered includes different models of Caterpillar and Komatsu crawler-type dozers. The nominal production rates were derived from the two manufacturers. The actual production rate data came from records of actual construction projects. Each machine model was individually considered at the project site, and the site conditions were taken into account. Working condition, type of material, and ground slope are the three parameters considered and evaluated for each machine. The distribution of the operators’ working experience and the age group of all machine models are also provided. The results of statistical analyses of the data and parameters show a) the discrepancies between the actual and nominal hourly production, and b) the importance of each parameter. The results of this study can be useful in machinery planning and a great help to the project management team.

INTRODUCTION AND BACKGROUND

Construction machinery and equipment are among the resources necessary for the accomplishment and success of construction projects. Today, contractors undertake many types of construction activities that require different types and sizes of machinery (Gransberg et al. 2006). It has been universally accepted that machinery hourly production is one of the key factors in the success of construction projects. Machinery manufacturers generally provide an ideal hourly production of their machinery for users. This ideal hourly production, named the nominal hourly production, clearly differs from the actual hourly production of a machine in construction projects. The actual production depends mainly on the conditions of the project site. Estimating the actual production, and therefore the discrepancy between the nominal and actual production rates, is a key element in estimating the time and cost required to complete construction operations.
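A common way to relate the two rates, following the general approach of manufacturer performance handbooks, is to scale the nominal (ideal) production by job-condition correction factors; the sketch below illustrates this idea with hypothetical factor values that are not taken from this study.

def actual_hourly_production(nominal_production, correction_factors):
    """Scale the manufacturer's ideal (nominal) rate by job-condition factors.
    Factor values below are hypothetical placeholders, not results of this case study."""
    rate = nominal_production
    for factor in correction_factors.values():
        rate *= factor
    return rate

factors = {
    "operator_skill": 0.85,   # e.g., average operator
    "material_type":  0.90,   # e.g., hard-to-push material
    "grade":          1.05,   # e.g., slight favorable downhill push
    "job_efficiency": 0.83,   # e.g., roughly a 50-minute working hour
}
print(round(actual_hourly_production(500.0, factors), 1))   # nominal 500 m3/h -> ~333 m3/h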
The accurate estimation of hourly production has intrigued researchers over the past decades. In 1994, Edmonds et al. took actual production into account and viewed the actual production as a percentage of full capacity. Using several methods, such as short-range analysis, analysis of running time, and analysis of running speed, they concluded that the actual production of machinery is approximately 52.5 percent of the nominal production. Zou (2006) employed a color-space digital image processing method to study the effect of site conditions on machinery production. He tried to achieve more realistic results of machinery production, but ultimately no data on actual hourly production were provided. Recently, Nabizadeh Rafsanjani et al. (2009; 2011; 2012) studied the hourly production of different machinery models and provided the actual production of some machines. However, they did not consider all models of machinery in their study. Apparently, over the past 20 years, the construction literature has provided little information to advance the theoretical and practical basis for actual machinery production estimation. The current maturity level of the literature therefore shows that estimating actual production is still a great challenge.
The objective of this paper is to assess the actual hourly production of
crawler-type dozer in order to find the discrepancies between the nominal and actual
production rates of crawler-type dozer and different main parameters affecting this
production. The next sections first provide the methodology of research and then
provide and discuss the results.

RESEARCH APPROACH

To estimate the actual machine production and the discrepancy between nominal and actual production, the author conducted a field survey of crawler-type dozers on real projects.

The projects. The projects for this case study are five borrow pits in the Middle East. The pits were dug to supply material for different construction projects, e.g., highways and dams, and are located in areas with nearly similar climate conditions and annual rainfall.
Machine selection. The choice of machines in this study was limited to two manufacturers' models: Caterpillar and Komatsu. Caterpillar is believed to control more than half of the U.S. construction equipment market and one third of the world market (Arditi et al. 1997). Caterpillar and Komatsu were the only brands of crawler-type dozer used in the borrow pits. The following models were used in the projects:
- Caterpillar: D6N, D6T, D7R, D9T, D10T
- Komatsu: D155A-2, D155A-6, D275A, D375A
Table 1 shows the number and age of each model. There were 39 machines in total, with an average age of 7.3 working years. All machines worked with semi-universal (SU) blades.
Production data collection. The nominal hourly production data were derived from the machinery charts and performance handbooks of the two manufacturers: the Caterpillar Performance Handbook and the Komatsu Specifications & Application Handbook.


Table 1. Number and Age for Each Model

Caterpillar
Model      Number   Ages (years)            Average age
D6N (H)    6        8, 10, 10, 11, 13, 14   11.0
D6T (G)    5        8, 9, 11, 11, 14        10.6
D7R (F)    6        6, 10, 10, 10, 12, 13   10.2
D9T (C)    2        4, 11                   7.5
D10T (B)   2        7, 9                    8.0
TOTAL      21                               10.1

Komatsu
Model      Number   Ages (years)            Average age
D155A-2    6        2, 4, 4, 5, 5, 6        4.3
D155A-6    5        2, 2, 3, 7, 7           4.2
D275A      3        2, 3, 5                 3.3
D375A      4        3, 3, 4, 6              4.0
TOTAL      18                               4.1

The actual production data were derived from records of different machines in the five borrow pits, and each model was considered individually at the project sites. In the data-gathering phase, the hauling distance (average dozing distance) of each machine was treated as a parameter in order to obtain consistent results; the author recorded the hourly production of all machines at specified hauling distances. Based on the project conditions, 15, 30, and 75 meters were chosen as the three main hauling distances for all models.
Figure 1 shows the distribution of operator working experience. The majority of operators had more than five years of working experience, and it is noteworthy that the operators were assigned to machines randomly.
Working condition, ground slope, and material type have great effects on dozer production (Peurifoy et al. 2011). In this study, several surveys of different site conditions were carried out to assess each of these three independent parameters individually; accordingly, various conditions had to be found for the machines. Based on the project conditions, the experience of the project machinery managers, and the machinery production literature (research works, machinery charts, and handbooks), the author categorized each of the three independent parameters as follows:
- Working condition: 1) Good; 2) Medium; 3) Weak
- Ground slope: 1) Zero; 2) +15; 3) −15
- Type of material: 1) Loose soil; 2) Soil containing rubble stones; 3) Blasted rocks


Figure 1. Working experience (years) of machine operators

In the good working condition, it was assumed that an experienced operator operates a machine of low working age under ideal site conditions. In this condition, the working-time efficiency of the machine is approximately 50 min per hour; 40 min per hour and 30 min per hour correspond to the medium and weak working conditions, respectively (equivalent to efficiency factors of about 50/60 ≈ 0.83, 40/60 ≈ 0.67, and 30/60 = 0.50). Depending on the operating conditions and terms, a site can be assigned to one of the three working conditions; the experience of the machinery manager in construction projects plays the main role in this decision.
The data gathering in the five projects lasted roughly 8 months. The machines ran one 8-hour shift per day, 20 days per month. The data for each model were gathered individually under different site conditions over the 8-month period in order to better understand the performance of each machine and therefore obtain more realistic results.
Based on the three levels considered for each of the three main parameters, there were 27 (= 3^3) different scenarios for each hauling distance of each model. For example, one scenario for a machine with a hauling distance of 15 m is working with blasted rocks, in good working condition, on ground with a +15 slope; 26 other scenarios can be defined for this machine in the same way.
After data collection, the data had to be analyzed statistically to find the best representative actual production for each scenario. To this end, data were gathered for each scenario on 40 different days across all projects, and a one-way ANOVA was employed to analyze them.
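As an illustration of this analysis step, the following minimal Python sketch runs a one-way ANOVA across groups of daily production records for a single scenario. The grouping structure and the numbers are made up for illustration and are not the study's data.

    # Minimal sketch (illustrative data only): one-way ANOVA on daily hourly
    # production records of one scenario, grouped by borrow pit.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical hourly production samples (m3/h) for one model and one scenario,
    # eight observation days in each of the five borrow pits.
    pits = [rng.normal(loc=mu, scale=15.0, size=8) for mu in (210, 205, 215, 208, 212)]

    f_stat, p_value = stats.f_oneway(*pits)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

    # If the group means are statistically indistinguishable, the pooled mean can
    # serve as the representative actual production for the scenario.
    print(f"Pooled mean: {np.concatenate(pits).mean():.1f} m3/h")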

RESULTS AND DISCUSSION

By applying this analysis to each scenario of each machine, the actual production in the different scenarios was obtained. The author has tried to present the results in a practically usable form. To achieve this, one scenario was first chosen as the base scenario; then, by assessing the relationships between the base scenario and the other scenarios, correction factors were defined that convert the results of the base scenario into those of the other scenarios. Manufacturers generally use the same approach to present the hourly production of their machinery and equipment, so the results are presented here in the same format.
The scenario in which machines work with loose soil, in good working condition, and on ground with zero slope was chosen as the base scenario. Figures 2 and 3 present the results of this scenario. Tables 2 and 3 and Figure 4 present the correction factors that convert these results to the other scenarios; the correction factors are outputs of the one-way ANOVA. Depending on the project site conditions, the obtained correction factors must be applied to the actual hourly production presented in Figures 2 and 3 to find the desired hourly production.
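To make the intended use of these results concrete, the short Python sketch below scales a base-scenario production value by scenario correction factors. The assumption that the factors combine multiplicatively, the hypothetical base rate and slope factor, and the example factor values read from Tables 2 and 3 are illustrative assumptions, not values or rules prescribed by the paper.

    # Illustrative sketch only: scaling the base-scenario hourly production to
    # another scenario. Multiplicative combination is an assumption here; 250 m3/h
    # and the slope factor 0.80 are hypothetical, while 0.67 and 0.62 are example
    # values taken from Tables 2 and 3 for illustration.
    def corrected_production(base_rate, working, material, slope):
        """Scale the base scenario (loose soil, good condition, zero slope)."""
        return base_rate * working * material * slope

    rate = corrected_production(base_rate=250.0,  # hypothetical base production, m3/h
                                working=0.67,     # medium working condition (Table 2)
                                material=0.62,    # soil with rubble stones (Table 3)
                                slope=0.80)       # hypothetical positive-grade factor
    print(f"Estimated hourly production: {rate:.0f} m3/h")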

Figure 2. Comparison between nominal and actual data for Caterpillar

Figure 3. Comparison between nominal and actual data for Komatsu


Table 2. Correction Factors for Working Conditions

Working Condition*   Obtained Results               Manufacturers' Data
                     Caterpillar    Komatsu         Caterpillar    Komatsu
Good                 0.83 ~ 0.61    0.83 ~ 0.77     0.83 ~ 0.49    0.83
Medium               0.67 ~ 0.52    0.67 ~ 0.59     0.67 ~ 0.40    0.67
Weak                 0.50 ~ 0.36    0.50 ~ 0.44     0.50 ~ 0.30    0.50
* The working condition defined here differs somewhat from the manufacturers' definition.

Table 3. Correction Factors for Materials

Type of Material         Obtained Results               Manufacturers' Data
                         Caterpillar    Komatsu         Caterpillar    Komatsu
Loose soil               1.00           1.00            1.00           1.00 ~ 0.81
Soil + rubble stones     0.62           0.69            0.67           0.81 ~ 0.67
Blasted rocks            0.52           0.46            0.67 ~ 0.50    0.50 ~ 0.36

Figure 4. Correction factors for ground slope

A comparison between the obtained results and the manufacturers' data (see Figures 2 and 3) shows a considerable shortfall between the nominal and actual production even in the best working condition. In addition, Tables 2 and 3 and Figure 4 show the differences between the obtained correction factors and the factors provided by the two manufacturers; the obtained factors lie within the range of the manufacturers' factors.
A preliminary, visible outcome of these results is therefore that, across different project sites, the hourly production can decrease significantly while the correction factors remain almost unchanged. In addition, for positive grades there is a considerable shortfall between the obtained correction factors and the manufacturers' factors. The main reason for this shortfall is that a machine that has worked for several years cannot work on a positive grade as well as a new machine can. On a negative grade, however, the machine weight is the main parameter enabling the machine to work well, so the author's and the manufacturers' factors are roughly the same.
Furthermore, to determine the importance of each parameter for the hourly production of the machines, a factor analysis was performed on the data. The results show that working condition (47%) is the most influential parameter, followed by type of material (31%) and ground slope (22%).
The working condition is defined in this study as a parameter capturing the effects of machine age, operator skill, and weather conditions. The manufacturers' charts are based on ideal conditions: a new machine, a fully skilled operator, and the best weather. In real construction projects these ideal conditions are rarely achieved. In the presented case study there were no new machines (see Table 1) and the machine operators were not always the most skilled (see Figure 1). These two sub-parameters of the working condition lead to a considerable shortfall between nominal and actual hourly production.
It must be noted that the presented results are based on a single case study. The main limitation is that the results cannot be generalized to other types of projects where such machines work. Another limitation is the uncertainty in the collected data; as in most case studies, some bias in data collection is likely. Furthermore, assuming similar climate conditions and annual rainfall for the different borrow pits also limits the results. Finally, the machine operators were assigned to machines randomly; if a fixed operator-machine assignment had been planned, the effect of individual operators on machine production could also have been assessed.

CONCLUSION

This paper provides a case study assessing the discrepancies between actual and nominal production for crawler-type dozers. Twenty-one Caterpillar machines with an average working age of 10.1 years and eighteen Komatsu machines with an average working age of 4.1 years were considered individually in five different borrow pits. For each machine, three hauling distances were considered, and for each hauling distance the actual hourly production was obtained. The results show a considerable shortfall in machine hourly production relative to ideal site conditions. This wide range of variation must be considered by project managers when planning machinery. The importance of the different parameters affecting machine production was also examined; the most important factor is the working condition of the project site. The author believes that the results of this study can provide a useful perspective for machinery managers in construction projects.

ACKNOWLEDGMENTS

The author wishes to acknowledge the valuable advice of Dr. Yaghoob Gholipour, Professor of Civil Engineering. Any opinions, findings, and conclusions expressed in this paper are those of the author and do not necessarily reflect the views of Caterpillar or Komatsu.

REFERENCES

Arditi, D., Kale, S., and Tangkar, M. (1997). "Innovation in construction equipment and its flow into the construction industry." J. of Construction Engrg. and Management, ASCE, 123, 371-378.
Caterpillar Inc. (2012). Caterpillar performance handbook, 42nd Ed., Peoria, Illinois.
Caterpillar Inc. (2010). Caterpillar performance handbook, 40th Ed., Peoria, Illinois.
Edmonds, C. D., Tsay, B., and Lin, W. (1994). "Analyzing machine efficiency." The National Public Accountant, 39(12), 28-44.
Gransberg, D., Popescu, M., and Ryan, C. (2006). Construction equipment management for engineers, estimators and owners, Taylor & Francis, Florida.
Komatsu Inc. (2009). Komatsu specifications & application handbook, 30th Ed., Tokyo.
Nabizadeh Rafsanjani, H., Gholipour, Y., and Hosseini Ranjbar, H. (2009). "An assessment of nominal and actual hourly production of the construction equipment based on several earth-fill dam projects in Iran." J. of Open Civil Engrg., 3, 74-82.
Nabizadeh Rafsanjani, H. (2011). "An assessment of nominal and actual hourly production of crawler-type front shovel in construction project." J. of Civil and Environmental Engrg., 11(6), 59-63.
Nabizadeh Rafsanjani, H., Shahrokhabadi, Sh., and Hadjahmadi, A. (2012). "The use of linear regression to estimate the actual hourly production of a wheel-type loader in construction projects." ICSDEC 2012: Developing the Frontier of Sustainable Design, Engineering, and Construction, ASCE, 727-731.
Peurifoy, R. L., Schexnayder, C. J., Shapira, A., and Schmitt, R. (2011). Construction planning, equipment, and methods, 8th Ed., McGraw-Hill, Boston.
Zou, J. (2006). HSV color-space digital image processing for the analysis of construction equipment utilization and for the maintenance of digital cities image inventory. MSc thesis, Dept. of Civil Engineering, University of Alberta, Edmonton, Alberta.


Automatic Satellite-Based Vessel Detection Method for Offshore Pipeline Safety

M. Poshtiban1; L. Song2; and N. Eldin3

1Department of Construction Management, University of Houston, 111 T1, Houston, TX 77204-4020. E-mail: [email protected]
2Department of Construction Management, University of Houston, 111D T1, Houston, TX 77204-4021. E-mail: [email protected]
3Department of Construction Management, University of Houston, 110M T1, Houston, TX 77204-4021. E-mail: [email protected]

Abstract

Offshore pipelines are vital infrastructure systems for oil and gas
transportation. Statistics around the globe confirm that third-party threats, such as
vessel anchoring, fishing, and offshore construction, contribute the most to offshore
pipeline damages and are the number one cause of death, injury, and pollution. This
research studies satellite imagery and its application in automating vessel detection
for the purpose of offshore pipeline surveillance. Current methods of relying on high-
resolution satellite images lead to a high implementation cost and less efficient image
processing. This paper proposes a method of utilizing lower resolution satellite
images for vessel detection in offshore pipeline safety zones. It applies a combination
of cascade classifier and color segmentation method as well as a unique “color-coding”
scheme to achieve an accurate and efficient satellite image processing procedure. The
proposed method was tested on 150 Google Earth satellite images with an average
detection rate of 94% for large and medium vessels and an average false alarm rate of
19%.

BACKGROUND

Offshore pipelines are vital infrastructure systems that play an important role
in transporting gases and liquids over long distances across the ocean. Offshore
pipelines must be constantly and reliably operated and monitored to ensure maximum
operating efficiency and safety. Offshore pipelines generally transport perilous
pressurized products and operate in hostile ocean environments, including current
dynamics, geo-hazards, as well as third-party threats. Leaks and bursts in such
pipeline networks cause significant economic losses, service interruption, and can
also lead to enormous negative impact on the public and environment.


There are several causes of offshore pipeline damage, including construction damage, operation flaws, design inaccuracy, material weaknesses, pipe corrosion, ground movements, and third-party damage. In particular, third-party damage refers to accidental damage caused by activities not associated with pipeline operations, including vessel anchoring, collision, fishing/trawling, dredging, offshore construction, and dropped objects. According to a UK study (MacDonald 2003), 51% of offshore pipeline accidents are caused by third-party damage. Moreover, third-party maritime activities are also recognized as the second major cause of offshore pipeline failure (Woodson 1990). Past studies also confirmed that third-party damage is the number one cause of death, injury, damage, and pollution (NRC 1994).

PROBLEM STATEMENT

Previous research on oil and gas pipeline monitoring has focused on pre-failure and leak detection techniques using sensing technologies, such as fiber optic, acoustic, ultrasonic, and magnetic sensors, or in-line inspection methods (e.g., smart pigging). However, these approaches are reactive in nature and only confirm damage after the fact. Traditional pipeline patrolling (e.g., spot checks by vessel or aircraft) is costly and tedious because of the spatial distribution of pipeline networks. GateHouse (2010) developed a third-party vessel tracking technique based on the Automatic Identification System (AIS). AIS is an automatic tracking system used on vessels for identifying and locating vessels; vessels equipped with AIS transponders communicate data electronically with other nearby vessels and AIS base stations. Its primary purpose is to avoid collisions in poor visibility, but it can also be used to proactively monitor vessel activities in pipeline safety zones. When a vessel is approaching or entering a pipeline safety zone, the operator can be notified and warned if the monitoring system detects a violating behavior. The detection algorithm is based on heuristic rules obtained from experts. However, two limitations were noted: (1) data from vessels far from shore are not available because of the limited coverage of AIS stations; and (2) vessels may not be equipped with tracking devices, so tracking data may be unavailable.
Satellite sensing can complement AIS because of its global coverage and its capability to identify vessels not equipped with AIS transponders. Over one
thousand satellites fly over our planet every day. They provide global coverage of
earth surface activities, including weather conditions, land movements, and traffic
(onshore and offshore). Remote sensors attached to satellites collect data by detecting
the energy that is reflected from earth including radio, microwave, infrared, visible
optical light and multispectral signals. Surface objects (e.g., vessels and fishing boats)
can be detected and classified by analyzing these sensory data. Furthermore, image
processing and computer vision techniques can be applied to automate the
identification of third-party entities. A sampling of their presence, frequency, and
traffic density can supplement AIS data for pipeline risk management.
The long-term goal of this research is to integrate AIS and satellite imaging for pipeline safety zone traffic monitoring. The objective of this study is to develop a cost-effective method to automatically identify vessels from optical satellite images for the purpose of characterizing marine traffic in pipeline safety zones. Current methods that rely on high-resolution satellite images lead to a high implementation cost and less efficient image processing. This paper proposes a method of utilizing lower resolution satellite images for vessel detection. Along with AIS, this method will provide third-party activity statistics to support more accurate routing of new pipelines and prioritization of maintenance effort. The section below describes related work; it is followed by the proposed methodology and its implementation and testing.

LITERATURE REVIEW

A considerable number of algorithms are available for image processing in general, and object recognition in particular. The saliency method has been one of the popular methods used in various applications. Saliency is defined as features that stand out relative to their neighbors in an image, or mathematically as the sum of the absolute values of the local wavelet decomposition of an image. Previous approaches to object detection based on saliency focused on low-level local contrasts such as edges, boundaries, colors, and gradients.
Li et al. (2013) argue that context is needed for saliency detection of vessel targets to be meaningful; they detected saliency not at the isolated pixel level but at the level of each pixel's surrounding block. Their detection algorithm is composed of four parts: (a) an image pyramid (e.g., color and intensity layers) to decrease the computation required for processing high-resolution images; (b) image partitioning for context consideration; (c) histogram analysis of each image block to find salient candidates and create the spatial distribution of the image; and (d) a simplified context-saliency detection algorithm. Their approach combines a random saliency map with the image pyramid, and a saliency extraction method is used to reduce the time required to process high-resolution images; both spatial distribution and local contrasts are considered. The algorithm achieved an average detection rate of 83.2% for large and medium vessels and a false alarm rate of 33.5%.
Zhu et al. (2010) proposed a vessel detection concept consisting of two simple steps: sea detection and vessel detection. They detected vessels in spaceborne optical images, in which several factors such as clouds, ocean waves, and small islands are often detected as false vessel candidates (small clouds are the most difficult because of their random variation). Images of a sea region exhibit only moderate variation in gray-level distribution, and since the edges of a vessel are observable, image segmentation with edge information can be used to extract possible regions of interest. However, this procedure may include many false alarms, such as ocean waves, clouds, and islands, whose edge characteristics are similar to those of a vessel. Therefore, once segmentation is accomplished, shape analysis must be applied to minimize false alarms. Every vessel has a characteristic area, length, and width, so very large or very small islands and clouds can be removed from the image; moreover, vessels are generally long and thin, so clouds and islands with a very small length-to-width ratio can be eliminated as well. A support vector machine is used as the main classifier, and all samples are classified as either vessel or non-vessel. However, their approach cannot detect vessels that are partly covered by clouds, vessels adjoining a large island, or vessels whose gray level is very close to that of the neighboring area. To overcome these shortcomings, a cascade classifier is employed in our research, and samples of vessels, including vessels partially covered by clouds, are included in the sampling procedure of this study.
Qi et al. (2009) studied an object-oriented image analysis method to detect and classify vehicles from satellite images. Image objects identified through segmentation are organized in a hierarchical image object network; a feature space is then created by extracting features of these objects and is later used for vehicle detection, classification, and traffic flow analysis. Ortho-rectified image data from the QuickBird satellite, with four spectral bands (red, green, blue, and near infrared) plus a panchromatic band, were employed in their study. The shadow regions of the vehicles (moving objects) account for about 10% of the classification errors. The shadow problem is addressed in our study by using training samples that include the shadows around vessels.

METHODOLOGY: AUTOMATIC VESSEL DETECTION IN SATELLITE IMAGES

The goal of the proposed method is to provide a cost-effective way to automatically detect vessels in optical satellite images for offshore pipeline safety. To achieve cost-effectiveness, instead of using expensive high-resolution satellite images, the method is designed to work with regular-resolution images, such as Google Earth images, whose resolution ranges from 60 cm to 15 m depending on location. Given a satellite image, the system evaluates its visual features and detects vessel objects in the vicinity of the pipeline safety zone. The method includes five elements, as shown in Figure 1.

[Figure 1 workflow: identify location of pipelines and acquire satellite image; generate training samples; cascade classifier training; vessel detection model; vessel object labeling.]
Figure 1. Automatic vessel detection.

The fundamental idea of the proposed approach is to train a classifier using a set of satellite images containing vessel objects. Once established, this classifier can be used to automatically detect whether vessels are present in a new image. To obtain a large set of satellite images as training samples, high-resolution satellite images could be acquired from commercial vendors, but the cost can be high (e.g., over $1,000 for an archived image of 100 square kilometers). As mentioned previously, this study acquires images from Google Earth to reduce cost and improve efficiency.
The classifier training requires both positive and negative samples. Positive samples are images that contain the objects of interest, i.e., vessels; vessels must be manually marked as ground truth in each positive sample so that the classifier can later be trained to correctly identify vessel objects in new images. Negative samples are images that do not contain objects of interest and help to minimize false alarms. They contain backgrounds and noise typically associated with the presence of vessels (e.g., ocean surface and waves), or non-vessel objects similar in appearance to vessels, such as small islands, clouds, and oil platforms. Figures 2a and 2b show a positive sample and a negative sample.

a) Positive sample b) Negative sample c) HOG feature descriptor


Figure 2. Training samples and feature representation.

To achieve machine learning, vessel objects must be represented numerically as a set of features or feature vectors, such as edges and width-to-length ratio. Typically used feature descriptors include Haar, Local Binary Patterns (LBP), and Histograms of Oriented Gradients (HOG), among others (MathWorks 2014). Haar and LBP features have been used primarily for detecting human faces and work well for representing fine-scale textures. HOG features have been used for detecting objects such as cars and traffic signs; they are reliable and efficient for capturing the overall shape of an object, such as a vessel. The basic idea of HOG is that an object's appearance and shape can often be characterized well by the distribution of local intensity gradients or edge directions (Dalal and Triggs 2005). A gradient is a directional change in the intensity or color in an image. In practice, HOG is implemented by dividing the image window into small spatial regions (cells) and, for each region, accumulating a local histogram of gradient directions over the pixels of the region, as illustrated in Figure 2c.
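For readers who want to experiment with this representation, the following sketch computes HOG features for a gray-scale image chip using scikit-image. The library choice (the authors used MATLAB) and the cell and block sizes shown are assumptions for illustration.

    # Sketch using scikit-image's HOG implementation (an assumption; the study used
    # MATLAB). Each cell accumulates a histogram of gradient directions, and blocks
    # of cells are normalized, giving one feature vector per image window.
    import numpy as np
    from skimage.feature import hog

    chip = np.random.rand(64, 128)            # stand-in for a gray-scale image window
    features = hog(chip,
                   orientations=9,            # direction bins per cell
                   pixels_per_cell=(8, 8),    # small spatial regions ("cells")
                   cells_per_block=(2, 2),    # local normalization blocks
                   feature_vector=True)
    print(features.shape)                     # flat vector fed to the classifier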
The above-mentioned positive images with numerically represented features form the training set used to establish the classifier, based on the concept of supervised learning. Statistically, the detection of vessel presence is a classification problem that assigns an image region to a category, i.e., either vessel or non-vessel. A classifier is a mathematical function that maps input data (e.g., image features) to categories. Each feature (x_i) contributes differently to each category membership (j), and this contribution may be represented by a weight factor (w_ij). The category membership score of a particular image region combines the feature vector and the corresponding weights, e.g., s_j = Σ_i w_ij · x_i in the case of a linear classifier. The goal of training a classifier is to minimize the classification error by fine-tuning the weight factors using the training set of already manually labeled images.
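A toy numeric illustration of this scoring rule is given below; the feature values and weights are made up and serve only to show how the weighted sum assigns a region to a category.

    # Toy illustration of the linear scoring rule s_j = sum_i w_ij * x_i
    # (feature values and weights are made up).
    import numpy as np

    x = np.array([0.8, 0.1, 0.5])          # feature vector of one image region
    W = np.array([[1.2, -0.4, 0.9],        # weights for the "vessel" category
                  [-0.6, 0.7, 0.1]])       # weights for the "non-vessel" category

    scores = W @ x
    print(scores, ["vessel", "non-vessel"][int(np.argmax(scores))])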
A cascade classifier is used in this study (MathWorks 2014). This classifier performs training and detection in stages. Each stage is trained using a technique called boosting, which provides the ability to train a highly accurate classifier by taking a weighted average of the decisions made by preceding stages. Each stage of the classifier analyzes a portion of the image defined by a sliding window and labels the region as either positive or negative; the size of the window is varied to detect objects at different scales. During training, if an object is detected in a negative sample, this is a false positive decision; the false positive is then used as a negative sample, and each new stage of the cascade is trained to correct the mistakes made by preceding stages. For detection purposes, a positive label indicates that an object was found; otherwise the label is negative. When the label is negative, the classification of the region is complete and the classifier slides the window to the next region. If the label is positive, the classifier passes the region to the next stage, and it confirms that a vessel is found when the final stage classifies the region as positive.
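The following schematic Python sketch mirrors this decision flow (it is not the toolbox implementation): a window is rejected as soon as any stage labels it negative, and only windows that pass every stage are reported as detections. The toy stage tests and window size are placeholders.

    # Schematic sketch of cascade evaluation over a sliding window (not the MATLAB
    # toolbox internals). A window is rejected at the first negative stage; only
    # windows passing every stage are reported.
    import numpy as np

    def cascade_detect(image, stages, window=(24, 48), step=8):
        h, w = image.shape
        wh, ww = window
        detections = []
        for top in range(0, h - wh + 1, step):
            for left in range(0, w - ww + 1, step):
                region = image[top:top + wh, left:left + ww]
                if all(stage(region) for stage in stages):   # passed every stage
                    detections.append((top, left, wh, ww))
        return detections

    # Placeholder stages: a cheap test first, a slightly more selective one after.
    stages = [lambda r: r.mean() > 0.3,              # brightness test
              lambda r: r.max() - r.min() > 0.5]     # contrast test
    print(len(cascade_detect(np.random.rand(120, 240), stages)))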
During the training phase, several parameters must be determined in order to
achieve acceptable classifier accuracy, such as the number of stages and false alarm
rate. The greater the number of stages, the greater the amount of training data the
classifier requires. The false alarm rate is the fraction of negative training samples
incorrectly classified as positive. The lower this rate is, the higher the complexity of
each stage. These parameters can be fine-tuned experimentally according to a desired
level of accuracy. Once the classifier is satisfactorily trained, it can be used to process
new satellite images to detect vessels.
Offshore pipelines usually spread over long distances, and the presence of vessels in the relatively narrow pipeline safety zone (usually about 200 meters on either side of a pipeline) is rare. Based on this observation, we propose a unique "color-coding" scheme that significantly reduces image processing time by focusing on the pipeline safety zone and ignoring the surrounding area and thus its noise (e.g., ocean surface and onshore objects). This is achieved by adding color layers to an image to segment it according to the presence of pipelines. For example, as shown in Figure 3, the green color-coded region contains the pipelines while the red zone does not. In particular, the dark green area (3c) represents the pipeline's danger zone, and the light green area (3b) is the vicinity of the danger zone. The red zone (3a) is ignored by the classifier, which focuses its effort on analyzing the green areas, resulting in much shorter processing time.
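A minimal sketch of this idea is shown below, assuming the pipeline location is known in image coordinates. The buffer width and the masking logic are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch (not the authors' code): build a boolean safety-zone mask
    # around known pipeline pixels and analyse only windows that touch the zone.
    import numpy as np

    def zone_mask(shape, pipeline_pixels, buffer_px):
        """True inside the safety zone: within buffer_px of any pipeline pixel."""
        mask = np.zeros(shape, dtype=bool)
        for r, c in pipeline_pixels:
            r0, r1 = max(0, r - buffer_px), min(shape[0], r + buffer_px + 1)
            c0, c1 = max(0, c - buffer_px), min(shape[1], c + buffer_px + 1)
            mask[r0:r1, c0:c1] = True
        return mask

    def window_in_zone(mask, top, left, h, w):
        return mask[top:top + h, left:left + w].any()   # skip "red" regions entirely

    m = zone_mask((500, 500), [(250, c) for c in range(500)], buffer_px=40)
    print(window_in_zone(m, 240, 100, 24, 48), window_in_zone(m, 10, 10, 24, 48))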

a) No-pipeline zone b) Vicinity of a danger zone c) Pipeline danger zone


Figure 3. Color-coding scheme to reduce image processing time.

IMPLEMENTATION AND PERFORMANCE

The proposed method was coded in a MATLAB® program that references two applications: the Training Image Labeler (TIL) and the Computer Vision System Toolbox (MathWorks 2014). For testing purposes, a total of 755 passive satellite image samples containing various types of vessels were collected, and 35 negative image samples were also included in the training dataset. These images were stored in Portable Network Graphics (PNG) format, a raster-based graphic format that supports lossless data compression. The color images were then converted into gray-scale images to increase contrast and allow more efficient image processing.
For labeling ground-truth data in the positive training samples, the MATLAB application TIL was used; it allows a user to interactively specify rectangular Regions of Interest (ROIs) around vessels. Each ROI defines the location of a vessel and is later used as a positive sample to train the classifier. The training of the custom vessel cascade classifier was implemented using the trainCascadeObjectDetector function in the Computer Vision System Toolbox (MathWorks 2014). The training parameters were determined by trial and error: the feature descriptor selected was HOG, and the false alarm rate and the number of cascade stages were set to 17.5% and 8, respectively.
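The detection side of such a pipeline can also be reproduced with open tools. The sketch below uses OpenCV's cascade-classifier API as a rough analogue of the MATLAB workflow described above; the cascade file name and image paths are hypothetical, and the cascade itself would have to be trained separately (e.g., with OpenCV's own training tools), so this is an illustration rather than the authors' implementation.

    # Rough analogue using OpenCV, not the authors' MATLAB toolchain. The file
    # "vessel_cascade.xml" is a hypothetical, separately trained cascade model.
    import cv2

    detector = cv2.CascadeClassifier("vessel_cascade.xml")
    image = cv2.imread("scene.png")                    # hypothetical satellite image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)     # gray-scale, as in the study

    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    for (x, y, w, h) in boxes:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 255), 2)  # yellow box
    cv2.imwrite("labeled.png", image)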
The color-coding scheme was applied to keep the classifier working only within the regions of interest, i.e., the pipeline safety zone. Once the location of an offshore pipeline was determined, three sub-zones were defined: the Danger Zone (DZ) in light green, the Vicinity of the Danger Zone (VDZ) in dark green, and the no-pipeline zone in red. Our program detects the color segmentation, ignores the red zone, and focuses on the green zones to detect vessels. When a vessel is found, the program labels the detected object with a yellow rectangle, as shown in Figure 4a.
The performance of the classifier can be measured by the rates of true
detection (i.e. true positive) and false alarm (i.e. false positive). A true positive occurs
when a positive sample is correctly classified. A false positive occurs when a negative
sample is mistakenly classified as positive. The proposed algorithm was tested on 150
Google Earth satellite images with an average detection rate of 94% and a false alarm
rate of 19.75%. Compared with similar past studies, Li et al. (2013) used commercial
high-resolution satellite images and achieved an average detection rate of 83.2% and
a false alarm rate of 33.5%, as shown in Figure 4b.

CONCLUSION

Current methods of relying on high-resolution satellite images lead to a high


implementation cost and less efficient image processing. This study proposes a
method of utilizing regular resolution satellite images for vessel detection in offshore
pipeline safety zones. It applies a combination of cascade classifier and a unique color
coding scheme to achieve an accurate and efficient satellite image processing
procedure. This method was tested using publicly available free Google Earth
satellite images, and it is cost effective while still achieving a similar detection rate as
previous studies did using high-resolution images.


a) Vessel detection and labeling b) Detection rate


Figure 4. Vessel detection and performance.

Despite its advantages, satellite imaging has several limitations. First, it has
low temporal resolution as indicated by the revisit time, which is a measure of the
time interval between two satellite visits to the same location on earth, ranging from a
few days to weeks. For meaningful pipeline surveillance, satellite- and AIS-based
sensing must be integrated. Second, optical image sensors cannot capture data in all weather conditions or at night. Synthetic Aperture Radar (SAR) imagery, which is based on radio waves, should be investigated in future research.

REFERENCES

Dalal, N. and Triggs, B. (2005). Histograms of Oriented Gradients for Human


Detection, Proc. CVPR 2005, San Diego, CA.
GateHouse (2010). “Offshore pipeline surveillance solution.” White paper.
GateHouse, Sundby, Denmark.
Li, Z., Xie, X., Zhao, W., and Liu Y. (2013), “A method of vessel detection in optical
satellite based on saliency map”, Proc. ICTIS 2013, ASCE, Wuhan, China.
MathWorks (2014). MATLAB® R2014a Help for Cascade Classifier, MathWorks,
Natick, MA.
MacDonald, M. (2003). "The update of the loss of containment data for offshore pipelines." Publication PARLOC 2001, Mott MacDonald Ltd., Offshore Operators Association and the Institute of Petroleum, U.K.
National Research Council (NRC) (1994). Improving the safety of marine pipelines, National Academy Press, Washington, D.C.
Qi, S., Yanping, L., Qulin, T., and Sulan, Z. (2009). "Research on vehicle information extraction from high-resolution satellite images", Proc. ICCTP 2009, ASCE, Harbin, China.
Woodson, R. D. (1990). Offshore pipeline failures. <http://www.dtic.mil/dtic/tr/fulltext/u2/a259708.pdf> (Sep. 30, 2013).
Zhu, C., Zhou, H., Wang, R., and Guo, J. (2010). "A novel hierarchical method of vessel detection from spaceborne optical image based on shape and texture features", IEEE Transactions on Geoscience and Remote Sensing, 48(9).


Skeleton-Based Registration of 3D Laser Scans for Automated Quality Assurance of Industrial Facilities

M. Nahangi1; L. Chaudhary2; J. Yeung3; C. T. Haas4; and S. Walbridge5

1Graduate Research Assistant, Department of Civil and Environmental Engineering, University of Waterloo, Canada. E-mail: [email protected]
2Undergraduate Research Assistant, Department of Civil and Environmental Engineering, University of Waterloo, Canada. E-mail: [email protected]
3Graduate Research Assistant, Department of Civil and Environmental Engineering, University of Waterloo, Canada. E-mail: [email protected]
4Professor, Department of Civil and Environmental Engineering, University of Waterloo, Canada. E-mail: [email protected]
5Associate Professor, Department of Civil and Environmental Engineering, University of Waterloo, Canada. E-mail: [email protected]

Abstract
Registration of 3D point clouds is one possible way to compare the as-built and as-designed status of construction components. Building information models (BIM) contain detailed information about the as-designed state, particularly 3D drawings of construction components. On the other hand, automated and accurate data acquisition methods such as laser scanning provide reliable and robust information about the as-built status of construction components. Registration therefore makes it possible to automatically compare the designed and built states in order to plan ahead appropriately and generate the required corrective actions. This paper presents a new approach for reliably performing the registration with the required level of accuracy and automation within a substantially improved timeframe. Rather than applying computationally intensive registration methods that may not work robustly for dense point clouds, the proposed framework employs the geometric skeleton of the construction components, which is far less dense and therefore computationally much cheaper to process. The method is experimentally tested on components extruded along an axis, such as industrial assemblies (i.e., pipe spools and structural frames), for which a geometric skeleton represents the components abstractly. The registration of 3D point clouds is performed in a computationally less intensive manner, and the framework developed has the potential to be employed for (near) real-time assembly control, quality control, and status assessment processes.
Keywords: Laser scans; 3D imaging; Skeletonization; Point cloud registration; Quality control
INTRODUCTION
Problem Statement
Assembly of construction components involves complicated geometries in various phases such as fabrication, installation, and inspection. Because of the unavoidable errors that predominantly occur during the fabrication phase and the continual changes that occur during construction, engineers and construction managers need a tool to keep track of the built status of construction components. Such a tool must provide a sufficient level of accuracy to be integrated reliably and in a timely manner with the construction processes involved; discrepancies can then be detected promptly and the required corrective actions planned and generated accordingly.
Although they can be accurate, conventional surveying approaches such as tape measurement are ineffective and inefficient for as-built dimension measurement, because of the complicated geometry of the components involved, as discussed earlier, and the inefficient way data are collected. The industrial sector is a major portion of the construction industry according to statistics provided by the US Census Bureau (2014): over $83 billion was spent on industrial power generation in 2013, approximately 10% of total construction output. Sophisticated pipe spools and steel structural frames are the dominant assemblies used in industrial construction. Promising approaches for tracking and reconstructing construction components in general, and in the industrial sector in particular, provide as-built dimensions accurately and efficiently so that they can be used to track continuous change in projects. Accurate and robust data acquisition methods such as laser scans, digital photographs/videos, and range images provide the required level of accuracy for as-built dimensional analysis. However, although an accurate as-built status can be acquired with one of these technologies, the extraction of meaningful information (i.e., as-built dimensions of the components) from the acquired data is still performed manually. This manual processing to generate the as-built BIM is needed for further manufacturing design and engineering considerations such as inspection, maintenance, and planning of corrective actions where discrepancies are detected. Performing such analyses manually for a real, complicated industrial facility, such as a power plant, is time consuming and costly, and therefore causes delays in construction projects.
Point cloud registration is an automated way of comparing the built status with the designed drawings. However, because of the inherent challenges of the geometry and the complications of the erection process, point cloud registration requires precise preprocessing. Moreover, the output of point cloud registration is a rigid transformation that does not necessarily lead to the correct match between the as-designed and as-built states, and it becomes computationally intensive for dense point clouds, which limits its use in (near) real-time modeling systems. This paper presents an automated framework to address the industrial need for improved automated inspection processes and to overcome the challenges of existing registration techniques. A skeleton-based registration that abstracts the assemblies to a wireframe (skeleton) is proposed; representing the assemblies by their geometric skeletons is computationally more efficient and reasonably accurate.
Background
As-built modeling in the construction industry is performed to extract as-built dimensions from 3D images acquired by an appropriate method. It has been employed for various applications such as progress tracking (Golparvar-Fard et al. 2009; Turkan et al. 2012), status assessment (Zhu and Brilakis 2010), and quality control (Nahangi et al. 2015). Automated retrieval of a full construction site is neither possible nor computationally time effective. However, detecting, localizing, and characterizing critical dimensions ("surveying goals") by automated modeling of the corresponding shapes and features has recently been attempted and found to be effective. For example, surveying goals can be automatically extracted from laser scans of concrete bridges in order to assess the built status of construction components (Tang and Akinci 2012a; Tang and Akinci 2012b). On the other hand, building information can be employed as a priori knowledge to expedite the comparison step involved. Point cloud registration is an efficient and automated technique for comparing the as-designed (BIM) with the as-built (scan) status. 3D point cloud registration is performed based on common features that exist in different point clouds; such features include geometric features (e.g., edges and corners) and invariant mathematical features (e.g., SIFT, SURF). Finding the common features between the 3D images is the key to point cloud registration. Point cloud registration has been employed in construction for various purposes such as automated inspection and structural health monitoring. For example, Bosché (2012) used a plane-based registration that automatically extracts planes and roughly matches the point clouds representing the two states for the purpose of automated progress tracking, and employed the iterative closest point (ICP) method for the fine registration step.
PROPOSED METHODOLOGY
The proposed methodology for efficient registration of point clouds is illustrated in Figure 1. As shown in Figure 1, the proposed method consists of three primary steps: (1) preprocessing, which puts the inputs into an appropriate format and resolution; (2) skeletonization, which extracts the geometric skeleton of an assembly; and (3) registration, which employs ICP to calculate the rigid transformation matching the skeletons resulting from the previous step. These steps are explained in the following sections.

[Figure 1 workflow: the BIM and the as-built scan are each skeletonized; the registration engine finds the transformation T from the skeletons; T is then applied to the entire point cloud to produce the registered point clouds.]
Figure 1: Proposed methodology for skeleton-based registration


Preprocessing
The inputs to the registration system proposed here are a 3D image acquired by an appropriate method (laser scans are employed here) and the 3D CAD drawings of the assemblies that exist in the building information model (BIM). In general, the preprocessing step prepares the inputs in the condition required by the system. For example, the laser-scanned point cloud should be filtered to remove noise; noise removal is performed manually after capturing the point cloud, although automated approaches to noise removal have recently been employed (Kim et al. 2013). The 3D drawings, on the other hand, should be converted to a point cloud format, for example via STL (stereolithography), in order to perform ICP. The output of the preprocessing step is the point clouds required for performing ICP, as mentioned earlier.


Skeletonization
Several research studies have recently attempted to extract the skeleton of a geometric shape
(Au et al. 2008; Cornea et al. 2007; Junjie Cao et al. 2010; Tagliasacchi et al. 2009; Zhou et al.
1998). Skeleton extraction is specifically advantageous for pipe spools and structural assemblies
whereby the as-built status is represented by an abstract wireframe (skeleton). Once the point
clouds representing the designed and built states are generated and required preprocessing is
performed, the skeleton of each state is to be extracted in this step. The skeleton extraction
method employed in this study has two primary steps:
(1) Skeletal candidate estimation: based on a Voronoi diagram (Aurenhammer 1991; Okabe et al. 2009), the point cloud's skeletal candidates are initially estimated. A Voronoi diagram divides a subspace into a number of sub-regions equal to the number of points, such that any location within a sub-region is closer to its corresponding point than to any other point. Once the Voronoi diagram is defined, its dual graph, the Delaunay triangulation, can be calculated accordingly. Combining the Voronoi and Delaunay regions identifies a disk around each point, known as its umbrella. Skeletal candidates are then selected based on angle and ratio conditions on the resulting umbrellas, as defined in (Dey and Zhao 2004).
(2) Skeleton generation: based on a pruning algorithm that employs Laplacian smoothing (Junjie Cao et al. 2010) of the skeleton candidates, the skeleton of the point cloud is generated. Laplacian contraction (smoothing) is an iterative procedure that updates the skeletal candidates. Given a skeletal dataset P^t of candidate points at iteration t, with point-set Laplacian L, Laplacian smoothing updates the dataset by solving a linear matrix equation of the form

    [ W_L L ; W_H ] P^(t+1) = [ 0 ; W_H P^t ]

in which W_H and W_L are the attraction and contraction weights, respectively, and the superscript denotes the iteration. In summary, the final output of this iterative procedure is the skeleton of the point cloud. The convergence rate of the method depends on the attraction and contraction parameters, and a threshold value is required to stop the iterative procedure at the accuracy required. Lee et al. (2013) used skeletons of spools for accurate and time-effective 3D reconstruction of pipe assemblies. For more details on the skeleton extraction technique used here, the reader is referred to (Junjie Cao et al. 2010; Lee et al. 2013).
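As a concrete illustration of one contraction iteration, the short Python sketch below solves the stacked least-squares system described above for a toy point set. The chain-graph Laplacian, the weight values, and the use of NumPy are illustrative assumptions, not the implementation of Cao et al.

    # Minimal sketch of one Laplacian-contraction iteration: solve
    # [ W_L * L ; W_H * I ] P_new = [ 0 ; W_H * P ] in the least-squares sense.
    # The toy chain graph and weight values are illustrative assumptions.
    import numpy as np

    def contract_once(P, L, w_L, w_H):
        n = P.shape[0]
        A = np.vstack([w_L * L, w_H * np.eye(n)])      # contraction rows + attraction rows
        b = np.vstack([np.zeros_like(P), w_H * P])     # drive Laplacian to zero / stay near P
        P_new, *_ = np.linalg.lstsq(A, b, rcond=None)
        return P_new

    # Toy data: noisy points along a line, with a simple chain-graph Laplacian.
    n = 20
    P = np.column_stack([np.linspace(0.0, 1.0, n),
                         np.random.normal(0.0, 0.02, n),
                         np.zeros(n)])
    adjacency = np.eye(n, k=1) + np.eye(n, k=-1)
    L = np.diag(adjacency.sum(axis=1)) - adjacency
    P = contract_once(P, L, w_L=5.0, w_H=1.0)          # candidates move toward the axis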
Registration

Given a laser-scanned point cloud that represents the as-built status and the originally designed drawing converted to a point cloud that represents the as-designed status, the skeletonization step produces the two states abstracted in the form of skeletons (wireframes): one for the as-built status and one for the as-designed status. To find the rigid transformation required to best match the two skeletons, an ICP-based registration is employed. The ICP algorithm is summarized in Figure 2.


[Figure 2 flow: from the skeletonized as-built and as-designed states, find correspondences (closest points); find the best transformation T that minimizes the error between correspondences; apply T and calculate the error; if the error is not acceptable, repeat; otherwise output the registered skeletons and original point clouds.]

Figure 2: Proposed ICP-based registration engine and the flow between various
components

As shown in Figure 2, the registration is calculated by finding the best transformation that minimizes the error between the correspondences in the skeleton datasets. The calculated rigid transformation (T) is then applied to the original point clouds. In other words, the required mathematical manipulations are performed on the skeletons rather than on the original point clouds, which are computationally intensive to process. Details of the effective parameters and the subsystems involved in the registration step can be found in (Kim et al. 2013; Nahangi and Haas 2014; Rusinkiewicz and Levoy 2001).
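A compact Python sketch of the ICP loop in Figure 2 is given below: nearest-neighbour correspondences, a best-fit rigid transformation from an SVD (the Kabsch solution), and iteration until the error change is small. It is an illustrative implementation under these assumptions, not the authors' code.

    # Compact ICP sketch following Figure 2 (illustrative, not the authors' code).
    import numpy as np
    from scipy.spatial import cKDTree

    def best_fit_transform(A, B):
        """Rigid (R, t) minimizing ||R A + t - B|| for paired points (SVD/Kabsch)."""
        ca, cb = A.mean(axis=0), B.mean(axis=0)
        U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # avoid reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        return R, cb - R @ ca

    def icp(source, target, iterations=30, tol=1e-6):
        tree, src = cKDTree(target), source.copy()
        R_tot, t_tot, prev = np.eye(3), np.zeros(3), np.inf
        for _ in range(iterations):
            dist, idx = tree.query(src)                    # correspondences (closest points)
            R, t = best_fit_transform(src, target[idx])    # minimize error between pairs
            src = src @ R.T + t                            # apply T and recompute error
            R_tot, t_tot = R @ R_tot, R @ t_tot + t
            if abs(prev - dist.mean()) < tol:              # error acceptable: stop
                break
            prev = dist.mean()
        return R_tot, t_tot   # then apply T to the full original point clouds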
VERIFICATION AND VALIDATION
To verify and validate the proposed methodology, a case study was investigated. The as-built status of a pipe spool branch was acquired using a laser scanner; the 3D CAD drawing in point cloud format is the other required input to the registration system proposed here. The studied pipe spool, along with the laser-scanned as-built status and the 3D drawing, is shown in Figure 3. The two point clouds are preprocessed and stored for later application of the required transformation, then imported into the skeletonization engine developed. The skeletonized designed and built states are next imported into the registration engine, and the rigid transformation resulting from the registration step is applied to the original point clouds. The procedure is programmed and implemented in an integrated platform using C++ and MATLAB.


Figure 3: The investigated spool branch. (a): As-designed status (3D drawing) of the branch; (b): 3D drawing converted to point cloud, with the resulting skeleton; (c): Laser-scanned as-built status; (d): the resulting as-built skeleton.

Typical results of the registration are shown in Figure 4. Results for the investigated spool branch are summarized in Table 1.


Figure 4: Initial positions of the non-registered skeletons (a); registered skeletons after applying ICP and finding the transformation (b); and original point clouds registered (c) using the transformation T calculated from the skeleton registration (blue: as-designed state converted to point cloud format; red: laser-scanned data).

Table 1: Summary of the results for the proposed skeleton-based registration


Parameter                              Original point clouds    Skeletonized point clouds
Point cloud size (number of points)    11643 / 50000*           1000 / 1000
Skeletonization time                   -                        6-7 sec
Registration time                      253 sec                  <1 sec
Processing time                        253 sec                  7-8 sec
RMS (root mean square)                 4.83 cm                  6.21 cm
* Down-sampled point cloud size
As seen in Table 1, the time required for the registration step is substantially decreased. The as-built laser-scanned point cloud is down-sampled in order to expedite the skeletonization procedure. One distinctive feature of the skeletonization method employed here is that it is insensitive to the resolution of the original point cloud; in other words, the impact of down-sampling on the extracted skeleton is insignificant. The root mean square (RMS) represents the average error of the registration, and the RMS value of the skeleton-based registration is comparable to that of regular ICP registration. A post-registration ICP on the original point clouds, with a significantly smaller number of iterations, is expected to further improve the RMS value from the new transformed states.
CONCLUDING REMARKS
A skeleton-based registration of point clouds was presented in this paper. The point clouds representing the built and designed states are skeletonized using a Laplacian-based contraction. The skeletonized point clouds are significantly less dense, and the required manipulations are therefore computationally less intensive. For the registration step, ICP is applied to the skeletons, and the rigid transformation resulting from the registration is then applied to the original point clouds. To measure the performance of the proposed registration method, the framework was programmed and a case study was performed on a pipe spool. The results show that the proposed registration is performed within a significantly faster timeframe (Table 1). This improvement in time effectiveness implies that the skeleton-based registration developed here can be used for (near) real-time investigation of industrial assemblies. The registration method is also reasonably accurate and has the potential to be employed for further investigation of industrial assemblies; for example, the registration output can feed into a discrepancy detection system for localizing and quantifying the discrepancies incurred and for planning the required corrective actions (Nahangi et al. 2015). It should be noted that the framework developed here is well suited to pipe spools and structural assemblies for which skeletons meaningfully represent the assemblies (i.e., it is not well suited to unsymmetrical cross sections or point clouds with unbalanced density). One potential avenue for improving the framework is to apply a few ICP iterations to the resulting point clouds after the skeleton-based registration; a smaller RMS value is expected after such a post-processing step.
REFERENCES
Au, O. K., Tai, C., Chu, H., Cohen-Or, D., Lee, T. (2008). "Skeleton extraction by mesh
contraction." Proc., ACM Transactions on Graphics (TOG), ACM, 44.
Aurenhammer, F. (1991). "Voronoi Diagrams—a Survey of a Fundamental Geometric Data
Structure." ACM Computing Surveys (CSUR), 23(3), 345-405.
Bosché, F. (2012). "Plane-Based Registration of Construction Laser Scans with 3D/4D
Building Models." Advanced Engineering Informatics, 26(1), 90-102.
Cornea, N. D., Silver, D., Min, P. (2007). "Curve-Skeleton Properties, Applications, and
Algorithms." Visualization and Computer Graphics, IEEE Transactions On, 13(3), 530-548.
Dey, T. K., and Zhao, W. (2004). "Approximate Medial Axis as a Voronoi Subcomplex."
Comput. -Aided Des., 36(2), 195-202.
Golparvar-Fard, M., Pena-Mora, F., Savarese, S. (2009). "D4AR – A 4-Dimensional
Augmented Reality Model for Automating Construction Progress Monitoring Data Collection,
Processing and Communication." Journal of Information Technology in Construction (ITCon),
14, 129-153.
Cao, J., Tagliasacchi, A., Olson, M., Zhang, H., Su, Z. (2010). "Point cloud
skeletons via laplacian based contraction." Proc., Shape Modeling International Conference
(SMI), 2010, 187-197.
Kim, C., Son, H., Kim, C. (2013). "Fully Automated Registration of 3D Data to a 3D CAD
Model for Project Progress Monitoring." Autom. Constr., 35, 587-594.
Lee, J., Son, H., Kim, C., Kim, C. (2013). "Skeleton-Based 3D Reconstruction of as-Built
Pipelines from Laser-Scan Data." Autom. Constr., 35, 199-207.
Nahangi, M., and Haas, C. T. (2014). "Automated 3D Compliance Checking in Pipe Spool
Fabrication." Advanced Engineering Informatics, 28(4), 360-369.
Nahangi, M., Yeung, J., Haas, C.T., Walbridge, S., West, J. (2015) “Automated assembly
discrepancy feedback using 3D imaging and forward kinematics.” Submitted to Journal of
Automation in Construction on July 3, 2014.
Okabe, A., Boots, B., Sugihara, K., Chiu, S. N. (2009). Spatial Tessellations: Concepts and
Applications of Voronoi Diagrams, John Wiley & Sons.
Rusinkiewicz, S., and Levoy, M. (2001). "Efficient variants of the ICP algorithm." Proc., Third International Conference on 3-D Digital Imaging and Modeling, 145-152.
Tagliasacchi, A., Zhang, H., Cohen-Or, D. (2009). "Curve skeleton extraction from
incomplete point cloud." Proc., ACM Transactions on Graphics (TOG), ACM, 71.
Tang, P., and Akinci, B. (2012a). "Automatic Execution of Workflows on Laser-Scanned
Data for Extracting Bridge Surveying Goals." Advanced Engineering Informatics, 26(4), 889-
903.
Tang, P., and Akinci, B. (2012b). "Formalization of Workflows for Extracting Bridge
Surveying Goals from Laser-Scanned Data." Autom. Constr., 22, 306-319.
Turkan, Y., Bosche, F., Haas, C. T., Haas, R. (2012). "Automated Progress Tracking using
4D Schedule and 3D Sensing Technologies." Autom. Constr., 22(0), 414-421.
US Census Bureau. (2014). "Construction spending."
<https://www.census.gov/const/C30/release.pdf> (October 02, 2014).
Zhou, Y., Kaufman, A., Toga, A. W. (1998). "Three-Dimensional Skeleton and Centerline
Generation Based on an Approximate Minimum Distance Field." The Visual Computer, 14(7),
303-314.
Zhu, Z., and Brilakis, I. (2010). "Machine Vision-Based Concrete Surface Quality
Assessment." J. Constr. Eng. Manage., 136(2), 210-218.


Mobile Proximity Sensing Technologies for Personnel and Equipment Safety in Work Zones

J. Park1; E. Marks, P.E., Ph.D.2; Y. K. Cho, Ph.D.3; and W. Suryanto4


1School of Civil and Environmental Engineering, Georgia Institute of Technology, 790 Atlantic Dr. N.W., Atlanta, GA 30332. E-mail: [email protected]
2Department of Civil, Construction and Environmental Engineering, University of Alabama, H.M. Comer Hall, Tuscaloosa, AL 35401. E-mail: [email protected]
3School of Civil and Environmental Engineering, Georgia Institute of Technology, Mason Building Room 4140B, Atlanta, GA 30332. E-mail: [email protected]
4School of Civil and Environmental Engineering, Georgia Institute of Technology, 790 Atlantic Dr. N.W., Atlanta, GA 30332. E-mail: [email protected]

Abstract

This paper introduces a new proximity alarming technology for roadway work
zones. Roadway work zones are dynamic in nature and offer only limited work
space, contributing to dangerous work environments for the construction workers
who construct and maintain the infrastructure. Hazardous proximity situations can
be encountered especially when ground workers operate in close proximity to heavy
construction equipment. Past research efforts have aimed at providing proximity
sensing technologies for construction workers. These technologies, however, still
have limitations that deter extensive deployment, including accuracy, cost, required
hardware, and ease of use. This study focuses on creating and evaluating a feasible
technology that overcomes the drawbacks found in other technologies. Using
Bluetooth sensing technology, a proximity detection and alert system was created.
Experimental results demonstrate the created system's ability to provide alerts to
equipment operators and pedestrian workers at pre-calibrated distances in real time.
The proximity detection and alert device demonstrated its capability to provide, with
an appropriate alarm, an additional layer of hazard avoidance to pedestrian workers
in roadway work zone environments.

INTRODUCTION

Roadway work zones often contain multiple construction or maintenance


resources in a limited work space. The dynamic nature and limited work space of
roadway work zones often require pedestrian workers to work in close proximity to
construction equipment which results in hazardous proximity situations. The risk of
injuries and fatalities for pedestrian workers increases as contact collisions between
pedestrian workers and construction equipment occur.
Previous research efforts of hazardous proximity situations in roadway work
zones have focused largely on statistics for worker injuries and fatalities. Minimal
research and experimentation has been conducted on how safety technologies can


provide an additional layer of hazard awareness for pedestrian workers and


equipment operators (Teizer et al. 2010). The objective of this research is to create and evaluate an
innovative proximity sensing technology using Bluetooth technology as a potential
proximity detection and alert system. Experiments were also designed and conducted
to emulate typical characteristics and operation scenarios within roadway work zones.
A subsequent discussion of the analyzed experimental results, encountered limitations
and benefits, and required future research work in proximity detection will follow.

LITERATURE REVIEW

Safety is one of the most important components that must be successfully addressed during construction. The work environment of the U.S. construction industry has proven to be one of the most dangerous among many industrial segments (Marks 2014). Limited work space and a dynamic environment
contribute to the densely populated nature of roadway work zones. A multitude of
interactions between pedestrian workers and construction equipment occur in roadway
work zones. These interactions in a limited work space can contribute to hazardous
proximity situations, further leading to worker injury or fatality. Historical incident data
prove that the current safety practice has not been effective in providing protective
working conditions and further improvements are essential for construction safety.
Various technologies and system combinations (Kim 2006) are thought to be
capable of alerting construction personnel in real-time. Initial testing and evaluation has
occurred for proximity detection systems in other industries such as underground mining
(Ruff 2007), the railroad industry (Begley 2006), and manufacturing (Larsson 2003).
Safety technologies can provide workers with a “second chance” by creating an
additional layer of protection for ground workers on construction sites (Teizer et al. 2010).
Several parameters were used to assess each system including detection area,
alert method, precision, size, weight, calibration functionality, power source, ability to
identify people from objects, and others. Benefits and limitations of each technology
were identified. For example, systems utilizing radio frequency identification device
(RFID) technology can be impacted by direct contact with metallic objects (Goodrum
et al. 2006) and experience multipath or "crosstalk" that limits the system's ability to
distinguish individual worker proximity breaches (Lázaro 2009, Castleford et al. 2001).
Some of the evaluated systems were incapable of identifying people versus other
objects (Ruff 2007, Teizer et al. 2007, Hallowell et al. 2010). These benefits and
limitations were used to identify a reliable technology capable of detecting and alerting
workers during hazardous proximity situations (Teizer et al. 2007). Results from the
review indicate that proximity detection and alert systems utilizing magnetic field
technology can be reliable in the construction environment with its own limitations.

BLUETOOTH PROXIMITY DETECTION AND ALERT SYSTEM

The main favorable characteristics of Bluetooth wireless technology are rapid


connectivity, low-cost hardware, and minimal individual infrastructure requirements.
These characteristics were identified as benefits for construction applications, specifically
for position tracking of construction vehicles (Lu et al. 2007), and information delivery


systems (Behzadan et al. 2008). Furthermore, capabilities of Bluetooth have been used as
wireless sensor networks for resource tracking at building construction sites (Shen et al.
2008). The typical maximum range of a Bluetooth-enabled device was recorded as 50
meters for location tracking purposes (Behzadan et al. 2008). Because Bluetooth has been
successfully evaluated for other construction industry applications, the capabilities of this
system could potentially detect and alert workers during hazardous proximity situations.
Bluetooth sensing technology was selected to create a proximity detection and
alert system for the roadway work zone environment. This system provides a wireless
and rugged technology that is capable of functioning in the harsh outdoor roadway
work zone environment. This Bluetooth technology is thought to be able to (1) provide
intensifying alerts, depending on the degree of dangerousness, in real-time for workers
in hazardous situations, (2) allow for mitigation risk, (3) operate with minimal nuisance
alerts, and (4) create an additional layer of protection for pedestrian workers.
The Bluetooth technology proximity detection and alert system is comprised
of three components (Figure 1) to communicate in real-time and provide alerts to
workers in potentially hazardous situations.

1) Equipment Protection Unit (EPU), which is mounted as several beacons at various locations on a piece of construction equipment
2) Equipment operator's Personal Protection Unit (PPU), which is an application that connects to the pedestrian worker's Personal Protection Unit. This connection allows simultaneous alarming on the equipment operator's side when the pedestrian worker's PPU initiates an alarm
3) Pedestrian worker's Personal Protection Unit (PPU), which is an application that functions on any iPad, iPhone, or "smart" device and can be located anywhere on the pedestrian worker

Figure 1. Bluetooth proximity detection and alert system EPU (beacon) mounted
on a wheel loader (left), and PPU held by a test person (right).

The three components interactively communicate with each other to provide


ground workers with alerts in the form of vibrations and intensifying alarms upon
breach of the pre-defined hazard zone. These alert distances (the hazard zone) can be
calibrated for specific pieces of construction equipment and site conditions.
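The paper does not detail the internal alert logic of the application. The following Python sketch illustrates one common way such a hazard-zone check could work, using a log-distance path-loss model to convert a beacon's received signal strength (RSSI) into an approximate range and comparing it against the calibrated alert distance. The function names, calibration constants, and model choice are assumptions for illustration only.

```python
# Illustrative hazard-zone check from beacon RSSI (log-distance path-loss model).
# Calibration constants are assumptions, not values reported in the paper.
TX_POWER_DBM = -59        # measured RSSI at 1 m (typical calibration value)
PATH_LOSS_EXPONENT = 2.0  # roughly 2 in open, unobstructed space

def estimate_distance(rssi_dbm: float) -> float:
    """Approximate beacon-to-device distance in meters from an RSSI reading."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

def check_hazard_zone(rssi_dbm: float, alert_distance_m: float) -> bool:
    """Return True when the estimated range breaches the calibrated hazard zone."""
    return estimate_distance(rssi_dbm) <= alert_distance_m

# Example: a -75 dBm reading against a 10 m calibrated alert distance
if check_hazard_zone(-75.0, alert_distance_m=10.0):
    print("ALERT: pedestrian worker inside hazard zone")
```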

OBJECTIVE AND APPROACH


The primary objective was to create and evaluate a proximity detection and
alert system that utilizes Bluetooth technology. This system should have minimal
infrastructure (Figure 2), be capable of calibrating an alert zone, and provide an alert


in real-time to pedestrian workers and equipment operators during hazardous


proximity situations. For an evaluation of the created proximity system, several
experiments were designed and conducted with actual construction equipment. The
scope includes proximity issues between construction equipment and pedestrian
workers in a roadway work zone.

Figure 2. Required hardware of wireless sensing technologies.

FIELD TESTS

The EPU of the created proximity detection and alert system contains several
beacons mounted to each side (or large surface) of a piece of construction equipment
(Figure 1). The experimental trials simulated operating functions of a roadway work
zone. Two types of tests were conducted to simulate (1) a scenario with static
equipment and a mobile pedestrian worker, and (2) a scenario with mobile equipment
and a static pedestrian worker (Figure 3). Both RFID and magnetic field proximity
detection and alert systems were also subjected to the same experimental trials to
establish a benchmark for comparison. All experimental trials were conducted
outdoors with clear weather conditions, low wind speed, and a temperature of
approximately 32 degrees Celsius. A clear, flat, paved ground surface with no
obstructions was used as a test bed for all trials.
The coverage area experimental trials were designed to simulate the
interaction between a stationary piece of construction equipment and a mobile
pedestrian worker (Figure 3). These trials assessed the reliability of the Bluetooth
technology sensing system to detect and provide an alert when the mobile pedestrian
worker crossed into the pre-calibrated hazardous proximity zone. Two pieces of
construction equipment, namely 1) a wheel loader and 2) a small dump truck, were
used for the coverage area experiments. The PPU was placed in the pedestrian
worker’s right pocket for all trials. The experimental test bed was outlined as shown
in Figure 3, and tests were run at eight different, equally spaced, angles (45 degree
offsets, 0°, 45°, … , 270°, 315°), 20 times each. The distance at breach into the pre-
calibrated hazardous proximity zone was measured for each of the tests as well as for
each of the proximity sensing systems.
For the mobile equipment and static pedestrian worker test (Figure 3), 20
ground markers were positioned at 1.5-meter intervals along a straight line parallel
to the wheel loader's travel path. The wheel loader approached the simulated
pedestrian worker (a traffic cone) in a forward travel direction at a constant speed of 8


kilometers per hour and stopped, and the distance was measured once the EPU alert
was activated. This procedure was repeated 20 times for each of the three proximity
sensing systems. All three PPUs (RFID, magnetic field, and Bluetooth) were
positioned at a static location on top of a traffic cone, approximately 1 meter above
the ground surface.

Figure 3. Test bed for static equipment and mobile pedestrian worker (left),
mobile equipment and static pedestrian worker experimental test bed (right).

RESULTS AND DISCUSSIONS

Figure 4 shows the average alert distance for the two static equipment and mobile
worker simulations: one with a wheel loader (left) and the other with a truck (right).
Although the Bluetooth sensing system offered relatively stable mean alert distances
on average, there is an unexpected drop in the mean value at the 315° angle, meaning
that the received signal strength at that particular angle was weaker than in other
directions. Potential reasons for this discrepancy include: 1) the battery level may
have degraded the performance of the beacon attached at the 315° angle, 2) the beacon
itself may have a poorly functioning signal transmitter, and 3) the signals from the
beacon may have been affected by the surroundings, such as the boom, the arm, or
even the air. Comparing the three proximity sensing systems, the magnetic sensing
system performed the most reliably in this set of trials. Comparing the two simulation
cases, the average alert distances increased in the truck simulation. A partial reason
could be that a truck typically contains more flat surfaces, which obstruct the
communication between a transmitter and a receiver less.
No nuisance alerts were recorded during the experimental trials. Of the 320
trials conducted with the Bluetooth technology proximity system, a total of 1
false negative alert was recorded, which represents less than 1% of the total test
sample. The recall value for all approach angle trials was 1.0, except for 0.95 at the
270° approach angle in the wheel loader simulation. Neither the RFID nor the
magnetic field proximity detection and alert system experienced any nuisance alerts.
The magnetic field proximity sensing system also recorded no false negative alerts;
however, the RFID system failed to provide alerts 11 times throughout the trials.


Figure 4. Average alert distance (coverage area) for static equipment and mobile
worker: wheel loader (left) and truck (right).

Figure 5 displays box plots of the results from the mobile equipment and static
pedestrian worker experiments; the top and bottom black lines are the maximum and
minimum measured alert distances, the box represents the interquartile range, and
the red line in the box represents the median value. Table 1 below summarizes the
statistical analysis results. None of the proximity sensing systems assessed (RFID,
magnetic, and Bluetooth) experienced any nuisance alerts; however, the Bluetooth
system recorded two false negative alerts and the RFID system recorded four false
negative alerts. Although the magnetic field system successfully provided alerts for
all 20 trials, it is important to note that the alert distances of the magnetic system
were much smaller than those of the other two systems. Another interesting observation
was that, compared with the static equipment and mobile worker trials, the magnetic
sensing device offered much smaller alert distances, which may not be sufficient to
take proper action to avoid a collision.

Table 1. Statistical Analysis of mobile wheel loader with static pedestrian worker alert distances.

Metric                RFID     Magnetic   Bluetooth
False Negatives       1        0          2
Nuisance Alerts       0        0          0
Recall                0.95     1          0.9
Range                 19.8 m   4.6 m      18.3 m
Standard Deviation    4.7      1.7        5.9
Interquartile Range   6.1 m    3.1 m      6.5 m

Figure 5. Box plots for mobile wheel loader and static pedestrian worker trials.


During the experimental trials, the research team also collected and analyzed
other metrics including set-up time, calibration time and required infrastructure
(Table 2). The set-up duration and infrastructure required (including exterior power
access and antenna mounting) for the magnetic field and RFID proximity detection
and alert systems were greater than for the Bluetooth technology system,
mainly because the Bluetooth technology system does not require an antenna
mounting or access to an external power source. The research team also found the
time to calibrate the proximity alert zone for the RFID and magnetic field system was
much longer than the Bluetooth technology proximity sensing system. This is mainly
due to the created application for set-up and calibration of the Bluetooth technology
proximity sensing system.

Table 2. Comparison of the evaluated proximity detection and alert systems

Comparison Metric   RFID                      Magnetic                   Bluetooth
Calibration         Computer software         Antenna modifications      Mobile device user
                    interface (20 minutes)    (20 minutes)               interface (5 minutes)
Power Source        Direct power source       Direct power source        Internal coin battery
                    required                  required                   needed
Set-up Time         Medium (30 minutes)       Medium (30 minutes)        Minimal (10 minutes)

CONCLUSION

This research developed and evaluated the reliability and effectiveness of a
Bluetooth proximity detection and alert system when deployed in a roadway work zone
environment. Two sets of experiments were designed to test the Bluetooth technology
proximity detection and alert system for its capability to provide alerts in real-time for
pedestrian workers and equipment operators during hazardous proximity situations.
The tests simulated various interactions between ground workers and construction
equipment on the Bluetooth technology proximity detection and alert system. A
commercially available RFID and magnetic field proximity detection and alert system
were subjected to the same experimental trials to serve as a benchmark for comparison.
Analyzed data demonstrated that the developed Bluetooth proximity sensing
system is capable of detecting the presence of hazardous proximity situations within
roadway work zones in real time. The performance of the Bluetooth proximity
sensing system was satisfactory for both simulations. When compared to the
RFID and magnetic field proximity detection and alert systems, the developed
Bluetooth proximity sensing system required the least amount of infrastructure and
time for calibration. The magnetic field proximity sensing system recorded the
highest reliability and accuracy values when compared to the RFID and Bluetooth
technology proximity sensing systems. The major disadvantage of the magnetic
sensing system, however, was a large drop in alert distance when subjected to
experimental trials with mobile equipment. Also, the research team noted that the
functionality of the Bluetooth proximity sensing system enabled it to continue to alert
during obstructions, although with a greater level of signal attenuation. This property
did not hold when using the RFID proximity detection and alert system.


REFERENCES

Begley, R. (2006). “Development of autonomous railcar tracking technology using railroad


industry radio frequencies: Research opportunities in radio frequency identification
transportation applications.” Transportation Research Circular, 59-60.
Behzadan, A., Aziz, Z., Anumba, C., and Kamat, V. (2008). “Ubiquitous location
tracking for context-specific information delivery on construction sites.”
Automation in Construction, 17(6), 737-748.
Castleford, D., Nirmalathas, A., Novak, D., Tucker, R. (2001). “Optical crosstalk in
fiber-radio WDM networks: Microwave theory and techniques,” IEEE
Transactions, Vol. 49(10), 2030-2035.
Goodrum, P., McLaren, M. and Durfee, A. (2006). “The application of active radio
frequency identification technology for tool tracking on construction job
sites.” Automation in Construction, 15(3), 292-302.
Hallowell, M., Teizer, J., Blaney, W. (2010). “Application of sensing technology to
safety management.” Proceedings of the Construction Research Congress,
Alberta, Canada.
Kim, C., Haas, C. Liapi, K. and Caldas, C. (2006). “Human-assisted obstacle
avoidance system using 3D workspace modeling for construction equipment
operation.” Journal of Computing in Civil Engineering,” 20(3), 177-186.
Larsson, T. (2003). “Industrial forklift trucks: Dynamic stability and the design of
safe logistics.” Safety Science Monitor, 7(1), 1-14.
Lázaro, A., Girbau, D. Salinas, D. (2009). “Radio link budgets for UHF RFID on
multipath environments.” Antennas and Propagation, IEEE Transactions,
Vol. 57(4), 1241-1251.
Lu, M., Chen, W., Lam, H., and Liu, J. (2007). “Positioning and tracking construction
vehicles in highly dense urban areas and building construction sites.”
Automation in Construction, 16(5), 647-656.
Marks, E. (2014). “Active Safety Leading Indicators for Human-Equipment
Interaction on Construction Sites”, thesis, Georgia Institute of Technology
Ruff, T. (2007). “Recommendations for Evaluating and Implementing Proximity
Warning Systems on Surface Mining Equipment.” Report of Investigations RI
9672. Department of Health and Human Services, CDC.
Ruff, T. (2004). “Advances in proximity detection technologies for surface mining.”
Proceedings of 24th Annual Institute on Mining Health, Health, Safety, and
Research. Salt Lake City, UT.
Shen, X., Chen, W., and Lu, M. (2008). "Wireless sensor networks for resources tracking
at building construction sites." Tsinghua Science & Technology, 13(1), 78-83.
Teizer, J., Allread, B., Fullerton, C., Hinze, J. (2010). “Autonomous pro-active real-
time construction worker and equipment operator proximity safety alert
system.” Automation in Construction, 19(5), 630-640.
Teizer, J., Caldas, C., Haas, C. (2007). “Real-time three-dimensional occupancy grid
modeling for the detection and tracking of construction resources.” ASCE
Journal of Construction Engineering and Management, 133(11), 880-888.


Feature Extraction and Classification of Household Refrigerator Operation Cycles: A Study of Undisturbed Refrigerator Scenario

Zeyu Wang1 and Ravi S. Srinivasan2


1Graduate Research Assistant, School of Construction Management, University of Florida, Gainesville, FL 32608. E-mail: [email protected]
2Assistant Professor, School of Construction Management, University of Florida, Gainesville, FL 32608. E-mail: [email protected]

Abstract

Household refrigerators are cyclic appliances whose operations comprise
disturbed and undisturbed cycles. While disturbed cycles are due to the interference
of human activities, undisturbed cycles often occur from late night until early
morning, when there is little or no interference from building occupants. A better
understanding of the operation cycles of refrigerators will not only help in improving
the energy use of the appliance, but also develop the necessary constructs that will
enable efficient operation via Smart Grid technologies. This paper uses an
unsupervised learning method, k-means clustering, to classify household
refrigerator operation cycles for undisturbed periods. A typical household refrigerator
was monitored for 11 weeks. Two parameters, namely maximum electricity
consumption intensity and total electricity use for each refrigerator operation cycle,
were extracted from the data collected and were used for the classification of
operation cycles. In order to analyze the actual working status (i.e., without human
interference) of the refrigerator, data obtained between 1:00 a.m. and 6:59 a.m. were
used for this study. Results show that the undisturbed refrigerator operation cycles
can be distinctly classified into three groups, namely the "general working cycle," the
"auto-defrost working cycle," and the "auto-defrost affected working cycle."

INTRODUCTION

The refrigerator is one of the major residential energy consumers; household refrigerators accounted for 8% of U.S. residential electricity consumption in 2012 (U.S. Energy Information Administration, 2014). In addition, the use of refrigerators, particularly their refrigerants, contributes to the degradation of the environment, specifically greenhouse gas emissions and ozone layer depletion (Radermacher, 1994). Since 1990,
manufacturers have been required to comply with the U.S. Department of Energy’s
(DOE) energy conservation standards for the purpose of reducing energy use
associated with household refrigerators and freezers. A series of energy efficiency
standards related to household refrigerators have been put in place since then, and the

©  ASCE
Computing  in  Civil  Engineering  2015 50

most recent standard came into effect on September 15, 2014, and is expected to
significantly reduce the energy use of refrigerators by 25% to save approximately 5.6
quads of energy for the next 30 years ( Energy Efficiency and Renewable Energy
Office, 2011). However, in the U.S., more than 60 million refrigerators currently in
operation are over 10 years old, resulting in a total energy cost of $4.7 billion
annually (Energy Star, 2014). Residents continue to use these less efficient
refrigerators due to economic reasons (replacement cost) as well as a serious lack of
refrigerator energy efficiency evaluation tools.
The current refrigerator replacement strategy is to compare the energy
performance of the current refrigerator with that of a new model, or to compare monthly
utility bills among families of similar size. However, there is little information about a
refrigerator other than whether it is Energy Star labelled, and the only method of
comparison is to use the monthly energy bills. Moreover, these two information sources
have their own limitations. On one hand, the Energy Star label shows the energy
efficiency performance of a refrigerator; this efficiency is determined at the equipment's
peak, healthy performance and does not reflect performance under faulty conditions
such as coolant leakage or freezer degradation. On the other hand, monthly bills are
impacted by human behaviors and do not represent the energy efficiency performance
of the refrigerator itself. What is needed is a new method that can present both the
refrigerator's working condition and its energy efficiency performance. Such a method
would help owners of large residential building portfolios and other users know their
refrigerators' health and performance in a more informed manner and make timely and
wise replacement decisions. To this end, machine learning techniques can be employed
to extract features related to health and energy performance.
In recent years, clustering, a main technique for partitioning data into groups,
has been used to classify building electricity customers (Chicco, Napoli, & Piglione,
2006), predict future building energy demand (Duan, Xie, Guo, & Huang, 2011), and
detect abnormal behaviors (Li, Bowers, & Schnier, 2010). Some researchers have
proposed an appliance signature identification solution using k-means clustering to
identify different appliances (Chui, Tsang, Chung, & Yeung, 2013). Meanwhile,
other researchers have used the operation patterns of household appliances to estimate
their energy consumption (Li, Miao, & Shi, 2014). These research works have proven
load signatures and clustering to be effective means of classifying and characterizing
different household appliances; however, there is little research focusing on operational
pattern classification for a single household appliance. In fact, an appliance exhibits
different operational patterns as its operational status changes, which is significant
if it were to be connected to Smart Grid technologies. This study fills this gap by
introducing the k-means clustering technique to classify the operational cycles of a
typical household refrigerator. The objective of the research work discussed in this
paper is to aid the evaluation of refrigerators' energy performance by classifying
and recognizing different operational patterns during undisturbed cycles.
Household refrigerators are cyclic appliances whose operations consist of
repeating cycles. A typical refrigerator cycle includes both run time and idle time.
Run time is where the refrigerator turns on to refrigerate or to defrost, while idle time
is where the refrigerator is off and no electricity is consumed. These operation cycles


are further divided into two major groups: disturbed cycles and undisturbed cycles,
based on whether occupants have any impact, such as door opening and
loading/unloading food. Since operation cycles are impacted by occupants’ usages
which typically happen in daytime, the undisturbed cycles occur consecutively in
time period between late night and early morning when there is little or no
interference from occupants. Figure 1 shows a typical operation cycle which typically
occurs without occupant’s interference.
Compared with disturbed cycles, which contain the impact of human
interference, undisturbed cycles better represent the energy
efficiency of the refrigerator, which is affected by the construction of its doors, the
insulation technologies, and the refrigeration techniques. In this paper, we focus on
the pattern recognition of undisturbed cycles in order to extract the household
refrigerator’s operation characteristics. The rest of the paper is organized as follows.
In section 2, we present the basic idea of the proposed clustering method. Section 3
describes the process of data collection and pre-processing. The hourly energy
performance analysis and clustering results are shown in section 4. In section 5, we
discuss the conclusions and future work.
[Plot: energy intensity (Watt*min) vs. time (min)]
Figure 1. Typical household refrigerator operation cycle for 24 hour period.

METHODOLOGY

Clustering, an unsupervised learning technique, groups a set of data into


multiple clusters so that objects in the same cluster have high similarity, while they
are very dissimilar to objects in other clusters (Han, Kamber , & Pei, 2012). In other
words, clustering groups the data objects based on whether they are similar or not,
and the similarity and dissimilarity are assessed based on the attributes which are
used to describe the objects.
In this paper, we applied a centroid-based clustering algorithm, k-means
clustering, for pattern recognition as it is efficient for large and multi-dimension data
sets. The basic idea of k-means algorithm is to assign objects to the most similar
cluster and update the center value of the cluster iteratively until the assignment is
stable, that is, until the clusters formed in the current round are the same as those formed in the
previous round. An error function is introduced to verify the stability of the
assignment and measure the partition quality. In k-means, a cluster is represented in


terms of its centroid which is defined as the mean value of the objects within the
cluster. The difference between each object in the cluster and the cluster centroid is
measured by the Euclidean distance, which is defined as follows:

d(x, c) = \sqrt{\sum_{i=1}^{n} (x_i - c_i)^2}

where x is the object, c is the cluster centroid, n is the dimension of the data, x_i is the ith attribute value of the object, and c_i is the ith attribute value of the cluster centroid.
An error function is defined as:

E = \sum_{j=1}^{k} \sum_{x \in C_j} d(x, c_j)^2

where E is the sum of the squared error for all objects in the data set, C_j is the jth cluster, and c_j is its centroid. The error function aims to make the resulting k clusters as compact and as separate as possible. In other words, the k-means algorithm can be treated as an optimization problem that minimizes the error function.
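As a concrete illustration of this step, the sketch below clusters hypothetical per-cycle feature vectors (maximum electricity consumption intensity and total electricity consumption) into k = 3 groups with scikit-learn's k-means, which minimizes the same sum-of-squared-error objective; the feature values are fabricated for demonstration and are not the monitored data.

```python
# Minimal k-means illustration on the two cycle attributes used in this study.
# Feature values below are placeholders, not the monitored data.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [max energy intensity (Watt), total energy consumption (Watt*min)]
features = np.array([
    [165.0, 3500.0],   # resembling a general working cycle
    [170.0, 3600.0],
    [620.0, 6500.0],   # resembling an auto-defrost cycle
    [195.0, 8400.0],   # resembling an auto-defrost affected cycle
    [160.0, 3400.0],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
print(kmeans.labels_)           # cluster assignment per cycle
print(kmeans.cluster_centers_)  # centroid of each cluster
print(kmeans.inertia_)          # sum of squared errors (the error function E)
```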

DATA COLLECTION AND PREPROCESSING

Data collection. Data were collected from a typical two-story residential
house located in Lincoln, Nebraska. A nonintrusive monitoring system called
eMonitor (PowerWise Systems, 2014) was used to monitor minute-by-minute
electricity use data at the circuit level. To obtain sufficient data, the monitoring period
lasted from January 16, 2011 to April 15, 2011, covering a total of 83 calendar
days. As the interval equals one minute, a total of 119,520 (i.e., 1,440 × 83) data
points were collected for each circuit. The minute-by-minute energy consumption
data of the refrigerator allow the determination of the paired turn-on and turn-off
events of the refrigerator, and each operating cycle can be identified from such
turn-on and turn-off events. A total of 2,165 cycles were identified from the monitored data.
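A cycle's run period can be segmented from the minute-by-minute record by pairing the minute at which power rises above zero (turn-on) with the next minute at which it returns to zero (turn-off). The sketch below shows one simple way to do this; the threshold and the sample series are assumptions for illustration, not the paper's implementation.

```python
# Identify run periods (turn-on/turn-off pairs) from a minute-by-minute power series.
# Threshold and sample data are illustrative only.
from typing import List, Tuple

def find_cycles(power_w: List[float], on_threshold: float = 1.0) -> List[Tuple[int, int]]:
    """Return (turn_on_index, turn_off_index) pairs for each run period."""
    cycles, start = [], None
    for i, p in enumerate(power_w):
        if p > on_threshold and start is None:
            start = i                      # turn-on event
        elif p <= on_threshold and start is not None:
            cycles.append((start, i))      # turn-off event closes the run period
            start = None
    return cycles

# Example: 0 W while idle, roughly 120 W while running
series = [0, 0, 118, 121, 119, 0, 0, 0, 122, 120, 0]
print(find_cycles(series))  # [(2, 5), (8, 10)]
```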
Feature Extraction. Since clustering determines the similarity and
dissimilarity based on the attributes describing the objects, appropriate attributes
which can help distinguish objects of different groups should be selected and
extracted from the initial database. In this study, many attributes of the operation
cycle, such as run time, idle time, startup time, shutdown time, total electricity
consumption, average electricity consumption intensity, maximum electricity
consumption intensity, and minimum electricity consumption intensity, are
considered for clustering. Since the refrigerator operation cycles are distinguished
from the perspective of different working statuses, for example, auto-defrost status,
general operating status, and cooling status, attributes which are able to
describe such differences should be selected. As shown in Figure 2, a comparison
of electricity consumption profiles between different working statuses was conducted in
order to identify appropriate attributes.
The comparison indicates that the auto-defrost cycle has a significantly higher
electricity consumption intensity, around 650 Watt*min. The run time and idle
time are 27 and 14 minutes, respectively. It can be concluded that the auto-defrost cycle
results in more energy consumption than the general operating cycle by increasing run
time, decreasing idle time, and switching to a higher power level. As a result, two attributes,
namely the total electricity consumption, which reflects both the run time and the average
electricity consumption intensity, and the maximum electricity consumption intensity,
which describes the freezer working status, are selected as the attributes to describe the
operation cycle.
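Given the per-minute readings of a single cycle, the two selected attributes reduce to a maximum and a sum. A minimal sketch is shown below; the sample cycle values are assumed for illustration.

```python
# Compute the two clustering attributes for one operation cycle.
# The sample readings are illustrative, not measured data.
from typing import List, Tuple

def cycle_features(readings_watt_min: List[float]) -> Tuple[float, float]:
    """Return (maximum electricity consumption intensity, total electricity consumption)."""
    return max(readings_watt_min), sum(readings_watt_min)

sample_cycle = [110, 620, 640, 630, 615, 200, 120, 0, 0, 0]  # Watt*min per minute
print(cycle_features(sample_cycle))  # (640, 2935)
```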
[Plot: energy intensity (Watt*min) vs. time (min); series: General Operating Cycle, Auto Defrost Cycle]

Figure 2. General operating cycle vs. auto-defrost cycle.

RESULTS

A preliminary study focusing on the refrigerator's hourly energy performance was
conducted in order to understand the daily variation tendency of energy use. Figure 3
depicts the hourly average energy consumption. The fluctuation of the trend line in
Figure 3 indicates that energy consumption is highly affected by occupants'
interference. For example, the hourly energy consumption between 0:00 a.m. and 8:00
a.m. is approximately 30% less than that of the remaining hours. The difference
indicates that around midnight, when there is little or no interference from the
occupants, the refrigerator consumes less energy than during the daytime, when
occupants use the refrigerator frequently. Another representative example is the peak
load during meal times: three peaks are found in Figure 3 at time periods 8 to 9, 10 to
13, and 18 to 20, respectively. These three time periods are highly related to occupants'
meal times; it is not difficult to infer that occupants use the refrigerator more frequently
at meal times than at any other time, and as a result more energy is consumed.
From this study, it can be noted that energy efficiency can be evaluated
from time periods when less occupant interference occurs and the refrigerator runs
stably. In this study, we use operation cycles that run between 1:00 a.m. and 6:59 a.m.
as the major source of undisturbed cycles.
As shown in Figure 4, the scatter diagram shows the distribution of cycles starting
between 1:00 a.m. and 6:59 a.m. According to this distribution, the operation cycles
can be partitioned into several groups based on the selected attributes. The number of
clusters, k, was specified as 3 based on the distribution. Figure 5 depicts the k-means
clustering results. It can be seen from the figure that different operation cycles
are successfully partitioned and the boundaries of the different clusters are clear.

[Plot: hourly average energy consumption (Watt*min) for hours 1-24]
Figure 3. Hourly average energy consumption.
Based on the clustering results, we summarize the characteristics of the different
clusters in order to recognize them and evaluate the clustering performance. The
results are shown in Table 1. Cycles in cluster 1 are referred to as the general working
cycles, as they happen most frequently (576 out of 629) and steadily. Cycles in cluster
2 are recognized as auto-defrost working cycles, since they have the greatest average
maximum energy intensity value, which indicates the running of the defrost heater.
Cycles in cluster 3 are identified as auto-defrost affected working cycles, as they occur
right after auto-defrost cycles. In response to the auto-defrost cycles, these cycles use
more time and consume more energy than general operating cycles to recover the
refrigerator interior temperature.
Figure 4. Distribution of cycles starting between 1:00 a.m. and 6:59 a.m.


Figure 5. Clustering results for cycles starting between 1:00 a.m. and 6:59 a.m.

Table 1. Energy performance summary of different clusters.

Cluster No.   No. of cycles   Average maximum energy intensity (Watt)   Average total energy consumption (Watt * min)
1             576             165.92                                     3516.41
2             27              616.81                                     6558.74
3             26              194.12                                     8409.08

CONCLUSION

This study classified and recognized the undisturbed refrigerator operation
cycles based on two attributes, namely maximum energy intensity and total energy
consumption. An unsupervised machine learning algorithm, k-means clustering,
was used for the classification. Results show that the undisturbed refrigerator
operation cycles can be distinctly classified by the proposed method into three groups,
namely general working cycles, auto-defrost working cycles, and auto-defrost affected
working cycles. The energy efficiency related indices of each group, such as maximum
energy intensity, total energy consumption, run time, and idle time, can be used for
refrigerator energy efficiency evaluation and fault detection. The recognition of
different operation patterns is significant in the research areas of establishing
appliance load profiles, anticipating peak load, and intelligent control. Future work
will focus on pattern recognition for disturbed refrigerator operation cycles in order
to comprehend the household refrigerator's energy performance and health, and on
subsequent integration with smart technologies.

ACKNOWLEDGEMENT


The authors would like to acknowledge Dr. Jonathan Shi for providing data
used in this study.

REFERENCES

Energy Efficiency and Renewable Energy Office. (2011). 2011-09-15 Energy


Conservation Program: Standards for Residential Refrigerators,
Refrigerator-Freezers, and Freezers; Final Rule. Washington: United States
Department of Energy.
Chicco, G., Napoli, R., & Piglione, F. (2006). Comparisons among clustering
techniques for electricity customer classification. Transactions on Power
Systems, 933-940.
Chui, K., Tsang, K., Chung, S., & Yeung, L. (2013). Appliance signature
identification solution using K-means clustering. IECON 2013 - 39th Annual
Conference of the IEEE Industrial Electronics Society , (pp. 8420-8425).
Duan, P., Xie, K., Guo, T., & Huang, X. (2011). short-term load forecasting for
electric power systems using the PSO-SVR and FCM clustering techniques.
Energies, 173-184.
Energy Star. (2014). Refrigerators. Retrieved November 10, 2014, from
http://www.energystar.gov/products/certified-products/detail/refrigerators
Han, J., Kamber , M., & Pei, J. (2012). Data Mining: Concepts and Techniques.
Waltham: Morgan Kaufmann.
Li , M., Miao, L., & Shi, J. (2014, Oct). Analyzing heating equipment's operations
based on measured data. Energy and Buildings, 82, 47-56.
Li, X., Bowers, C., & Schnier, T. (2010). Classification of energy consumption in
buildings with outlier detection. IEEE Transactions on Industrial Electronics,
3639-3644.
PowerWise Systems. (2014). eMonitor - PowerWise System. Retrieved from
http://www.powerwisesystems.com/products/sitesage-emonitor4/
Radermacher, R. (1994). Impact of environmental concerns on refrigerator/freezer
designs. Heat Recovery Systems and CHP, 14(3), 227–238.
U.S. Energy Information Administration. (2014, 6 4). Estimated U.S. Residential
Electricity Consumption by End Use, 2012. Retrieved from
http://www.eia.gov/tools/faqs/faq.cfm?id=96&t=3


Automated Tolerance Analysis of Curvilinear Components Using 3D Point Clouds for Adaptive Construction Quality Control

Vamsi Sai Kalasapudi1 and Pingbo Tang2


1Del E. Webb School of Construction, School of Sustainable Engineering and Built Env., Ira A. Fulton Schools of Engineering, Arizona State University, Tempe, AZ 85287-3005. E-mail: [email protected]
2Del E. Webb School of Construction, School of Sustainable Engineering and Built Env., Ira A. Fulton Schools of Engineering, Arizona State University, P.O. Box 873005, Tempe, AZ 85287-3005. E-mail: [email protected]

Abstract

Ineffective tolerance analysis of prefabricated building components has a


significant impact on the quality of accelerated construction projects. Emerging
accelerated construction methods have adopted prefabrication techniques to improve
construction productivity and balance workflows. The complex three-dimensional
relationships among curved and deformed pipes and ductworks bring difficulties of
analyzing how fabrication and installation errors of densely located pipes influence
each other, and how such errors propagate throughout connected curvilinear
components. This paper presents a computational framework that integrates 3D
imagery data with Building Information Modeling to detect and analyze fit-up
problems of curvilinear components. Using the detected deviations, an automated
deviation classification algorithm derives the tolerance information of each individual
component. This process generates a tolerance network describing how geometric
variations of individual components influence each other. This tolerance network
provides a quality control framework that enables adaptive redistribution of
prefabrication and installation errors to resolve fit-up problems during accelerated
construction.

INTRODUCTION

Prefabricated components used in building construction bring various benefits


to construction projects while posing challenges to quality control. Overall,
prefabrication reduces the time of construction, creates opportunities for better
architectural designs, and increases the quality of construction (Wong, Hao, & Ho,
2013). Sacks, Koskela, Dave, and Owen (2009) pointed out that Building Information
Modeling (BIM) provides virtual assembly capabilities for as-designed objects but
fails to capture the interrelationships between them during field installation.
Detailed analysis in the field becomes even more difficult because of the busy
schedules and inability of the conventional surveying methods to capture detailed

©  ASCE 1
Computing  in  Civil  Engineering  2015 58

geometric information (Resop & Hession, 2010). Efficient tolerance analysis can
considerably reduce the total amount of wastes during construction, improve
productivity, and contribute to a leaner process (Milberg & Tommelein, 2005, Sacks
et al., 2009). Lack of detailed tolerance analysis of prefabricated Mechanical,
Electrical, and Plumbing (MEP) can bring problems of field fit-ups that lead to
misalignments due to deviations from their as-designed dimensions (Bosché, 2010).
In current practice, Geometric Dimensioning and Tolerancing (GD&T)
provides a platform for communicating engineering tolerances (Henzold, 2006) . It
provides computational supports to aid engineers in designing the geometry of the
component within allowable tolerance variations. American Society for Mechanical
Engineers (ASME) Y14.5-2009 sets standards for present dimensioning and tolerance
controlling, but studies show that these standards are inadequate (Zhang, 1997).
Milberg and Tommelein inferred that the Architectural Engineering Construction
(AEC) industry lacks appropriate tools for reliable tolerance analysis and identifying
the interactions of tolerances between different components (Milberg & Tommelein,
2005). This paper examines a methodology that generates detailed tolerance
information based on dense 3D point clouds and as-designed models to support
construction quality control.
Accelerated construction projects use prefabricated components to improve
productivity and construction quality. However, improper dimensioning and
tolerancing of the prefabricated components often results in ineffective quality control
and pose interferences in construction workflows (Bosché, 2010). Traditional
surveying techniques lack in capturing and visualizing detailed geometric information
of the components that are essential for examining the tolerance information
(Jaselskis, Cackler, Andrle, & Gao, 2003). (Kim, Cheng, Sohn, & Chang, 2014)
stated that presently a certified inspector who uses tapes and calipers for QA executes
the dimensional assessment of precast concrete elements manually. This study also
claims that manual methods are not reliable, time consuming, and costly. Efficient
and effective data storage and management systems are important for reliable
communications between different trades involved in a construction project (Bosché,
2010, Kim et al., 2014).
Milberg and Tolmmelein formalized a tolerance mapping system using
“Tolerance Networks” that analyses tolerance accumulation across “tolerance
networks” that represent the interrelationships between connected components
(Milberg & Tommelein, 2005). They illustrated a conceptual framework of a
“proactive tolerance control system” that can identify and analyze correlated
tolerance information during both product and process design. However, they focused
on formulating a virtual model, while lacking detailed geometric data from the field
to validate the tolerance network models. Another issue is that their study focused on
flat concrete walls and rectangular windows, whereas curvilinear and interwoven
geometries of MEP systems pose higher challenges of misalignments. To overcome
such limitations, in this research, we are using three-dimensional (3D) imagery
techniques to capture high quality data for automated dimensional and quality
assessments of prefabricated/precast components.
Modeling as-is conditions of building structures under construction can be
helpful for efficient progress tracking and dimensional quality control (Akinci &


Boukamp, 2003). Bosche and Haas developed a methodology that can automatically
extract 3D Computer Aided Design (CAD) objects from imagery data for reliable
dimensional quality assessments of building components (Bosche & Haas, 2008).
This approach is capable of detecting construction defects, misalignments, and
geometric variations that affect the quality control of the construction process.
Follow-up studies of these researchers used robust Iterative Closest Point (ICP) fine-
registration process of a real time erection of an industrial building’s steel structure
for a dimensional compliance control of the as-built geometries (Bosché, 2010).
However, precise modelling of as-built geometries of building systems from 3D
imagery data requires high level of detail and accuracy (Turkan, Bosché, Haas, &
Haas, 2012).
After capturing detailed 3D geometric information, deriving tolerance
information of the prefabricated components is tedious. It requires intense manual
data processing to interpret the captured data. This paper identifies the challenge of
data processing and proposes an automated framework that identifies the deviations
of the as-built geometries from as-designed conditions. The authors developed an
algorithm that can identify spatial changes of the prefabricated components from their
as-designed models. Using the detected deviations of individual components, the
algorithm creates a tolerance network to understand how prefabrication and
installation errors of components influence each other. The generated tolerance
network represents components as its nodes and the connections (joints) between
components as edges joining those nodes. Every node (vertex) contains the “local
attributes” about prefabrication errors of the object such as deviations in lengths; radii
etc., while the edge joining the vertices contain the “global attributes” about
installation errors around joints. More specifically, the global attributes associated to
edges include the relative orientation between the adjacent vertices (components) and
the position of the edge (Joint) with respect to the origin. Tolerance networks have
the potential to aid engineers to identify critical components that has higher impacts
on error propagation and misalignments in field assemblies. These critical
components act as the centers of a network and their prefabrication/installation errors
will cascade throughout the interconnected network. Hence, identifying such regions
prior to the construction process helps in maintaining the stability of the construction
workflow and significantly reduces reworks and wastes.

PROPOSED METHODOLOGY

The proposed framework (Figure 1) for tolerance checking involves


generating a relational graph of components containing local and global attributes.
Such attributes provide information that helps in deriving tolerance error propagation
and consumption processes. Local attributes, such as length and radius, can help
identify prefabrication errors that occur during fabrication, transport, and storage.
Installation errors, in contrast, are global attributes that are due to orientation and location
deviations of the components from as-designed conditions. The algorithm associates a
1×2 local matrix to each node of the tolerance network. This matrix provides the
values of change in radius (Δr) and length (Δl) of the component between the as-designed
model and the as-built data. It then generates a 1×4 global matrix, which provides the
change in orientation (Δθ) and change in position (Δx, Δy, Δz) of the connections


between curvilinear components. These attributes will provide a framework for
characterizing the tolerance/error accumulation and propagation across connected
curvilinear components (Figure 2). Understanding change propagation can help in
establishing quality control framework and redistribution of tolerance values to
absorb the deviations of the prefabricated curvilinear components.
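A tolerance network of this form maps naturally onto an attributed graph. The snippet below is a minimal sketch, assuming the NetworkX library, of storing the 1×2 local matrix on each node and the 1×4 global matrix on each edge; segment names follow the later case study, but the numeric values and the degree-based notion of a "critical" node are illustrative assumptions, not the authors' algorithm.

```python
# Sketch of a tolerance network as an attributed graph (NetworkX assumed).
# Attribute values are placeholders for illustration, not results from the paper.
import networkx as nx

G = nx.Graph()

# Nodes carry the local attributes: change in radius (dr) and length (dl)
G.add_node("SEG 4", dr=0.0, dl=-0.04)
G.add_node("SEG 6", dr=0.0, dl=0.06)
G.add_node("SEG 8", dr=0.0, dl=-0.21)

# Edges (joints) carry the global attributes: change in orientation and position
G.add_edge("SEG 4", "SEG 6", joint="JOINT 4", dtheta=1.2, dx=0.01, dy=-0.02, dz=0.0)
G.add_edge("SEG 4", "SEG 8", joint="JOINT 4", dtheta=1.2, dx=0.01, dy=-0.02, dz=0.0)

# Highly connected nodes are candidate "critical" components through which
# installation errors can cascade across the network
critical = max(G.degree, key=lambda nd: nd[1])
print(critical)
```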

Figure 1. Proposed Framework for Tolerance Analysis

Figure 2. Example of a Tolerance Network consisting of Local and Global Attributes
Spatial change detection provides information required for automated
deviations classification and tolerance network generation. In order to create a
relational network of the connected components for supporting tolerance analysis, the
proposed approach first identifies spatial changes of the prefabricated components
between as-designed and as-built conditions. The change detection algorithm
developed by the authors (Kalasapudi, Turkan, & Tang, 2014) extracts geometric
primitives from both as-designed (BIM) model and as-built (Point Cloud) data for
generating the relational graph depicting objects and spatial relationships in both data
and models. The authors used RANSAC (Random Sample Consensus) (Yang, Wang,
& Chang, 2010) to extract cylinders that represents the segments of individual pipes.
The research presented in this paper extends that change detection methodology
(Kalasapudi et al., 2014) by segmenting the connected prefabricated components into
different cylindrical sections. This approach first applies the change detection
methodology (Kalasapudi et al., 2014) to match the segmented 3D laser scanning
point clouds against cylindrical sections from the BIM model. It then automatically
derives both the local (changes in lengths and radii) and global attributes (positioning


and rotational errors at joints) of the sections of ducts. The algorithm then utilizes these
attributes to generate a tolerance network that could support tolerance analysis and
provide a framework for visualizing the accumulation of tolerance issues across
the network. The authors validated the approach using data from a real building.
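A rough sketch of this attribute-derivation step is given below; it assumes the BIM-to-point-cloud correspondence has already been established by the change detection algorithm (Kalasapudi et al., 2014), and the field names, axis representation, and numeric values are hypothetical.

```python
import numpy as np

# Hedged sketch: derive the 1x2 local matrix [delta_r, delta_l] for a matched
# segment and the 1x4 global matrix [delta_theta, dx, dy, dz] for a matched
# joint from simple dictionary descriptions of the as-designed and as-built
# geometry. Illustrative only, not the authors' code.

def local_attributes(designed, built):
    return np.array([built["radius"] - designed["radius"],
                     built["length"] - designed["length"]])

def global_attributes(designed_joint, built_joint):
    a = designed_joint["axis"] / np.linalg.norm(designed_joint["axis"])
    b = built_joint["axis"] / np.linalg.norm(built_joint["axis"])
    # Angular deviation between as-designed and as-built joint axes (degrees).
    d_theta = np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))
    d_pos = np.asarray(built_joint["position"]) - np.asarray(designed_joint["position"])
    return np.concatenate(([d_theta], d_pos))

seg_design = {"radius": 0.15, "length": 2.00}
seg_built = {"radius": 0.15, "length": 2.12}
print(local_attributes(seg_design, seg_built))        # [0.   0.12]

joint_design = {"axis": np.array([0.0, 0.0, 1.0]), "position": [1.0, 2.0, 3.0]}
joint_built = {"axis": np.array([0.02, 0.0, 1.0]), "position": [1.01, 2.0, 3.0]}
print(global_attributes(joint_design, joint_built))   # approx [1.15, 0.01, 0.0, 0.0]
```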

CASE STUDY

In order to validate the developed methodology, the authors used the data
collected from a real building site. The data include a set of laser scans and as-designed
models of a mechanical room in the Agriculture and Biosystems Engineering building
under construction at Iowa State University. The authors selected a part of
the data (Figure 3(a), 3(b)) that includes seven connected prefabricated components
composed of 14 straight cylindrical segments (Figure 3(c)).

(a) (b)

(c)
Figure 3. (a) As-designed layout of the pipes (b) As-built laser scan (c) As-
designed Model showing different Cylindrical Segments
The change detection algorithm developed in (Kalasapudi et al., 2014)
successfully matched the sections of pipes extracted from the as-built point cloud with the
as-designed model. It automatically computes the local and global information for the
segmented pipes from both the model and the data. Due to the unavailability of sufficiently
high-quality data to extract the radius information (Δr), the authors used only length as a local
attribute for now. However, both the orientation and the position of the connections were
analyzed as global attributes. The algorithm then calculates the change in these local
(length) and global (orientation and position) attributes for generating the tolerance
network. Table 1(a) shows the change in length (Δl) of each individual segment
between the as-planned model and the as-is data. Table 1(b) shows the change in orientation
(Δθ) and position (Δx, Δy, Δz) of the connections between individual segments


for both the model and the data. Using this information, the authors generate a tolerance
network (Figure 4), which can guide engineers in analyzing the propagation of
prefabrication and installation errors and their accumulation across the network.

Figure 4. Generated Tolerance Network (Blue are critical nodes and Red is a
major path for error propagation)

Table 1. (a) Change in Local Attributes (Length) of Segments; (b) Change in Global Attributes (Orientation and Position) of Joints

RESULTS AND DISCUSSION


The authors have observed certain characteristics of the generated network,
which can reveal the propagation and accumulation of prefabrication and installation
errors. The node and the edge associated with Segment 1 show no change in their local and
global attributes. For the remaining nodes in the network, the change in their local
attributes has propagated along their branches. Joint 4, which connects three segments (4,
8, and 6), appears to have the highest impact on error propagation. The authors observed
that the change in orientation that originated at Joint 4 (highlighted in Table 1) was
consumed along its connected branches (Seg 8, Seg 10, Seg 9, and Seg 12) and (Seg 6,
Seg 7, Seg 5). Similarly, a smaller change in the orientation at Joint 3 propagated to
Joint 5 and Joint 6, wherein the value of the error increased from 0.72 at Joint 3 to 1.97
and 2.83 at Joint 5 and Joint 6, respectively.
The above results show that there is no change in the attributes of Segment-1
and Joint-1, which connects Segment-1 and Segment-2. However, the increases in the
lengths of Segments 2 and 3 resulted in changes in the orientation and position of
Joint-2, which connects Segment-2 and Segment-3. This shows that both the local and
global attributes of the segments change accordingly and cause problems in the
alignment of connected components.

CONCLUSION AND FUTURE WORK

Automated change pattern detection of prefabricated components helps in
generating the tolerance network for adaptive redistribution of prefabrication and
installation errors across the network. This paper provides a framework that utilizes
the spatial change detection approach to automatically classify the deviations between
as-designed and as-built conditions of curvilinear components, which helps in
understanding the tolerance information of individual components. The authors have
used 3D imaging technologies to capture as-built conditions of prefabricated
components for automated construction quality control. The proposed framework uses
the change detection approach (Kalasapudi et al., 2014) for automated deviation
detection and tolerance network generation. This automated deviation detection and
classification approach could assist engineers in monitoring the propagation of
tolerance deviations and guide possible realignment planning. In addition,
automatically analyzing the deviations between as-designed and as-built conditions
can help designers and engineers identify critical components among connected
components that require stricter quality control and precise tolerance analysis.
Future work will be along three directions. First, the authors would extend this
methodology by integrating it with the laws of fluid dynamics for understanding error
propagation and consumption along the network. Specifically, the authors plan to
focus on correlating the tolerance deviations of the connected components for
understanding tolerance accumulation and adaptive redistribution mechanisms.
Based on such correlation, the authors would model flows of errors across connected
components and apply mathematical models used for characterizing fluids. Second, the
authors would evaluate additional local and global attributes of prefabricated
components to further understand the propagation of diverse tolerance
issues. Finally, detailed tolerance analysis creates a requirement to capture


high-quality data to support it. Adaptive imaging technologies will enable capturing
complex geometries in less time (Song, Shen, & Tang, 2012). Future work will
thus include integrating adaptive imaging techniques with spatial change pattern
analysis to increase the quality of tolerance analysis and realignment planning.

ACKNOWLEDGEMENTS
The authors would like to express their appreciation to Dr Yelda Turkan from
Iowa State University for providing the 3D BIM models and 3D laser scanning data.

REFERENCES
Akinci, B., & Boukamp, F. (2003). Representation and integration of as-built
information to IFC based product and process models for automated
assessment of as-built conditions. NIST SPECIAL PUBLICATION SP.
Bosché, F. (2010). Automated recognition of 3D CAD model objects in laser scans
and calculation of as-built dimensions for dimensional compliance control in
construction. Advanced Engineering Informatics, 24(1), 107–118.
doi:10.1016/j.aei.2009.08.006
Bosche, F., & Haas, C. (2008). Automated retrieval of project three-dimensional
CAD objects in range point clouds to support automated dimensional QA/QC.
Information Technologies in Construction, 13(October 2007), 71–85.
Henzold, G. (2006). Geometrical Dimensioning and Tolerancing for Design,
Manufacturing and Inspection. Geometrical Dimensioning and Tolerancing
for Design, Manufacturing and Inspection (pp. xi–xii). Elsevier.
doi:10.1016/B978-075066738-8/50000-X
Jaselskis, E., Cackler, E., Andrle, S., & Gao, Z. (2003). Pilot study on improving the
efficiency of transportation projects using laser scanning, (January).
Kalasapudi, V. S., Turkan, Y., & Tang, P. (2014). Toward Automated Spatial Change
Analysis of MEP Components using 3D Point Clouds and As - Designed BIM
Models. In 2014 Workshop on 3D Computer Vision in the Built Environment
(p. 8).
Kim, M.-K., Cheng, J. C. P., Sohn, H., & Chang, C.-C. (2014). A framework for
dimensional and surface quality assessment of precast concrete elements using
BIM and 3D laser scanning. Automation in Construction.
doi:10.1016/j.autcon.2014.07.010
Milberg, C., & Tommelein, I. (2005). Application of Tolerance Mapping in AEC
Systems. Proceedings Construction Research …, (April), 5–7.
Resop, J., & Hession, W. (2010). Terrestrial laser scanning for monitoring
streambank retreat: comparison with traditional surveying techniques. Journal
of Hydraulic Engineering, (October), 794–798.
Sacks, R., Koskela, L., Dave, B. A., & Owen, R. (2009). The Interaction of Lean and
Building Information Modeling in Construction, 1–29.
Song, M., Shen, Z., & Tang, P. (2012). Data Quality Oriented 3D laser scan planning.
In Construction Research Congress 2014 (pp. 1–10).


Turkan, Y., Bosché, F., Haas, C., & Haas, R. (2012). Toward automated earned value
tracking using 3D imaging tools. Journal of Construction …, 139(4), 423–433.
doi:10.1061/(ASCE)CO.1943-7862.0000629.
Wong, R. W. M., Hao, J. J. L., & Ho, C. M. F. (2013). Prefabricated Building
Construction Systems Adopted in Hong Kong City University of Hong Kong.
Yang, S., Wang, C., & Chang, C. (2010). RANSAC Matching : Simultaneous
Registration and Segmentation. In Robotics and Automation (ICRA), 2010
IEEE International Conference on (pp. 1905 – 1912).
Zhang, H.-C. (1997). Advanced Tolerancing Techniques (p. 587). John Wiley & Sons.


A Spatial Analysis of Occupant Energy Efficiency by Discrete Signal Processing on Graphs
Rishee K. Jain1 , Rimas Gulbinas2 , José M. F. Moura3 , John E. Taylor4
1 Dept. of Civil and Environmental Engineering, Stanford University, 473 Via Ortega Way, Stanford, CA 94103. [email protected]
2 Jacobs Technion-Cornell Institute, Cornell Tech, 111 8th Ave, New York, NY 10011. [email protected]
3 Dept. of Electrical and Computer Engineering, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213. [email protected]
4 Dept. of Civil and Environmental Engineering, Virginia Tech, 750 Drillfield Drive, Blacksburg, VA 24061. [email protected]

ABSTRACT
Approaches to increase the energy efficiency of commercial buildings face the
unique challenge of reconciling the discordant spatial scales at which centralized build-
ing systems, organizations, and individual occupants operate and utilize a building.
Despite the importance of understanding spatial variability within a building, methods
to spatially analyze data from wireless sensor networks (WSN) and building manage-
ment systems (BMS) are limited. In this paper, we introduce the application of novel
and highly scalable techniques from the field of discrete signal processing on graphs
(DSPG ) to the challenging problem of understanding the spatial variability of individ-
ual and organizational energy efficiency within commercial buildings. We collect and
process individual-level electricity consumption data for two floors of a commercial
building in Denver, Colorado and demonstrate the merits of our approach on the em-
pirical dataset. Preliminary results indicate that occupants in different organizations
exhibit largely different spatial patterns of energy efficiency compared to those in a
single organization.
Keywords: Energy efficiency, energy analysis, signal processing, spatial analysis

INTRODUCTION
Commercial buildings account for nearly 20% of energy usage in the United States
(U.S. Department of Energy 2010). The proliferation of low-cost and ubiquitous wire-
less sensor networks (WSN) has enabled us to gather large amounts of data on how
buildings and more importantly how occupants within such buildings consume energy.
Recent research has utilized sensor-based data at different scales to understand how


human dynamics relate to commercial buildings (Gulbinas and Taylor 2014; Azar and
Menassa 2014) and to benchmark building energy efficiency (Woo and Gleason 2014).
While whole building analysis provides value, commercial buildings data must be an-
alyzed at a more granular level to effectively account for the discordant spatial scales
at which organizations and individuals operate within a building. Related work on a
sub-building level has been focused on dynamic frameworks to sync occupancy and
thermal preferences with HVAC set points (Schoofs et al. 2011; Jazizadeh et al. 2013),
to optimize the scheduling of meetings to enhance energy efficiency (Majumdar et al.
2012) and to develop analytics of plug-level electricity data to target areas for energy
efficiency improvements (Ortiz et al. 2012). Therefore, a deeper exploration of energy
data spatial analysis is warranted. In this paper, we introduce a novel technique from the
field of discrete signal processing on graphs to spatially analyze the energy efficiency
of commercial building occupants. Understanding the spatial variability of energy ef-
ficient behavior among occupants can inform more effective operational strategies and
other interventions.

PROBLEM STATEMENT
We consider the general problem of understanding the spatial variability of energy
efficiency in a commercial building. This problem is unique and challenging due to the
unstructured nature of spatial data (i.e., spatial data does not have a natural index and
therefore does not fit or order cleanly into a vector or matrix). We propose representing
occupants or spaces in a building as a graph $G = (V, A)$, where $N$ is the number of
occupants or individual spaces within a building and $V = [v_0, \ldots, v_{N-1}]$ is the set of
nodes corresponding to such individuals or spaces. We define $A$ as the weighted
adjacency matrix of the graph, which describes the physical relationship (i.e., distance)
between nodes. The graph $G$ is defined generally, but in this case we restrict the form
to an undirected graph, as the relationship between nodes based on physical distance
yields a symmetric matrix $A$ (i.e., $A_{ij} = A_{ji}$), and we construct an edge between nodes
in the network. Using this graph as a basis, we define the input data as a graph signal:

$$s : v_n \mapsto s_n \quad (1)$$

We assume that each element $s_n$ is a real number, representing in this application the
energy consumption or occupancy values associated with a node. Each signal can be
represented as a vector as follows:

$$s = [s_0, \ldots, s_{N-1}]^T \in \mathbb{R}^N \quad (2)$$

Figures 1 and 2 illustrate how this structure translates to a real commercial office
building. Figure 1 is for a floor with multiple organizations with each organization
indicated by the color of the node. Figure 2 is for a floor with a single organization.
Nodes are representative of where occupant workstations are located, and the adjacency
matrix is constructed by calculating the physical distance between nodes. We note that
this graph and signal structure are highly scalable and extensible to large dimensional
building data corresponding to thousands of occupants well beyond what is depicted in
Figures 1 and 2.
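A minimal sketch of this graph construction follows; the workstation coordinates are illustrative, and the inverse-distance weighting is an assumption, since the exact weighting function for A is not spelled out here.

```python
import numpy as np

# Build the weighted adjacency matrix A from pairwise workstation distances
# (inverse-distance weights, symmetric so that A_ij = A_ji) and attach a graph
# signal s holding one value per node (e.g., an energy-efficiency score).

coords = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0], [6.0, 5.0]])  # meters
N = len(coords)

A = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1, N):
        w = 1.0 / np.linalg.norm(coords[i] - coords[j])
        A[i, j] = A[j, i] = w

s = np.array([0.82, 0.79, 0.40, 0.95])   # graph signal indexed by node
assert A.shape == (N, N) and s.shape == (N,)
```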


Figure 1. Graph Structure Superimposed on the Building Floorplan (Fourth


Floor). Colors represent organizations on the floor

Figure 2. Graph Structure Superimposed on the Building Floorplan (Fifth Floor)

DSP ON GRAPHS
We propose the use of the DSPG framework introduced in (Sandryhaila and Moura
2013) to spatially analyze and understand energy efficiency within a commercial build-
ing. As with classic discrete signal processing (DSP), DSPG utilizes a Fourier trans-
form to expand a signal into a Fourier basis in the graph spectral domain. Generally, in
DSPG a graph Fourier transform corresponds to the Jordan decomposition of the graph
adjacency matrix $A$, but in the case of a symmetric graph adjacency matrix $A \in \mathbb{R}^{N \times N}$,
$A$ is diagonalizable and eigendecomposition of $A$ is possible. For simplicity, we assume
the eigenvalues of $A$ are distinct. The distinct eigenvalues $\lambda_0, \lambda_1, \ldots, \lambda_{N-1}$ of the
adjacency matrix $A$ are the graph frequencies and form the spectrum of the graph. The
eigenvector that corresponds to a frequency $\lambda_n$ is the frequency component corresponding
to the $n$th frequency.
The eigendecomposition is given as follows:

$$A = Q \Lambda Q^{-1} \quad (3)$$


with the $i$th column of matrix $Q$ being the eigenvector $q_i$ corresponding to eigenvalue
$\lambda_i$. The graph Fourier transform is given as follows:

$$\hat{s} = F s \quad (4)$$

where $F = Q^{-1}$ is the graph Fourier transform matrix. The values $\hat{s}_n$ of the signal's
graph Fourier transform (4) characterize the frequency content of the signal $s$.
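The numerical sketch below walks through Equations (3) and (4) for a small, symmetric, illustrative adjacency matrix; it demonstrates the transform itself, not the study's processing code.

```python
import numpy as np

# Graph Fourier transform via eigendecomposition: A = Q Lambda Q^-1 (Eq. 3),
# F = Q^-1, and s_hat = F s (Eq. 4). A and s are small illustrative values.
A = np.array([[0.0, 0.5, 0.2],
              [0.5, 0.0, 0.4],
              [0.2, 0.4, 0.0]])
s = np.array([0.8, 0.7, 0.3])

eigvals, Q = np.linalg.eig(A)        # graph frequencies / frequency components
F = np.linalg.inv(Q)                 # graph Fourier transform matrix
s_hat = F @ s                        # spectral content of the graph signal
assert np.allclose(Q @ s_hat, s)     # inverse transform recovers the signal
```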
In the graph spectral domain, the ordering of frequencies is often difficult to discern
because the frequencies can be complex-valued in instances where the adjacency matrix is
not symmetric. For this reason, the DSPG framework introduces the concept of total
variation on a graph (TVG), which is based on the concepts of graph shift and total
variation from classical DSP. The total variation of an eigenvector of the matrix $A$ takes
the form:

$$TV_G(v_n) = \left| 1 - \frac{\lambda_n}{|\lambda_{\max}|} \right| \|v_n\|_1 \quad (5)$$

where $\|v_n\|_1$ is the L1 norm of the eigenvector $v_n$.
The TVG value for each proper eigenvector takes a value between 0 and 2 and is
used to order the frequencies from low to high. A low frequency is characterized by
small variation of the corresponding eigenvector across the graph, and a high frequency
is characterized by a large variation across the graph. The detailed theoretical formulation
of TVG can be found in (Sandryhaila and Moura 2014).
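Assuming the reconstruction of Equation (5) above, frequency ordering could be computed as in the following sketch, again on an illustrative adjacency matrix.

```python
import numpy as np

# Order graph frequencies from low to high total variation:
# TV_G(v_n) = |1 - lambda_n / |lambda_max|| * ||v_n||_1, with eigenvectors
# L1-normalized so that ||v_n||_1 = 1.
A = np.array([[0.0, 0.5, 0.2],
              [0.5, 0.0, 0.4],
              [0.2, 0.4, 0.0]])
eigvals, Q = np.linalg.eig(A)
Q = Q / np.abs(Q).sum(axis=0)                     # L1-normalize each column
tv = np.abs(1.0 - eigvals / np.max(np.abs(eigvals))) * np.abs(Q).sum(axis=0)

order = np.argsort(tv)                            # ascending: low frequencies first
print(eigvals[order], tv[order])
```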
Transforming the graph signal into the graph spectral domain and ordering frequencies
allows us to easily assess the spatial variability of electricity consumption and
occupant efficiency across the graph. A signal with the majority of its power in the
lower frequencies will suggest that the signal varies slowly across the graph (i.e., nodes
in close proximity have similar values). We utilize the peak-to-side lobe ratio (PSLR),
a common metric in DSP analysis, as a means to quantify and compare signals in the
spectral domain to each other. Larger values of PSLR represent less spatial variation
of the signal, as most of the signal's power can be found in the lower frequencies. The
peak-to-side lobe ratio is defined in (6) and given in decibels (dB) in accordance with
conventional DSP analysis:

$$\text{PSLR} = 20 \log \left( \frac{P_{\text{main}}}{P_{\text{side}}} \right) \quad (6)$$

where $P_{\text{main}}$ is the peak magnitude of the signal in the main lobe (i.e., the lobe containing
the maximum power) and $P_{\text{side}}$ is the peak magnitude of the signal in the side lobes
(i.e., lobes that are not the main lobe).
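A simplified numerical sketch of Equation (6) follows; the ordered spectrum values are illustrative, and the peak bin is treated as the main lobe with every other bin counted as a side lobe, which glosses over formal lobe detection.

```python
import numpy as np

# Peak-to-side lobe ratio in dB for an ordered graph spectrum (illustrative
# magnitudes). Larger PSLR -> more power concentrated in the low frequencies,
# i.e., less spatial variation of the signal.
spectrum = np.array([2.10, 0.90, 0.35, 0.20, 0.15])
p_main = spectrum.max()
p_side = np.delete(spectrum, spectrum.argmax()).max()
pslr_db = 20.0 * np.log10(p_main / p_side)
print(f"PSLR = {pslr_db:.1f} dB")
```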

DATA COLLECTION
We collected energy-use data for 27 occupants in a commercial building in Denver,
Colorado using BizWatts (Gulbinas et al. 2014a), a socio-technical energy feedback
system, from April 17th to July 9th, 2013 in order to test and demonstrate the merits of
our approach. Individual plug-load monitors were installed at each individual’s work-
station and recorded time, location, real power (W), current (A), voltage (V), and power
factor data every 5 minutes. The interval data was uploaded to a central server every 15


minutes. Because occupants in the building were not completely stationary, this data
provides a typical location or "snapshot" of occupant energy usage in a space. However,
it should be noted that the framework proposed in this paper is flexible and would
allow for more dynamic, non-stationary data to be analyzed. The building occupants
were distributed over two floors (i.e., fourth and fifth floors) and were full-time
employees who typically occupied the building during regular working hours, from
9:00 a.m. to 5:00 p.m., Monday to Friday. The fourth floor consisted of occupants from multiple
organizations, while the fifth floor consisted of occupants from a single organization.
Typically connected appliances included computers, monitors, space heaters, and elec-
tronics chargers. Additional details regarding the sensor network architecture and data
collection software can be found in (Gulbinas et al. 2014a).
After the data collection period was finished, a processing algorithm was applied
to quantify the energy efficiency of each building occupant. We define an individ-
ual building occupant’s energy efficiency, EE, as the percentage of time spent in low
energy-use states over a specified period and time-range. EE values range from 0 to 1,
with 1 representing the most energy efficient occupants. We compute an EE value for
each occupant across the entire study period for two non-work hour ranges: morning
(i.e., pre-work hours) and evening (i.e., post-work hours). We formally define EE with
the following equation:
$$EE = \frac{\sum_{l=1}^{r} \left[ h_l \times p(LC_l) \times WD \right] + 24 \times p(LC_n) \times NWD}{\sum_{l=1}^{r} \left[ h_l \times WD \right] + 24 \times NWD} \quad (7)$$

where $r$ is the number of workday non-work hour ranges, $h_l$ is the number of hours
for each non-work hour range $l$, and $p(LC_l)$ is the probability that the workday non-work
hour range is assigned to a low energy cluster. Low energy clusters are clusters
that possess a center with a mean energy value below a defined threshold. In this
analysis, the threshold was set to 7 Wh per hour to reflect estimates of average power consumed
by connected appliances in the off state. $WD$ and $NWD$ are the number of workdays
and non-workdays, and $p(LC_n)$ is the probability that a non-workday is assigned to low
energy clusters.
A more detailed explanation of the energy efficiency metric, the algorithms used to
derive EE values and illustrative examples can be found in (Gulbinas et al. 2014b).
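For illustration only, Equation (7) can be evaluated as in the sketch below; the hour ranges, cluster probabilities, and day counts are assumed values chosen to show how the terms combine, not data from the study.

```python
# Evaluate the energy-efficiency metric EE (Eq. 7) for one hypothetical occupant.
h = [3, 5]            # hours in each workday non-work range l (morning, evening)
p_low = [0.9, 0.6]    # p(LC_l): probability that range l falls in a low-energy cluster
p_low_nwd = 0.8       # p(LC_n): probability a non-workday falls in a low-energy cluster
WD, NWD = 55, 29      # number of workdays and non-workdays in the study period

num = sum(h_l * p_l * WD for h_l, p_l in zip(h, p_low)) + 24 * p_low_nwd * NWD
den = sum(h_l * WD for h_l in h) + 24 * NWD
EE = num / den        # bounded by 0 and 1; higher means a more efficient occupant
print(round(EE, 3))
```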

RESULTS AND CONTRIBUTIONS


Initial results of applying the DSPG framework to the energy efficiency values in
the morning and evening hours for a floor with multiple organizations and a floor with
a single organization are plotted in Figure 3 and Figure 4, respectively. The horizontal
axis in Figures 3 and 4 refer to the frequencies ordered from low to high based on T VG
values. The number of frequencies for a graph corresponds to the number of nodes in
the graph (i.e., the multiple organization (fourth) floor graph has 15 nodes and frequen-
cies ordered from zero to 14). In Figure 3, it can be seen that in the morning hours (i.e.,
pre-work hours), the majority of the signal for both types of floors is concentrated in the
low frequencies that have small variations. This is also the case in Figure 4 where for
the evening hours (i.e., post-work hours) the signal is concentrated in the low frequen-
cies. Moreover, the PSLR values are also consistent for both time periods with the


multiple organization floor having values approximately 10 dB lower than the single
organizational floor in both cases.

Figure 3. Graph Spectral Plots of Efficiency During Morning (Pre-Work) Hours

Figure 4. Graph Spectral Plots of Efficiency During Evening (Post-Work) Hours

The above observations suggest that for both floors and time periods the signal
varies slowly across the graph, with occupants located close to each other having similar
energy efficiency values. In other words, occupants in close proximity to each other are
observed to have consistent energy consumption behavior. Comparing the two floors,
the single organization can be seen to have a significantly higher PSLR value than the
multiple organizations during both time periods. This suggests that energy efficiency
(EE) values vary spatially more for occupants from multiple organizations than those
in a single organization. This observed result is also consistent with the notion that


occupants within a single organization share similar behaviors and culture (Hofstede
1980) and could be exploited to successfully diffuse energy efficient practices to other
occupants within a commercial building. Because energy efficiency values are based
on the percentage of time spent in low energy-level clusters, the observed spatial similarity
within an organization could also indicate that energy-efficient behavior (i.e., turning
off one’s computer before leaving work) is more consistent within a single organization
rather than across multiple organizations. Characterizing both organizations and occu-
pants through the proposed spatial analysis could prove to be valuable in the formation
of proactive commercial building and organizational energy management strategies.
Overall, this initial work represents a key first step towards introducing methods
to spatially analyze electricity consumption data in commercial buildings. This paper
contributes a framework to analyze electricity consumption and other building occupant
data spatially. While the data set analyzed in this study was small, the framework
proposed is extensible and highly scalable to datasets of large commercial buildings
with several thousand occupants. Furthermore, we demonstrate the framework’s initial
applicability on real electricity consumption data collected from a test-bed building.
Our work contributes to the growing body of literature on analytics of commercial
building data (Schoofs et al. 2011; Majumdar et al. 2012; Ortiz et al. 2012; Jazizadeh
et al. 2013) by extending such work to incorporate spatial analytics.

FUTURE WORK AND CONCLUSIONS


Several pathways exist to expand this initial analysis by applying the DSPG frame-
work to additional energy efficiency, energy entropy and occupancy metrics. Analyzing
additional metrics would provide insight into how consistent and predictable energy
consumption (i.e., entropy) is spatially and how a building is being utilized (i.e., occu-
pancy) spatially. Further analysis can also be undertaken to understand how such spatial
patterns differ across organizational types at different scales (i.e., does a floor with a
single organization have less variation in occupancy hours than a floor with multiple
organizations?). Peer or social influence has been observed to impact energy consump-
tion behavior in residential settings (Jain et al. 2013) and therefore understanding such
organizational dynamics could have a significant impact on how energy efficient prac-
tices diffuse among commercial building occupants. In the end, we aim to develop tools
that can analyze commercial building data and easily assess spatial variance, under-
stand organizational dynamics of energy usage and classify occupant energy behavior.
Such tools will in turn allow for more informed energy management strategies that will
be able to successfully reconcile the discordant spatial scales between a commercial
building’s occupants, organizations, and building systems.

ACKNOWLEDGMENTS
Researchers were partially or wholly supported by the Department of Energy Build-
ing Technologies Program, National Science Foundation under Grants No.1142379,
No.1461549 and the Air Force Office of Scientific Research under Grant No. FA95501210087.
We also thank the Center for Urban Science + Progress, New York University for its
support for Jain and Moura during the development of this paper.

REFERENCES


Azar, E. and Menassa, C. C. (2014). “A data collection and analysis framework to


improve the performance of energy-intensive commercial buildings.” Computing in
Civil and Building Engineering (2014), ASCE, 1142–1149.
Gulbinas, R., Jain, R., and Taylor, J. (2014a). “Bizwatts: A modular socio-technical
energy management system for empowering commercial building occupants to con-
serve energy.” Applied Energy.
Gulbinas, R., Khosrowpour, A., and Taylor, J. E. (2014b). “Segmentation and classifi-
cation of commercial building occupants by energy-use efficiency and predictability.”
IEEE Transactions on Smart Grid accepted for publication.
Gulbinas, R. and Taylor, J. E. (2014). “Effects of real-time eco-feedback and organi-
zational network dynamics on energy efficient behavior in commercial buildings.”
Energy and Buildings, 84, 493–500.
Hofstede, G. (1980). “Culture and organizations.” International Studies of Management
& Organization, 15–41.
Jain, R. K., Gulbinas, R., Taylor, J. E., and Culligan, P. J. (2013). “Can social influence
drive energy savings? detecting the impact of social influence on the energy con-
sumption behavior of networked users exposed to normative eco-feedback.” Energy
and Buildings, 66, 119–127.
Jazizadeh, F., Ghahramani, A., Becerik-Gerber, B., Kichkaylo, T., and Orosz, M.
(2013). “Human-building interaction framework for personalized thermal comfort-
driven systems in office buildings.” Journal of Computing in Civil Engineering,
28(1), 2–16.
Majumdar, A., Albonesi, D. H., and Bose, P. (2012). “Energy-aware meeting schedul-
ing algorithms for smart buildings.” Proceedings of the Fourth ACM Workshop on
Embedded Sensing Systems for Energy-Efficiency in Buildings, BuildSys ’12, New
York, NY, USA, ACM, 161–168, <http://doi.acm.org/10.1145/2422531.2422560>.
Ortiz, J., Noh, Y., Saldanha, G., Su, D., and Culler, D. (2012). “Towards real-
time, fine-grained energy analytics in buildings through mobile phones.” Proceed-
ings of the Fourth ACM Workshop on Embedded Sensing Systems for Energy-
Efficiency in Buildings, BuildSys ’12, New York, NY, USA, ACM, 42–44,
<http://doi.acm.org/10.1145/2422531.2422540>.
Sandryhaila, A. and Moura, J. M. F. (2013). “Discrete signal processing on graphs.”
IEEE Transactions on Signal Processing, 61(7), 1644–1656.
Sandryhaila, A. and Moura, J. M. F. (2014). “Discrete signal processing on graphs:
Frequency analysis.” IEEE Transactions on Signal Processing, 62(12), 3042–3054.
Schoofs, A., Delaney, D. T., O’Hare, G. M. P., and Ruzzelli, A. G. (2011). “Copolan:
Non-invasive occupancy profiling for preliminary assessment of hvac fixed timing
strategies.” Proceedings of the Third ACM Workshop on Embedded Sensing Systems
for Energy-Efficiency in Buildings, BuildSys ’11, New York, NY, USA, ACM, 25–
30, <http://doi.acm.org/10.1145/2434020.2434027>.
U.S. Department of Energy (2010). "Buildings Energy Data Book." <http://buildingsdatabook.eren.doe.gov/>.
Woo, J.-H. and Gleason, B. (2014). “Building energy benchmarking with building in-
formation modeling and wireless sensor technologies for building retrofits.” Comput-
ing in Civil and Building Engineering (2014), ASCE, 1150–1157.


Pavement Surface Permanent Deformation Detection and Assessment Based on Digital Aerial Triangulation
Su Zhang1; Susan M. Bogus2; and Christopher D. Lippitt3
1 Ph.D. Candidate, Department of Civil Engineering, University of New Mexico, Albuquerque, NM 87131-0001. E-mail: [email protected]
2 Associate Professor, Department of Civil Engineering, University of New Mexico, Albuquerque, NM 87131-0001. E-mail: [email protected]
3 Assistant Professor, Department of Geography and Environmental Studies, University of New Mexico, Albuquerque, NM 87131-0001. E-mail: [email protected]

Abstract

Pavement surface distress information is essential to pavement management.


Assessment of surface permanent deformation is particularly important given
pavement’s ubiquitous use and that permanent deformation significantly influences
its integrity and performance. However, current assessment methods are expensive,
labor-intensive, time-consuming, and potentially dangerous to inspectors. Digital
aerial triangulation (DAT) and unmanned airborne systems (UAS) provide a new
method for detecting and assessing permanent deformation that holds potential to
overcome these limitations. Using hyperspatial resolution multispectral digital aerial
photography (HRM-DAP) acquired from an UAS as input, DAT was employed to
generate digital surface models (DSMs) of the pavement surface. Permanent
deformation distress estimates extracted from these DSMs were compared to ground
reference data collected using standard protocols via orthogonal regression analysis.
The results show that the aerial triangulation of HRM-DAP provides consistent and
reliable data for permanent deformation detection and assessment.

INTRODUCTION

Pavement surface permanent deformation is an unrecoverable deformation


visible as a depressed channel in the wheel path of the roadway (McGennis et al.
1994). It is a progressive movement of the materials caused by static or cyclic loads
in the top pavement layer or the underlying layers (Paterson 1987). Permanent
deformation is one of the main distresses for flexible pavements (Vaitkus et al. 2014).
The ability to detect and assess permanent deformation conditions is critical to
making decisions within any pavement management system. The collected condition
data are used by transportation agencies to make maintenance and repair decisions.
Currently, most transportation agencies use either manual evaluation or
automated evaluation for permanent deformation assessment. For both methods, data
are collected on the ground, which is expensive, labor-intensive, time-consuming, and
potentially dangerous to inspectors (Qiu 2013). These limitations have prevented
pavement researchers from establishing sound scientific principles in materials
modeling and performance prediction of pavement systems (Qiu 2013). With


advances in remote sensing, such as digital aerial triangulation (DAT) and unmanned
airborne systems (UAS), there is the potential for a new method to be used for
permanent deformation evaluation. Using hyperspatial resolution (less than 1-
centimeter or 0.5-inch) multispectral digital aerial photography (HRM-DAP) acquired
from an UAS as input, DAT was employed to generate hyperspatial resolution digital
surface models (DSMs) for pavement surfaces. Specifically, it is the intent of this
study to examine if pavement permanent deformation can be detected through
analysis of the hyperspatial resolution DSMs generated from DAT, and if so, how
well those estimates correlate to those from existing evaluation protocols.

BACKGROUND

Pavements can be categorized into flexible pavements (i.e. asphalt concrete)


and rigid pavements (i.e. Portland cement concrete). Pavement surface condition
degrades over time, and different types of pavements exhibit different types of
distresses. Pavement surface condition information is essential to its maintenance and,
therefore, transportation agencies at all levels dedicate a large amount of time and
money to routine evaluation as part of their management systems (Haas et al. 1994).
Flexible pavement (i.e., asphalt concrete) surface permanent deformation has
long been of interest to pavement engineers and transportation agencies because of
pavement's ubiquitous use and because permanent deformation significantly influences
pavement integrity, pavement performance, and traffic conditions (Qiu 2013).
the useful service life of the pavement, and, by affecting vehicle handling
characteristics, creates serious hazards for roadway users (Wang 2007). In the context
of pavement engineering, “permanent deformation”, “rutting”, and “transverse
deformation” are interchangeable terms, all of which refer to pavement depression
phenomenon (Qiu 2013). The Highway Performance Management System (HPMS)
Field Manual uses the term rutting to describe this type of deformation. Therefore, the
term rutting is used in this study. The most common rutting is at intersections, bus
stops, and in heavy vehicle loaded roads where vehicle acceleration, deceleration,
slow movement, or static loading are common (Vaitkus et al. 2014).
The earliest rutting assessment was completed in the late 1950s and early
1960s for the American Association of State Highway Officials (AASHO) Road Test
(Qiu 2013). Since then, rutting depth was typically measured with “boots on the
ground” by having experts visually inspect the condition with a level and a measuring
tape (Figure 1). Many transportation agencies are still using this method (Qiu 2013).
However, considering the nature of manual survey, this method is expensive, labor-
intensive, time-consuming, and potentially dangerous to the inspectors. Despite safety
precautions (safety training and high-visibility garments), this can still be dangerous
work, especially in high traffic volume sections. In addition, data collected by
different inspectors can exhibit a high degree of variability (Bogus et al. 2010).
More recently, many sophisticated and automated devices integrated with
high-performance sensors (e.g. 3D laser scanners) have gradually been developed for
rutting assessment. These devices are typically mounted to vehicles to perform the
evaluation (Figure 2). However, automated evaluation is more difficult and expensive to
deploy than manual evaluation, requiring specialized staff. In addition, the automated evaluation


system must be driven over each pavement segment to be assessed, which results in a
time-consuming process because a single data image can only cover a small area,
usually less than five square meters (McGhee 2004). It is also potentially
dangerous to inspectors because these vehicles need to be operated on roadways. It
should be noted that despite the improved efficiency and effectiveness of automated
rutting data collection, most of the rutting data collected from the field remain
incomplete, inaccurate, and inconsistent (Qiu 2013).

Figure 1. Manual rutting measure (Source: NMDOT). Figure 2. Automatic rutting measure (Source: International Cybernetics).

In order to overcome these limitations, this study proposes a novel method of


assessing rutting depth based on DSMs created by DAT from HRM-DAP acquired by
UAS. Aerial triangulation (AT) is the basic method for analyzing aerial images in
order to calculate the 3-dimensional coordinates of objects (Yuan et al. 2009). Along
with the transition from analog to digital aerial imagery, AT also shifts from analog
AT to DAT, which is a prerequisite for photogrammetric products including DSMs
and orthophotos. DAT, also known as structure from motion (SFM) in the computer
vision field, is a photogrammetric process of determining X, Y, Z ground coordinates
of individual points based on measurements taken from a series of overlapping digital
aerial photographs (Zomrawi et al. 2013).
In recent years, UAS have emerged as an important platform for collecting
hyperspatial resolution aerial data at sub-centimeter scales – a trend that is all but certain to
continue. For now, due to a wide variety of regulatory and safety concerns, the legal
use of UAS is severely restricted in the United States. In anticipation of the establishment
of a regulatory environment and the availability of UAS for pavement surface data
collection over broad areas in the near future, this research used a tethered helium
balloon system to simulate the UAS hyperspatial resolution aerial data collection
system. HRM-DAP acquired by UAS provides the near-term potential of collecting
pavement surface data from the air, which enables extensive coverage at high detail.
DAT, on the other hand, provides the ability to accurately estimate surface height
computationally and therefore eliminate the need for field survey of rutting.
DAT traditionally requires the identification of thousands of control points
linking images to one another and to a reference dataset to enable least squares
estimation of the optimal triangulation model. Novel computer vision techniques and
graphical processing unit (GPU) based processing have enabled the automation of
DAT and the expansion of the number of triangulated XYZ locations to millions up to
hundreds of millions, ultimately enabling the estimation of feature height (i.e. Z) at
approximately the spatial resolution of the input images. When coupled with HRM-
DAP acquired by UAS, this technology holds the potential to permit the estimation of
rutting depth at sub-centimeter scales. As an added benefit, the GPS acquisition


locations of the input images enable far more precise geo-registration than
traditionally supported through low-cost code based GPS systems and, therefore,
minimizes the geo-registration processing time.
DAT has been used to facilitate research in many fields, such as forestry, land
cover change, coastal management, and agriculture (Okuda et al. 2003; Turner et al.
2012; Kim et al. 2014; Grenzdörffer 2014). However, it has not been applied to
pavement surface 3D reconstruction to permit the assessment of rutting depth. We
therefore explore the application of DAT to HRM-DAP to supplement or replace
current rutting depth measurement methods. With UAS, it is possible to acquire sub-
centimeter scale aerial photography. With DAT, it is possible to generate sub-
centimeter scale DSMs for standardized evaluation of rutting distresses, potentially
reducing cost and duration of surveys while improving the comparability of results.

METHODOLOGY

Using HRM-DAP acquired from a low-cost unmanned remote sensing system


(approximately $500) as input, DAT was employed to generate DSMs for the
pavement surfaces. Rutting depths measured from the DSMs were compared to
ground reference rutting depths measured manually using standard protocols.

Data Acquisition and Preparation

A small format Canon digital camera (SX260 HS) affixed to a modified


tethered weather balloon was used to collect all imagery. The camera has a high
density detector array (12 megapixels), a rugged case with protected natural color
lens, built-in GPS unit, and capability of intervalometer controlled acquisition (up to
one image per two seconds). It also can be easily mounted to the balloon rigging.
Data were collected for five sites (i.e. road sections) on New Mexico Highway
0333 near Albuquerque. The ground size for each site is 20-meters by 15-meters.
Approximately 200 overlapping aerial photos were acquired for each site at about 40
meters above ground level (AGL). A Real Time Kinematic (RTK) surveying system
was used to collect the coordinates of the ground control points (GCPs) on the
pavement surface with a reported accuracy of 0.004 meter horizontally and 0.006
meter vertically. Sixteen GCPs were acquired for each site, and detailed photos of
each GCP were collected simultaneously. Ten GCPs were used for the bundle
adjustment computation, while the other six were used to assess the horizontal and
vertical accuracy of the DSMs. A two-person crew went to the preselected data
collection sites to perform manual evaluation based on the standard protocol adopted
by the HPMS Field Manual. Accordingly, rutting depth was measured for only the
rightmost driving lane for both inner and outer wheel paths at three locations along
the wheel path within each site and then the depth averaged for each wheel path.

Digital Aerial Triangulation (DAT)

Approximately 150 overlapping aerial images per site were used for DAT
after blurry and oblique images were excluded. Overlapping images were combined


into a single image mosaic and used to estimate terrain height through DAT. The
commercial software Agisoft performs DAT with minimal human intervention and at
a significantly low cost. For each site, millions of control points were identified to
build a dense cloud, and then a triangulated irregular network (TIN) mesh was
generated based on the identified control points. Once these processes were
completed, the DSMs and orthophotos were exported as raster datasets in tiff format.
DSMs and orthophotos were generated at a spatial resolution of 3 millimeters. When
compared to 6 independent GCPs collected by RTK, the overall horizontal accuracy
(root-mean-square-error [RMSE]) for all data collection sites is 0.004 meters while
the vertical accuracy is 0.006 meters. The number of image frames used and accuracy
information for each data collection site is reported in Table 1.

Table 1. Image Frames and DSM Accuracy of Data Collection Sites

Site Identification Number (ID)   Image Frames   Horizontal Accuracy (in meters)   Vertical Accuracy (in meters)
Site 1 122 0.002 0.006
Site 2 135 0.005 0.004
Site 3 183 0.005 0.003
Site 4 177 0.004 0.009
Site 5 181 0.003 0.007
Total 798 0.004 0.006

Rutting Depth Measurement from DSMs

DSMs were used to reconstruct the 3D pavement surfaces and only points
matching the manual evaluation locations on the DSMs were used for comparison.
When measured manually, rutting depth was measured with a wooden bar and a
measuring tape. The minimum scale of the tape is 0.001 meter. The length and width
of the wooden bar are 1.2192 meters and 0.02 meter, respectively. Figure 3 shows the DSMs with
the actual measure points and wooden bars overlaid. Using the measure point as the
center, two polygons with a size of 0.6096 meters by 0.02 meters were created to
simulate the position of the wooden bar. The height information within the polygons
was extracted to find the highest points in each of the two polygons to be used later
for rutting depth calculation. Figure 4 illustrates the calculation process.
As shown in Figure 4, we consider Points A and B to be the two highest points of the
rutting section and the distance from Point C to Point D to be the rutting depth. Points A,
B, and C will have the same height if the heights of Points A and B are equal.
However, under most circumstances the heights of A and B are different. Therefore, a
weighted average method was used to estimate the height of Point C:

$H_C = (H_A \times D_A + H_B \times D_B) / (D_A + D_B)$   (Equation 1)

$RD = H_C - H_D$   (Equation 2)

where $H_C$ represents the height of Point C, $H_A$ represents the height of Point A, and
$H_B$ represents the height of Point B. $D_A$ represents the horizontal distance from
Point A to Point D, while $D_B$ represents the horizontal distance from Point B to Point
D. $RD$ represents the rutting depth. $H_A$ and $H_B$ can be determined based on the DSMs,
while $D_A$ and $D_B$ can be determined based on the orthophotos.
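The short sketch below applies Equations 1 and 2 exactly as written to illustrative DSM heights and orthophoto distances; it is not the authors' processing code.

```python
# Rutting depth from DSM heights: Equation 1 estimates the reference height H_C
# as a weighted average of the two highest flanking points A and B; Equation 2
# subtracts the height of the lowest point D. Inputs are illustrative meters.
def rutting_depth(h_a, h_b, h_d, d_a, d_b):
    h_c = (h_a * d_a + h_b * d_b) / (d_a + d_b)   # Equation 1
    return h_c - h_d                              # Equation 2

print(rutting_depth(h_a=1.512, h_b=1.508, h_d=1.492, d_a=0.30, d_b=0.28))
```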


Figure 3. Actual Measure Points. Figure 4. Rutting Depth Calculation.

Rutting Depth Comparison

Rutting depth measured from the DSMs was compared to manual results to
examine the feasibility of using the DAT-based method to detect and assess the
rutting depth. Linear regression revealed that depths measured by these two methods
fit closely to the regression line, but a paired t-test was not performed because these
data clearly violate the assumption that there is no linearity between the two groups of
sample values (Carroll and Ruppert 1996). Rutting depths measured from these two
methods were therefore compared with orthogonal regression analysis since it does
not assume independence between variables. Orthogonal regression examines the
linear relationship between two continuous variables and is often used to test whether
two instruments or methods are measuring the same thing (Staiger and Stock 1997).
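As an illustration of how such a fit can be obtained, the sketch below computes an orthogonal (total least squares) regression line via a singular value decomposition, assuming equal error variance in both variables; the five data pairs are hypothetical, not the study's dataset.

```python
import numpy as np

# Orthogonal (total least squares) regression of manual depths on DAT depths:
# the first principal direction of the centered (x, y) scatter gives the slope.
x = np.array([0.025, 0.022, 0.016, 0.019, 0.013])   # DAT-based depths (m)
y = np.array([0.022, 0.020, 0.015, 0.020, 0.015])   # manual depths (m)

X = np.column_stack([x - x.mean(), y - y.mean()])
_, _, vt = np.linalg.svd(X, full_matrices=False)
direction = vt[0]                                    # principal direction
slope = direction[1] / direction[0]
intercept = y.mean() - slope * x.mean()
print(f"slope = {slope:.3f}, intercept = {intercept:.5f}")
```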

RESULTS AND DISCUSSION

The manually-evaluated and DAT-evaluated rutting depths are summarized in


Table 2. It should be noted that the results are organized by inner and outer wheel
paths for each data collection site. Table 3 shows the linear and orthogonal regression
results. Linear regression results revealed that the depth measured by these two
methods fit closely to the regression line (R2 > 0.9).
Interpretation of the regression results should be focused on the confidence
interval (CI). Zero is contained in the CI for the intercept (-0.000520, 0.00277) and
one is contained in the CI for the slope (0.819150, 1.03708) (minitab.com). Therefore,
no evidence exists to show that the measurements for the two methods are statistically
different from one another, and ultimately, the results show that DAT-based rutting
depth measurement works as effectively as the manual evaluation does. Given the
horizontal and vertical accuracy of the DSMs, the discrepancy between the manual
depth measurement and the DAT-based depth measurement could be from errors in
either method. However, rutting depth measured by different inspectors can exhibit a
high degree of variability (Bogus et al. 2010) because this method relies on subjective
visual observation. These results can be interpreted to indicate that the DAT-based
rutting measurement method is more accurate than the manual method.


As an added benefit, the proposed DAT-based rutting measurement method
does not require manually intensive measuring procedures. Operationally, the rutting
depth could be measured by subtracting the heights of two points on the DSMs – a
point inside the rutting distress zone and a point outside it. It is, therefore, possible to
rapidly measure the height of any point and report the depth.

Table 2. Manual Evaluated and DAT Evaluated Rutting Depths


Inner Wheel Path (in meters) Outer Wheel Path (in meters)
Site Manual DAT Manual DAT
ID Measure Error Measure Error
Depth Depth Depth Depth
Point (1-2) Point (3-4)
(1) (2) (3) (4)
1 0.022 0.025 -0.003 1 0.007 0.006 0.001
Site 1 2 0.020 0.022 -0.002 2 0.005 0.003 0.002
3 0.020 0.024 -0.004 3 0.005 0.007 -0.002
1 0.017 0.018 -0.001 1 0.010 0.005 0.005
Site 2 2 0.020 0.019 0.001 2 0.010 0.009 0.001
3 0.025 0.023 0.002 3 0.008 0.005 0.003
1 0.015 0.016 -0.001 1 0.010 0.009 0.001
Site 3 2 0.014 0.015 -0.001 2 0.005 0.007 -0.002
3 0.016 0.017 -0.001 3 0.005 0.006 -0.001
1 0.015 0.016 -0.001 1 0.011 0.012 -0.001
Site 4 2 0.020 0.019 0.001 2 0.005 0.004 0.001
3 0.015 0.016 -0.001 3 0.011 0.010 0.001
1 0.020 0.019 0.001 1 0.015 0.013 0.002
Site 5 2 0.020 0.019 0.001 2 0.011 0.013 -0.002
3 0.020 0.018 0.002 3 0.020 0.018 0.002

Table 3. Orthogonal and Linear Regression Results


Orthogonal Regression
Variables Coef SE of Coef Z Value P Value 95% CI
Intercept 0.00112 0.0008383 1.3396 0.180 (-0.000520, 0.00277)
DAT Depth 0.92812 0.0555968 16.6937 0.000 (0.819150, 1.03708)
Linear Regression
Variables Coef SE of Coef t P Value R2
Intercept 0.8880447 0.0008383 2.08 0.000
0.9088
DAT Depth 0.0016746 0.0555968 16.70 0.047
Note: for both orthogonal regression and linear regression, the dependent variable is the ground reference rutting
depth measured by manual method; Coef indicates coefficient; SE indicates Standard Error; CI indicates
confidence interval; and DAT Depth indicates the independent variable of rutting depth measured by DAT method.

CONCLUSIONS

Current pavement surface rutting distress assessment methods are expensive,


labor-intensive, time-consuming, and potentially dangerous to inspectors (Qiu 2013).
In order to overcome these limitations, we present a novel approach for rutting
distress detection and assessment by applying DAT to HRM-DAP acquired from a low-cost
unmanned remote sensing system. Our results have revealed that rutting depths
measured by the manual method and the DAT-based method are not statistically
different from each other, and the DAT-based method is likely more accurate than the
manual method. The proposed DAT-based method could be used to directly measure
rutting depth in situations where field inspectors cannot evaluate except with considerable
labor (e.g., sections in remote areas). It is likely this proposed method will completely replace
field pavement surface inspection in the future due to its high accuracy and low cost.

REFERENCES

Bogus, S. M., Song, J., Waggerman, R., and Lenke, L. (2010). “Rank correlation
method for evaluating manual pavement distress data variability.” Infrastructure
Systems, 16(1), 66 – 72.
Carroll R. J., Ruppert, D. (1996). “The use and misuse of orthogonal regression in
linear error-in-variable models.” The American Statisticians, 50(1), 1 – 6.
Grenzdörffer, G. J. (2014). “Crop height determination with UAS point clouds.” The
International Archives of the Photogrammetry, Remote Sensing and Spatial
Information Science, Volume XL-1, ISPRS Technical Commission I Symposium,
Denver, Colorado.
Haas, R., Hudson, W.R., and Zaniewski, J. (1994). Modern pavement management,
Krieger, Malamar, Fla.
Kim, B. O., Yun, K., and Lee, C. (2014). “The use of elevation adjusted ground
control points for aerial triangulation in coastal areas.” KSCE Journal of Civil
Engineering, 18(6), 1825 – 1830.
McGennis, R.B., Anderson, R.M., Kennedy, T.W., Solamanian, M. (1994).
Background of superpave asphalt mixture design and analysis, Report No.
FHWA-SA-95-003, Federal Highway Administration, Washington, D.C.
Okuda, T., Suzuki, M., Adachi, N., Quah, E.S., Hussein, N. A., Manokaran, N. (2003).
“Effect of selective logging on canopy and stand structure and tree species
composition in a lowland dipterocarp forest in peninsular Malaysia.”
Forest Ecology and Management, 175, 297 – 320.
Paterson, W.D.O. (1987). Road deterioration and maintenance effects: models for
planning and management, The Johns Hopkins University Press, Baltimore, MD.
Qiu, S. (2013). Measurement of pavement permanent deformation based on 1mm 3D
pavement surface model (Doctoral dissertation). Oklahoma State University.
Staiger, D., and Stock, J. H. (1997). “Instrumental variables regression with weak
instruments.” Econometrica, 65(3), 557 – 586.
Turner, D., Lucieer, A., and Watson, C. (2012). “An automated technique for
generating georectified mosaics from ultra-high resolution unmanned aerial
vehicle (UAV) imagery, based on structure from motion (SfM) point clouds.”
Journal of Remote Sensing, 4, 1392 – 1410.
Vaitkus, A., Čygas, D., and Kleizienė, R. (2014). “Research of asphalt pavement
rutting in Vilnius city streets.” Proceedings of The 9th International Conference
“Environmental Engineering”, Vilnius, Lithuania.
Wang, Z. Z. (1990). Principle of photogrammetry (with remote sensing). Publishing
House of Surveying and Mapping, Beijing, China
Wang, Y. (2007). Digital simulation test of asphalt mixtures using finite element
method and X-ray tomography images (Doctoral dissertation). Virginia Tech.
Yuan, X., Fu, J., Sun, H., and Toth, C. (2009). “The application of GPS precise point
positioning technology in aerial triangulation.” ISPRS Journal of
Photogrammetry and Remote Sensing, 64, 541 – 550.
Zomrawi, N., Hussien, M. A., and Mohamed, H. (2013). “Accuracy evaluation of
digital aerial triangulation.” Intl J. of Eng. and Tech., 2(10), 7 – 11.


A Case Study of Construction Equipment Recognition from Time-Lapse Site Videos under Low Ambient Illuminations

Xiaoning Ren1; Zhenhua Zhu2; Chantale Germain3; Bryan Dean4; and Zhi Chen5
1 Department of Building, Civil, and Environmental Engineering, Concordia University, Montreal, QC, Canada H3G 1M8. E-mail: [email protected]
2 Department of Building, Civil, and Environmental Engineering, Concordia University, Montreal, QC, Canada H3G 1M8. E-mail: [email protected]
3 Direction Ingénierie de Production Hydro-Québec, 855 Ste-Catherine est, Montréal, QC, Canada H2L 4P5. E-mail: [email protected]
4 Dir. Administration et Contrôle, Hydro-Québec Équipement, 855 Ste-Catherine est, 8e étage, Montréal, QC, Canada H2L 4P5. E-mail: [email protected]
5 Department of Building, Civil, and Environmental Engineering, Concordia University, Montreal, Canada H3G 1M8. E-mail: [email protected]

Abstract
High definition construction cameras have been increasingly placed on
construction sites to record jobsite activities into time-lapse videos. These videos
have been used in several research studies to promote construction automation related
to productivity analysis, site safety monitoring, etc. However, most videos tested in
these studies were collected at daytime. It is not clear whether the time-lapse site
videos collected at night are still useful, considering construction work might be
performed both day and night for various reasons. The main objective of this paper is
to investigate the effectiveness of recording jobsite activities with the time-lapse
videos collected by construction cameras at night or other low ambient illuminant
conditions through a case study. The construction site of the Romaine Complex
project in Quebec is selected as the test bed. The cameras have been placed on the site
to record the jobsite activities from day to night. All the collected site videos are
classified based on their ambient illuminant conditions. Then, the videos under low
ambient illuminations were tested with the object recognition techniques that have
been commonly used in existing construction research studies. The recognition results
indicated that the videos collected at night or other low ambient illuminant conditions
are still useful and could be considered as an important source for site data sensing
and analysis.

Keywords: Case study; Construction equipment recognition; Low ambient illuminations; Imaging techniques


INTRODUCTION
In recent years, object recognition using high resolution construction cameras has been increasingly recognized as an effective and efficient way to retrieve construction site information. The cameras can record jobsite activities. This activity information, once retrieved, is useful for monitoring construction performance (Leung et al. 2008), analyzing construction productivity (Gong and Caldas 2010), and maintaining work zone safety (Yang et al. 2010).
However, to the authors' knowledge, most site videos tested for the retrieval of construction site information in existing studies were collected during the daytime with good luminance. In real projects, construction work might be performed both day and night in order to catch up with the project schedule and shorten the construction duration. Therefore, it is not clear whether the time-lapse site videos collected at night or under other low illuminant conditions are still useful for project information retrieval, such as the recognition of project-related entities (e.g. construction equipment) on the construction site.
The objective of this paper is to investigate the effectiveness of construction site videos collected under low ambient illuminant conditions for construction site information retrieval. In order to achieve this objective, a high resolution construction camera was mounted to record the jobsite activities of the Romaine Complex project in Quebec. The Romaine Complex project is going to build a 1,550-MW hydroelectric complex comprising four hydropower generating stations on the Rivière Romaine. The collected videos are first classified according to their illuminant conditions. The videos with low illumination are extracted into image frames at one-minute intervals. Then, the images are converted to grayscale and transformed into equalized-histogram images. At the end, existing object recognition techniques can be used to recognize project-related entities on construction sites based on the equalized-histogram images. The recognition results indicate that site videos with low luminance are applicable for equipment recognition, and the findings of the case study have the potential to enrich the sources available for site data analysis.

BACKGROUND
High definition cameras are increasingly adopted to monitor construction performance on-site, since they can capture a wealth of construction site information for both decision-makers and construction professionals.
For example, in construction performance monitoring, cameras are instrumental in
monitoring the real-time construction progress, allowing for early detection of
problems (e.g. construction deviations) and better planning for following tasks (Bohn
and Teizer 2010). Everett et al. (1998) proposed an innovative application of time-lapse videos to document and observe construction projects. Navon (2006) conducted research on automated measurement of project performance using time-lapse videos for monitoring and control.
In resource tracking and recognition, Brilakis and Soibelman (2005) proposed a novel method for open site tracking and site communication with construction cameras based on machine vision. Brilakis et al. (2008) presented a novel method for
open site tracking with construction cameras based on machine vision. Chi and
Caldas (2011) proposed a method for automated detection of moving equipment and personnel using video cameras on heavy-equipment-intensive construction sites. An on-site camera system was also employed to perform three-dimensional tracking of construction resources (Park et al. 2012).
In safety control, Teizer and Vela (2009) tested and demonstrated the feasibility of tracking personnel on construction sites using snapshots captured by both statically placed and dynamically moving cameras. Yang et al. (2010) developed a machine-learning-based scheme for tracking multiple workers in video frames to address on-site safety concerns.
to detect unsafe actions in site videos (Han et al. 2012). Thus, the adoption of high resolution construction cameras is not only beneficial to project decision-makers, for example for project scale and scope control and work zone safety monitoring, but also instrumental to the promotion of automation in civil engineering. However, as the review above shows, most existing studies relied on data collected in the daytime with good illuminant conditions. Hence, determining the feasibility of equipment recognition using videos collected under low luminance is the key motivation of this study.
The common limitation of current studies is that the videos used for object recognition were captured in the daytime with good illuminant conditions. On the other hand, researchers in the computing field have investigated image processing techniques for night vision images for decades. Waxman et al. (1997) presented a night vision colorization technique for processing multispectral night vision imagery. Teo (2003) examined the effect of the Contrast Limited Adaptive Histogram Equalization (CLAHE) process on night vision and thermal images. However, few researchers have extended image processing techniques for night vision images to the field of engineering. Consequently, applying such image processing techniques to night vision images is another motivation of this study.

OBJECTIVE AND SCOPE


The main objective of this paper is to investigate the effectiveness of the time-lapse videos collected by a typical construction camera at night or under low ambient illuminant conditions. The videos collected during the night were tested with a widely accepted object recognition algorithm. The recognition results indicate that the videos collected at night or under other low ambient illuminant conditions are still useful and can also be considered an important source for site data sensing and analysis.
In this study, the videos are first extracted into image frames at one-minute intervals. The images are divided into two categories, a training set and a testing set. The recognition models are trained on the training dataset using the Histogram of Oriented Gradients (HOG) method, chosen because of its robust performance. Specifically, 180 images captured at night form the training dataset. The HOG detector builds a vector of object features from the training images and is then used to search for the target objects in the testing images. In total, 480 testing images are selected from the video frames, none of which was used for training. The equipment recognition is focused on the two most commonly used types of machinery on construction sites, tipper trucks and excavators. In order to improve the recognition accuracy, the training samples of the excavator are divided into two sets, excavator-front and excavator-back, because the position of the excavator boom substantially affects the recognition result.
IMAGE PROCESSING FRAMEWORK
In order to achieve the goals mentioned above, a novel image processing framework has been proposed. The framework includes three main steps. Any input color image captured under low illuminant conditions is pre-processed through grayscale conversion and histogram equalization. Then, existing object recognition techniques can be used to recognize project-related entities on construction sites. The framework is illustrated in Figure 1. The construction equipment is recognized based on its HOG features.

Figure 1: Flowchart of Image Processing

Specifically, the first step in the proposed framework is to convert the video frames into 8-bit grayscale. In computer vision, a grayscale image is often the result of measuring the intensity of light at each pixel, and each pixel is a shade of gray, normally from 0 (black) to 255 (white). In this study, the image processing is mainly based on grayscale images.
The second step is to transform the grayscale images so that their histograms are equalized. The histogram equalization method is instrumental in analyzing and improving the visual quality of images, increasing the contrast of images whose backgrounds and foregrounds are both bright or both dark. Through this adjustment, intensities are better distributed over the histogram, which allows local areas with lower contrast to obtain a higher contrast.
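As an illustration of these two pre-processing steps, the short Python/OpenCV sketch below converts one extracted frame to grayscale and equalizes its histogram. It is not the authors' MATLAB R2014a implementation, and the file names are only placeholders.

import cv2

# Hypothetical path to one frame extracted from the time-lapse video.
frame = cv2.imread("night_frame.jpg")            # color image (BGR)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # 8-bit grayscale, 0 (black) to 255 (white)
equalized = cv2.equalizeHist(gray)               # redistribute intensities over the histogram
cv2.imwrite("night_frame_equalized.jpg", equalized)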
The last step is to implement the object recognition algorithm based on the equalized-histogram images. In order to improve the recognition accuracy, the training samples of the excavator are divided into two sets, excavator-front and excavator-back, because the position of the excavator boom substantially affects the detection result. Of the test images, 480 contain the excavator-front, 178 the excavator-back, and 71 the tipper truck.
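For the recognition step, the sketch below outlines how a HOG-based detector for one equipment class could be trained and applied to the equalized-histogram images, using scikit-image and scikit-learn. The window size, stride, and linear classifier are illustrative assumptions, not the settings used in this study.

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

WIN = (128, 128)   # assumed detection window size (rows, cols)
STEP = 32          # assumed sliding-window stride in pixels

def hog_features(patch):
    # patch: 2D grayscale array already resized to WIN
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_detector(positive_patches, negative_patches):
    # Positive patches contain the target equipment; negatives show background.
    X = [hog_features(p) for p in positive_patches + negative_patches]
    y = [1] * len(positive_patches) + [0] * len(negative_patches)
    clf = LinearSVC()
    clf.fit(X, y)
    return clf

def detect(image, clf):
    # Slide the window over an equalized-histogram image and report hit locations.
    hits = []
    rows, cols = image.shape
    for r in range(0, rows - WIN[0] + 1, STEP):
        for c in range(0, cols - WIN[1] + 1, STEP):
            patch = image[r:r + WIN[0], c:c + WIN[1]]
            if clf.predict([hog_features(patch)])[0] == 1:
                hits.append((r, c, WIN[0], WIN[1]))
    return hits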

CASE STUDY

Project Background
To secure the required data, the construction camera was mounted on the test bed, the Romaine Complex project funded by Hydro-Québec, which is building four hydropower generating stations on the Rivière Romaine. Since there are two work shifts on the construction jobsite, running from day to late night, a construction camera system featuring a 12-megapixel SLR digital camera was used to record the jobsite activities around the clock. The core high-resolution camera provides clearer images with greater coverage. Image quality is determined by the number of pixels, and more pixels improve the sharpness of the image, which is instrumental to the recognition results.
In Figure 2, (a) and (b) show the setting-up of the construction camera, and the image sample from the standardized viewpoint is displayed in (c).

Figure 2: (a) and (b) camera setting-up; (c) image sample from the standardized viewpoint

Implementation and Results

The image processing techniques mentioned in the proposed framework were implemented using MATLAB R2014a. The overall framework was operated in a Windows 7 Enterprise 64-bit environment. The desktop used to run the framework is a Dell Precision T1700 equipped with an Intel® Core i7-4770 CPU @ 3.40 GHz and 8.00 GB of memory.
Figure 3 illustrates the general recognition process with respect to the tipper truck: (a) the original RGB image; (b) the grayscale image; (c) the histogram-equalized production; (d) the recognition result of the tipper truck. The green box indicates the recognition outcome for the tipper truck.

Figure 3: (a) original RGB image; (b) grayscale image; (c) histogram equalization image; (d) tipper truck recognition result.


Similarly, the recognition process for the excavator is displayed below (Figures 4 and 5). Particularly, Figure 4 shows the recognition process for the viewpoint of the front-excavator, while the detection result sample for the back-excavator is displayed in Figure 5.

Figure 4: (a) original RGB image; (b) grayscale image; (c) histogram equalization image; (d) front-excavator recognition result.

Figure 5: (a) original RGB image; (b) grayscale image; (c) histogram equalization image; (d) back-excavator recognition result.


The preliminary test results indicated that the construction site videos collected at night or under other low ambient illuminant conditions are still useful when appropriate image processing techniques are applied. In addition, a comparison experiment was conducted to indicate the effectiveness of the videos. More precisely, the identical recognition algorithm was used to recognize the same types of equipment, tipper trucks and excavators, in the original RGB images. Taking the recognition result for the tipper truck as an example, the tipper truck is recognized in 14 out of 71 images processed by the proposed method, whereas it is recognized in only 3 out of 71 of the original RGB images. The results in the processed images are thus nearly 5 times better than those in the raw images.
Table 1 shows the detailed comparison results for each equipment type. The excavator-front can be recognized in 137 out of 480 equalized-histogram images, whereas the recognition results in the RGB images are 21 out of 480. Manifestly, the proposed image processing framework is effective for improving the construction equipment recognition results, which in turn is instrumental for the retrieval of construction site information under low illuminance conditions.
Table 1: Comparison Experiment Results

Equipment type     Recognition results
                   In RGB images     In equalized-histogram images
Tipper truck       3/71              14/71
Excavator-back     0/178             34/178
Excavator-front    21/480            137/480

CONCLUSIONS AND FUTURE WORK


Construction cameras have been recognized as a desirable means of recording real-time construction performance and providing valuable site data. So far, little work has been found on site data analysis using images or videos captured under low illuminant conditions. This paper presented construction equipment recognition using time-lapse videos collected at night. The recognition results demonstrated the feasibility of applying construction cameras under low ambient illuminant conditions. They also showed the potential of establishing a data-rich construction site using construction cameras, which is particularly suitable for monitoring construction performance where the work is executed both day and night. The goal of extending this research is to enrich the sources available for site data analysis with videos collected under low ambient illuminant conditions. Considering the accuracy of the current recognition results, future work should focus on exploiting more advanced image processing techniques to improve the recognition accuracy.

ACKNOWLEDGEMENT
This paper is based in part upon work supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the author(s) and do not necessarily reflect the views of NSERC.


REFERENCES
Brilakis, I., Cordova, F., and Clark, P. (2008). “Automated 3D vision tracking
for project control support.” Proc., Joint US-European Workshop on Intelligent
Computing in Engineering, 487–496.
Brilakis, I., Park, M.-W., and Jog, G. (2011). “Automated vision tracking of
project related entities.” Adv. Eng. Inf., 25(4), 713–724
Bohn, J. and Teizer, J. (2010). “Benefits and Barriers of Construction Project
Monitoring Using High-Resolution Automated Cameras.” J. Constr. Eng. Manage.,
136(6), 632–640.
Chi, S. and Caldas, C. (2011). “Automated object identification using optical
video cameras on construction sites.” Computer-Aided Civil and Infrastructure
Engineering, 26(5): 368-380.
Everett, J., Halkali, H., and Schlaff, T. (1998). ”Time-Lapse Video
Applications for Construction Project Management.” J. Constr. Eng. Manage., 124(3),
204–209.
Han, S., Lee, S., and Peña-Mora, F. (2012) Vision-Based Motion Detection
for Safety Behavior Analysis in Construction. Construction Research Congress 2012:
pp. 1032-1041.
Gong, J. and Caldas, C.H. (2010). “Computer Vision-Based Video
Interpretation Model for Automated Productivity Analysis of Construction
Operations.” Journal of Computing in Civil Engineering, 24(3): 252-263.
Gonzalez, R.C., Woods, R.E., and Eddins, S.L. (2004). "Digital Image
Processing Using MATLAB", Prentice Hall, Upper Saddle River, New Jersey.
Navon, R. (2006). “Research in automated measurement of project
performance indicators.” Autom. Constr., 16(2), 176–188.
Park, M.W., Makhmalbaf, A., and Brilakis, I. (2011). “Comparative study of
vision tracking methods for tracking of construction site resources.” Autom. Constr.,
20(7), 905–915.
Rezazadeh Azar, E. and McCabe, B. (2012). ”Automated Visual Recognition
of Dump Trucks in Construction Videos.” J. Comput. Civ. Eng., 26(6), 769–781.
Song, J., Haas, C., and Caldas, C. H. (2006). “Tracking the location of
materials on construction job sites.” J. Constr. Eng. Manage., 132(9), 911–918.
Teo, C.K. (2003). "Digital Enhancement of Night Vision and Thermal
Imagery." Master's thesis, Naval Postgraduate School, Monterey.
Teizer, J., Lao, D., and Sofer, M. (2007). "Rapid automated monitoring of
construction site activities using ultra-wideband." Conf. Proc. of the 24th Int.
Symposium on Automation and Robotics in Construction, IAARC.
Yang, J., Arif, O., Vela, P. A., Teizer, J., and Shi, Z. (2010). “Tracking
multiple workers on construction sites using video cameras.” Adv. Eng. Inform.,
24(4), 428–434.
Waxman, A.M., Gove, A.N., Fay, D.A., Racamato, J.P., Carrick, J.E., Seibert,
M.C. Savoye, E.D., (1997). Color Night Vision: Opponent Processing in the Fusion
of Visible and IR Imagery, J. Neural Networks 10(1), 0893-6080


A BIM-Enabled Platform for Power Consumption Data Collection and Analysis

C. T. Chiang1; T. W. Ho2; and C. C. Chou3


1Graduate Research Assistant, Department of Civil Engineering, National Central University, Taiwan. E-mail: [email protected]
2Graduate Research Assistant, Department of Civil Engineering, National Central University, Taiwan. E-mail: [email protected]
3Associate Professor, Department of Civil Engineering, National Central University, 300 Jhongda Rd., Jhongli, Taoyuan 32001, Taiwan. E-mail: [email protected]

Abstract

Comprehensive analysis of power consumption data for assorted home
appliances or electrical devices can help identify a household’s electricity usage
patterns as well as provide potential energy saving suggestions. Currently, modern
metering techniques, such as smart meters, non-intrusive load monitoring, and Wi-Fi
smart power outlets, can be used to collect the power consumption data in a timely
and effective way. However, the data collected have not been analyzed in the context
of the encompassing building, which certainly affects its energy use requirements
such as heating and cooling. In addition, past research has shown that behavioral
interventions are necessary to encourage a resident to save on his or her utility bills,
which requires disclosure of a household's and the neighbors' electricity usage
patterns for comparison. Such behavioral approaches have been successfully
utilized by companies like Opower; nevertheless, they might invade one's privacy if
not properly managed. Thus, this research aimed at development of data fusion
platforms with capability not only to integrate the power consumption data with the
building data from a building information modeling (BIM) tool, but also to protect
residents’ privacy concerns regarding their electricity usage patterns. Research work
includes: (1) development of a tool to automatically transform a Revit file into
Unity3D so that residents can see when and where the power is consumed and
potential energy saving suggestions in a more interactive way; (2) development of a
local platform, called local Real-Time Replay Platform (RTRP), to call a BIM-based
energy analysis tool, Ecotect, to create the energy use baselines for residents for
comparison; (3) development of a central platform, called central RTRP, to receive
the aggregated power consumption data from several local RTRPs of different
households so as to compile the neighborhood energy use baselines for all residents to
compare with. Preliminary results are presented with research conclusions and future
work discussions.

INTRODUCTION


Effective management of energy use in a building is the key to creating a
sustainable society. Most of the current technologies for monitoring, managing or
reducing a building’s power consumption concentrate on collection and analysis of
electricity consumed by assorted home appliances or electrical devices. However,
comprehensive analysis of power consumption data may also need another data
source, mainly the building information modelling (BIM) data, to further understand
how the building or the environment interacts with the devices while functioning. For
example, the DOE report pointed out that 48% of the average household’s energy use
goes to space heating and cooling (DOE 2014). While an air conditioner device does
not know how much sunlight its room will receive, such information or the amount of
its power consumption has been analyzed or estimated in the design phase of a
building (Azar and Menassa 2012), which can be now retrieved from a BIM tool.
With the advance of the BIM technology, more and more construction project owners
would like their buildings digitalized in a BIM format. Synthesizing the power
consumption and BIM data might thus be needed.
A BIM file representing a building may contain various aspects of data useful
for energy saving analysis. In addition to the electricity usage baseline data derived
from building performance analysis (BPA) previously noted, the geometry-related
attributes of a BIM element hosting a home appliance could serve as the location
dimension for its power consumption data, that is, the amount of the electricity
consumed will be recorded according to the device’s location with a timestamp. In
this way, not only traditional power consumption-related charts such as the electricity
consumed by each type of device per hour or per week can be generated, but a more
interactive, video-game-based environment might be created to replay when and
where each home appliance is utilized and to display potential energy saving
suggestions on demand. Currently, these energy saving tips are usually provided in a
text-based format by utility companies, with low persuasibility (Laskey and
Kavazovic 2010).
In fact, in the demand-side management (DSM) field of residential power
consumption, there are three major strategies to consider: load shifting, energy
efficiency, and energy conservation (Davito et al. 2010). All the strategies require
involvement of residents, and human behavior is very diverse (Taherian et al. 2010).
In a noteworthy paper by Allcott and Mullainathan (2010), they argued that although
price-based interventions such as subsidies for energy-efficient goods derive from
traditional economic models of rational choice, behavioral interventions may have
more potential for reducing energy use. Companies such as Opower have
successfully demonstrated use of behavioral interventions for reducing energy use.
They need to collect each household's power consumption data in a community so as
to create the norms for residents to compare against. However, as reported by several
researchers (Taherian et al. 2010), such feedback systems, if not properly designed,
may pose serious security and privacy problems. Therefore, comprehensive
collection and analysis of a household’s power consumption data for the behavioral
interventions-based energy saving approach may also need to accommodate the
security and privacy requirements mentioned.


Thus, this research aimed at development of two data fusion platforms with
capability not only to integrate the power consumption data with the BIM-related data,
but also to protect residents’ privacy concerns regarding their electricity usage
patterns. Research work includes: (1) development of a tool to automatically
transform a Revit file into Unity3D so that residents can see when and where the
power is consumed and potential energy saving suggestions in a more interactive
way; (2) development of a local platform, called local Real-Time Replay Platform
(RTRP), to call a BIM-based energy analysis tool, Ecotect, to create the energy use
baselines for residents for comparison; (3) development of a central platform, called
central RTRP, to receive the aggregated power consumption data from several local
RTRPs of different households so as to compile the neighborhood energy use
baselines for all residents to compare with. This paper presents the preliminary
research results, which cover the first work item, and discusses the future work plan
and challenges. Section 2 reviews related work in the literature. Section 3 describes
the implementation of RTRP. Section 4 discusses RTRP’s limitations, and Section 5
presents conclusions and future work.

RELATED WORK

Methods for collection and analysis of power consumption data for residential
energy saving have been proposed and examined in the literature. Mattern et al.
(2010) classified these methods as two categories: (1) Single Sensor Approach, where
a sensor will be placed in a circuit to measure its electricity usage or at the main
switch to measure the entire power demand of a household. Other advanced
methods such as non-intrusive load monitoring (NILM) can be used to disaggregate
the total consumption data so as to provide more specific information about electricity
consumption at device level (Hart 1992). Kolter and Johnson (2011) provided a
public data set for this type of energy disaggregation research. (2) Multiple Sensor
Approach, where a sensor is installed for every device, typically in the form of
smart power outlets such as the Belkin WeMo Insight Switch; although more expensive,
this approach simplifies the deployment work and was therefore utilized in this research.
In sum, collection of real-time power consumption data is now feasible, but the
second type of approaches has difficulty in aggregating each device’s data while the
first type of approaches has some challenges pertaining to disaggregating data.
Behavioral interventions for residential energy saving have been investigated
in the literature as well. Allcott and Mullainathan (2010) reported the experiment
results conducted by Opower to prove the concept. However, other researchers such
as Alahmad et al. (2012) reported a statistically insignificant reduction in mean
electrical consumption in houses when compared to a randomly selected control
sample by using three different sensors with feedback functionality. Indeed, the
human behavior is very complex, and Sorrell et al. (2009) argued that occupants
might change their energy usage characteristics by adopting bad consumption habits,
or the so-called rebound effect. In sum, the information presented to residents for
energy saving may need careful design and comprehensive testing before actual
deployment.


A similar work by Khan and Hornbæk (2011) aimed at the development of
Project Dasher, a visualization tool that can overlay the collected sensor data on BIM
for real-time and historical monitoring. They argued that non-experts can quickly
understand the data being presented in the BIM environment, and occupants can
directly understand the relationship between their activities and the cost to the
environment. In sum, visualization of the power consumption data into the space
created by a BIM tool might facilitate the feedback process so that residents can
understand and actually apply the proposed energy saving suggestions.

RESEARCH METHODOLOGY

The proposed RTRP utilizes Revit, the Unity3D engine, and many Wi-Fi
smart power outlets (e.g. D-Link Wireless Smart Plug or WeMo) to create a local
platform that is capable of replaying any series of events recorded by the sensors in a
building. Different from using surveillance cameras for infrastructure monitoring,
applying RTRP not only can provide richer, digitalized details for analysis, but also
can avoid privacy concerns should there be confidential areas within a building. As
shown in Figure 1, the following paragraphs describe the main components of RTRP.
Database: The sensor database can handle problems such as the huge volume
of sensor data. Three tables were created for sensor data: Sensor Attributes, Sensor
Records, and Material Attributes. Sensor Attributes is the table that stores the
attributes of sensors, such as identification number, the highest amount of wattage
that a sensor can undertake, etc. Sensor Records is the table that stores all the data
sent back by a sensor at any time, such as timestamp, current position, wattage and
temperature. Material Attributes is the table that lists all the behaviors for each
infrastructure material type, like the material type’s burning, explosion or breaking
temperature. The open source database, PostgreSQL with PostGIS, was utilized here
for storing the sensor data.
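As a rough illustration, the snippet below creates three such tables with psycopg2 against PostgreSQL. The table and column names beyond those mentioned above (identification number, maximum wattage, timestamp, position, wattage, temperature, and the material temperature thresholds) are assumptions, not RTRP's actual schema, and the connection string is a placeholder.

import psycopg2

SCHEMA = """
CREATE TABLE IF NOT EXISTS sensor_attributes (
    sensor_id    SERIAL PRIMARY KEY,
    max_wattage  REAL NOT NULL          -- highest wattage the sensor can undertake
);
CREATE TABLE IF NOT EXISTS sensor_records (
    record_id    SERIAL PRIMARY KEY,
    sensor_id    INTEGER REFERENCES sensor_attributes(sensor_id),
    recorded_at  TIMESTAMP NOT NULL,    -- timestamp of the reading
    position     TEXT,                  -- current position within the building
    wattage      REAL,
    temperature  REAL
);
CREATE TABLE IF NOT EXISTS material_attributes (
    material     TEXT PRIMARY KEY,
    burn_temp    REAL,                  -- burning temperature
    explode_temp REAL,                  -- explosion temperature
    break_temp   REAL                   -- breaking temperature
);
"""

def create_schema(dsn):
    # dsn is a hypothetical connection string, e.g. "dbname=rtrp user=rtrp"
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(SCHEMA)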
Web-based Management Interface: A user-friendly, Web-based
Management Interface was created to enable users’ interactions with the sensor
database. With the interface, users do not need to log onto the database directly, which
is not a straightforward task for those who have no experience of database management.
This also ensures database security, as the users cannot manipulate the data tables
directly.
In addition, the Web-based Management Interface was coded in PHP,
Hypertext Preprocessor, which is an intuitive web programming language that
supports most web functions and asynchronous JavaScript and XML (AJAX), not
only avoiding the full page reloads upon requests, but also reducing the page
processing time. The compatibility of PHP for SQL is well established and most
database systems such as Microsoft SQL Server, MySQL and PostgreSQL are all
compatible with PHP. To ensure database security and sensor data privacy, users
of the Web-based Management Interface are required to log in.
BIM Material Standardization: The objects in Revit, such as rebar, wall,
column and furniture all can be customized and modified. From a designer’s point of
view, in addition to ensuring the consistency of the object parameters, a designer can
edit these objects more efficiently in Revit. However, the materials in Revit are
proprietarily defined as “Autodesk Material” and can only be read by Autodesk
software such as 3ds Max, etc. Thus, directly importing a Revit model into Unity3D
via the FBX file format will cause loss of the parameter information. For example,
the material color and the texture (Kumar et al. 2011) are all missing after importing.

Figure 1. Framework of RTRP.


To solve the problem while migrating a proprietary model into the virtual
reality simulating platform, the materials standardization process was designed. A
typical infrastructure 3d model may include approximately 50 types of materials, and
all might be with different parameters (e.g., color, texture, reflectance, and
transmittance), making it difficult to standardize materials manually. Hence, an
Autodesk 3ds Max plugin, Material Transformation, was designed to automatically
identify all the Autodesk material types inside Revit, to standardize them to Standard
Material, which can be read by most 3d software tools, and to attach them to the
corresponding Revit objects.
Database Synchronization: How to synchronize the sensor data received
with the sensor database without any delay is the most significant crux for RTRP.
Therefore, the IEnumerator function in C# was utilized along with the multithreaded
codes to run a simulation in RTRP in the foreground and to download the data from
the sensor database in the background. RTRP synchronizes the sensor data by
sending requests via HTTP to the PHP web page on the sensor database server.
Real-Time Simulation: Because targeted electrical devices should be plugged
into Wi-Fi smart power outlets, their power consumption data, as well as the current
temperature of the sensor, can be collected and conveyed into the sensor database via
a customized tool. After the building is rendered in Unity3D, with each sensor
location and device information, RTRP will display the current power usage status of
each electrical device by using the following coloring scheme: if the current wattage
is above the normal, or if the usage duration is above the normal, the sensor on
Unity3D will be highlighted in red. In addition, if the current wattage is above some
threshold value, an electrical incident, e.g., fire, may occur. RTRP will display
Unity3D special effects such as burning, breakage, or explosion, based on the
temperature and the material type around the sensor. For example, as the material
temperature increases, flammable materials such as cloth and wood furniture will
burn; non-flammable materials such as glass will break; electronic devices will
explode. To simulate the status change of an object, physical particle effects are
applied for material burning and explosion, and a mesh cutting algorithm with
random forces is applied to split an object for material breakage.
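The coloring and effect rules described above can be summarized by logic of the following kind; the function names and the example thresholds are hypothetical and only illustrate the mapping from sensor readings and material attributes to a display state.

def sensor_status(wattage, normal_wattage, duration, normal_duration):
    # Highlight the sensor in red when the wattage or the usage duration exceeds normal.
    return "red" if wattage > normal_wattage or duration > normal_duration else "normal"

def material_effect(temperature, material):
    # material holds thresholds such as those stored in the Material Attributes table,
    # e.g. {"burn_temp": 300.0, "break_temp": 600.0, "explode_temp": 900.0} (illustrative values).
    if temperature >= material["explode_temp"]:
        return "explosion"   # e.g. electronic devices
    if temperature >= material["break_temp"]:
        return "breakage"    # e.g. non-flammable materials such as glass
    if temperature >= material["burn_temp"]:
        return "burning"     # e.g. flammable materials such as cloth and wood
    return "none"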

REAL-TIME REPLAY PLATFORM

In this section, implementation details of RTRP are described. Specifically,
issues such as BIM materials standardization, real-time replay engines, and
management of the sensor database are addressed.
BIM Materials Standardization: To implement the simulation in RTRP, one
of the initial steps is to generate a standard infrastructure model. As explained in
the previous section, using the model directly from Revit will cause the loss of material
parameters, as shown in Figure 2. The plugin program, Material Transformation, was
developed for Autodesk 3ds Max to resolve this problem. The result of materials
standardization is shown in Figure 3.
Real-Time Replay: After the BIM materials standardization, importing the
BIM model into RTRP can be achieved as shown in Figure 4 without losing any
object parameters (e.g. textures). When RTRP starts, it will automatically connect
with the sensor database, pre-simulate all objects, and generate the simulation time
frame. The time frame illustrated at the bottom of Figure 4 is for users to play, pause,
fast forward (or backward) the simulation. It is also worth noting that the left window
in Figure 4 is for sensor data editing. Should a user need to revise the sensor data
(e.g. due to outliers) in the sensor database, he or she does not need to log onto the
server and can modify the data directly in RTRP. The right window in Figure 4 is for
database querying, in which a user can set up specific conditions to retrieve all related
records. Figure 5 shows the retrieved sensor data; clicking any sensor's play button in
Figure 5 will jump the simulation to the indicated time frame for that sensor and
adjust the simulation camera to where the sensor is located.

Figure 2. Losing parameter information of nonstandard materials.


Web-based Database Management Interface: The web-based database
management interface was designed for RTRP users to interact with the collected
sensor data easily. Because the interface is web-based, users can log onto the
interface for maintaining at anytime and anywhere using a browser. Indeed for data
security, only authorized users are allowed to log onto the system.
RTRP has a logon interface and after logon users can see the function menu on the
left, which includes Materials Management (material parameters setup), Sensor
Management (sensor parameters setup) and Time Table Management (sensor data
managing). Materials Management is for defining all material parameters, which
include a material’s burning, explosion and breakage temperatures. Sensor
Management is to setup the type and the tripped watts of each sensor inside the
infrastructure. Time Table Management is about the real-time sensor data, such as
the relative position of the infrastructure, the temperature or the humidity the sensor
collected.

Figure 3. Before and after materials standardization comparison.

Figure 4. Query form of RTRP. Figure 5. Query results of RTRP.

DISCUSSION

RTRP is meant to be an efficient visualization environment for the monitoring
and management of a building’s energy consumption data. In addition to converting
sensor data into 3d virtual reality, RTRP can be regarded as a sensor data query tool.
Nevertheless, there are some limitations or future work discussed here so as to better
reflect the ultimate goal of achieving residential energy saving. First, the simulation
of material burning and the related smoke creation is critical to improving the realism,
as it captures the airflow near the surrounding objects. Such simulation
requires a lot of computational time and cannot reflect the real-time infrastructure
condition. For this reason, only the physics particle system was utilized to simulate
material burning. Second, the availability of sensors that can withstand high
temperature and can still transmit data is limited. Currently the range of temperature
sensors is from below -200°C to well over 2000°C. Considering that the highest
temperature of a fire scene is typically around 1250°C, the only challenge is how to
combine the heat-resistant sensors with wireless data transmission capability. Third,
setting up sensors inside an existing building might not always be feasible, but
constructing a sensor-equipped building is very much doable nowadays. Finally, data
yielded by the sensors might include some error, and the data accuracy is yet to be
improved as the technology advances in the future.

CONCLUSIONS

RTRP was proposed for management and rendering of a building’s power
usage data. In the next stage of research, the baselines data of energy use from
Ecotect will be incorporated into RTRP. The central RTRP will be created to handle
the aggregated information of each household's power consumption among a
community. A questionnaire can be designed to assess how residents react to the
energy saving tips suggested by RTRP when they use such a real-time simulation
environment, compared with text-based energy saving suggestions.

ACKNOWLEDGMENTS

The authors would like to thank the Ministry of Science and Technology in
Taiwan for financial support under grant numbers: MOST104-3113-E-008-002 &
MOST103-2221-E-008-054-MY3, as well as Prof. Ken-Yu Lin from University of
Washington at Seattle for her invaluable suggestions.

REFERENCES

Alahmad, M.A., Wheeler, P.G., Schwer, A., Eiden, J. and Brumbaugh, A. (2012). “A
Comparative Study of Three Feedback Devices for Residential Real-Time
Energy Monitoring.” IEEE Transactions on Industrial Electronics, 59(4),
2002-2013.
Allcott, H. and Mullainathan, S. (2010). “Behavior and Energy Policy.” Science, 327,
1204-1205.
Azar, E. and Menassa, C.C. (2012). “Agent-Based Modeling of Occupants and Their
Impact on Energy Use in Commercial Buildings.” Journal of Computing in
Civil Engineering, 26(4), 506-518.
Davito, B., Tai, H., and Uhlaner, R. (2010). “The smart grid and the promise of
demand-side management.” McKinsey on smart grid in summer 2010,
<https://www.smartgrid.gov/document/smart_grid_and_promise_demand_sid
e_management> (Dec.1, 2014).
Hart, G.W. (1992). “Nonintrusive appliance load monitoring.” Proceedings of the
IEEE, 80(12), 1870.
Khan, A. and Hornbæk, K. (2011). “Big Data from the Built Environment.” Large’11,
ACM, Beijing, China.
Kolter, J.Z. and Johnson, M.J. (2011). “REDD: A Public Data Set for Energy
Disaggregation Research.” SustKDD 2011, ACM, San Diego, California,
USA.
Kumar, S., Hedrick, M., Wiacek, C., and Messner, J.I. (2011). “Developing an
experienced-based design review application for healthcare facilities using a
3d game engine.” Journal of Information Technology in Construction, 16, 85-
104.
Laskey, A. and Kavazovic, O. (2010). “Opower: Energy efficiency through
behavioral science and technology.” XRDS, 17(4), 47-51.
Mattern, F., Staake, T., and Weiss, M. (2010). “ICT for Green – How Computers Can
Help Us to Conserve Energy.” e-Energy’10, ACM, Passau, Germany.
Rüppel, U. and Schatz, K. (2011). “Designing a BIM-based serious game for fire
safety evacuation simulations.” Advanced Engineering Informatics, 25(4),
600-611.
Sorrell, S., Dimitropoulos, J., and Sommerville, M. (2009). “Empirical estimates of
the direct rebound effect: A review.” Energy Policy, 37(4), 1356–1371.
Taherian, S., Pias, M., Coulouris, G., and Crowcroft, J. (2010). “Profiling Energy Use
in Households and Office Spaces.” e-Energy’10, ACM, Passau, Germany.
U.S. Department of Energy (DOE). (2014). Energy Saver: Tips on Saving Money &
Energy at Home, U.S. Department of Energy.


Wavelet Transform on Multi-GPU for Real-Time Pavement Distress Detection

K. Georgieva1; C. Koch2; and M. König1


1Chair of Computing in Engineering, Faculty of Civil and Environmental Engineering, Ruhr-University Bochum, Building IC, Room 6-75, Bochum 44780. E-mail: [email protected]; [email protected]
2Department of Civil Engineering, Faculty of Engineering, University of Nottingham, Room B27 Coates Building, University Park, Nottingham NG7 2RD. E-mail: [email protected]

Abstract

To ensure traffic safety, pavement conditions should be evaluated and distress


(cracks, potholes, etc.) should be timely detected. In recent years, methods based on
the analysis of digital images have been proposed to automatically detect pavement
distress. However, most of these methods process the images offline and therefore
require a large amount of data to be stored until actual processing. To enable real-
time analysis of the images and to reduce the amount of stored data, a highly
performant and computationally inexpensive implementation of a current analysis
method is required. This paper presents a Graphics Processing Unit (GPU)
implementation of an image analysis method for pavement distress detection. GPUs
have been recently utilized for high-performance computing in diverse scientific
fields. The pavement distress detection method presented in this paper is based on the
wavelet transform. The implementation is carried out using the Open Computing
Language (OpenCL). To evaluate the performance, the method was tested on 30
pavement images. The results show that a significant improvement in performance
can be achieved by utilizing GPUs.

INTRODUCTION

The condition of the surfaces of municipal roads has deteriorated rapidly in
recent years, leading to pavement defects, such as cracks and potholes (Levitz, 2014).
As a result of these defects, also known as pavement distress, vehicles driving on the
roads are damaged and accidents are caused. To maintain a good road condition, it is
required to monitor the road surface and observe changes in the pavement. Parts of
the roads with detected distress can then be repaired and costs due to vehicle damage
and accidents can be reduced.
Several approaches to detect pavement distress have been lately proposed and
applied. For example, manual observation is currently the most often used approach
towards pavement distress detection. However, it is hazardous, time-consuming and
highly subjective. To overcome these drawbacks, automated methods have been
developed, which are capable of detecting distress with little or no human
intervention. These methods are usually based on sensors or video cameras mounted
on vehicles (NCHRP, 2004). These sensors and cameras collect pavement data which
is analyzed with regard to changes relevant to the condition of the road surface. Yet,
most of these methods do not operate in real time. The data is first stored persistently
until it is actually processed. Taking into account the length of the municipal road
network in Germany (approximately 610 000 km according to DStGB, 2014)) and the
frequency with which such pavement distress detection procedures need to be carried
out, the amount of stored data is enormous. To reduce this amount, it is necessary to
develop methods, capable of processing the data in real time. If a real-time analysis is
performed, data which captures road sections where no distress has been detected can
be discarded, and only data relevant to pavement distress needs to be stored for
further actions.
With the recent advances in CPU technology, we are now capable of
executing algorithms much faster than 20 years ago. According to Moore's Law, the
number of transistors on a CPU doubles approximately every two years. However, the computational
power of state-of-the-art CPUs is still not sufficient for the purpose of real-time
analysis of pavement images. To obtain images with good quality on municipal roads
while driving with appropriate speed (approximately 60 to 80 km/h), a high-
frequency camera is required. Taking into account the time required to perform some
mandatory operations on the images, such as Bayer pattern demosaicing, the time
span which remains for performing the distress analysis is very limited.
Nevertheless, there exist processors capable of fulfilling the requirements of
real-time processing, namely Graphics Processing Units. GPUs have gained
popularity among scientists from diverse fields of studies in recent years. They have
been applied not only for graphics applications, but also for other tasks, such as the
computation of simulation parameters or data mining. By utilizing GPUs, the
execution of analysis methods on pavement images can be highly accelerated, thus
enabling real-time pavement distress detection.
In this paper, we present a GPU implementation of a particular pavement
distress detection method. The implementation is carried out using the Open
Computing Language (OpenCL), a standard for cross-platform, parallel programming
(Khronos OpenCL Working Group, 2013). The implemented method is based on the
wavelet transform and was proposed by Zhou (Zhou, 2006). Zhou also proposed
statistical criteria for the analysis of pavement images with respect to distress. To
calculate the values of these statistical criteria, the first level of wavelet
decomposition is required. Several implementations of the wavelet transform on GPU
have already been proposed (Stürmer, 2012; Sharma, 2010). However, these
implementations compute all levels of the wavelet transform and they calculate the
values of all wavelet coefficients. Specifically for the method proposed by Zhou, only
the values of the first level of wavelet decomposition are required, as will be
described in the next section of this paper. In addition, statistical criteria are
computed based on these values. These criteria are not part of the wavelet transform
and, accordingly, they are not calculated in the existing implementations of a GPU-
enabled wavelet transform.


WAVELET TRANSFORM FOR PAVEMENT DISTRESS DETECTION

The wavelet transform decomposes an image into different-frequency subbands. The elements of these subbands are referred to as wavelet coefficients. The transform is performed by applying low pass and high pass filters to an image. After one pass, the image is decomposed as shown in Figure 1 and Figure 2, where L indicates a low pass filter and H indicates a high pass filter. The LL subband is also called the approximation subband, and HL, LH, and HH are called detail subbands. The detail subbands contain the horizontal, vertical and diagonal details of the image. Each of the subbands has a resolution of ½ of the image width by ½ of the image height. The approximation subband is usually further decomposed into four subbands until it consists of exactly one element. These steps in the decomposition process are called levels. In Figure 2, the 3-level wavelet transform is shown.

There exist several forms of the wavelet transform. The most common are the Daubechies (Daubechies, 1990) and the Haar wavelet (Haar, 1910). The Haar wavelet is chosen in this work for pavement distress detection because it is highly suitable for parallel implementation. Invented by Alfred Haar, the Haar wavelet is performed by calculating the average difference and the average sum of a pair of input values. The sum is stored in the approximation subband, while the difference is stored in the detail subband. In case of two-dimensional data, such as images, the procedure is first applied to the rows of the image and after that to the columns of the image, or vice-versa. The resulting procedure for one level of decomposition is shown in Figure 3.

Figure 1: Pavement crack image
Figure 2: 3-level wavelet transform of a crack image

Zhou applied the wavelet transform on pavement images and observed that the homogeneous background is transformed into the approximation subband, while distress is represented in the detail subbands. Based on this observation, he proposed statistical criteria which allow for analysis of the detail subbands. The criteria proposed by Zhou are high-frequency energy percentage (HFEP), standard deviation (STD), and high-amplitude wavelet coefficient percentage (HAWCP). According to Zhou's publication, when tested on 81 images, HFEP detected 98% of the images with distress correctly (i.e. 2% of the images with actual distress were not identified as images with distress), while STD and HAWCP detected 100% of the images
correctly. However, STD incorrectly classified 2.6% of the images which actually do
not contain distress as distress images, while HAWCP did not misclassify any image.
Hence, HAWCP is used in the work presented in this paper.

Figure 3: 2D wavelet transform

HAWCP is a measure of the number of those coefficients in the detail subbands that are larger than a threshold defined as an index for distress. To calculate HAWCP, the squared coefficients in the vertical, horizontal and diagonal subbands are first summed:

M(i, j) = LH(i, j)^2 + HL(i, j)^2 + HH(i, j)^2, (1)

where M is called the wavelet modulus. Afterwards, the modulus is binarized by being compared to a threshold value Cth, which is estimated by wavelet thresholding:

B(i, j) = 1 if M(i, j) > Cth, and B(i, j) = 0 otherwise. (2)

HAWCP is then calculated according to the following equation:

HAWCP = ( Σ B(i, j) ) / (W × H / 4), (3)

where W is the width and H is the height of the image, so that W × H / 4 is the total number of coefficients in each level-1 subband.

The value of HAWCP lies between 0 and 1, where low HAWCP values indicate a good pavement surface, and high HAWCP values indicate pavement distress.
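To make the computation concrete, the following NumPy sketch performs one level of the Haar decomposition described earlier and evaluates HAWCP from the resulting detail subbands. The averaging convention (division by 2) and the fact that the threshold Cth is passed in directly are assumptions; the wavelet-thresholding estimate of Cth is not reproduced here.

import numpy as np

def haar_1d(data):
    # One Haar pass along the last axis: average sums and differences of adjacent pairs.
    even, odd = data[..., 0::2], data[..., 1::2]
    return (even + odd) / 2.0, (even - odd) / 2.0

def haar_2d_level1(image):
    # First level of the 2D Haar transform: rows first, then columns.
    image = np.asarray(image, dtype=np.float64)      # width and height must be even
    lo, hi = haar_1d(image)
    ll, lh = haar_1d(lo.T)
    hl, hh = haar_1d(hi.T)
    # ll is the approximation subband; lh, hl, hh are the detail subbands,
    # each of size (H/2) x (W/2).
    return ll.T, lh.T, hl.T, hh.T

def hawcp(lh, hl, hh, c_th):
    modulus = lh**2 + hl**2 + hh**2          # equation (1): wavelet modulus
    binarized = modulus > c_th               # equation (2): comparison against Cth
    return binarized.sum() / binarized.size  # equation (3): W*H/4 coefficients in total

# Low HAWCP values indicate a good pavement surface; high values indicate distress.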

WAVELET TRANSFORM ON GPU

In Figure 3, A, B, C, etc. denote pixels. As can be seen, each wavelet
coefficient depends on four pixels of the image. There exist no dependencies between
the four pixels needed to calculate one wavelet coefficient and the four pixels needed
to calculate the next wavelet coefficient. As a consequence, the calculation of wavelet
coefficients can be performed in parallel for all quadruples of pixels.


The methodology of the GPU implementation is shown in Figure 4. In
OpenCL the blocks of instructions that will run on the devices are called kernels
(Khronos OpenCL Working Group, 2013.). In case of the wavelet transform for
pavement distress detection, the kernel computes the horizontal, vertical, and
diagonal coefficients of a quadruple of pixels. If an image with a total number of N
pixels is considered, the kernel needs to be executed N/4 times. Theoretically, if we
are able to execute the kernel for all quadruples in parallel, we will achieve a
performance improvement of N/4. However, additional operations with the data are
required and such an improvement cannot be achieved. To perform the wavelet
transform, the data first has to be loaded in an appropriate form. OpenCL makes use
of memory arrays that are used to store the kernel arguments. The pixels of the input
image are loaded into an array on the CPU. To be able to work with the data, the GPU
requires that the data be copied to the GPU (transfer data). The data is stored in the
GPU in a so called memory buffer. The wavelet kernel is executed on the elements of
this memory buffer and the coefficients are calculated. For further processing, the
output values have to be transferred back to the CPU.
However, the coefficient values are not needed for pavement distress
evaluation. Only the resulting HAWCP value is required. Hence, the HAWCP value
is computed by the kernel and only this value is sent back to the CPU. As defined in
equation 3, HAWCP is calculated as a sum of elements. It could be considered an
example of a reduction operation. Reduction operations take a vector of data and
reduce it to a single element. AMD have proposed implementations of reduction
operations, such as the sum reduction (Catanzaro, 2010). However, we realized that
using built-in OpenCL atomic functions, such as atomic_inc, instead of reduction
operations, makes it easier for the developer and satisfies the time requirement. In
future implementations, if an even faster execution is required, the HAWCP
computation could be performed using reduction operations.
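A minimal single-GPU sketch of this idea is given below using pyopencl: one work-item per 2x2 pixel quadruple computes the detail coefficients, and atomic_inc counts the quadruples whose modulus exceeds the threshold, so that only one integer is transferred back to the CPU. This is an illustrative reimplementation under assumed conventions (the averaging Haar variant sketched earlier and a caller-supplied threshold), not the authors' C++/OpenCL host code.

import numpy as np
import pyopencl as cl

KERNEL = r"""
__kernel void hawcp_count(__global const uchar *img, const int width,
                          const float c_th, volatile __global int *count)
{
    int x = get_global_id(0);                 /* quadruple column index */
    int y = get_global_id(1);                 /* quadruple row index    */
    int i = 2 * y * width + 2 * x;
    float a = img[i], b = img[i + 1];
    float c = img[i + width], d = img[i + width + 1];
    /* Detail coefficients of one 2x2 quadruple (Haar, averaging convention). */
    float lh = (a + b - c - d) / 4.0f;
    float hl = (a - b + c - d) / 4.0f;
    float hh = (a - b - c + d) / 4.0f;
    if (lh * lh + hl * hl + hh * hh > c_th)
        atomic_inc(count);                    /* count high-amplitude coefficients */
}
"""

def hawcp_gpu(gray_image, c_th):
    # gray_image: contiguous 2D uint8 array with even width and height
    h, w = gray_image.shape
    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags
    img_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=gray_image)
    count = np.zeros(1, dtype=np.int32)
    count_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=count)
    prog = cl.Program(ctx, KERNEL).build()
    prog.hawcp_count(queue, (w // 2, h // 2), None,
                     img_buf, np.int32(w), np.float32(c_th), count_buf)
    cl.enqueue_copy(queue, count, count_buf)  # only the counter is copied back
    return count[0] / ((w // 2) * (h // 2))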

Figure 4: GPU implementation of the wavelet transform and HAWCP calculation

MULTI-GPU WAVELET TRANSFORM

Multiple GPU devices can be utilized for further performance enhancement.


There exist two strategies to partition the work between multiple GPU devices in case
of image manipulations. Intra-frame load balancing refers to using naturally
independent data pieces like video frames or multiple image files to distribute them
between different devices for processing. Inter-frame load balancing refers to splitting
the data that is currently being processed between the devices. When using intra-
frame load balancing, no synchronization is required. As a result, the intra-frame load
balancing leads to better performance (Intel Corporation, 2013). Figure 5 presents the
methodology developed for multi-GPU processing for the case of two devices. Intra-
frame load balancing is applied by distributing the images between the two devices.
First, two images are loaded sequentially into arrays by the CPU. After the pixel
values are transferred to the GPU devices, the kernels are executed on both devices
in parallel for all quadruples of pixels. The scheme can be extended to more than two
devices by adding more parallel kernels.

Figure 5: Multi-GPU implementation of the wavelet transform and HAWCP calculation

PERFORMANCE TESTS

A sample of 30 images was used to test the performance of the
implementation presented in this paper. The images have a size of 256x256, 512x512,
1024x1024 and 2048x2048 pixels. The CPU used for the tests is a 2.10 GHz Intel
Core i7-4600 CPU. Different GPUs are utilized for the performance tests, because the
execution time of the wavelet transform depends highly on the GPU used. An
integrated Intel HD Graphics 4400 GPU and two Nvidia Tesla C2070 GPUs are used.
The two Nvidia GPUs are used standalone as well as together to compare single-GPU
and multi-GPU execution times. To guarantee accurate results, an average of 10
executions is used. The time required to execute the wavelet transform for the first
level and to calculate HAWCP for the whole sample of 30 images is recorded. The
OpenCL execution time also includes the time needed to transfer the image data from
the CPU to the GPU.
Figure 6 compares the execution times of a sequential C++ implementation,
an OpenCL CPU implementation, OpenCL on the integrated GPU (Intel GPU),
OpenCL on one Nvidia GPU (Nvidia GPU), and OpenCL on two Nvidia GPUs
(Nvidia Multi-GPU). The GPU is faster for all image sizes.
Interestingly, the integrated Intel GPU is faster than the Nvidia GPUs for 256x256
and 512x512 images. This is because, in the case of an integrated GPU, the time
required to transfer the data is shorter. However, the multi-GPU implementation was
the fastest one for 1024x1024 and 2048x2048 images. For 2048x2048 images,
an execution time of 5.8 milliseconds was achieved. This execution time
also includes the time required for transferring the data from the CPU to the GPUs.


Such wavelet transform execution time allows for real-time processing of videos with
a frame rate of up to 170 frames per second.
The speed-up of the OpenCL implementation compared to the sequential C++
implementation is:

Speed-up = Sequential C++ time / Best OpenCL time.

Figure 6: Performance results for 30 images (execution time in milliseconds for each implementation and image size)

As shown in Figure 7, the GPU implementation is approximately 28 times faster than
the sequential C++ implementation for an image of size 2048x2048.

Figure 7: Speed-up achieved by the best OpenCL implementation (sequential C++ time relative to the Intel GPU time and to the Nvidia Multi-GPU time, for each image size)

SUMMARY AND CONCLUSIONS

In this paper, a GPU implementation of a method for pavement distress detection
was presented. Based on the wavelet transform, the method identifies
distress in pavement images by using statistical criteria. To accelerate the execution
of the method and to enable real-time analysis of pavement images, the method is


implemented on a GPU. Performance tests show that a significant speed-up is achieved
by utilizing the Open Computing Language. A speed-up of 28 times over the
sequential C++ implementation is achieved with the multi-GPU implementation for
2048x2048-sized images.
Yet, the image acquisition and analysis processes involve further steps, such
as noise filtering and background illumination correction. In the future, noise filtering
methods and shadow removal algorithms will also be implemented on Graphics
Processing Units.

REFERENCES

Daubechies, I. (1990). The Wavelet Transform, Time-Frequency Localization and Signal Analysis.
IEEE Transactions on Information Theory, vol. 36, no. 5, pp. 961 – 1005

Levitz, D. (2014). Across the U.S., the Worst Pothole Season in Recent Memory. Online available at:
http://www.citylab.com/work/2014/03/across-us-worst-pothole-season-recent-memory/8735/ Accessed
03.12.2014

Haar, A. (1910). Zur Theorie der Orthogonalen Funktionensysteme. Mathematische Annalen, vol. 69,
pp. 948 - 956

Intel Corporation (2013). Intel® SDK for OpenCL* Applications 2013 R2 Optimization Guide.
Document Number: 326542-003US

Khronos OpenCL Working Group (2013). The OpenCL Specification, Version: 2.0. Document
Revision 19

National Cooperative Highway Research Program (NCHRP) (2004). Automated Pavement Distress
Collection Techniques: A Synthesis of Highway Practice. NCHRP Synthesis 334, Transportation
Research Board, Washington, USA

Deutscher Städte- und Gemeindebund (DStGB) (2014). PKW-Maut für alle Straßen richtiger Ansatz –
Beteiligung der Kommunen an den Einnahmen unverzichtbar. Online available at:
http://www.dstgb.de/dstgb/Home/Pressemeldungen Accessed 03.12.2014

Sharma, B. and Vydyanathan, N. (2010). Parallel Discrete Wavelet Transform using the Open
Computing Language: a performance and portability study. 2010 IEEE Int. Symp. Parallel and
Distributed Processing, Workshops and Ph.D. Forum (IPDPSW), pp.1 - 8

Stürmer, M. et al. (2012). Fast wavelet transform utilizing a multicore-aware framework. PARA'10
Proceedings of the 10th international conference on Applied Parallel and Scientific Computing, vol. 2,
pp. 313-323, Springer-Verlag Berlin, Heidelberg

Catanzaro, B. (2010). OpenCL™ Optimization Case Study: Simple Reductions. Online available at:
http://developer.amd.com/resources/documentation-articles/articles-whitepapers/opencl-optimization-
case-study-simple-reductions/ Accessed 03.12.2014

Zhou, J. et al. (2006). Wavelet-based pavement distress detection and evaluation. Optical Engineering,
vol. 45, no. 2


Real-Time Location Data for Automated Learning Curve Analysis of Linear Repetitive Construction Activities

H. Ergun1 and N. Pradhananga2


1
Graduate Student, OHL School Of Construction, Florida International University,
Engineering Center, 10555 West Flagler Street, Miami, FL. E-mail:
[email protected]
2
Assistant Professor, OHL School Of Construction, Florida International
University, Engineering Center, 10555 West Flagler Street, Miami, FL. E-mail:
[email protected]

Abstract
Learning curve analysis has been used for decades in the construction industry
to study the effect of experience on productivity, especially in repetitive jobs.
Accurate and reliable estimates of time and cost are the benefits of performing a
learning curve analysis. While learning curves exist at all levels of a project,
detailed analysis of workers' learning curves for short-term activities (that only
span a couple of minutes) requires meticulous manual observation. This
demand in manual effort overshadows the economic benefits of such analyses.
Also, manual observations are time consuming, subjective and prone to human
errors. This research outlines how data for automating such learning curve
analyses can be gathered from a construction site. For this purpose, real-time
location data using Global Positioning System (GPS) technology is collected from
the workers as they perform their regular activities. Data acquired from GPS
technology is used with occupancy grid analysis to calculate the amount of time
spent by workers in specific areas, which is used to demonstrate the spatio-
temporal analysis of the learning curve. A linear construction activity is presented as a
case study. Results include automatic generation and visualization of learning
curves for workers. The proposed method enables minute analysis of learning
curves at the activity level, which can be directly associated with the project level by
following the work breakdown structure. The method can be used in the construction
field by project managers for improving estimation, scheduling and training.

INTRODUCTION
One of the important aspects in the construction management field is
estimation. Estimation can be for time, budget or labor effort. It has been observed
that the resources used for repetitive construction activities exhibit a
learning behavior, and the time taken to perform the same activity decreases over time.
The reasons have been attributed to increased worker familiarization, improved
equipment and crew coordination, improved job organization, better engineering
support, better day-to-day management and supervision, development of more
efficient techniques and methods, development of a more efficient material supply
system, and stabilized design leading to fewer modifications and rework (Thomas


et al., 1986). Because of these effects, creating a learning curve became an
advantage in order to determine the best predictive model, understand the factors
affecting the rate of learning, estimate the learning curve model parameters and
quantify the effect of delays upon performance (Thomas et al., 1986).
The learning curve method can be used to describe both individual and group
levels of learning (Yelle 1979). While the individual level is a concern for
psychologists (Anzai and Simon 1979), the organizational level is investigated by
economists (Yelle 1979). Organizational learning curves mainly focus on
cumulative functions resulting from an increase in performance or a decrease in cost
with an increasing output rate for comparable units of production (Schilling et al.,
2003).
Most of the previous studies focused on comparing learning curves
across different methods and on the time-learning relationship for long-term work.
In contrast, this study focuses on automating the process of data collection for
short-term work that exists within rapid processes.

BACKGROUND
The log-linear, the plateau, the Stanford-B, the DeJong and the S-model are five
widely known types of learning curves (Yelle, 1979). Among these, the log-linear
curve can be divided into three parts: the first part shows little
improvement in productivity from previous work experience, the second part shows
a sharp drop in the time taken as experience is gained, and the final part
is a horizontal line indicating that no additional improvement
is observed (Couto & Teixeira, 2005). The standard form of this learning curve is
calculated as

y = ax^b

where y is the number of direct labor hours required for the xth unit of production,
a is the number of labor hours needed for the first unit of production, x is the cumulative
number of units produced, and b is the learning rate (Schilling et al.,
2003). In practice the value of the learning rate varies; it can also be negative,
affecting productivity negatively (Chen and Taylor, 2014). Techniques like an
improved Line-of-Balance for attaining goals under strict deadlines are also used
in some cases (Zhang et al., 2014).
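As a brief illustration, the log-linear model above can be fitted to observed unit times by linear regression in log-log space. The sketch below uses NumPy with made-up unit durations; the numbers are purely illustrative.

import numpy as np

units = np.arange(1, 9)                                      # cumulative unit number x
hours = np.array([5.0, 4.2, 3.9, 3.6, 3.5, 3.3, 3.2, 3.1])   # observed labor hours y (illustrative)

# log(y) = log(a) + b*log(x), so a straight-line fit in log-log space gives b and log(a)
b, log_a = np.polyfit(np.log(units), np.log(hours), 1)
a = np.exp(log_a)
print(f"y = {a:.2f} * x^{b:.3f}")                            # b < 0 indicates learning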
The main concern in the formation of all kinds of learning curves is the collection
and implementation of data pertaining to the curve. Information associated with
activities should be collected accurately, and the forecasted results should be
validated against real-world cases. Current methods of data collection for learning
curves rely on manually recorded observations and are prone to error. The manual method
also consumes enormous time and resources. In order to avoid this, Ultra
Wideband (UWB) technology was used to collect real-time location
data from construction resources (workers, equipment and material) (Cheng et
al., 2011). Additionally, Radio-Frequency Identification (RFID) technology was
used to track material transportation and work crews and to control labor productivity,
which is a major reason for excessive cost resulting from time waste. Data
required for site monitoring and field mobility analyses were gathered using


passive RFID tags installed on construction resources (Costin et al., 2012). Global
Positioning System (GPS) (Pradhananga and Teizer 2012) and vision-based
techniques (Park et al., 2011) are other technologies tested and proven in the
construction environment for resource tracking.

OBJECTIVES AND SCOPE


This paper focuses on automating the learning curve generation process for
linear, short-term repetitive works. The main objective of this research is to
demonstrate how location tracking technology can be leveraged for learning curve
analysis. Unlike previous studies, this paper deals with observation and evaluation
of the learning effect on short-term construction activities with real data from the site.
This paper only deals with linear construction activities that propagate in a
specific direction with respect to time. The presence of the crew at a specific
section of the site is considered the basis for identifying the active work area. This paper
also ignores the effect of workers' prior experience on the learning curve. Other effects, such as
workers' fatigue, stress level and unexpected interruptions, have not been filtered
out. These factors can contribute to a significant change in productivity analysis
and learning behavior. Data is collected using low-cost GPS devices and, limited
by the technology, only outdoor activities could be considered for data collection.

AUTOMATED LEARNING CURVE ANALYSIS PROCESS


The methodology followed for automated learning curve analysis is shown
in Figure 1. This section outlines the details of each step of the methodology.

Figure 1: Research Methodology

GPS Data Collection


Data collection was done by installing GPS tags on workers' hard hats.
Details about the tags used and the recorded data are discussed in Pradhananga and
Teizer 2012. The John and Mary Brock Football Practice Facility situated at the
Georgia Institute of Technology, Atlanta, GA was selected for data collection. The


facility had indoor and outdoor practice areas amounting to over 80,000 square
feet. During the data collection, ten workers tagged with GPS tags were
actively involved in installing metal stripes on the roof of the facility. The workers
had to stop a couple of times a day to haul material. Other than that, the
workers spread in a linear pattern from east to west along a stripe on the roof and,
on successful installation of the stripes, the entire crew moved together from north
to south. Figure 2 shows the distribution of workers and the direction of their
progress. One GPS tag was placed on each worker and data was recorded at the
rate of 1 Hz for their entire working shift, which lasted from 7:00 AM in the
morning to 6:00 PM in the evening.

Figure 2: Data collection site

Data Filtering & Analysis


Data was downloaded at the end of the day and no online filtering methods
were implemented. For post-processing, the data points were converted from
WGS (World Geodetic System) coordinates to Universal Transverse Mercator (UTM) coordinates. This
conversion was done to ease the occupancy grid analysis discussed in the following
section. Filtering the collected data required scrutinizing technological
limitations, such as read range and multipath, and human errors, for instance a
worker leaving the site without turning the GPS tag off. Data pertaining to such
events were filtered out manually by observation as well as automatically by
geofencing. Details on GPS accuracy, errors and the filtering process can be found in
Pradhananga and Teizer (2012).
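A minimal sketch of this post-processing step is shown below. It assumes the pyproj library, UTM zone 16N for the Atlanta site (EPSG:32616, an assumption made here for illustration), and a simple rectangular geofence whose coordinates are placeholders.

from pyproj import Transformer

# WGS84 (lat/lon) to UTM zone 16N (meters); the zone choice is an assumption for Atlanta
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32616", always_xy=True)

def filter_and_convert(points, x_min, x_max, y_min, y_max):
    # points: iterable of (lon, lat) fixes recorded at 1 Hz
    kept = []
    for lon, lat in points:
        x, y = to_utm.transform(lon, lat)
        if x_min <= x <= x_max and y_min <= y <= y_max:   # keep only fixes inside the site geofence
            kept.append((x, y))
    return kept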


Occupancy Grid Analysis


The active working area for the workers tagged with GPS was limited to
the boundary of the roof. The roof was divided into square grids (3 m x 3 m), as
shown in Figure 3, for occupancy grid analysis. The arrow in Figure 3 shows the
direction of progress of work. Occupancy grid analysis has been used for
detecting and tracking resources for jobsite awareness using sensing technologies
(Teizer et al. 2007). This analysis deals with quantifying the occupancy of each cell
of the grid by the workers. It is a map of the frequency with which each cell is occupied by
the workers. Cheng et al. (2011) explain the process of occupancy grid analysis
and its significance in construction operations analysis. Occupancy analysis was
performed for the complete working shift of each worker.
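The occupancy computation itself reduces to counting 1 Hz GPS fixes per cell. A sketch under the assumption of 3 m square cells and UTM coordinates is given below; the variable names are illustrative.

import numpy as np

def occupancy_grid(xy_utm, x_origin, y_origin, n_cols, n_rows, cell=3.0):
    # xy_utm: (N, 2) array of worker positions at 1 Hz, so each fix counts as one second
    grid = np.zeros((n_rows, n_cols), dtype=int)
    col = ((xy_utm[:, 0] - x_origin) // cell).astype(int)
    row = ((xy_utm[:, 1] - y_origin) // cell).astype(int)
    inside = (col >= 0) & (col < n_cols) & (row >= 0) & (row < n_rows)
    np.add.at(grid, (row[inside], col[inside]), 1)   # seconds of occupancy per cell
    return grid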

Figure 3: Working territory divided into grids

Learning Curve Analysis


Learning curve analysis is the study of time versus productivity. In other words,
a learning curve can be studied by analyzing the time taken to perform a repetitive task.
In Figure 3, moving from north to south, if the time taken to progress through each
row of the grid is analyzed, a learning curve is generated. This way, the generation
of the learning curve can be simplified by linearly adding the occupancy of each row
in Figure 3. The result of such an analysis is discussed in the section “Preliminary
results”.
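Reusing the occupancy grid sketched above, the row-wise summation and a simple trend estimate can be written as follows; this is only an illustration of the procedure, with np.polyfit standing in for whatever trend-fitting method is preferred.

import numpy as np

def learning_curve_from_grid(grid):
    # grid: occupancy grid in seconds, with rows ordered from north to south
    seconds_per_row = grid.sum(axis=1)            # time spent progressing through each row
    rows = np.arange(1, len(seconds_per_row) + 1)
    slope, intercept = np.polyfit(rows, seconds_per_row, 1)
    return seconds_per_row, slope                 # a negative slope suggests a learning effect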


Recommendation and Optimization


The results obtained from learning curve analysis can be utilized in
estimating future production. Recommendations can be made for optimizing
resources and maximizing productivity. This paper does not deal with predictive
models but only shows the feasibility of obtaining learning curves from real-time
location data. Hence, statistical details and predictive model development have
not been included in the scope and are not presented in the following section.

PRELIMINARY RESULTS
Figure 4 shows the learning curve for a worker. Three spikes in time were
observed while the workers hauled material to the roof. Four distinct observations
were made from the figure. Each observation is discussed through a trend
line in Figure 4. The nature and properties of the trend lines are not discussed
in this paper, but the slopes of the lines are of interest. Figure 4 represents the
result from a small sample set with a limited number of data points. The aim of this
study is an exploratory analysis of the potential for capturing learning
behavior. Extensive data collection and rigorous analysis will be needed to
validate the concept statistically. The preliminary results only investigate whether
the trend is positive or negative, not the validity of the trend itself.

Figure 4: Sample learning curve analysis


Trend line 1: Trend line 1 in Figure 4 represents the overall fit for the learning
curve. The negative slope of the trend line indicates a learning behavior. Trend line 1
shows that the speed of progress increased as the day progressed, because the
workers took less time to progress through the grids from north to south.
Trend line 2: Trend line 2 in Figure 4 shows the learning behavior of the workers
during the material hauling process. The trend line only fits the points on the spikes.
This line indicates that material hauling got faster as the worker repeated the task
multiple times on the same day. Further investigation needs to be done on the
amount of material hauled during each spike. An argument can be made that the same
amount of material was not hauled each time. These were also the times when the


workers took breaks. So, although a negative trend is seen, it is not certain that it
is because of the learning effect.
Trend lines 3 and 4: Figure 4 shows the normal working pattern of the workers.
The normal pattern was disturbed by material hauling. Trend lines 3 and 4
show the trends while the workers performed their regularly scheduled tasks.
Both trend lines 3 and 4 have negative slopes indicating learning behaviors. It is
also interesting to observe that the frequency at the end of trend line 3 is higher
than at the start of trend line 4. This reinforces the concept of the learning and forgetting
curve. It can be assumed that the working rhythm of the worker did not continue after the
distraction due to material hauling. As a result, progress was retarded at the
start of the next regular session. The worker, however, seems to pick up speed
and overcome the effect of the disturbance quickly.

CONCLUSION
This paper presents a real case study of roof construction examined in
order to show the learning effect in repetitive jobs. The jobsite was divided into
grids, and the total amount of time that each worker spent inside the grid cells was
determined. Since the roof construction job comprises two kinds of tasks,
hauling material to the roof and installing the material, the
response obtained for one worker was investigated in multiple parts. This paper
shows the learning effect on short-term activities, in which improved equipment
and crew coordination and worker familiarization bring about a decrease in time
consumption. Further studies need to be done on predictive estimation of the
time and cost needs of future activities based on real case data. Owing to learning
curve studies, time, budget and manpower resources can be scheduled for short-
term activities. The method can be used by estimators and managers to improve resource
allocation and to gain an accurate understanding of productivity.

ACKNOWLEDGEMENT
The authors would like to acknowledge the support of Barton-Malow
Company during data collection. Any opinions, findings, or conclusions included
in this paper are those of the authors and do not necessarily reflect the views of
Barton-Malow.

REFERENCES
Anzai, Y. and Simon, H. A. (1979). The theory of learning by doing. Psychological Review, 86(2), 12.
Chen, J. and Taylor, J. E. (2014). Modeling Retention Loss in Project Networks:
Untangling the Combined Effect of Skill Decay and Turnover on
Learning. Management in Engineering , 1-10.
Cheng, T., Venugopal, M., Teizer, J. and Vela, P. A. (2011). Performance
evaluation of ultra wideband technology for construction resource location


tracking in harsh environments. Automation in Construction, 20(8), 1173-


1184.
Costin, A., Pradhananga, N. and Teizer, J. (2012). Leveraging Passive RFID
Technology for Construction Resource Field Mobility and Status
Monitoring in a High-Rise Renovation Project. Automation in
Construction, 1-15.
Couto, J. P. and Teixeira, J. C. (2005). Using linear model for learning curve
effect on highrise floor construction. Construction Management and
Economics, 23(4), 355-364.
Park, M. W., Koch, C. and Brilakis, I. (2011). Three-dimensional tracking of
construction resources using an on-site camera system. Journal of
Computing in Civil Engineering, 26(4), 541-549.
Pradhananga, N. and Teizer, J. (2013). Automatic spatio-temporal analysis of
construction site equipment operations using GPS data. Automation in
Construction, 29, 107-122.
Schilling, M. A., Vidal, P., Ployhart, R. E. and Marangoni, A. (2003). Learning by
Doing Something Else: Variation, Relatedness and the Learning Curve.
Management Science, 39-56.
Teizer, J., Caldas, C., and Haas, C. (2007). Real-Time Three-Dimensional
Occupancy Grid Modeling for the Detection and Tracking of Construction
Resources.” J. Constr. Eng. Manage., 133(11), 880–888
Thomas, H. R. (2009). Construction Learning Curves. Practice Periodical on
Structural Design and Construction, 14-20.
Yelle, L. E. (1979). The Learning Curve: Historical Review and Comprehensive
Survey . 302-328.
Zhang, L., Zou, X. and Kan, Z. (2014). Improved Strategy for Resource
Allocation in Repetitive Projects Considering the Learning Effect.
Construction Engineering and Management.


An Empirical Comparison of Internal and External Load Profile Codebook Coverage of Building Occupant Energy-Use Behavior

Ardalan Khosrowpour1; Rimas Gulbinas2; and John E. Taylor1


1
Charles E. Via Jr., Department of Civil and Environmental Engineering, Virginia
Tech, Blacksburg, VA 24061. E-mail: [email protected]; [email protected]
2
The Joan and Irwin Jacobs Technion-Cornell Innovation Institute, Cornell University,
New York, NY 10011. E-mail: [email protected]

Abstract

Technology-independent elements, such as the behavior of building occupants,
are significant factors responsible for the energy consumption and associated CO2
emissions of buildings. Predicting building occupants' energy-use has been
identified as one of the most challenging processes due to the high intra-class variability
of occupant behavior. Previous research has benefited from classification techniques
that serve to simplify energy profiling by creating a codebook of energy-use
patterns for each user. Nevertheless, the optimum level and type of energy-use
behavior classification system (i.e., energy-use codebook) remains unknown. In this
paper, we build on previous work and compare the relative effectiveness of various
approaches to simplified representations, or codebooks, of building occupant energy-
use behavior. We introduce the methodologies behind the construction of the
codebooks and conclude that individually generated codebooks are a more accurate
approach to modeling occupants' energy-use than globally generated
codebooks. This contributes to the creation of more reliable energy-use profiling
methods that can enhance the efficacy of automated energy efficiency programs.

INTRODUCTION

Buildings account for more than 40% of CO2 emissions in the United States
(U.S.EIA 2014) and recent research has indicated that CO2 emissions of commercial
buildings are expected to increase faster than all other types of buildings (except
industrial buildings) at an annual rate of 0.8% (U.S.EIA 2014). Several strategies at
both the utility and building-scale are being developed to improve energy efficiency
and ultimately reduce the rate of energy-use associated with buildings. One approach
that has seen broad adoption is the ability to benchmark building energy-use
performance relative to a large set of other similar buildings. Through this approach,
public tools like the EPA Portfolio Manager (EnergyStar 2014) enable buildings to
identify potential opportunities and set goals for energy efficiency improvements.
However, even as energy-use data standards become increasingly granular on a
temporal scale (Green Button Data, 2014), they are usually limited to whole-building
spatial granularity. Therefore, energy efficiency improvement identification


opportunities are similarly constrained to the building level. In this paper, we
introduce a new methodology for classifying, grouping, and interpreting the energy-
use behaviors across a set of building occupants over time that enables new energy
efficiency improvement opportunities.

BACKGROUND

The widespread deployment of smart-metering infrastructure has enabled the
development of analytical tools that utilize time-series energy-use interval data in
many different ways. While a significant amount can be learned about building-level
operations based on whole building interval energy-use data (Granderson 2013), most
academic research has focused on leveraging this time-series data for optimizing grid-
level operations through demand management (e.g. peak load shaving, demand
balancing). Efforts in applying statistical approaches to energy-use interval data have
primarily focused on classifying power utility customers based on their load profiles.
Initial research investigated different segmentation and classification methods
(Figueiredo, Rodrigues et al. 2005, Mutanen, Ruska et al. 2011), and has expanded to
understand how customer classification can be applied to inform the structure and
targeting strategies of energy efficiency programs (Smith, Sewell et al. 2004) (Albert,
Rajagopal et al. 2011, Kwac, Flora et al. 2014). These recent efforts have also
explored the development and application of load-profile ‘codebooks’ that contain
representative energy-use patterns for large sets of power utility customers and enable
an encoding mechanism to assign tags to customers that exhibit frequent energy-use
patterns. The tags are then associated with different intervention strategies that are
designed to motivate customers to shift or change their energy-use behavior.

While classification algorithms that utilize whole-building time-series energy-
use data are relatively well established, due to the lack of more spatially granular
data, sub-building and occupant-level classification methods are far less common.
However, recent advancements in sensor technologies that enable cost-effective
monitoring of plug-loads have spawned a new area of building occupant energy-use
research. Gulbinas et al. (Gulbinas et al. 2015) developed novel algorithms that
leverage occupant-level energy data to classify individuals by energy-use efficiency,
intensity, and predictability. In this paper, we build upon this research by
investigating various methods of developing a building occupant energy-use behavior
codebook. First, we introduce a new definition of energy-use behavior, based on a
probabilistic interpretation of energy-use over the course of a day that enables
comparative analysis of the effectiveness of different codebook styles in representing
occupant behavior. We then expand upon the methodologies introduced in previous
utility-scale codebook formulation research (Kwac, Flora et al. 2014) and investigate
new methods of developing codebooks that may be more representative of individual
occupants. We conclude by providing an analysis of the different codebook
formulations and providing insight into how these codebooks can be applied in
energy efficiency strategies that target individual building occupants.


METHODOLOGY

We benefit from a K-means clustering algorithm to classify occupants'
energy-use behaviors on a local and a global scale. The K-means clustering algorithm
classifies N points in M dimensions into K clusters. The main objective of K-means
clustering is to minimize the Euclidean distance error between data points and their
assigned cluster centers. The algorithm starts the classification process by initiating
multiple random cluster centers and updates the model through an iterative process.
The procedure is to search for K-partitions with locally optimal Euclidean distances
by moving points from one cluster to another (Hartigan and Wong 1979). We build
on previous work and use the data collected at a workstation level in a
commercial building located in Denver, CO with more than 100 occupants (Gulbinas
and Taylor 2014). The data was collected through a series of high resolution wireless
smart meters that facilitate minute-level data analysis. Occupants' energy-use
behavior is classified at two scales, local and global, and a codebook of energy-use
patterns is created for each individual at each scale. Figure 1 illustrates a sample
individual energy-use codebook.

Figure 1. An individual energy-use codebook

In the local energy-use classification, each occupant's energy-use data is used
independently to create an energy-use codebook. We cluster a wide range of potential
codebook sizes for each individual to ensure that we have captured every possible
behavior combination. Assuming that we have L = 78 occupants and each individual
potentially has between Klocal = {2,3,…,50} different behaviors, we
generated L × 50 energy-use profiles using the K-means clustering algorithm. Therefore,
each individual has 50 energy-use profiles, Klocal = {2,3,…,50}, generated from their
energy consumption data. The K-means clustering input is structured as a Q × P
matrix where Q is equal to the length of the study (i.e. 87 days) and P is equal to 25
data points (i.e. 24 hours) per day. The results of each clustering report the error,
the cluster centers, and the association between daily energy-use patterns and cluster centers.
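A compact sketch of this per-occupant clustering, using scikit-learn and illustrative names (daily_profiles is the Q × P matrix for one occupant), is given below; it illustrates the procedure described above rather than reproducing the authors' exact code.

import numpy as np
from sklearn.cluster import KMeans

def local_codebooks(daily_profiles, k_values=range(2, 51)):
    # daily_profiles: (Q, P) array of one occupant's daily load profiles
    codebooks = {}
    for k in k_values:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(daily_profiles)
        codebooks[k] = {
            "centers": km.cluster_centers_,   # the k representative energy-use patterns
            "labels": km.labels_,             # pattern assigned to each day
            "error": km.inertia_,             # within-cluster squared error
        }
    return codebooks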

Previous research has implemented a global approach to classify users'
energy consumption patterns. The main reason behind global clustering is to provide


a unified codebook of representative energy-use patterns that spans all occupants. In
the global energy-use clustering algorithm, all the occupants' energy-use data are
merged to a single pool and used as an input for the clustering algorithm. Based on
the same notation introduced in the previous paragraph, we ran Kglobal =
{2,3,…,100} different K-means clustering runs using an input matrix with (L×Q)
rows and P columns containing all occupants' energy-use data. Therefore, we have
100 global codebooks with Kglobal = {2,3,…,100}. For each occupant, we then
create multiple local codebooks by associating global codebook centers with
corresponding occupants and days of the week. Thus, each occupant will have 100
local codebooks whose cluster centers are extracted from the global codebook with
sizes varying based on the number of global cluster centers and their shapes. Figure 2
depicts an overview of both (local and global) codebook generating methods while
illustrating a side by side comparison of one occupant’s local codebooks generated
from both approaches. In the following section, we report the clustering results,
compare the accuracies between global and local clustering approaches, and analyze
the distribution of clustering size and classification accuracy.
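The corresponding global variant, and the extraction of occupant-level codebooks from it, can be sketched as follows; scikit-learn and hypothetical names are again assumed, with all_profiles stacking every occupant's Q × P matrix and occupant_ids recording the owner of each row.

import numpy as np
from sklearn.cluster import KMeans

def extract_local_from_global(all_profiles, occupant_ids, k_global):
    # Cluster the pooled data, then keep, per occupant, only the global
    # centers actually used by that occupant's days.
    km = KMeans(n_clusters=k_global, n_init=10, random_state=0).fit(all_profiles)
    labels = km.labels_
    local = {}
    for occupant in np.unique(occupant_ids):
        used = np.unique(labels[occupant_ids == occupant])
        local[occupant] = km.cluster_centers_[used]   # that occupant's extracted codebook
    return local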

Figure 2. Local codebook generation; (a) independent local codebook generating process; (b) extracting local codebooks from the global codebook; Occupant #81 local codebooks, (c) extracted from a global codebook, and (d) independently generated

RESULTS & DISCUSSION


The results obtained from the clustering algorithms enable the analysis of
occupants' energy-use codebook sizes and their associated errors. We conducted a
side by side comparison between independently (i.e. individually) generated local
codebooks (i.e. locally generated codebooks) and the local codebooks extracted from
the global codebooks to investigate the effect of various codebook sizes on the
coefficient of variance calculated with respect to the ground truth data (i.e. energy
data captured by smart meters). Figure 3 depicts two boxplots to visualize the trend of
errors for both algorithms. The error associated with locally generated codebooks is at
its minimum when the number of patterns equals 12. However, the variance of the
error decreases by increasing the size of the codebooks. One of the implications of
this is that the larger the size of the codebook, the higher the consistency of
occupants' energy estimation. On the other hand, globally generated codebooks also
show a decreasing trend in error by increasing the codebook size, and the overall error
of clustering is relatively higher than for independent local codebooks, with a higher
standard deviation. However, the variance of the error does not decrease significantly
by increasing the size of global codebooks.

Figure 3. Local codebook error (i.e. coefficient of variance (CV)) versus the size of the codebook; (a) independently generated codebooks; (b) local codebooks extracted from the global dictionary

To analyze the size of globally extracted local codebooks with respect to their
global codebook size, we plotted a color-coded surface graph to visualize the trend.
Figure 4 shows a cone-shaped graph with a higher density of same-size globally
generated codebooks when the size of the global codebook is low. At larger global
codebook sizes, a wider range of local codebooks is uniformly covered.
Furthermore, the graph shows a non-linear trend where the size of globally extracted
local codebooks is asymptotic to 35 as the global codebook size goes up to 100.
This trend has important implications regarding global codebook saturation, where
more global cluster centers will not improve the accuracy of the behavior modeling,
and instead decrease the unity and robustness of the global codebook. Furthermore,
the trend can provide information regarding the predictability of occupants whose


local codebook size stays low even when the global codebook size increases
substantially.
In order to further investigate the trend of error variation in both clustering
algorithms, we compared the independently generated local codebooks to the same-
size globally generated local codebooks for each person and fitted a normal
probability distribution function to measure the variation of the errors for independent
local codebooks, globally extracted local codebooks, and the difference between the
two. Figure 5 illustrates the CV value trends for each local codebook. When
comparing Figure 5 (a) and (b), it is noticeable that the range of the globally made local
codebooks is higher than that of the independent local codebooks. Also, the means of the
distributions in (b) are higher than in (a). When looking at the longitudinal distribution
trends in (a) and (b), both variances decrease as the size of the codebooks increases;
however, the local codebooks' probability density functions demonstrate a higher
density and a lower CV mean, which makes them more desirable compared to the
globally extracted local codebooks. Furthermore, the results in Figure 5 (c) imply
that the variance of the differences between (a) and (b) will increase as the global
codebook expands, suggesting a reduction of reliability in occupants' energy-use
behavior modeling.

Figure 4. Number of occupants with the same globally extracted local codebook size at each specific global dictionary size

Based on the presented results, individually generated local codebooks can
easily outperform the globally generated local dictionaries while providing a lower
variance in error values. The lower CV values and variance of individual codebooks
suggest more reliable and accurate energy-use profiles for occupants, which enhances
the tasks of energy analysis and prediction. Despite all the advantages of individually
generated codebooks, the method lacks unity among occupants' energy-use profiles
to enable a standard occupant-level comparison for the sake of automated energy
efficiency programs. Therefore, further research should be done to develop a global


algorithm where independent local codebooks could be compared among each other
without significantly increasing error. Also, there is room for further progress in
determining the effect of data resolution on local and global energy-use clustering.

Figure 5. CV value distributions; (a) independent local codebooks; (b) globally extracted local codebooks; (c) the difference between CV values

CONCLUSION

In this paper, we build on previous work (Gulbinas et al. 2015) and compare
the relative effectiveness of various approaches to simplified representations, or
codebooks, of building occupant energy-use behavior. Specifically, we compared the
degree of error associated with the application of a codebook that is constructed from
an aggregated pool of building occupants versus one that is an aggregation of
codebooks constructed for each individual occupant. We introduced the
methodologies behind the construction of the codebooks and their comparative
analysis. Due to high intra-class variability of occupants’ energy-use behavior, the
study showed that generating local codebooks using individuals' energy data is a
more effective approach to energy-use profiling and facilitates the development of
automated energy efficiency programs in buildings. There is great potential in
targeted energy efficiency programs using occupants' energy-use behavior models
where each occupant receives the relevant information with respect to the observed
inefficiencies in their behavior. Thus, the optimization of energy-use profiles is an
important step to enable the underlying algorithms of such EE programs to perform
reliably.

ACKNOWLEDGMENTS


This material is based upon work supported by the Department of Energy
Building Technologies Program and the National Science Foundation under Grant No.
1142379. Any opinions, findings, and conclusions or recommendations expressed in
this material are those of the authors and do not necessarily reflect the views of the
DOE or the NSF. The authors would like to thank the Alliance for Sustainable
Colorado for hosting the study that provided the building occupant data used in the
experimental data analysis in this paper.

REFERENCES

Green Button Data, (2014). "GreenButton." from http://greenbuttondata.org/.

Albert, A., et al. (2011). Segmenting consumers using smart meter data. Proceedings
of the Third ACM Workshop on Embedded Sensing Systems for Energy-Efficiency
in Buildings, ACM.

EnergyStar (2014). from http://www.energystar.gov/buildings/facility-owners-and-


managers/existing-buildings/use-portfolio-manager

Figueiredo, V., et al. (2005). "An electric energy consumer characterization


framework based on data mining techniques." Power Systems, IEEE Transactions on
20(2): 596-602.

Granderson, J. (2013). "OpenEIS (Energy Information System)." from


http://eis.lbl.gov/openeis.html.

Gulbinas, R.; Khosrowpour, A.; Taylor, J. (2015), "Segmentation and Classification


of Commercial Building Occupants by Energy-Use Efficiency and
Predictability," Smart Grid, IEEE Transactions on (DOI:
10.1109/TSG.2014.2384997)

Gulbinas, R. and J. E. Taylor (2014). "Effects of real-time eco-feedback and


organizational network dynamics on energy efficient behavior in commercial
buildings." Energy and Buildings 84(0): 493-500.

Hartigan, J. A. and M. A. Wong (1979). "Algorithm AS 136: A k-means clustering


algorithm." Applied statistics: 100-108.

Kwac, J., et al. (2014). "Household energy consumption segmentation using hourly
data." Smart Grid, IEEE Transactions on 5(1): 420-430.

Mutanen, A., et al. (2011). "Customer classification and load profiling method for
distribution systems." Power Delivery, IEEE Transactions on 26(3): 1755-1763.

Smith, E. M., et al. (2004).System and method for energy management,Google Patent

U.S.EIA (2014). Annual Energy Outlook 2014 Early Release Overview


Challenges in Data Management When Monitoring Confined Spaces Using BIM and Wireless Sensor Technology

Z. Riaz1; M. Arslan2; and F. Peña-Mora3


1
Fulbright Fellow, Department of Civil Engineering & Engineering Mechanics,
Columbia University, 500 West 120th Street, New York, NY 10027. E-mail:
[email protected]
2
Department of Computing, School of Electrical Engineering and Computer Science
(SEECS), National University of Sciences and Technology (NUST), Sector H-12
Islamabad, Pakistan. E-mail: [email protected]
3
Department of Civil Engineering & Engineering Mechanics, 628 SW Mudd 500
West 120th Street, New York, NY 10027.
E-mail: [email protected]

Abstract

According to the US Bureau of Labor Statistics (BLS), in 2013 around three
hundred deaths occurred in the US construction industry due to exposure to hazardous
environments. A study of international safety regulations suggests that lack of oxygen
and temperature extremes contribute to hazardous work environments, particularly in
confined spaces. Real-time monitoring of these confined work environments through
wireless sensor technology is useful for assessing their thermal conditions. Moreover,
the Building Information Modeling (BIM) platform provides an opportunity to
incorporate sensor data for improved visualization through new add-ins in BIM
software. In an attempt to reduce Health and Safety (H&S) hazards, this work reports
upon an integrated solution based on BIM and wireless sensor technology. A
prototype system has been developed that: collects real-time temperature and oxygen
data remotely from wireless sensors placed at confined spaces on a construction site;
notifies H&S managers with the information needed to make decisions for evacuation
planning; and ultimately attempts to analyze sensor data to reduce emergency
situations encountered by workers operating in confined environments. However,
fusing the BIM data with sensor data streams challenges traditional approaches
to data management due to the huge volume of data. This work reports upon these
challenges encountered in the prototype system.

INTRODUCTION

According to the U.S. Bureau of Labor Statistics (BLS), a total of 796 fatal
injuries were recorded in the construction industry in 2013, out of which 14% of
fatalities were due to exposure to hazardous environments and 2% were caused by
fire and explosions (BLS, 2013). One area of safety concern on construction sites is
workers operating in confined spaces. The Occupational Safety and Health Administration
(OSHA) has defined a confined space as “any space that has limited means for entry
or exit and that is primarily not designed for continuous worker occupancy” (OSHA,
2014). OSHA (2015) estimates that there are 53 worker deaths and injuries, 4,900 lost
workday cases and 5,700 non-lost time accidents annually. Lack of oxygen in confined
spaces is the main cause of accidents faced by construction workers (IACS, 2007).


OSHA suggests that in confined spaces the oxygen content should be continuously
monitored both prior to starting work and during work. If the oxygen content in air drops
below 19.5% by volume or rises above 22% by volume, then entry into the confined space
should be avoided (OSHA, 2011). Consequently, there is a need for effective ‘real-
time’ monitoring of oxygen and temperature levels in a confined space environment
in order to improve worker safety.

APPLICATION OF TECHNOLOGY FOR ENVIRONMENTAL


MONITORING

BIM has gained extensive interest from the Architectural, Engineering and
Construction (AEC) industry and provides a platform for sharing information
throughout a building lifecycle (Vanlande et al., 2008). Moreover, Wireless Sensor
Networks (WSN) have been a much used technology facilitating smart environments in the
automation of building systems (Cook and Das, 2004). Some significant attempts at
integrating BIM with different sensing technologies have been made in the recent past for
improved environmental monitoring and facility management. Initial efforts in this
integration tried to create sensor based aware environments (Piza et al., 2005). This
was followed by the concept of energy consumption and building performance
(Guinard et al., 2009; Katranuschkov et al., 2010; Attar et al., 2011). Recent
developments in BIM and sensor technology integration mainly involve energy and
facility management (Cahill et al., 2012; Ozturk et al., 2012) and safety risk analysis
of buildings using acquired building performance data (Setayeshgar et al., 2013).

However, it is apparent from the literature that the potential of BIM-sensor
technology has not been fully explored for the purpose of H&S, particularly when
BIM is becoming a popular platform in the AEC industry. The most relevant work
found in this area is by Shiau and Chang (2012), in which a BIM environment is used to
develop a fire control management system to provide a safe living environment to
building residents. Also, Guven et al. (2012) provide guidance for evacuation in
emergency situations. These studies provide a basic motivation to develop a BIM-
sensor based solution to improve the H&S of workers on construction sites in general and
in confined spaces in particular. Therefore, this research explores the integration, and the
resulting challenges, of BIM and WSN with the objective of monitoring the
environment of confined spaces, an often ignored area where the physical condition
and safety of operators is at high risk. A prototype system entitled Confined Space
Monitoring System, or CoSMoS, was developed and evaluated (Riaz et al., 2014). This
paper reports upon a continuation of the same work, improving the prototype
system into a superior solution by taking into consideration the limitations of the
initial prototype system. The paper highlights the current challenges faced due to the
collection of real-time sensor data and the resulting data management issues.

RESEARCH METHOD

CoSMoS was designed after a detailed literature review followed by a series of
semi-structured interviews carried out with industry experts. This was followed by
prototype development as a proof of concept. The development environment
used for the prototype comprised: Crossbow's TelosB motes (wireless
sensors); Autodesk Revit Architecture 2013 (BIM software); Visual Studio .NET
(software development environment); and SQL Server (database management
system). The choice of these components and development environments was due to


ease of availability, open source and/or low cost. The developed system was
successfully tested on a construction site and evaluated through focus groups with
industry experts. This work focuses on the challenges that are encountered in order to
extend the existing prototype into a full-scale system with multiple sensors, and the
consequent issues related to data management.

COSMOS OVERVIEW

CoSMoS was designed with four basic elements to achieve the goal of
visualizing real-time sensor data of confined spaces (Riaz et al., 2014). These elements
were: BIM software; a BIM database; sensing motes; and Application Programming
Interfaces (API) with a Revit Add-In. The prototype system focused on the following:

- monitoring temperature and oxygen levels using wireless sensing motes;
- aggregating the sensing motes' values to a centralized WSN gateway mote;
- saving the aggregated sensor values to an external database;
- extracting database sensor values for visualizing a BIM model with color codes;
- generating notifications if sensor values exceed the defined threshold limits; and
- an Android based mobile application for H&S managers for remote monitoring.

COSMOS DATA MANAGEMENT

Before discussing the challenges associated with sensor data management, it is
important to examine the CoSMoS data aggregation, storage and management
framework (shown in Figure 1). Here it is crucial to highlight that the self-updating
BIM model in CoSMoS depends on a database link between the Revit external
application and a database system. Therefore, overall application data management
performance not only deals with centralized sensor data storage but will also
include BIM integration and visualization. The system has been divided into three
main layers that are discussed below:

Figure 1: CoSMoS Data Framework

Sensor Data Acquisition

The data sensing layer consists of a network of wireless sensors that measure the
environmental attributes of a physical confined space. The CoSMoS implementation
consists of commercially available, open source and IEEE 802.15.4 compliant TelosB
wireless sensors (motes) for real-time data acquisition. Once deployed in a confined
space, the motes initialize and implement operations such as neighbor discovery, data
sensing, sensor data processing and sensor data transmission. For CoSMoS, these
sensing motes are programmed to aggregate the sensor values coming from other
sensing motes and forward them, along with their own data, to the sensing gateway mote.
The back-end sensor application is programmed to: read the TelosB gateway mote;
convert acquired raw sensor values into a human understandable format; and push the
data to the data storage layer.

Data Storage

This layer stores sensor values with “SensorID”, “Sensor type”, and “Timestamp” in
SQL Server (DBMS) in a relational format. Here, storing each sensing mote's
unique identification (ID) is critical to later correlate the acquired sensor values to the
BIM model of a confined space. A data connection is established between the Revit
Architecture software and SQL Server using a Revit API. At this stage in
CoSMoS, each TelosB mote generates sensor data in the form of a Comma Separated
Values (CSV) file (see Figure 2). This data stream for each mote is stored in a
database. It should be noted here that this data file is for a single mote, and as the
number of motes in a WSN increases, the corresponding data files and the size of
the database multiply accordingly. This data storage plays a significant role in the
retrieval of historic sensor data and in future investigations and pattern analysis.

Figure 2: CSV file

Data Visualization

CoSMoS is invoked as a Revit Add-In from the Revit Architecture GUI, as shown in
Figure 3. A self-updating GUI is displayed with sensor values of temperature and
oxygen. BIM plays a significant role in this process since the sensor data of the
confined space is populated with relevant parametric data from the BIM software.
Establishing a data link that uniquely ties sensed data to a specific room is all that is
required to pair collected data to the space that it describes. In the case of CoSMoS, this
data link is accomplished by assigning the room element's automatically created
uniqueID from Autodesk Revit to each associated sensor. If the sensor values cross
over the defined temperature/oxygen level thresholds, the system will highlight the
location on the BIM model in red and will generate sound alarms and
smartphone based notifications so that timely actions can be taken to avoid hazardous situations.

Figure 3: Invoked external application from Revit GUI


CHALLENGES

There exist many sensor data related challenges when integrating BIM with
sensor technology for safety management. Some of these are discussed below:
Once huge volumes of sensor data are collected from multiple motes, a challenge
arises in managing and retrieving relevant data from a database efficiently. The developed
prototype system uses SQL Server, a traditional data store based on the
relational database management system (RDBMS) model. Recent research highlights that the
sharp growth of data makes traditional RDBMSs inadequate to handle the huge volume
and heterogeneity of sensor data. However, there is no consensus in the literature on
which type of database performs best for real-time sensor applications. For example,
to achieve reliability and availability of data, distributed file systems (Vasenev, 2014)
and NoSQL databases (Ramesh, 2014) are the suggested choices.
A database survey shows that database performance results vary for scan and
insert operations on sensor values. Veen et al. (2012) compared the performance of
PostgreSQL (an SQL database) with Cassandra and MongoDB (both NoSQL
databases). Their results showed that Cassandra was the best choice for large critical
sensor applications, MongoDB performed best for small to medium sized non-critical
sensor applications where insert performance is significant, and PostgreSQL performs
best for read-intensive applications. Hence, for CoSMoS, being more insert intensive,
Cassandra and MongoDB (NoSQL) require further investigation. Furthermore,
Pungila et al. (2009) measured sensor data management performance using SQL
databases against Hypertable (a distributed data storage system), IBM Informix
(Time-Series) and Oracle Berkeley DB. It was shown that all three
performed better than SQL databases, with IBM Informix (Time-Series) showing
lower insert performance but better overall performance than Hypertable. This work
also showed that the choice of database tool depends on the system architecture
and its ability to communicate at very high data exchange rates, to insert at a
high data rate, and to hold extremely large amounts of data. Presently,
experiments are being designed for CoSMoS to compare the performance of various
database tools (SQL Server, Cassandra, MongoDB, IBM Informix, Hypertable).
Performance measures include: data exchange rates for read and write cycles; insert
performance including single insert versus bulk insert; physical versus virtual
performance due to the nature of CoSMoS data framework. Virtualization has opened
a new dimension in sensor data management and provisioning of virtualized cloud
based storage can significantly decrease physical infrastructure while improving the
logical presentation of sensor data. Another area to explore in the database layer is
enabling multiple asynchronous event triggers. Asynchronous event triggers are used to
perform multiple actions that are independent from each other in terms of their
execution when a defined condition is reached (Celko, 2014). For example, in the case of
CoSMoS such actions can be a visual alert on a BIM model along with a sound when a
certain sensor threshold is crossed or an unusual reading is observed in a sensor data stream.
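As a simple illustration of the idea (not the CoSMoS implementation), independent handlers can be fired concurrently when a reading crosses a threshold, for example with Python's asyncio; the handler names, threshold and reading fields below are assumptions for the sketch.

import asyncio

OXYGEN_MIN = 19.5  # % by volume, illustrative threshold

async def bim_alert(reading):
    print(f"Highlight room {reading['room_id']} in red (O2 = {reading['o2']}%)")

async def sound_and_sms_alert(reading):
    print(f"Trigger alarm and notify H&S manager for sensor {reading['sensor_id']}")

async def on_reading(reading):
    if reading["o2"] < OXYGEN_MIN:
        # independent actions run concurrently, like asynchronous database triggers
        await asyncio.gather(bim_alert(reading), sound_and_sms_alert(reading))

asyncio.run(on_reading({"sensor_id": 7, "room_id": "R-101", "o2": 18.9}))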
Due to the availability of a wide variety of sensors for environmental
monitoring, the datasets collected from motes vary with respect to consistency and
redundancy, resulting in the storage of meaningless data. Moreover, analytical procedures
have stringent requirements on data quality. Therefore, acquired sensor data needs to
be pre-processed when collected from different types of sensing motes, so as to
enable effective data analysis for safety management. Pre-processing includes data


cleaning and the elimination of redundant data, which will not only improve data quality
but will also reduce storage requirements. The developed prototype system discarded
incomplete sensor data packets transmitted by the motes and averaged multiple sensor
readings in order to provide BIM-based visualizations through an interactive user
interface. However, for a deeper understanding of the sensor data, incomplete data packets
should be incorporated in the data analysis. Processed and organized sensor data will
not only help safety managers monitor environmental conditions but will also help
them identify failure cause-effect patterns in order to prepare preventive safety
plans in the future.
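A minimal sketch of this pre-processing step is shown below. The packet layout and field
names are assumptions for illustration, not the prototype's actual data structures; the
sketch simply discards packets with missing fields and averages the surviving readings.

from statistics import mean

# Assumed packet layout: each packet is a dict with mote_id, temperature, humidity.
packets = [
    {"mote_id": "m1", "temperature": 22.1, "humidity": 41.0},
    {"mote_id": "m1", "temperature": None, "humidity": 40.5},   # incomplete -> discarded
    {"mote_id": "m1", "temperature": 22.5, "humidity": 42.0},
]

def is_complete(packet):
    return all(packet.get(field) is not None for field in ("temperature", "humidity"))

clean = [p for p in packets if is_complete(p)]

# Average the retained readings before storage/visualization.
avg_temperature = mean(p["temperature"] for p in clean)
avg_humidity = mean(p["humidity"] for p in clean)
print(avg_temperature, avg_humidity)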
As CoSMoS is expanded to a full-scale application, vast amounts of real-time data from
numerous sensors are likely to become available. Using knowledge discovery and data
mining techniques on these databases, patterns and predictions can be revealed.
Consequently, ensuring the accuracy of the sensor data acquired for visualization through
a BIM model to aid decision support is a major challenge. Environmental monitoring
wireless sensing motes have serious resource constraints (Madden et al., 2002; Zhao et
al., 2002). In particular, they have limited communication bandwidth (1-100 Kbps),
storage, transmission data rate, processing capability, and battery life, which may allow
motes to operate for only a certain number of hours. For example, the wireless sensing
mote used for this research has an 8 MHz processor, 10 KB of programming memory, and a
data rate of only 250 Kbps. This calls for special network management algorithms for
sensor data streams that explicitly incorporate these resource constraints, e.g., putting
an idle mote into sleep mode as an energy-efficient mechanism. Moreover, long-distance
transmission by sensing motes is not energy efficient, as the energy consumed by a mote
is a linear function of the transmission distance. One method to prolong network lifetime
while preserving network connectivity is to set up a small number of costly but more
powerful relay motes whose main function is to communicate with other sensing or relay
motes. One of the main challenges lies in the selection of relay motes in the network in
order to ensure the effectiveness of communication between sensing motes and relay motes.
This network management will result in more reliable data.
Another serious challenge of real-time sensor data streams is the interpretation of
incomplete values in the acquired data. Many factors can contribute to the loss of sensor
values, such as sensor network topology changes or packet loss. These issues can arise
from communication failures or poor links, fading signal strength due to a weak mote
battery, and packet collisions between multiple transmitters. Zhao et al. (2002) have
shown the seriousness of this problem experimentally.
Finally, another prevalent challenge in sensor data acquisition is imprecision, either
due to delayed database updates or noisy sensor readings. In the former case, the
massiveness of the sensor readings and the limited battery life and wireless bandwidth
may not allow instantaneous and continuous updates, and therefore the database state may
lag the state of the real world. The latter case is due to inaccuracies of sensor
measurements, such as noise from external sources, inaccuracies in the measurement
technique, and synchronization errors. The cost of imprecise sensor data can be very
significant if immediate decision-making is required by the H&S manager or an actuator
must be activated.
CONCLUSION
The CoSMoS application investigates the integration of BIM with wireless sensors to
monitor the environmental conditions of confined spaces. The designed application is at
an initial stage of development, and incorporating a greater number and variety of
sensors will add more value to the system output. A robust real-time system should not
only include specific lower and upper sensor value thresholds but should also be able to
recognize different patterns in the acquired sensor data. Future work may include more
sophisticated pattern analysis criteria to trigger an action. However, this is only
possible if the technical challenges of data management are resolved so that huge volumes
of sensor data can be stored, aggregated, managed, and retrieved.

REFERENCES
Attar, R., Hailemariam, E., Breslav, S., Khan, A., and Kurtenbach, G. (2011). Sensor-enabled
cubicles for occupant-centric capture of building performance data. ASHRAE Annual
Conference (pp. 1-8).

Bureau of Labor Statistics (2013). Fatal occupational injuries by industry, All U.S., 2013
[Online]. Available: http://stats.bls.gov/iif/oshwc/cfoi/cftb0277.pdf (accessed on Dec
2, 2014)

Cahill, B., Menzel, K., and Flynn, D. (2012). BIM as a centre piece for optimised building
operation. [Online] Available:
http://zuse.ucc.ie/~brian/publications/2012/BIM_centre_piece_OBO_ECPPM2012.pdf
(accessed Dec 21, 2013)

Celko, J. (2013). Streaming databases and complex events. In Complete Guide to NoSQL, 1st
ed., Morgan Kaufmann, Boston, pp. 63-79.

Cook, D., and Das, S. (2004). Smart environments: Technology, protocols and applications:
John Wiley & Sons.

Guven, G., Ergen, E., Erberik, M., Kurc, O. and Birgönül, M. (2012) Providing guidance for
evacuation during an emergency based on a real-time damage and vulnerability
assessment of facilities. J. Comput. Civ. Eng., pp. 586–593

Guinard, A., McGibney, A., and Pesch, D. (2009). A wireless sensor network design tool to
support building energy management. 1st ACM Workshop on Embedded Sensing
Systems for Energy-Efficiency in Buildings (pp. 25-30).

IACS (2007) Confined Space Safe Practice (rev.2) [Online] Available:
http://www.iacs.org.uk/document/public/Publications/Guidelines_and_recommendations/PDF/REC_72_pdf212.pdf
(accessed Jun 15, 2013).

Katranuschkov, P., Weise, M., Windisch, R., Fuchs, S., and Scherer, R. J. (2010). BIM-based
generation of multi-model views. [Online] Available:
http://www.hesmos.eu/plaintext/downloads/paper_114_final.pdf (accessed Mar 3,
2014)

Madden, S., Franklin, M. J. and Hellerstein, J. M. (2002) TAG: a Tiny AGgregation Service
for Ad-Hoc Sensor Networks. In Proceedings of 5th Annual Symposium on operating
Systems Design and Implementation (OSDI), December 2002.


OSHA (2015) Confined Space Entry: Awareness Training. [Online] Available:
https://www.osha.gov/dte/grant_materials/fy10/sh-21000-10/Confined_Space_Entry_Awareness.pptx
(accessed on Mar 6, 2015)

OSHA (2014) What are confined spaces? [Online] Available:
https://www.osha.gov/SLTC/confinedspaces/ (accessed on 3 December 2014)

Ozturk, Z., Arayici, Y., and Coates, S. P. (2012). Post occupancy evaluation (POE) in
residential buildings utilizing BIM and sensing devices: Salford energy house
example. [Online] Available: http://usir.salford.ac.uk/20697/1/105_Ozturk.pdf
(accessed Mar 13, 2014)

Pungila, C., Fortis, T. and Aritoni, O. (2009) Benchmarking database systems for the
requirements of sensor readings. IETE Technical Review, Sep-Oct 2009, Vol. 26, Issue 5,
pp. 342-349.

Piza, H.I., Ramos, F.F., and Zuniga, F. (2005). Virtual sensors for dynamic virtual
environments. 1st IEEE Int. Workshop on Computational Advances in Multi-Sensor
Adaptive Processing (pp. 177-180): IEEE.

Ramesh, D. and Kumar, C. (2014) A scalable generic transaction model scenario for
distributed NoSQL databases, Journal of Systems and Software. [online] Available:
http://www.sciencedirect.com/science/article/pii/S0164121214002684

Riaz, Z., Arsalan, M., Kiani, A. and Azhar, S. (2014) CoSMoS: Confined Spaces Monitoring
System, a BIM and Sensing Technology integration for construction Health and
Safety, Automation in Construction. Volume 45, pp. 96 – 106.

Setayeshgar, S., Hammad, A., Vahdatikhaki, F., and Zhang, C. (2013). Real time safety risk
analysis of construction projects using BIM and RTLS. [Online] Available:
http://www.iaarc.org/publications/fulltext/isarc2013Paper224.pdf (accessed Oct 21,
2013)

Shiau, Y.C. and Chang C.T. (2012) Establishment of Fire Control Management System in
BIM Environment http://onlinepresent.org/proceedings/vol5_2012/11.pdf (2012)
(accessed on 10 March 2014)

Vanlande, R., Nicolle, C., and Cruz, C. (2008). IFC and building lifecycle management.
Automation in Construction, 18 (1), 70-78.

Vasenev, A., Hartmann, T. and Dorée, A. (2014) A distributed data collection and
management framework for tracking construction operations, Advanced Engineering
Informatics, Volume 28, Issue 2, Pages 127-137

Veen, J., Waaij, B. and Meijer, R. (2012) Sensor Data Storage Performance: SQL or NoSQL,
Physical or Virtual. IEEE Fifth International Conference on Cloud Computing,
Honolulu, HI, 24-29 June 2012, pp 431 – 438.

Zhao, J., Govindan, R. and Estrin, D. (2002) Computing aggregates for monitoring wireless
sensor networks. 1st IEEE International Workshop on Sensor Network Protocols and
Applications.


GIS-Based Decision Support System for Smart Project Location

A. Komeily1 and R. Srinivasan1


1M.E. Rinker, Sr. School of Construction Management, University of Florida, P.O.
Box 115703, Gainesville, FL 32611-5703. E-mail: [email protected]

Abstract

In recent years, leading sustainability rating systems have extended sustainability
certification beyond buildings by introducing neighborhood-level rating systems. The
rationale is that sustainable buildings start with a suitable site location and how well
they integrate with their neighborhood. This paper discusses a Geographic Information
System (GIS) based decision support system that aids owners, developers, and other
stakeholders in analyzing the project site location and its integration with the
neighborhood in a sustainable manner. This decision support system uses four criteria to
evaluate the project site location: a) connectivity to existing urban infrastructure; b)
integration within the current neighborhood; c) connectivity to the transportation
network; and d) environmental and agricultural land status. After reviewing the
literature on project site location in relation to neighborhood-level sustainability
rating systems, this paper discusses the results of a preliminary test conducted using a
neighborhood case study.
Keywords: Sustainable site selection; Geographic information system; Neighborhood
rating systems; Decision support system

INTRODUCTION

Having a significant share of total energy consumption, related emissions, raw material
consumption, and fresh water withdrawals, buildings and their environment are, among
others, the primary focus of sustainability assessment. There is an increased demand for
sustainable buildings that pose minimal environmental impacts, owing to the increased
cost of energy and to environmental concerns (Azhar et al. 2011). This situation has
resulted in the development and implementation of a variety of building- and
neighborhood-level assessment methods such as rating systems, certifications, Life Cycle
Assessment (LCA) based tools, technical guidelines, assessment frameworks, and checklists
by various organizations and government entities worldwide (Haapio 2012).
Building rating systems primarily focus on evaluating building performance with regard
to energy consumption, water efficiency, material resources, and indoor quality.
Examples include Leadership in Energy and
Environmental Design (LEED) and Green Globes in the U.S., Building Research
Establishment Environmental Assessment Methodology (BREEAM) in the U.K.,
Comprehensive Assessment System for Built Environment Efficiency (CASBEE) in
Japan, DGNB in Germany, etc. (Fowler and Rauch 2006). Although their holistic


design approach helps, to some extent, to consider the complex relationship between a
building's construction and operation and its impact on the environment, building rating
systems cannot be used for an exhaustive building sustainability assessment. This is due
to the fact that building rating systems are unable to meaningfully capture the
interaction between buildings, urban spaces, facilities, and infrastructures.
Additionally, they fail to consider the project's contextual circumstances in their
analysis. Although existing building rating systems should not be underestimated, their
deficiencies emerged as an encouragement to reconsider the spatial boundary of
sustainability assessment and to take the assessment to a higher level where the project
context can be meaningfully considered (Conte and Monno 2012; Haapio and Viitaniemi
2008).
Neighborhoods are the smallest spatial unit at which sustainability issues can be
meaningfully addressed based on the four pillars of sustainability, namely environmental,
social, economic, and institutional (Aalborg 1994; Berardi 2013). The importance of the
neighborhood has resulted in the development of different Neighborhood Sustainability
Assessment (NSA) tools; the most prominent ones are spin-offs from building-level
sustainability tools (Sharifi and Murayama 2013), such as LEED-Neighborhood Development
(LEED-ND), BREEAM-Communities, CASBEE-UD, DGNB-NSQ, and Pearl Community in the UAE.
Although it is difficult to categorize sustainability issues definitively, as they
frequently impact all dimensions of sustainability, NSA tools have structured their
assessments based on categories. By assigning categories, NSA tools seek to provide some
clarity about the intention of each issue. Among these categories, there are similarities
as well as significant differences (Sharifi and Murayama 2014). One of the similarities
among most of the tools is the inclusion of location-related criteria in different
categories. The project site location, and how the project integrates with its
neighborhood given the neighborhood context, can have a significant impact on a range of
environmental factors, energy consumption, energy spent on residents' transportation,
local ecosystems, and the use of existing infrastructure (WBDG 2014). The emphasis on
location is significantly stronger in tools originating from North America, such as LEED,
as urban sprawl has imposed higher environmental and economic burdens there. Figure 1
lists the percentage of points assigned to project location in the NSA tools discussed in
this paper.
Among others, the requirements of location-based criteria are often complex, involve
several data sources, and take considerable time to evaluate. Since the project site
location has a significant weight in the assessment, it is important to analyze projects
in a cost-effective and accurate manner, especially during the project feasibility study
phase. As sustainability gains popularity, automation of the decision-making process can
play a key role in effective project assessments. For this reason, a GIS-based decision
support system is introduced in this paper. GIS has been applied in different areas of
urban planning since the 1960s (Yeh 1999) with a continuously increasing trend. This
increasing popularity can be attributed to the strong capture, management, manipulation,
analysis, modeling, and display capabilities of GIS, as well as to the abundant sources
of GIS urban information. This paper discusses a
Geographic Information System (GIS) based decision support system that aids


owners, developers, and other stakeholders to analyze the project site location and its
integration with the neighborhood in a sustainable manner.

Figure 1. The percentage of points assigned to criteria that are directly affected by
project location and its urban context in five different NSA tools.

The sustainability criteria used in the paper were identified based on a content analysis
performed by the authors on several NSA tools. For this paper, the criteria used for
identifying a sustainable project location follow three distinct categories: 1)
Environment and Ecology: the effect of the project on the environment and the region's
ecology; 2) Infrastructure and Transportation: the integration of the project and its
connectivity to infrastructure and the transit system; and 3) Access to Neighborhood
Assets: the accessibility of the project to neighborhood assets. Although each criterion
may belong to more than one category, in this study we exclusively categorized them in
their primary category, as shown in Table 1.

Table 1. Smart location common criteria

Environment and Ecology
  EE1: Imperiled Species Conservation
  EE2: Protection of Agricultural Lands
  EE3: Floodplain Avoidance
  EE4: Wetlands and Water Bodies Conservation

Infrastructure & Transportation
  IT1: Connectivity to Existing Water and Waste Water Infrastructure
  IT2: Connectivity to Existing Transportation Infrastructure
  IT3: Connectivity to Existing Bike Lane
  IT4: Integration within Neighborhood

Access to Neighborhood Assets
  AN1: Accessibility to Adequate Neighborhood Assets (Businesses, Recreational
  Facilities, Schools and other civic and public spaces)

GIS-BASED DECISION SUPPORT SYSTEM

The GIS-based decision support system provides a means for swiftly and accurately
identifying site parcels based on the criteria and for getting immediate insight into how
well the selected site meets the requirements of the sustainability goals. This GIS-based
decision support system for smart project location comprises three modules, namely (a) a
database module, (b) an analysis module, and (c) an interface module (Figure 2). The
database module deals with data cleaning, integration, selection, and transformation. The
analysis module performs the necessary analyses, and the interface module is responsible
for interacting with the user by getting inputs and presenting the output.

Figure 2. Data flow structure.

Database module. For identifying a smart project location, this system uses several data
layers. These were identified from publicly available databases maintained by State
and/or county websites. The relationship between these data layers and the criteria was
established prior to analysis. Figure 3 shows how these data layers are related to the
defined criteria. After all data sources are adequately retrieved, the analysis is
performed. As previously noted, LEED-ND places significant emphasis on the location of
the project with a variety of criteria. For this reason, LEED-ND's defined values were
used as the minimum required performance values for each criterion. However, the user can
define different requirements for the analysis.

Figure 3. Data layers used in analysis and their relationship with the criteria.
Analysis module. The analysis module was developed using ArcPy, a Python package that
provides access to GIS functions, particularly those available in the ArcGIS tool (ArcGIS
2012). Since each project must be analyzed based on the established criteria, a top-down
design was utilized, which essentially involved decomposing the complex problem and
solving it through different functions. The structure comprises five auxiliary functions,
nine analysis functions, and one manager function. The auxiliary functions, using user
input data, prepare and create the data necessary for performing the analysis in the main
functions, and they are executed in conjunction with the analysis functions. These
functions are: 1) transit schedule integration, 2) surface status calculator, 3)
development status calculator, 4) intersection density calculator, and 5) parcel data
extractor. The analysis functions are responsible for analyzing the site based on the
given criteria. The analysis functions are: 1) transportation function, 2) accessibility
function, analyzing the site's accessibility to neighborhood assets, 3) water and waste
water service function, analyzing the project's connectivity to water and waste water
systems, 4) agricultural land function, 5) sensitive habitats function, 6) wetlands
function, 7) water bodies function, 8) floodplains function, and 9) neighborhood
connectivity function, analyzing the intersection density in the project's vicinity. Each
function returns a "yes" or "no" value and saves the analysis results in the form of GIS
shapefiles on the local hard drive. The manager function reads the results of the nine
analysis functions and returns the final result of the analysis to the user interface
module.
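A simplified sketch of how such a manager function could aggregate the nine "yes"/"no"
results is given below. The function names and the parcel dictionary are illustrative
stand-ins, not the authors' actual ArcPy code.

# Illustrative sketch only: each analysis function is assumed to return "yes"/"no"
# after saving its shapefile output, mirroring the structure described above.

def manager(parcel, analysis_functions):
    results = {fn.__name__: fn(parcel) for fn in analysis_functions}
    overall = "yes" if all(v == "yes" for v in results.values()) else "no"
    return overall, results

def transportation(parcel):   # placeholder analysis function
    return "yes"

def floodplains(parcel):      # placeholder analysis function
    return "no"

overall, detail = manager({"address": "11th St & P St NW"}, [transportation, floodplains])
print(overall, detail)   # -> no {'transportation': 'yes', 'floodplains': 'no'}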

User interface module. The interface module is dedicated to communicating with the user
and getting user input on the project site location. As an important goal of this tool is
to require minimal user input, users are prompted for only two inputs, i.e., the type of
analysis criteria (user defined or, in this case, LEED-ND) and the project address.

CASE STUDY

For this paper, a project site located at the corner of 11th St. and P St. in Washington
D.C. is considered for analysis. The total area of this parcel is 117 square meters and
is currently recorded as residential. The goal is to check if the project site meets
sustainability criteria discussed in the previous sections.

GIS data. Table 2 lists GIS databases used for this case study. These data sources
were downloaded from District of Columbia Open Data website.

Analysis and results. The goal of this case study is to analyze the sustainability of the
project location selected by the user. This process is further discussed in the following
steps.

Step 1: User input. The introductory page prompts the user to choose whether to perform
the analysis using the pre-defined values of LEED-ND or to define their own values.
Following this, the user is prompted to enter the parcel address (Figure 4).


Table 2. GIS databases used for DC case study

Database Name: Description
OwnerPly: Parcel data of DC
Street Centerline: Street network of DC
Sidewalks: Sidewalk network
Metro Bus Stops: Transit Stops (Bus)
Metro Station Entrances: Transit Stops (Subway)
Bicycle Lanes: Bike Lanes
Public Schools: Neighborhood Assets
Grocery Store Locations: Neighborhood Assets
Pharmacy Locations: Neighborhood Assets
Bank Locations: Neighborhood Assets
Cultural Areas (Parks, zoos, gardens and cemeteries): Neighborhood Assets
Shopping Centers: Neighborhood Assets
Sidewalk Café: Neighborhood Assets
Hospital: Neighborhood Assets
Gas Stations: Neighborhood Assets
Place of Worship: Neighborhood Assets
Post Offices: Neighborhood Assets
Libraries: Neighborhood Assets
Farmers Market Locations: Neighborhood Assets
Hotel Locations: Neighborhood Assets
Police Stations: Neighborhood Assets
Fire Stations: Neighborhood Assets
DC Boundary: Water Service Area
DC Boundary: Waste Water Service Area
Parcel Lots: Agricultural Lands
Habitats: Sensitive Habitats (habitat areas from Environmental Sensitivity Index (ESI) maps)
Wetlands: Wetlands
Waterbodies: Water Bodies
Floodplains - 2010: Floodplains
Impervious Surface - 2010: Impervious Surfaces

Figure 4. User input data

Step 2: Geocoding. Using geocoding, the address is translated into a point on the map and
the corresponding parcel is identified. However, since geocoding can in some cases
involve inaccuracies, the information on the parcel is returned to the user for
confirmation. The developed area is based on the ratio of the impervious surface on the
selected site to the total area of the site. Here, the impervious surfaces consist of
buildings, swimming pools, and asphalt pavements. The percentage of development is
calculated based on LEED-ND's definition. The analysis initiates after the user's
confirmation.

Step 3: Analysis. According to the scope of this paper, the transportation module was
selected for further discussion. For analyzing connectivity to transportation
infrastructure, a network analysis was performed. The network analysis is designed to use
the street polyline shapefile as the network, the parcel as the origin, and the
bus/subway stations as the destinations. The cutoff ranges for bus and subway are
assigned as ¼ mile and ½ mile, respectively, to correspond with the LEED-ND
requirements.
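The check itself can be summarized as follows. This is a simplified stand-in using a
generic street graph built with the networkx package rather than the ArcGIS Network
Analyst workflow the authors used; the node identifiers and edge lengths are assumptions.

import networkx as nx

# Toy street network: nodes are intersections, edge attribute "length" is in miles.
G = nx.Graph()
G.add_edge("parcel", "n1", length=0.10)
G.add_edge("n1", "bus_stop_A", length=0.12)
G.add_edge("n1", "metro_entrance_B", length=0.45)

BUS_CUTOFF_MI = 0.25      # 1/4 mile, as in LEED-ND
SUBWAY_CUTOFF_MI = 0.50   # 1/2 mile

def within(graph, origin, stop, cutoff):
    try:
        return nx.shortest_path_length(graph, origin, stop, weight="length") <= cutoff
    except nx.NetworkXNoPath:
        return False

print(within(G, "parcel", "bus_stop_A", BUS_CUTOFF_MI))           # True (0.22 mi)
print(within(G, "parcel", "metro_entrance_B", SUBWAY_CUTOFF_MI))  # False (0.55 mi)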


Step 4: Results. The output of each module is either "yes" or "no." While "yes" indicates
that the selected site meets the defined criteria for that module, "no" indicates that
the project has failed to reach the minimum requirements for at least one criterion.
After the analysis is complete, the core function returns the final "yes" or "no" value,
completing the analysis. In both cases, a complete document is provided to the user
explaining the performance of the project against each criterion. Figure 5 shows a sample
of the result of the transportation function, which shows the distance from the site
location to bus stops for the studied project.

Figure 5. Snapshot of result of connectivity to public transit (bus) analysis.

Figure 6. The Network Analyst identified the eligible transit stops (in this case,
bus stop). The central point is the project site and the lines represent the route to
the eligible stops.

DISCUSSION

Conducting a typical project analysis manually may take several days to weeks, cost
thousands of dollars, and require a great effort in data collection. To alleviate this
problem, this paper proposes a GIS-based decision support system for analyzing smart
project location. Using a case study neighborhood located in Washington, DC, this paper
discusses the execution of the decision support system. The proposed system is based on
minimal user input and can greatly save time and money for developers, especially during
the feasibility study phase of a project. Additionally, this system can be used for
larger-scale decision processes by identifying all eligible sustainable locations in a
municipality or a region and producing a database of sustainable locations. This database
can be used for smarter policymaking (for example, incentivizing development in these
locations) at the regional level.


REFERENCES

ArcGIS Help 10.1. (2012). Retrieved November 10, 2014, from
http://resources.arcgis.com/en/help/main/10.1/index.html#//000v000000v7000000
Azhar, S., Carlton, W. A., Olsen, D., & Ahmad, I. (2011). Building information
modeling for sustainable design and LEED® rating analysis. Automation in
construction, 20(2), 217-224.
Berardi, U. (2013). Sustainability assessment of urban communities through rating
systems. Environment, development and sustainability, 15(6), 1573-1591.
Charter, A. (1994). Charter of European Cities & Towns Towards Sustainability. In
European Conference on Sustainable Cities and Towns, Aalborg, Denmark.
Conte, E., & Monno, V. (2012). Beyond the building-centric approach: A vision for an
integrated evaluation of sustainable buildings. Environmental Impact
Assessment Review, 34, 31-40.
Fowler, K. M., & Rauch, E. M. (2006). Sustainable building rating systems summary
(No. PNNL-15858). Pacific Northwest National Laboratory (PNNL),
Richland, WA (US).
Haapio, A., & Viitaniemi, P. (2008). A critical review of building environmental
assessment tools. Environmental impact assessment review, 28(7), 469-482.
Haapio, A. (2012). Towards sustainable urban communities. Environmental Impact
Assessment Review, 32(1), 165-169.
Sharifi, A., & Murayama, A. (2013). A critical review of seven selected
neighborhood sustainability assessment tools. Environmental Impact
Assessment Review, 38, 73-87.
Sharifi, A., & Murayama, A. (2014). Neighborhood sustainability assessment in
action: Cross-evaluation of three assessment systems and their cases from the
US, the UK, and Japan. Building and Environment, 72, 243-258.
WBDG Sustainable Committee. (2014, October 21). Optimize Site Potential.
Retrieved November 2, 2014, from
http://www.wbdg.org/design/site_potential.php
Yeh, A. G. O. (1999). Urban planning and GIS. Geographical Information Systems:
Principles, Techniques, Applications, and Management 2nd edition, Eds PA
Longley, M Goodchild, D Maguire, D Rhind (John Wiley, New York) pp,
877-888.


Thermal Comfort and Occupant Satisfaction of a Mosque in a Hot and Humid Climate

Gulben Calis1; Berna Alt1; and Merve Kuru1

1Department of Civil Engineering, Ege University, 35100 Bornova, Izmir, Turkey.
E-mail: [email protected]; [email protected]; [email protected]

Abstract

Mosques are distinguished from other types of buildings by having an intermittent
operation schedule. They are partially or fully occupied five times a day, and the
maximum occupancy is expected to occur during Friday prayers. As buildings with
intermittent occupancy may not perform the same thermally as typical commercial and
residential facilities, thermal comfort conditions and the perception of occupants have
to be investigated. This paper presents the results of a study monitoring the indoor
environmental conditions of a mosque in order to assess thermal comfort conditions. A
historic mosque, which is located in a hot and humid climatic region of Turkey, was
selected as a test building, and thermal comfort conditions were monitored during two
Friday prayers in August and September. Indoor air temperature, relative humidity, and
air velocity were collected via data loggers. The predicted mean vote (PMV) and predicted
percentage of dissatisfied (PPD) indices were calculated and evaluated using the ASHRAE
55-2010 standard. In addition, a questionnaire based on Fanger's seven-point scale was
conducted to understand the thermal sensation and preference of occupants. A comparison
is provided to highlight the difference between the calculated and perceived satisfaction
of occupants.
Keywords: Thermal comfort; PMV; PPD; Mosques

INTRODUCTION

Thermal comfort is a key factor that might affect the comfort, health, and performance of
occupants (Mendes et al., 2013). It is influenced by a range of environmental and
individual factors, both objective and subjective, including the air temperature, the
temperature of the surrounding surfaces, the air movement, the relative humidity, and the
rate of air exchange (Ormandy and Ezratty, 2012). Conventional thermal comfort theories
are generally used to make decisions, whereas recent research in the field of thermal
comfort clearly shows that important effects are not incorporated (Peeters et al., 2009).
As the conventional theories of thermal comfort are based on steady-state laboratory
experiments, they might not represent the real situation in specific types of buildings,
such as mosques and churches, which have intermittent operation schedules. A recent study
on indoor environmental conditions in mosques indicates that thermal comfort cannot be
correlated with the ISO 7730 and ASHRAE 55-2004 standards (Al-Ajmi, 2010). Moreover, in
typical buildings, occupants can adjust their clothing and activity in response to
thermal stress in their environment. However, in mosques this adjustment is possible only
to a certain extent due to predefined clothing ensembles and activities.


The investigation of indoor thermal comfort in mosques, with respect to thermal
performance, problems, and possible remedies, has received little attention from
researchers. Saeed (1996) conducted research in the dry desert region of Riyadh, Saudi
Arabia, measured thermal comfort in a mosque at Friday prayers during the hot season, and
evaluated occupant satisfaction using Fanger's model. The results indicate that occupants
attending Friday prayer would prefer a cooler climate than the one recorded in the
survey. Al-Homoud et al. (2009) monitored energy use and thermal comfort in mosques in
the hot-humid climate of the eastern region of Saudi Arabia. The results show that
relatively high energy use does not guarantee thermal comfort in mosques and that
enhancing building envelopes with insulation and changing HVAC operation strategies can
contribute to thermal comfort. Budaiwi and Abdou (2013) developed a guideline to improve
the thermal and energy performance of mosques. However, there is still a lack of research
on indoor thermal comfort in mosques.

The main objective of this study is to investigate thermal comfort conditions and
satisfaction of occupants in a naturally ventilated historic mosque in a hot-humid climate. The
following sections of the paper describe the experimental design and test site. Then, findings
and conclusions are presented.

EXPERIMENTAL DESIGN

In order to obtain quantitative data on the prevailing actual conditions, the following
data collection methods were used: (1) a physical measurement of certain parameters that
influence the thermal comfort conditions and (2) a questionnaire as the subjective
measurement.

Field measurements of the indoor environmental parameters

Measurements were taken every minute at a height of 1.1 m from the ground level as
advised in the prescriptions of the ASHRAE Standard 55-2010 (ASHRAE, 2010). Indoor air
temperature (Ta), relative humidity (RH) and air velocity were measured via the TESTO
Thermo-Anemometer Model 435-2. All equipment was calibrated before each experiment to
ensure reliability and accuracy in the readings recorded during the field studies. The main
characteristics of the measurement system employed in this work are shown in Table 1.

Table 1. Main characteristics of the measurement system

Parameter           Operation range   Accuracy
Temperature         -20 to 50 °C      ±0.3 °C
Relative humidity   0-100 %           ±2 %
Air velocity        0-20 m/s          ±0.03 m/s, +2% of reading

Data was collected for an hour at 5-minute intervals. These parameters were then used to
calculate the PMV and PPD indices in accordance with Fanger's model. The PMV and PPD
indices based on Fanger's model are widely used to understand occupant perception and
satisfaction in buildings. Fanger (1970) defined the PMV as the index that predicts, or
represents, the mean thermal sensation vote on a standard scale for a large group of
persons for any given combination of the thermal environmental variables, activity, and
clothing levels. The index provides a score that corresponds to the seven-point ASHRAE
thermal sensation scale, which is presented in Table 2. The PMV should be kept at zero
with a tolerance

of ±0.5 scale units in order to ensure a comfortable indoor environment according to the
international standards. The PPD is an index that predicts the percentage of thermally
dissatisfied people and is expressed as a function of the PMV by Fanger. The functional
relationship between the PMV and PPD indices is illustrated in Figure 1. As can be seen
from the figure, 5% of the occupants are still dissatisfied at the PMV neutral point (0).

Table 2. Seven-point thermal sensation scale

Scale: -3 (Cold), -2 (Cool), -1 (Slightly cool), 0 (Neutral), +1 (Slightly warm),
+2 (Warm), +3 (Hot)

Figure 1. The relationship between the PMV and PPD indices

The PMV model has been validated by the majority of studies as an accurate predictor of
occupant perception in different climatic conditions (Fanger and Toftum, 2002). Fanger's
PMV model is based on a theoretical analysis of human heat exchange from steady-state
laboratory experiments in Northern Europe and America (Humphreys and Nicol, 2002). The
mean radiant temperature (Tr) used in the PMV calculations was estimated as a function of
the measured indoor air temperature, using the regression model proposed by Nagano (2004)
shown in Equation (1), with a determination factor of 0.99.

Tr = 0.99 × Ta - 0.01,  R² = 0.99     (1)

The operative temperature (To) was determined from the measured indoor air temperature
(Ta) and the mean radiant temperature (Tr), as seen in Equation (2).

To = A × Ta + (1 - A) × Tr     (2)

where the weighting factor (A) depends on the air velocity (w), as follows:

A = 0.5 for w < 0.2 m/s
A = 0.6 for 0.2 < w < 0.6 m/s
A = 0.7 for 0.6 < w < 1 m/s


Subjective measurements

The subjective study involved collecting data via questionnaires, which were prepared in
accordance with the ASHRAE 55-2010 standard. The questionnaire was developed to gauge how
participants felt towards their thermal environment, and participants were asked to rate
their thermal sensation on the ASHRAE seven-point scale. The questionnaires were
distributed after each physical measurement, and the subjects were required to make only
one choice from the scale for each question. The questionnaire responses were used to
gather the quantified thermal sensation of the occupants (from -3 to +3) and, thus, to
calculate the Actual Mean Vote (AMV) of the participants. The AMV and PMV indices were
then compared to understand the difference between perceived and actual indoor thermal
conditions. Moreover, participants' clothing insulation values (clo) were determined via
the questionnaire. A total of 95 responses were collected, and all of the responses were
included in the analysis.
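For clarity, the AMV used here is simply the arithmetic mean of the individual sensation
votes; the vote list in the sketch below is hypothetical and only illustrates the
calculation.

from statistics import mean

# Hypothetical votes on the ASHRAE seven-point scale (-3 to +3)
votes = [-2, -1, -1, 0, -1, 0, -2, 1, -1, 0]
amv = mean(votes)
print(round(amv, 2))   # a negative AMV corresponds to a sensation on the cool side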

TEST SITE DESCRIPTION

Physical and subjective measurements were carried out in a historical mosque, which was
constructed in 1907. The mosque is naturally ventilated and has recently undergone
retrofitting, and it is thus expected to maintain acceptable thermal comfort conditions.
In order to analyze the effectiveness of the retrofitting in terms of thermal comfort,
this mosque was selected as a test site.

The mosque is located in Izmir (38°N, 27°E), which is on the Aegean Sea coast. The
climate is hot and humid during the cooling season, whereas the heating season is mild
with high rainfall levels. The study was conducted during August and September,
characterized by mean maximum temperatures of 33°C and 29°C, respectively. Average
monthly temperatures and relative humidity values are shown in Figure 2.

Figure 2. Outdoor mean monthly temperature (°C) and relative humidity (%)

A total of two field tests were conducted in the main worship hall, which has a floor
area of 178 m2 and a mean height of 21.5 m. The floor plan and the measurement point are
presented in Figure 3.


Figure 3. Floor plan of the mosque and measurement point

FINDINGS

Analysis of participants’ profiles

Forty-five and fifty occupants participated in the study during August and September,
respectively. The participants were all male, with the largest share (28%) in the age
range of 36-45. The percentages of participants in the age ranges of 18-25, 26-35, 46-55,
56-65, and 65 and over were 25%, 17%, 14%, 8%, and 7%, respectively.

All participants performed light activities for 15 minutes, per prayer requirements, and
were seated for the rest of the monitoring period. Accordingly, the metabolic rate was
determined as 1.2 met per ASHRAE 55-2010. The total clothing insulation value was
calculated using a garment checklist for each participant. The clothing ensembles were
similar to each other and consisted of underwear, a long-sleeve shirt, thin trousers, and
socks. Accordingly, the total clothing insulation value was calculated as 0.53 clo.

Analysis of indoor environmental conditions

Table 3 presents the statistical summaries of the indoor measurements in August and
September. Indoor air temperature values ranged between 32.0 and 33.2 °C, with a mean
value of 32.8 °C and a standard deviation (STD) of 0.39, in August, whilst indoor air
temperature values ranged between 27.4 and 29.0 °C, with a mean value of 28.2 °C and an
STD of 0.54, in September. The STD values indicate that indoor air temperatures were
relatively stable in August. The same situation was observed with respect to relative
humidity values: the STDs for relative humidity were 0.35 and 3.86 in August and
September, respectively. The only exception was air velocity, which ranged from 0.22 to
1.32 m/s (STD of 0.35) in August and from 0.03 to 0.58 m/s (STD of 0.16) in September.


Table 3. Descriptive statistics of indoor environmental conditions


Measurement August September
Indoor temperature (°C)
Mean 32.8 28.2
STD 0.39 0.54
Min 32.0 27.4
Max 33.2 29.0
Relative humidity (%)
Mean 40.1 37.4
STD 0.35 3.86
Min 37.4 30.7
Max 42.8 41.9
Air velocity (m/s)
Mean 0.77 0.17
STD 0.35 0.16
Min 0.22 0.03
Max 1.32 0.58
Mean radiant temperature (°C)
Mean 32.4 27.9
STD 0.39 0.53
Min 31.7 27.1
Max 32.9 28.7
Operative temperature (°C)
Mean 32.7 28.1
STD 0.39 0.53
Min 31.9 27.3
Max 33.1 28.9

Analysis of participants’ thermal sensation

Figure 4 shows the thermal sensation of participants at the end of the prayer in August
and September. In August, 7% and 2% of respondents stated that they felt warm (+2) and
hot (+3), respectively. Only 11% indicated that they felt neutral (0). Although the
indoor air temperature (with a mean value of 32.8 °C) was high, 36% of the participants
indicated that they felt slightly cool (-1), whilst 29% felt cool (-2). This might be due
to the fact that the participants opened all windows to let air move in, and thus the air
velocity in the indoor environment was higher (with a mean value of 0.80 m/s) in August
compared to September (with a mean value of 0.15 m/s). Moreover, it was observed that
occupants preferred to sit in front of the windows during data collection in August. The
actual mean vote (AMV) of the participants was found to be -0.58, which corresponds to
slightly cool. In September, most of the participants indicated that they felt either
slightly cool (44%) or neutral (30%), and 16% of respondents stated that they felt cool
(-2). None of the participants felt hot or cold. The AMV of the participants was found to
be -0.62, which corresponds to slightly cool.


Figure 4. Distribution of thermal sensation votes

Analysis of participants' thermal preference

Figure 5 presents the thermal preference of respondents in August and September. As can
be seen, 58% of respondents did not need any change in their environment according to the
surveys conducted in August. Similarly, 80% of participants wanted no change in indoor
conditions in September.

Figure 5. Distribution of thermal preference votes

Analysis of the PMV and PPD indices

In August, the PMV was calculated as 1.04 (slightly warm), which corresponds to a PPD
equal to 28%. In September, the PMV was calculated as 0.74 (slightly warm), which
corresponds to a PPD equal to 16%. According to the ASHRAE 55-2010 standard, the thermal
acceptability of indoor environments is considered to be within PMV limits of -0.5 to
+0.5, which corresponds to at least 90% of occupants being satisfied. The PPD and PMV
indices show that thermal comfort conditions were not within the limits defined in the
standard and, thus, were not acceptable. Figure 6 illustrates the acceptable ranges of
operative temperature and relative humidity for the mosque in August and September. The
shaded area shows the comfort zone boundaries where the PMV is between -0.5 and +0.5
according to the ASHRAE 55 standard, and the red circles symbolize the environmental
conditions according to the measurements. As can be seen, the comfort zones for August
and September differ with respect to operative temperatures. Operative temperatures can
be considered to be 31.0 ±2 °C and 27.0 ±2 °C for August and September, respectively.


Figure 6. Psychrometric chart for (a) August and (b) September

CONCLUSIONS

The main objective of this study was to investigate the thermal comfort conditions and
occupant thermal sensations in a naturally ventilated historic mosque in a hot and humid
climate. Indoor environmental conditions, including indoor air temperature, relative
humidity, and air velocity, were monitored and collected via data loggers during two
Friday prayers in August and September 2014. A total of 95 occupants were surveyed to
understand their thermal sensation and preference. The PMV and PPD indices were
calculated and compared to the perceived satisfaction of the participants, which was
expressed as the AMV.

The results show that 58% of occupants preferred an environment "without change" in
August, against a total of 38% of voters who found the environment either "cool", "warm",
or "hot". The PMV value indicated that occupants found the environment slightly warm
(PMV = 1.04), whereas the AMV was -0.58, which corresponds to slightly cool. In
September, the PMV was calculated as 0.74 (slightly warm), whereas the AMV was -0.62,
which also corresponds to slightly cool. 76% of occupants preferred an environment
"without change" in September, against a total of 20% of voters who found the environment
either "cool" or "warm". The difference between the calculated and perceived satisfaction
shows that the standards might not represent the real situation in specific types of
buildings (i.e., mosques) that have intermittent operation schedules. Besides, people
might tolerate and/or need higher indoor temperatures in hot and humid climates.
Consequently, the operative temperatures were found to be 31.0 ±2 °C and 27.0 ±2 °C for
August and September, respectively. Future studies focusing on the perception of
occupants during the heating season are necessary to support this conclusion.

REFERENCES

Al-Ajmi, F.F. (2010) "Thermal comfort in air-conditioned mosques in the dry desert
climate" Building and Environment, 45(11), 2407-2413.


Al-Homoud, M.S., Abdou, A.A., Budaiwi, I.M. (2009) “Assessment of monitored


energy use and thermal comfort conditions in mosques in hot-humid climates”
Energy and Buildings, 41(6), 607-614.
ANSI/ASHRAE Standard 55-2010 (2010) ”Thermal environmental conditions for
human occupancy” ASHRAE, Atlanta, GA, USA.
Fanger, P. O. (1970) "Thermal comfort", Danish Technical Press, Copenhagen.
Fanger, P.O. and Toftum, J. (2002) “Extension of the PMV model to non-air-
conditioned buildings in warm climates”. Energy and Buildings, 34(1), 533-536.
Humphreys, M.A., and Nicol, J.F. (2002) "The validity of ISO-PMV for predicting comfort
votes in every-day thermal environments". Energy and Buildings, 34(6), 667-684.
Mendes, A., Pereira, C., Mendes, D., Aguiar, L., Neves, P., Silvia, S., Batterman, S.,
and Paulo, J. (2013) “Indoor air quality and thermal comfort-Results of a pilot
study in elderly care centers in Portugal” Journal of Toxicology and
Environmental Health, 76 (4-5), 333-344.
Nagano K., Mochida T. (2004) “Experiments on thermal design of ceiling radiant
cooling for supine human subjects” Building and Environment, 39 (3), 267-275.
Ormandy, D., and Ezratty, V. (2012). "Health and thermal comfort: From WHO guidance to
housing strategies" Energy Policy, October 2012, 116-121.
Peeters, L.F.R., Dear, R. de, Hensen, J.L.M., and D'Haeseleer, W. (2009) “Thermal
comfort in residential buildings: Comfort values and scales for building energy
simulation” Applied Energy, 86(5), 772-780.
Saeed R.A.S. (1996) “Thermal comfort requirements in hot dry region with special
reference to Riyadh: part 1: for Friday prayers”. International Journal of
Ambient Energy, 17(1): 17-21.
Simons, B., Koranteng, C., Adinyira, E., Ayarkwa, J. (2014) “An Assessment of
Thermal Comfort in Multi Storey Office Buildings in Ghana” Journal of
Building Construction and Planning Research, 2, 30-38


Threshold-Based Approach to Detect Near-Miss Falls of Iron-Workers


Using Inertial Measurement Units

K.Yang1, H. Jebelli2, C. R. Ahn3 and M.C. Vuran4


1
Ph.D. student, Construction Engineering and Management, Charles Durham School
of Architectural Engineering and Construction, University of Nebraska-Lincoln, W113
Nebraska Hall, Lincoln, NE 68588; PH (402) 472-5631; email:
[email protected]
2
M.S. student, Construction Engineering and Management, Charles Durham School of
Architectural Engineering and Construction, University of Nebraska-Lincoln, W113
Nebraska Hall, Lincoln, NE 68588; PH (402) 472-5631; email:
[email protected]
3
Assistant Professor, Construction Engineering and Management, Charles Durham
School of Architectural Engineering and Construction, University of Nebraska-Lincoln,
W113 Nebraska Hall, Lincoln, NE 68588; PH (402) 472-7431; email: [email protected]
4
Associate Professor, Department of Computer Science and Engineering, University
of Nebraska–Lincoln, 107 Schorr Center, Lincoln, NE 68588; PH (402) 472-5019;
email: [email protected]

ABSTRACT

Falls are the single most dangerous type of safety accident within the construction
industry, representing 33% of all fatalities in construction. Numerous unrecognized
near-miss falls exist behind every major fall accident. The detection of near-miss fall
occurrences therefore helps to identify fall-prone workers and tasks and invisible
jobsite hazards, and can thereby prevent fall accidents. This paper presents and
evaluates the feasibility of a threshold-based approach for detecting the near-miss falls
of construction iron-workers. Kinematic data are collected through an IMU sensor attached
to the subjects' sacrum while the subjects walk on a steel beam structure. Fall-related
features, the sum vector magnitude (SVM) and the normalized signal magnitude area (SMA),
are used to detect near-miss falls. Threshold values for these features are defined to
achieve the best accuracy in near-miss fall detection based upon the experiment data.
Using the selected threshold values, iron-workers' near-miss falls were detected. The
results of this research demonstrate the opportunity of utilizing the SVM and SMA in
documenting workers' near-miss fall incidents in real time.

INTRODUCTION


In the construction industry, fall accidents are one of the leading causes of
occupational fatalities, representing 33% of all fatalities in construction. In order to
reduce the number of falls, numerous studies have been conducted focusing on the causes
of fall accidents for related occupations in construction. Among construction trades,
iron-workers are exposed to the highest lifetime risk of fatal injuries (CPWR 2013).
According to Beavers et al. (2009), during 2000 to 2005, almost 75% of iron-worker
fatalities were caused by falls. Iron-workers often work at great heights above the
ground and use heavy steel materials and equipment for steel erection. In addition, the
working surface of iron-work is narrow compared to other trades, due to the small widths
of steel beams. The selection of fall prevention measures for iron-workers (e.g., safety
harnesses and personal fall arrest systems) is thus limited, due to the open edges of
their workplace. In this context, additional and adequate fall protection measures are
necessary in order to reduce the number and risk of fall accidents.
One promising accident prevention technique is to detect a leading indicator that
generally occurs before an accident. According to Bird and Germain (1966), numerous
near-misses exist behind every major accident. In general, a near-miss is considered an
event in which no damage or loss occurs at the time. However, near-misses can develop
into accidents under slightly different conditions. Thus, near-miss detection approaches
have been widely applied in the chemical process, airline, and nuclear industries to
reduce the likelihood of accidents occurring (Phimister et al. 2003). As in these
industries, the likelihood of a fall accident may be reduced through the detection of
near-miss falls. In this context, the detection of near-miss fall occurrences helps to
identify fall-prone workers and tasks and invisible jobsite hazards, and can thereby
prevent fall accidents.
Regarding iron-workers' near-miss fall detection, our research group previously used
Inertial Measurement Units (IMUs) and a machine learning algorithm (i.e., a one-class
Support Vector Machine) to detect the near-miss falls of iron-workers (Yang et al. 2014).
That algorithm necessitates a post-processing stage and a high computational cost for
near-miss fall detection. However, the real-time detection of near-miss falls requires
the development of online algorithms that allow workers' near-miss falls to be detected
on-board, without the need to send raw data for off-line analysis. In this context, this
research investigated the feasibility of a threshold-based approach to detect the
near-miss falls of iron-workers, which can be implemented as an online algorithm usable
on a wearable sensor unit with limited on-board processing capabilities.

RESEARCH BACKGROUND

The loss of balance (LOB) is one of the major proximal causes of fall accidents for
iron-workers, together with a "lack of fall protection" and "fall protection not secured"
(Beavers et al. 2009). LOB itself is also closely related to fall accidents, which
generally occur when a worker loses balance. In this context, many studies focus on
detecting fall accidents by monitoring a subject's kinematic data using body-attached
sensors such as an accelerometer, a gyroscope, or both (Kangas et al. 2008; Lai et al.
2011). An accelerometer collects the acceleration of the subject's body movement along
three axes, and a gyroscope measures the orientation of the body through three

different angles. In the biomedical research area, the detection of fall accidents during
Activities of Daily Living (ADL) is widely studied to protect elderly and disabled
individuals from unrecognized fall accidents. Most of this research focuses on detecting
actual fall accidents rather than near-miss falls. Only a few studies have sought the
detection of near-miss falls, called "near falls" or "fall portents" in other research
(Dzeng et al. 2014; Weiss et al. 2010).
In one near-miss fall detection study, Weiss et al. (2010) used a body-attached tri-axial
accelerometer to detect near-falls during treadmill walking and tested two features, the
Sum Vector Magnitude (SVM) and the normalized Signal Magnitude Area (SMA), together with
other derived features, for their usefulness in detecting near-falls. That research
utilized threshold-based classifiers, which designate near-falls when feature values
exceed a certain threshold. Dzeng et al. (2014) applied a smart-phone's embedded
accelerometer and gyroscope to detect falls and fall portents during construction tiling
work; their study also used the SVM as a fall-detection feature. While these studies
demonstrated the feasibility of threshold-based detection of near-miss falls using the
SVM and SMA, their feasibility in the constrained working environment of iron-workers
(e.g., the narrow width of I-beams) needs to be further investigated.

RESEARCH OBJECTIVE & METHOD

The objective of this research is to evaluate the feasibility of using a threshold-based
classifier to detect near-miss falls during iron-workers' working movements. The
threshold-based approach will simplify the detection process and reduce the
computational cost compared to previous approaches based on machine learning
algorithms. In particular, this research tested the feasibility of two features, normalized
Signal Magnitude Area (SMA) and Sum Vector Magnitude (SVM), both of which have
been widely used in previous research on fall detection (Dzeng et al. 2014; Kangas et
al. 2008; Lai et al. 2011; Weiss et al. 2010).
SMA = (1/t) Σi [ |Ax(i)| + |Ay(i)| + |Az(i)| ]

SVM = √( Ax² + Ay² + Az² )

where Ax, Ay, and Az are the accelerations (m/s²) along the mediolateral,
anterior-posterior, and vertical axes, and t is the duration of the data window over
which the sum is taken. In previous research, the SVM measured the intensity of movement
and was widely used in fall detection (Bersch et al. 2014), while the SMA metric was
applied to distinguish static and dynamic activities (Pannurat et al., 2014). Thus, this
research applied the SMA and SVM metrics as features for detecting iron-workers'
near-miss falls while walking on a steel beam structure. Threshold values for the SMA and
SVM metrics were determined from the labeled experiment data. Near-miss falls were then
distinguished from walking using these SMA and SVM threshold values. In this research,
all of the computations were conducted using a custom-made MATLAB (R2014a, Mathworks)
program.
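A NumPy sketch of these two features over one 0.5-s window is shown below. The authors'
actual implementation was a custom MATLAB program; this Python version is only
illustrative, and the random sample data are placeholders.

import numpy as np

def window_features(ax, ay, az, fs=32.0):
    """Compute the normalized SMA and per-sample SVM for one window of accelerations."""
    ax, ay, az = np.asarray(ax), np.asarray(ay), np.asarray(az)
    t = len(ax) / fs                      # window duration in seconds
    # Normalized signal magnitude area over the window
    sma = (np.sum(np.abs(ax)) + np.sum(np.abs(ay)) + np.sum(np.abs(az))) / t
    # Sum vector magnitude for each sample; the window maximum can be used for thresholding
    svm = np.sqrt(ax**2 + ay**2 + az**2)
    return sma, svm.max()

# 16 samples = 0.5 s at the 32-Hz resampled rate used in the paper
rng = np.random.default_rng(0)
ax, ay, az = rng.normal(0, 0.5, 16), rng.normal(0, 0.5, 16), rng.normal(0, 0.5, 16)
print(window_features(ax, ay, az))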

DATA COLLECTION


In order to collect data, this research conducted laboratory experiments in which
subjects walked on an elevated steel beam structure (see Figure 1). Two healthy subjects
walked on this structure with a safety harness, safety helmet, and safety boots, all of
which are typical safety equipment for iron-workers. With this safety equipment, each
experiment subject performed continuous walking for 10 minutes. Subjects were asked to
maintain a certain walking speed (i.e., 30 seconds per lap) to prevent extra body
acceleration from inconsistent walking speeds during the experiment. The subjects'
kinematic data were recorded using an IMU sensor (Shimmer 2R, Shimmer) with a 51.2-Hz
sampling frequency that was attached to the subject's sacrum (bottom of the spine). The
kinematic data were streamed to a laptop wirelessly using Bluetooth and the support
software from the IMU sensor manufacturer.
Figure 1. Steel beam structure for the laboratory experiment (beam segments of 4 m
length / 0.1 m width and 2 m length / 0.05 m width), subject walking with the IMU
sensor, and example safety equipment

All experiment processes were documented through video (29.97 fps) and used
as reference data for near-miss fall labeling. Before further processing, because the IMU
sampling frequency (51.2 Hz) is not an integer value, the video frame rate and the IMU data
frequency were converted to 18.73 fps and 32 Hz, respectively, to prevent a loss of data
in processing. Labeling a near-miss fall is one of the most important steps in near-miss
fall detection research; in this research, the experiment organizer labeled near-miss falls
based upon the recorded video data for each 0.5-s data window. A near-miss fall label
corresponded to moments when an experiment subject met one of the following
conditions: 1) could not maintain walking speed due to losing balance with body
sway or arm swing, or 2) showed obvious body sway or swing motion regardless of walking
speed. During the experiment, actual falls, near-miss falls, and walking and turning
movements were also documented.

DATA PROCESSING AND ANALYSIS

Each data window (i.e., 0.5 s) included a 50% overlap to increase the robustness
of the dataset for near-miss detection. Based on the labeling process, 165 near-miss
falls were identified from the 1,925 total data samples. Before computing the SVM and
SMA using accelerometer data, this research normalized raw acceleration data, which
has an offset due to the gravity of the Earth and the orientation of the accelerometer.
Instead of using a complex orientation computation method, this research used


Yuwono et al.’s (2012) method and subtracted the initial values of the acceleration data (t = 0)
from every acceleration sample. Also, in order to reduce the noise of the IMU data, this
research applied a 3rd-order Butterworth IIR low-pass filter with a 4-Hz cut-off frequency
to the acceleration data, because most of the signal energy of human activity is located
below 3 Hz. Then this research computed the SMA, the SVM, and the mean value of the angular
velocity in the vertical plane, as illustrated in Figure 2.
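As a minimal sketch of this preprocessing step (offset removal followed by low-pass
filtering), the following Python snippet uses SciPy; the filter parameters follow the
paper, while the variable names and zero-phase application are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_acceleration(acc, fs=32.0, cutoff=4.0, order=3):
    """Remove the static offset and low-pass filter tri-axial acceleration.

    acc : ndarray (n_samples, 3) of raw accelerations (m/s^2)
    fs  : sampling frequency (Hz); cutoff : filter cut-off frequency (Hz)
    """
    # Offset removal (after Yuwono et al. 2012): subtract the initial sample,
    # which approximately removes the gravity/orientation bias
    acc_zeroed = acc - acc[0, :]
    # 3rd-order Butterworth IIR low-pass filter at 4 Hz; filtfilt applies it
    # zero-phase (an implementation choice; a causal lfilter could be used instead)
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, acc_zeroed, axis=0)
```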
Since turning motions are included in this dataset, the mean value of the angular
velocity in the vertical plane was computed as a feature to identify turning motions.
Turning motions of subjects during their walking tasks on the rectangular-shaped
structure created higher angular velocity in the vertical plane (in the yaw direction) as
compared to the other classes (Figure 2, left). The rule to detect turning motions can be
simply defined based on the mean angular velocity in the vertical plane. The threshold
value of angular velocity (TAV, 29.4 degrees/sec) was then determined so as to
minimize the number of misclassified near-miss fall and walking data.

Figure 2. Angular velocity (Left), SMA (Middle), and SVM (Right) values for
different movements

In addition, this study found a significant difference in the distributions of SMA


and SVM values between walking, near-miss falls, and actual falls. As expected, actual
falls generated extremely high values of SMA and SVM, and near-miss falls produced
higher SMA and SVM values when compared to the stable walking class. However,
this study found some overlaps of SMA and SVM values among different classes. For
example, the data reveal that turning motions have a similar range of SMA and SVM
values, even though turning created considerable perturbations in subjects’ walking motions.
The overlaps of SMA and SVM distributions between walking and near-miss falls were
also determined. In particular, several near-miss incidents created very low levels of
SMA and SVM.
To detect near-miss falls, optimum threshold values of SMA and SVM were
then identified from the analysis of a Receiver Operating Characteristic (ROC) curve
of a binary classifier between near-miss fall and walking classes (see Figure 3). This
study did not include the detection of actual falls in this approach since the number of
actual fall samples in the dataset was too small. Through this process, the threshold value for
SMA (TSMA) was found to be 1.95 m/s and the threshold value for SVM (TSVM) was found
to be 1.3 m/s2 for the overall dataset of both subjects.
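The threshold selection from an ROC analysis can be sketched as follows; this is an
illustrative Python snippet using scikit-learn, and the criterion of maximizing recall
minus false-positive rate (Youden's J) is an assumption about one reasonable way to
pick the operating point rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.metrics import roc_curve

def pick_threshold(labels, scores):
    """Choose a feature threshold from the ROC curve of a binary classifier
    (near-miss fall = 1, walking = 0).

    labels : 0/1 ground-truth labels per window
    scores : the feature value (e.g., SMA or SVM) per window
    """
    fpr, tpr, thresholds = roc_curve(labels, scores)
    best = np.argmax(tpr - fpr)      # maximize TPR - FPR (Youden's J statistic)
    return thresholds[best]
```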


Figure 3. ROC curves of near-miss fall detection using SMA (left) and SVM
(right)

NEAR-MISS FALL DETECTION RESULTS

The overall rule-based approach to detect near-miss falls in these experiments
is presented in Figure 4. This rule-based approach using SMA achieved 74.9%
precision and 89.6% recall in near-miss fall detection (see Table 1). Also, the SVM
threshold showed 68.7% precision and 89% recall. The SMA-based classifier
showed slightly better performance in detecting near-miss falls, and it also had a lower
false alarm rate than SVM. In addition, turning movements were classified through the
angular velocity of the vertical axis with 98.7% precision and 86.7% recall.

Figure 4. The near-miss fall detection process
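To make the decision logic summarized in Figure 4 concrete, the following is a minimal
Python sketch: turning is screened first with the angular-velocity threshold, and then
near-miss falls are flagged when the SMA exceeds its threshold. The exact ordering of
checks and the window representation are assumptions, while the threshold values are
taken from the paper.

```python
def classify_window(mean_yaw_rate, sma, t_av=29.4, t_sma=1.95):
    """Rule-based classification of one 0.5-s window.

    mean_yaw_rate : mean angular velocity in the vertical plane (deg/s)
    sma           : normalized Signal Magnitude Area of the window
    t_av, t_sma   : threshold values reported in the paper
    """
    if abs(mean_yaw_rate) > t_av:   # high yaw rate -> turning motion
        return "turning"
    if sma > t_sma:                 # elevated SMA -> near-miss fall
        return "near-miss fall"
    return "normal walking"

print(classify_window(mean_yaw_rate=5.0, sma=2.3))   # -> near-miss fall
```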

Also, this study found that the variability of the threshold values of SMA and
SVM between subjects is small. When finding the threshold value for the dataset
from each subject, similar threshold values appeared in both SMA (subject 1: 1.95 and
subject 2: 1.99) and SVM (subject 1: 1.31 and subject 2: 1.52). This consistency indicates that


the threshold values to detect near-miss falls can be uniformly defined across different
subjects. However, additional tests, including subjects with diverse physiological
characteristics, are necessary to confirm the generality of the threshold values. In
addition, this study found that turning motions could be one of the causes of
near-miss falls in the experiments; however, the suggested rule-based approach may
not be capable of detecting near-miss falls during such turning motions. Moreover, an
appropriate definition of near-miss falls is important to increase the predictor’s
accuracy. During the experiment, a small number of ambiguous motions (e.g., changes of
speed, subjects’ individual differences in reaction when losing balance) occurred. This
ambiguity impacted the labelling process and decreased the overall accuracy of near-
miss fall detection. This situation may indicate the need to break down near-miss falls
into several different types (e.g., body sway, slowing walking speed).

Table 1. Near-miss Fall Detection Result in Confusion Matrix (SMA)

                            Near-miss fall    Normal Walking    Turning
                            (Predicted)       (Predicted)       (Predicted)
Near-miss fall (Actual)     161 (89.6%)       17 (9.3%)         2 (1.1%)
Normal Walking (Actual)     39 (2.8%)         1335 (97.1%)      2 (0.1%)
Turning (Actual)            15 (4.1%)         34 (9.2%)         320 (86.7%)

CONCLUSION AND DISCUSSION

This study examined the feasibility of the threshold-based approach to detect
near-miss falls in order to facilitate the development of online classifiers. The results
showed 94% accuracy (= (TP+TN) / (TP+FP+FN+TN)) using SMA and 93%
accuracy using SVM. This study confirms the usefulness of the two selected features in
detecting near-miss falls in iron-workers’ working environment, and it opens the
opportunity for developing online classifiers with these two features. However, this
research identified near-miss falls from a limited set of motions during controlled
experiments. Further study will be necessary to investigate and validate the detection
of near-miss falls from diverse postures and movements that construction workers have
in their daily operations.
Detection of near-miss falls will offer numerous benefits for the prevention of
potential fall accidents. It provides an opportunity to address and eliminate both
extrinsic and intrinsic hazard conditions of fall accidents. In particular, the near-miss fall
detection approach can be used to quantify individualized fall accident risk or
likelihood based upon the number of near-miss falls discerned
through IMU sensors attached to an individual iron-worker. Based upon the derived fall
accident risk and likelihood, individual workers can assess their working behavior and
potential risk of falling. Also, construction managers can use this individually quantified
near-miss data in their safety activities. Moreover, with the locational information of near-
miss occurrences that can be gleaned by such research, unrecognized jobsite hazards
or fall-inducing locations on a construction site can be detected and mitigated before a
fall accident occurs.

REFERENCES
Beavers, J., Moore, J., and Schriver, W. (2009). “Steel Erection Fatalities in the
Construction Industry.” Journal of Construction Engineering and Management,
135(3), 227–234.
Bersch, S. D., Azzi, D., Khusainov, R., Achumba, I. E., and Ries, J. (2014). “Sensor
Data Acquisition and Processing Parameters for Human Activity Classification.”
Sensors (Basel, Switzerland), 14(3), 4239–4270.
Bird, F. E., and Germain, G. L. (1966). Damage Control. New York: American
Management Assoc.
CPWR. (2013). The Construction Chart Book. The Center for Construction Research
and Training.
Dzeng, R. J., Fang, Y. C., and Chen, I. C. (2014). “A feasibility study of using
smartphone built-in accelerometers to detect fall portents.” Automation in
Construction, 38, 74–86.
Kangas, M., Konttila, A., Lindgren, P., Winblad, I., and Jämsä, T. (2008). “Comparison
of low-complexity fall detection algorithms for body attached accelerometers.”
Gait & Posture, 28(2), 285–291.
Lai, C. F., Chang, S. Y., Chao, H. C., and Huang, Y. M. (2011). “Detection of Cognitive
Injured Body Region Using Multiple Triaxial Accelerometers for Elderly
Falling.” IEEE Sensors Journal, 11(3), 763–770.
Pannurat, N., Thiemjarus, S., and Nantajeewarawat, E. (2014). “Automatic Fall
Monitoring: A Review.” Sensors, 14(7), 12900–12936.
Phimister, J. R., Oktem, U., Kleindorfer, P. R., and Kunreuther, H. (2003). “Near-Miss
Incident Management in the Chemical Process Industry.” Risk Analysis, 23(3),
445–459.
Weiss, A., Shimkin, I., Giladi, N., and Hausdorff, J. M. (2010). “Automated detection
of near falls: algorithm development and preliminary results.” BMC Research
Notes, 3(1), 62.
Yang, K., Aria, S., Stentz, T. L., and Ahn, C. R. (2014). “Automated Detection of Near-
miss Fall Incidents in Iron Workers Using Inertial Measurement Units.”
Construction Research Congress 2014, American Society of Civil Engineers,
935–944.
Yuwono, M., Moulton, B. D., Su, S. W., Celler, B. G., and Nguyen, H. T. (2012).
“Unsupervised machine-learning method for improving the performance of
ambulatory fall-detection systems.” BioMedical Engineering OnLine, 11(1), 9.


A Framework for Model-Driven Acquisition and Analytics of Visual Data Using


UAVs for Automated Construction Progress Monitoring

Jacob J. Lin1; Kevin K. Han2; and Mani Golparvar-Fard3


1
Ph.D. Student, Dept. of Civil and Env. Engineering, Univ. of Illinois at Urbana-
Champaign, 205 N Mathews Ave., Urbana, IL 61801. E-mail: [email protected]
2
Ph.D. Candidate, Dept. of Civil and Env. Engineering and MS Student, Dept. of
Computer Science, Univ. of Illinois at Urbana-Champaign, 205 N Mathews Ave.,
Urbana, IL 61801. E-mail: [email protected]
3
Assistant Professor, Dept. of Civil and Env. Engineering and Dept. of Computer
Science, University of Illinois at Urbana-Champaign, 205 N Mathews Ave., Urbana,
IL 61801. E-mail: [email protected]

Abstract

Automated assessment of work-in-progress using large collections of site


images and 4D BIM has potential to significantly improve the efficiency of
construction project controls. Nevertheless, today’s manual procedures for taking site
photos do not support the desired frequency or completeness for automated progress
monitoring. While the usage of Unmanned Aerial Vehicles for acquisition of site
images has gained popularity, their application for addressing issues associated with
image-based progress monitoring and particularly leveraging 4D BIM for steering the
data collection process has not been investigated before. By presenting examples
from two case studies conducted on real-world construction projects, this paper
suggests a framework for model-driven acquisition and analytics of progress images.
In particular, the potential of spatial (geometry, appearance, and interconnectivity)
and temporal information in 4D BIM for autonomous data acquisition and analytics
that guarantees completeness and accuracy for both as-built modeling and monitoring
work-in-progress at the schedule task-level is discussed.

INTRODUCTION

Over the past few years, the availability of inexpensive point-and-shoot, time-
lapse, and smartphone cameras have significantly increased the number of images
that are being taken on construction sites on a daily basis (Bae et al. 2014; Han and
Golparvar-Fard 2014a). The application of these large collections of site images
together with 4D Building Information Models (BIM) for monitoring state of work-
in-progress (WIP), creates an unprecedented opportunity for developing workflows
that can systematically record, analyze, and communicate progress deviations. To do
so, construction firms assign field engineers to filter, annotate, organize, and present


data for project coordination purposes. However, the cost and complexity of
collecting, analyzing, and reporting performance deviations result in sparse and
infrequent monitoring practices, and thus some of the gains in efficiency are
consumed by the monitoring cost.
To address the challenges associated with the analysis of these collections of
site images, many researchers have focused on devising methods that register and
compare 4D BIM with the site images. While these methods have shown promising
results, their accuracy has largely remained a function of the quality, completeness, and
frequency of the input collections of images. In most cases, current collections
do not support the desired frequency or completeness for as-built modeling and
automated progress monitoring. The specific challenges are:

• Visual documentation of the entire construction site and of significant
changes that constitute progress deviations – Today’s photographic
documentation of construction progress does not exhibit the geometrical and
appearance information needed for detecting BIM elements. These data
collection practices are also often restricted by hard-to-reach areas such as
high floors and safety hazards (e.g., slab openings and excavation backfilling
areas).
• Limited field-of-view in close-range images– Visibility to elements in site
images is often restricted by both static and dynamic occlusions; e.g., a
concrete foundation wall in the basement blocking interior columns and
movements of the equipment and workers creating scene clutter.
• Site photography needs training and guaranteeing completeness can be
costly- The quality of images in terms of location and field-of-view with
respect to construction elements heavily depends on experience. Also
guaranteeing complete documentation for each inspection is likely going to be
very costly, especially if the images are to be manually compared to BIM.

To minimize these challenges, construction firms are increasingly using camera-
mounted Unmanned Aerial Vehicles (UAVs) to monitor project sites. While these
platforms have gained significant popularity, their application for addressing the
needs of automated image-based progress monitoring, and particularly leveraging 4D
BIM for steering the data collection process, has not been investigated before. To
this end, a model-driven approach for acquisition of site images is presented. More
precisely, the potential of the spatial (geometry, appearance, and interconnectivity)
and temporal information in 4D BIM for an autonomous data acquisition and
analytics that guarantees completeness and accuracy for both as-built modeling and
monitoring WIP at the schedule task-level is discussed. In the following, the state-of-
the-art methods for data collection and analytics are presented.

RELATED WORK

The state-of-the-art research on leveraging collections of site images together


with BIM for progress monitoring can be categorized into the following:


Methods that register site images with BIM and 3D point cloud models.
To register BIM with site images, several methods have been proposed, which can be
categorized based on the technique used for image acquisition: unordered vs. time-
lapse/fixed viewpoints. (Golparvar-Fard et al. 2009; Kim and Kano 2008; Rebolj et al.
2008; Zhang et al. 2009) propose different pose estimation methods for registering
BIM over time-lapse images. Using unordered collections of site photos, (Golparvar-
Fard et al. 2011) present image-based 3D reconstruction procedures based on a
pipeline of Structure-from-Motion (SfM) and Multi-View Stereo algorithms to
generate as-built 3D point clouds. These point cloud models are then manually
registered with BIM by solving for the similarity transformation. (Karsch et al. 2014)
leverages BIM and presents a constraint-based procedure to improve image-based
3D reconstruction. Their experimental results show that the accuracy and density of
image-based 3D reconstruction and back-projection of BIM on unordered and un-
calibrated site images can be improved compared to the state-of-the-art.
Methods that analyze 3D geometry of the as-built scene or the
appearance information contained in the images. (Golparvar-Fard et al. 2012)
leverages integrated scenes of dense image-based 3D point clouds and BIM, and
infers the state of WIP based on expected-vs.-actual occupancy in the scene. However,
occupancy based methods are not capable of differentiating different stages of
operations involved in construction of an element. Instead of relying on geometry,
(Han and Golparvar-Fard 2014a;b) present appearance-based material classification
methods for monitoring operation-level construction progress. Their method also
leverages 4D BIM and 3D point clouds generated from site images. Yet, through
reasoning about occlusions, each BIM element is back-projected on all images. From
these back-projections, 2D image patches are sampled per element and are classified
into different material types. By reasoning about the observed frequency of different
material types, their method is capable of tracking operation-level progress beyond
activities shown in a typical work breakdown structure of a schedule.
Studies that leverage UAV for as-built modeling purposes. To facilitate the
process of data collection, (Siebert and Teizer 2014; Zollmann et al. 2014) propose
leveraging UAVs for collecting progress images. These methods primarily rely on
GPS for navigation; thus, their application for autonomous navigation on building
sites in dense urban areas and interior spaces is still limited.
Although the analysis of site images with BIM is automated to some
degree, the data collection has remained a time-consuming process for the most part
and does not necessarily follow any specific strategy. While BIM plays an important
role in guiding the data collection and analytics, its application for model-driven
collection of site images is still unexplored.

MODEL-DRIVEN COLLECTION OF VISUAL DATA

This paper proposes a framework for acquisition of site images using UAVs
which has potential for guaranteeing accuracy and completeness of as-built 3D
modeling and providing information at the level of detail necessary for monitoring
WIP. The approach relies on a detailed BIM that can serve as a basis for project
control, 3D as-build modeling, model-based reasoning, and communication of


progress deviations (Figure 1). The following provides the specific steps that can
streamline the process of data acquisition and analytics for complete assessment of
work-in-progress:

Figure 1. The overview of the model-driven collection of visual data

The desired Level of Development (LoD) in BIM and Work Breakdown


Structure in schedule. Monitoring progress advancement using camera-equipped
UAVs on a daily basis requires detailed BIM and weekly work plans. It also requires
modeling workflow details that are not formally represented in look-ahead schedules.
Such details are often not available because a content plan –which describes what
components should be modeled so that estimates and schedules can be created from
the BIM – primarily remains at LoD 300 or 400. Also, many daily task details such as
those reflected in a weekly work plan are not present in the work breakdown structure (WBS)
and organization breakdown structure (OBS). Consequently, it is difficult to reason
about ‘who does what work at what location’. BIM at LoD 400 may also be missing
several key layers and required temporary structures, all of which are needed for the visual
progress monitoring task. This is because the 3D areas where changes are expected –
due to construction progress– are to be derived from the underlying 4D BIM and
analyzed based on changes in both geometry and/or appearance. As such it is
challenging to create probabilistic maps of the most likely locations and viewpoints
that could be used as waypoints for the path planning of autonomous UAVs.
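One simple way to turn this idea into candidate waypoints is sketched below: for each
BIM element face expected to change in the current look-ahead window, stand off from
the face centroid along the face normal to obtain a camera position and view direction.
This is a hedged illustration only; the face data structure and the stand-off distance are
assumptions, and real planning must also account for obstacles and flight constraints.

```python
import numpy as np

def face_waypoint(face_vertices, standoff=5.0):
    """Candidate UAV waypoint for one planar element face.

    face_vertices : (n, 3) array of the face's corner coordinates (meters)
    standoff      : distance from the face along its outward normal (assumed)
    Returns (camera_position, view_direction) for a canonical, frontal view.
    """
    v = np.asarray(face_vertices, dtype=float)
    centroid = v.mean(axis=0)
    # Face normal from two edge vectors (assumes a planar, non-degenerate face)
    normal = np.cross(v[1] - v[0], v[2] - v[0])
    normal /= np.linalg.norm(normal)
    position = centroid + standoff * normal
    view_dir = -normal                     # look back at the face
    return position, view_dir

# Example: a 3 m x 2 m wall face lying in the x-z plane
pos, look = face_waypoint([[0, 0, 0], [3, 0, 0], [3, 0, 2], [0, 0, 2]])
```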

Figure 2. LoD needed for comparing as-built to as-planned for progress
monitoring: Similar to (a) and (b), reinforcement bars and formwork should be
added to the model in (c) for accurate assessment of a WIP foundation wall.

Accuracy and completeness of as-built documentation. The accuracy and
completeness of image-based reconstruction can be enhanced by using BIM as a
constraint. (Karsch et al. 2014) presents a new BIM-assisted SfM procedure together
with Multi-view Stereo that improves completeness and accuracy in 3D
reconstruction. Their experiments, as illustrated in Figure 3, show the possibility of
achieving a more accurate and complete point cloud due to better camera registration
and higher success in localization of all images into the underlying base point cloud.
Since the locations and poses of the cameras in 3D are more accurately derived, their
method is capable of producing accurate 2D back-projections of the BIM onto site
images as well. Figure 3 shows several example overlays of BIM with site images
using this method.

Figure 3. BIM-assisted SfM: enhanced accuracy of overlaying BIM on site images.

Table 1 shows two sets of experimental results where the performance of the
BIM-assisted SfM procedure (Karsch et al. 2014) for superimposing BIM with site
images has been compared on two datasets against VisualSfM (Wu 2013) – the state-
of-the-art unconstrained SfM procedure, Photosynth (Snavely et al. 2008), and
Iterative Closest Point (ICP) + Photosynth.

Table 1. Comparison using Model-assisted SfM against existing approaches
using real-world construction data.

            # of      Rotational error (degrees)            Translational error (meters)
Dataset     images    Model-     VSfM    PS     PS-ICP      Model-     VSfM    PS     PS-ICP
                      assisted                               assisted
Dataset A   15        0.67       2.28    8.79   79.4        1.91       2.51    6.99   10.19
Dataset B   160       0.36       0.3     0.31   5.13        0.22       0.24    0.24   2.43

Accounting for occlusions during back-projection of BIM into site images.
By identifying and removing static and dynamic occlusions, the resulting images
from back-projection of BIM can be more accurate for appearance-based detection.
To do so, the method of (Karsch et al. 2014) could be used to make use of the BIM
and the sparse set of 3D points computed during the SfM procedure (Sec 5). By
projecting points in the BIM, it is predicted whether or not each point is in front of
the model. Because the back-projected point clouds are typically sparse, the binary
occlusion predictions can be flooded by superpixels and finally smoothened with a
cross-bilateral filter. Figure 4 shows an idle mobile crane and immobile elements
blocking the camera’s field of view. After reasoning about the depth, those
occlusions can be removed.

Figure 4. Identifying occlusions and removing them by analyzing the point cloud vs. BIM.

Appearance-based assessment. To leverage the appearance-based material
classification for monitoring operation-level construction progress (Han and
Golparvar-Fard 2014a; b), the proposed back-projection and material classification
method extracts 2D image patches of BIM elements and classifies their material
types for inferring the state of progress.

Figure 5. Extraction of image patches (left) and material classification (right).

Canonical views to elements can produce better appearance-based
monitoring results. The location and viewpoint of the camera relative to each
element can affect the result of appearance-based recognition. Instead of randomly
sampling square-shaped patches and analyzing the frequency of observed materials
for progress monitoring, one could transform each patch into a frontal view and then
compare that against a discriminative classifier that is trained for canonical views
only. Figure 6a illustrates how the canonical view can provide a better sampling
strategy for material classification and inference of construction progress.

Desired locations and viewpoints to be provided to the UAV for path
planning. The underlying semantics in 4D BIM and also the best location and
viewpoint for taking images in a canonical view – from each face of the construction
elements – can serve as strong priors for path planning of the UAV. Figure 6b
illustrates this concept where the knowledge of what changes to expect in the scene
and the most ideal viewpoints have helped with identification of a waypoint for path
planning of the UAV.

Figure 6. (a) The viewpoint of the camera can significantly affect the object
recognition result, while a viewpoint normal to the desired plane gives better
results; (b) 4D BIM to identify the waypoints in UAV path planning.

Discussion on ongoing case studies and the role of BIM (geometry,
appearance, and interdependency). To leverage the concepts introduced above, the
authors are currently conducting two pilot projects on a building project in
Champaign, IL (see Figure 7 for an example as-built point cloud generated on this
project) and also on a stadium project in Sacramento, CA.

Figure 7. Top row: Dense 3D point cloud model of construction; Bottom row:
Superimposing 4D BIM on an aerial photo taken during construction.

CONCLUSION

This paper presents a model-driven approach for acquisition and analysis of
progress images. In particular, the potential of spatial (geometry, appearance, and

interconnectivity) and temporal information in 4D BIM for autonomous data


acquisition and analytics that guarantees completeness and accuracy for both as-built
modeling and monitoring WIP at the schedule task-level is discussed. Future work
involves exploring each component of the proposed approach and validation on actual
construction projects.

ACKNOWLEDGEMENT

This material is in part based upon work supported by the National Science
Foundation under Grant CMMI-1427111. Any opinions, findings, and conclusions or
recommendations expressed in this material are those of the authors and do not
necessarily reflect the views of the National Science Foundation.

REFERENCES

Bae, H., Golparvar-Fard, M., and White, J. (2014). “Image-Based Localization and
Content Authoring in Structure-from-Motion Point Cloud Models for Real-Time
Field Reporting Applications.” Journal of Computing in Civil Eng, 637–644.
Golparvar-Fard, M., Peña-Mora, F., Arboleda, C. A., and Lee, S. (2009).
“Visualization of Construction Progress Monitoring with 4D Simulation Model
Overlaid on Time-Lapsed Photographs.” Journal of Computing in Civil Eng.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2011). “Integrated Sequential
As-Built and As-Planned Representation with Tools in Support of Decision-
Making Tasks in the AEC/FM Industry.” Journal of Construction Engineering
and Management.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2012). “Automated Progress
Monitoring Using Unordered Daily Construction Photographs and IFC-Based
Building Information Models.” Journal of Computing in Civil Eng, 147–165.
Han, K., and Golparvar-Fard, M. (2014a). “Appearance-based Material Classification
for Monitoring of Operation-Level Construction Progress Using 4D BIM and
Site Photologs.” Automation in Construction, Elsevier.
Han, K., and Golparvar-Fard, M. (2014b). “Automated Monitoring of Operation-level
Construction Progress Using 4D BIM and Daily Site Photologs.” Construction
Research Congress 2014, D. Castro-Lacouture, J. Irizarry, and B. Ashuri, eds.,
American Society of Civil Engineers, 1033–1042.
Karsch, K., Golparvar-Fard, M., and Forsyth, D. (2014). “ConstructAide: analyzing
and visualizing construction sites through photographs and building models.”
ACM Transactions on Graphics (TOG), ACM, 33(6), 176.
Kim, H., and Kano, N. (2008). “Comparison of construction photograph and VR
image in construction progress.” Automation in Construction, 17(2), 137–143.
Rebolj, D., Babič, N. Č., Magdič, A., Podbreznik, P., and Pšunder, M. (2008).
“Automated construction activity monitoring system.” Advanced engineering
informatics, Elsevier, 22(4), 493–503.
Siebert, S., and Teizer, J. (2014). “Mobile 3D mapping for surveying earthwork
projects using an Unmanned Aerial Vehicle (UAV) system.” Automation in
Construction, Elsevier, 41, 1–14.


Snavely, N., Garg, R., Seitz, S. M., and Szeliski, R. (2008). “Finding Paths through
the World’s Photos.” SIGGRAPH.
Wu, C. (2013). “Towards linear-time incremental structure from motion.”
Proceedings - 2013 Int Conference on 3D Vision, 3DV 2013, 127–134.
Zhang, X., Bakis, N., Lukins, T. C., Ibrahim, Y. M., Wu, S., Kagioglou, M., Aouad,
G., Kaka, A. P., and Trucco, E. (2009). “Automating progress measurement of
construction projects.” Automation in Construction, Elsevier, 18(3), 294–301.
Zollmann, S., Hoppe, C., Kluckner, S., Poglitsch, C., Bischof, H., and Reitmayr, G.
(2014). “Augmented Reality for Construction Site Monitoring and
Documentation.” Proceedings of the IEEE, IEEE, 102(2), 137–154.


Semantic Annotation for Context-Aware Information Retrieval for Supporting


the Environmental Review of Transportation Projects

Xuan Lv1 and Nora M. El-Gohary, A.M.ASCE2


1
Graduate Student, Dept. of Civil and Environmental Engineering, Univ. of Illinois at
Urbana-Champaign, 205 N. Mathews Ave., Urbana, IL 61801. E-mail:
[email protected]
2
Assistant Professor, Dept. of Civil and Environmental Engineering, Univ. of Illinois
at Urbana-Champaign, 205 N. Mathews Ave., Urbana, IL 61801 (corresponding
author). E-mail:[email protected]
Abstract

Although transportation practitioners nowadays have an unprecedented level


of access to massive amounts of information, there still exist substantial gaps in their ability to
efficiently and reliably find the right information, at the right time, and for the
task/decision at hand. To address this gap, this paper proposes a context-aware
information retrieval (IR) approach, which can capture and exploit the
conceptualization of user needs, decision context, and content meanings to support
the retrieval of information that is more relevant to decision making. The proposed IR
approach includes three primary components: semantic annotation (SA), semantic
query processing (SQP), and semantic document ranking (SDR). This paper focuses
on SA for IR for supporting the Transportation Project Environmental Review (TPER)
decision-making process. The paper proposes an epistemology-based SA algorithm
for automatically annotating webpages in the TPER domain with contextual concepts
from an epistemological model. The TPER epistemology is a semantic model for
representing and reasoning about information and information retrieval in the TPER
domain. In developing the proposed algorithm, a number of syntactic-based and
semantic-based annotation methods/algorithms were developed and tested. For the
syntactic-based algorithms, the effect of syntactic expansion and filtering was
investigated. For the semantic-based algorithms, different semantic similarity
calculation methods were evaluated. All the algorithms were tested on a testing data
set of 1,328 webpages, which were collected from the FHWA Environmental Review
Toolkit Website, and evaluated in terms of Mean Average Precision (MAP) and
average precision. The final, proposed SA algorithm achieved 84.07% MAP and
90.67% average precision at top 50 documents, on the testing data.

INTRODUCTION

The National Environmental Policy Act (NEPA) requires any federally funded
or involved transportation projects to go through an environmental review process to
evaluate their impact on the environment. The environmental review process not only
affects transportation decision making by taking environmental concerns into account,
but also affects the project development process in terms of time and cost. According


to a study conducted on the timeliness of the environmental review process (Venner


Consulting et. al 2012), the review process consumes nearly 30% of the total project
development time on average, and a longer review time is correlated with a longer
project development time. The Transportation Project Environmental Review (TPER)
process, which requires the collaboration of a number of stakeholders and the
collection and communication of a large amount of textual information, has long been
criticized for being “a costly and time-consuming process that thwarts agency
decision making” (Hansen and Wolff 2011); and the time to complete the TPER
process for large-scale transportation projects has nearly tripled since the 1970s (Clark and
Canter 1997; Barberio et al. 2008; Venner Consulting et al. 2012).
To improve the efficiency and effectiveness of the current TPER process, the
Illinois Department of Transportation (IDOT) funded a research project (ICT 2014)
for studying the NEPA process. As a result of the study, the need for an information
retrieval (IR) system that better understands the requirements of transportation project
stakeholders and the knowledge of the environmental review domain was defined as a
means to improve the efficiency and effectiveness of the process. To address this
need, this paper proposes a context-aware IR system, which can “capture and exploit
the conceptualization of user need and content meanings” (Fernandez et al. 2011), to
support the search and retrieval of textual information in the TPER domain.
The proposed context-aware IR framework includes three primary
components: semantic annotation (SA), semantic query processing (SQP), and
semantic document ranking (SDR). SA aims to generate semantic metadata that
represent the document context for documents in the collection. SQP aims to extract
the user context from the user’s query and, accordingly, transform the user’s query
into a semantic query. SDR aims to retrieve and rank documents based on the
epistemic context (which includes user context, searching context, and document
context). The context analyses are conducted based on a TPER epistemology. The
TPER epistemology is a semantic model for representing and reasoning about
information and IR in the TPER domain for facilitating context-aware, domain-
specific IR. This paper focuses on presenting the methodologies and experiments of
SA.

BACKGROUND AND KNOWLEDGE GAPS

SA is the process of assigning semantic descriptions to the entities in the
text (Kiryakov et al. 2004). Current research has focused on three different types
of SA (Castells et al. 2007; Fernandez et al. 2011): (1) statistical approaches, which
identify groups of words that commonly appear together, based on a statistical model,
and use these word groups as semantic descriptions; (2) linguistic conceptualization
approaches, which take advantage of linguistic resources like WordNet or thesauri to
enhance document indexing; and (3) ontology-based approaches, which link the
concepts in the ontology with the text, and provide a much more detailed and densely
populated concept space in the form of an ontology (Fernandez et al. 2011). In
comparison to ontology-based approaches, statistical and linguistic conceptualization
approaches are (1) commonly based on shallow and sparse conceptualizations, (2)

©  ASCE 2
Computing  in  Civil  Engineering  2015 167

usually consider very few different types of relations between concepts, and (3)
usually allow for low information specificity levels (Castells et al. 2007).
In recent years, a number of research works have been conducted on
improving IR in the construction domain. Soibelman et al. (2008) combined vector-
space-model with document classification information to retrieve documents related
to a project model object, and developed a domain-specific thesaurus to improve the
retrieval of construction product information from the internet. Demian and
Balatsoukas (2012) investigated the effects of granularity and context when
measuring relevance and visualizing results for retrieving building design and
construction content, and found that users performed better and were more
satisfied when the search results were displayed with their context information in
terms of the related discipline, building components, and sub-component objects. Fan
et al. (2015) implemented three machine learning algorithms to enhance the retrieval
results through user feedbacks, and utilized project-specific term dictionary and
dependency grammar information to facilitate feature selection.
Although a number of studies on IR have been conducted in the construction
domain, there still exist many challenges in developing IR systems that can efficiently
retrieve relevant information for decision making: (1) Most of the existing IR systems
in the construction domain are built on keyword-based IR models, such as the vector-
space-model, and lack formal representation of context; (2) Most of the IR efforts in
the construction domain focused on retrieving project-based information and lack
support for retrieving project-independent information, such as regulations; (3) Most
of the current IR systems in the construction domain have not been compared with
other state-of-the-art IR systems in terms of retrieval performance.

PROPOSED SEMANTIC ANNOTATION METHODOLOGY

The proposed approach for SA for supporting context-aware IR in the TPER


domain is summarized in Figure 1.

Figure 1. Proposed methodology for SA

Step 1: TPER Epistemology Development. The TPER epistemology


provides the foundation of context-aware IR in the TPER domain. It covers concepts
of the “TPER epistemic context” including “user context”, “searching context”, and
“document context”. In developing the TPER epistemology, the context-aware
epistemic model for sustainable construction practices by Zhang and El-Gohary (2014)
was benchmarked. The concepts in the epistemology were defined based on a
literature review of work in the following three sub-domains: (1) epistemology and its
application in different domains, (2) context-aware IR systems, and (3) TPER process.


Step 2: Data Preparation. To create a document collection on the TPER


domain, 1,328 web pages were collected through crawling under the domain of the
FHWA Environmental Review Toolkit website (www.environment.fhwa.dot.gov).
When writing the crawled information into the local file, their encodings were
automatically transformed into the UTF-8 encoding, and any unnecessary html tags
and non-ASCII characters were removed to ensure the performance of concept
matching are not undermined by noise. The data sets include the title, headings, and
body text of each web page.
After data collection, each document was manually annotated by the authors
with one or more functional process context concepts. This paper focuses on
analyzing the ‘functional process context’ under the ‘TPER epistemic context”, which
has 6 sub-concepts: ‘project scoping process’, ‘environmental screening process’,
‘alternative analysis process’, ‘document development process’, ‘environmental
mitigation process’, and ‘stakeholder involvement process’. The manual annotation
result formed the gold standard for the following experiments.

Step 3: Data Preprocessing. To prepare the raw text data for the
implementation of SA algorithms, the Bag of Word (BOW) model (Manning et al.
2009) was used to represent the document. In this model, a document is represented
as an unsorted set of words with their corresponding weight that represents the
discriminating power of the word. In order to represent the document in the BOW
model, the following three commonly-used techniques for data preprocessing were
conducted: (1) Tokenization: Tokenization is the process of breaking the text into
elements (called tokens) such as words, phrases, symbols, or other meaningful
elements. In this work, a single word was regarded as a common token, and a list of
special tokens which consist of terminologies in the TEPR domain was also
developed. Examples of the special tokens include “Categorical Exclusion”,
“Environmental Assessment”, and “Environmental Impact Statement”, which refer to
the three different environment review actions required by the federal law; (2)
Stopword Removal: Stopwords are those words that have high frequency but low
discriminating power, which indicates that they have little value in helping select
documents that match a user need; (3) Lemmatization: Lemmatization is the process
of removing inflectional endings and returning the base or dictionary form of a word,
which is known as the lemma. For example, after the lemmatization, the words
“mitigates”, “mitigated”, and “mitigating” would all be transformed into their lemma
“mitigate”.
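As a rough Python/NLTK sketch of this preprocessing pipeline (tokenization with
special multi-word tokens, stopword removal, and lemmatization), the following is
illustrative only; the special-token list and the verb-oriented lemmatization are
simplified assumptions rather than the exact implementation.

```python
import re
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer   # requires NLTK corpora: stopwords, wordnet

# Hypothetical multi-word domain terms treated as single (special) tokens
SPECIAL_TOKENS = ["categorical exclusion", "environmental assessment",
                  "environmental impact statement"]

def preprocess(text):
    """Tokenize, remove stopwords, and lemmatize one document for the BOW model."""
    text = text.lower()
    for term in SPECIAL_TOKENS:               # protect domain terminology
        text = text.replace(term, term.replace(" ", "_"))
    tokens = re.findall(r"[a-z_]+", text)
    stop = set(stopwords.words("english"))
    lem = WordNetLemmatizer()
    # Lemmatize as verbs here to match the paper's example (mitigates -> mitigate);
    # a POS tagger could be used to choose the part of speech per token.
    return [tok if "_" in tok else lem.lemmatize(tok, pos="v")
            for tok in tokens if tok not in stop]

print(preprocess("The agency mitigates impacts under the Environmental Assessment."))
```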

Step 4: Syntactic-Based Semantic Annotation. The syntactic-based SA


approach uses NLP techniques and mainly syntactic features to annotate the
text with concepts in the TPER epistemology. In the proposed syntactic-based
approach, for each concept (e.g., “stakeholder involvement process”) in the TPER
epistemology, a concept index was created to store concept terms (e.g., “stakeholder”
and “involvement”), which are the most common text descriptions of the concept.
First, the syntactic concept expansion was performed to expand the concept term with
related lexical terms (synonyms, hyponyms, and hypernyms) from a lexical dictionary
(Wordnet). Second, the syntactic concept filtering was conducted to (1) remove the
noise resulting from the concept expansion, and (2) expand concept terms with
domain-specific context terms. Third, to determine the relevance of the annotation,
the Term Frequency-Inverse Document Frequency (TF-IDF) weights of the concept
terms were calculated, and the relevance of the annotation was then determined by the
TF-IDF weights and the relevance between the expansion concept terms and the
original concept terms.
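As a rough illustration of this syntactic pipeline (WordNet-based expansion of a
concept index plus TF-IDF weighting of concept terms in each document), the
following Python sketch uses NLTK and scikit-learn; the concept terms and the
simple sum-of-weights scoring are assumptions, not the exact relevance formula.

```python
from nltk.corpus import wordnet as wn
from sklearn.feature_extraction.text import TfidfVectorizer

def expand_concept_terms(terms):
    """Expand concept terms with WordNet synonyms, hypernyms, and hyponyms."""
    expanded = set(terms)
    for term in terms:
        for syn in wn.synsets(term):
            lemmas = list(syn.lemmas())
            for related in syn.hypernyms() + syn.hyponyms():
                lemmas += related.lemmas()
            expanded.update(l.name().replace("_", " ") for l in lemmas)
    return expanded

def concept_relevance(documents, concept_terms):
    """Score each document for one concept by summing the TF-IDF weights of the
    (expanded and filtered) concept terms that appear in it."""
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(documents)
    cols = [vectorizer.vocabulary_[t] for t in concept_terms
            if t in vectorizer.vocabulary_]
    return tfidf[:, cols].sum(axis=1)

terms = expand_concept_terms({"stakeholder", "involvement"})
```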

Step 5: Semantic-Based Semantic Annotation. The semantic-based SA


approach (1) uses the TPER epistemology as one of the inputs for the annotation, and (2)
involves deep semantic analysis. For each concept in the TPER epistemology, a
semantic concept index was created to not only contain the concept terms of the
original concept, but also the concept terms of its related concepts. The related
concepts are defined as the concepts that have direct relations to the original concept
including its descendants (direct and indirect subconcepts) and other concepts that
have non-hierarchical relations to the original concept. The relevance of the
annotation was then determined by the TF-IDF weights of the terms in the semantic
concept index and the semantic similarity (SS) between the original concept and the
related concepts. Three path-based SS measures that use different path features were
selected: Leacock and Chodorow (1998) SS, Wu and Palmer (1994) SS, and Mao and
Chu (2007) SS. Leacock and Chodorow (1998) calculated the SS based only on the
shortest path distance between the two concepts. Wu and Palmer (1994) calculated
the SS based on two path features: the shortest path distance between the two
concepts and the location of the Most Informative Subsumer (MIS) of the two
concepts. The MIS of two concept nodes is defined as the lowest node that can
be a parent for both concepts (Al-Mubaid and Nguyen 2006). Mao and Chu
(2007) utilized the shortest path distance and the number of descendants of the two
concepts to calculate the SS.
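To make the path-based measures concrete, the sketch below computes the
Leacock–Chodorow and Wu–Palmer similarities over WordNet's hierarchy with NLTK;
in the actual approach these measures are computed over the TPER epistemology rather
than WordNet, so this is only an analogous illustration.

```python
from nltk.corpus import wordnet as wn

def path_based_similarities(word_a, word_b):
    """Illustrative path-based semantic similarity between two concepts, using
    WordNet as a stand-in hierarchy for the TPER epistemology."""
    a = wn.synsets(word_a, pos=wn.NOUN)[0]
    b = wn.synsets(word_b, pos=wn.NOUN)[0]
    return {
        # Leacock-Chodorow: -log(shortest_path / (2 * taxonomy depth))
        "leacock_chodorow": a.lch_similarity(b),
        # Wu-Palmer: 2 * depth(MIS) / (depth(a) + depth(b))
        "wu_palmer": a.wup_similarity(b),
    }

print(path_based_similarities("mitigation", "protection"))
```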

Step 6: Performance Evaluation. The performance of the above mentioned


syntactic-based and semantic-based SA algorithms were evaluated using Mean
Average Precision (MAP) and average precision at the top K (50) documents.
Precision is calculated based on the following equation (Manning et al. 2009), where
True Positive (TP) refers to the number of documents correctly annotated as
positive, and False Positive (FP) refers to the number of documents incorrectly
annotated as positive:

Precision = TP / (TP + FP)

MAP can be calculated using the following two functions (Ceri et al. 2013), where Q
is the total number of information needs; for a single query q, AveP(q) is the average
of the precision values obtained each time a relevant document is retrieved; k is the
rank in the sequence of retrieved documents; n is the number of retrieved documents;
P(k) is the precision score at cut-off k; and rel(k) is an indicator function that equals 1
if the retrieved document at rank k is a relevant document and 0 otherwise:

MAP = (1/|Q|) Σ_{q=1..|Q|} AveP(q)

AveP(q) = Σ_{k=1..n} [ P(k) × rel(k) ] / (number of relevant documents for q)
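A small Python sketch of these evaluation measures (variable names are illustrative;
it assumes binary relevance labels per ranked document):

```python
def average_precision(rels):
    """rels: list of 0/1 relevance labels in ranked order for one query."""
    hits, score = 0, 0.0
    for k, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            score += hits / k          # precision at cut-off k, counted at each hit
    return score / max(hits, 1)

def mean_average_precision(all_rels):
    """all_rels: one relevance list per query (information need)."""
    return sum(average_precision(r) for r in all_rels) / len(all_rels)

print(mean_average_precision([[1, 0, 1, 0], [0, 1, 1]]))   # -> approximately 0.708
```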


EXPERIMENTS RESULTS AND ANALYSIS

Performance of Syntactic-based SA Approach. Syntactic-based SA was


conducted in three different ways: (1) using original concept terms only; (2)
conducting concept expansion on original concept terms; and (3) conducting both
concept expansion and filtering on original concept terms. The performance results of
the three syntactic-based SA methods are summarized in Table 1. When applying
concept expansion on the original concept terms, the MAP dropped from 47.28% to
47.02% and average precision at the top 50 documents dropped from 70.33% to 64%.
The reason for the performance decline is that the concept expansion brought a lot of
noises. When applying both the concept expansion and filtering on the original
concept terms, the MAP and average precision at the top 50 documents increased to
70.96% and 82.33%, respectively. The enhanced performance is attributed to two
main reasons: (1) concept filtering removed the noises brought by concept expansion;
and (2) concept filtering expanded the original concept terms with domain-specific
context terms.

Table 1. Results of Syntactic-based and Semantic-based SA Approaches

                                              Syntactic-based Approach                   Semantic-based Approach
Functional Process    Performance          Original    Concept      Expansion +      Leacock and    Wu and     Mao and
Context               Measure              Concept     Expansion    Concept          Chodorow       Palmer     Chu
                                           Terms                    Filtering
Project Scoping       AP at top 50          24.57%      19.92%       67.57%           81.27%         80.27%     57.72%
Process               Precision at top 50   52.00%      44.00%       80.00%           94.00%         94.00%     78.00%
Environmental         AP at top 50          39.73%      34.45%       52.25%           86.60%         71.45%     38.31%
Screening Process     Precision at top 50   70.00%      58.00%       74.00%           90.00%         80.00%     68.00%
Document              AP at top 50          34.15%      42.04%       55.11%           81.61%         81.93%     61.83%
Development Process   Precision at top 50   64.00%      62.00%       72.00%           88.00%         88.00%     88.00%
Alternative           AP at top 50          23.94%      20.21%       54.38%           57.12%         49.73%     12.58%
Analysis Process      Precision at top 50   54.00%      36.00%       70.00%           74.00%         70.00%     40.00%
Environmental         AP at top 50          100.00%     100.00%      100.00%          100.00%        100.00%    100.00%
Mitigation Process    Precision at top 50   100.00%     100.00%      100.00%          100.00%        100.00%    100.00%
Stakeholder           AP at top 50          61.32%      65.53%       96.47%           97.83%         97.92%     94.05%
Involvement Process   Precision at top 50   82.00%      84.00%       98.00%           98.00%         98.00%     96.00%
Average               AP at top 50          47.28%      47.02%       70.96%           84.07%         80.22%     60.75%
                      Precision at top 50   70.33%      64.00%       82.33%           90.67%         88.33%     78.33%

Performance of Semantic-based SA Approach. The performance of


semantic-based SA approach was evaluated for the three SS measures. The
performance results are summarized in Table 1. As shown in Table 1, Leacock and
Chodorow (1998) SS measure achieved the best performance of 84.07% MAP and
90.67% average precision at the top 50 documents. The Leacock and Chodorow
(1998) SS outperformed the other methods because it utilizes only the shortest path


distance between the two concepts, while the other methods utilize other path features
(location of the MIS of the two concepts, and the number of descendants of the two
concepts) that are not effective for SA as indicated by the results.

Comparison of Syntactic-based and Semantic-based SA Approaches. To


compare the syntactic-based and semantic-based approaches the best performing
methods (conducting both concept expansion and concept filtering on the original
concept terms for syntactic-based SA and using Leacock and Chodorow (1998) SS
measure for semantic-based SA) were compared. As shown in Table 1, the semantic-
based approach outperformed the syntactic-based approach on both the MAP and
average precision at the top 50 documents. The semantic-based approach achieved
better performance than the syntactic-based approach for the following two reasons: (1) the semantic-
based approach takes the domain knowledge into consideration, while the syntactic-
based approach overlooked important semantic relations, such as “is-a”, and “is-part”
relations; and (2) the syntactic-based approach was purely based on lexical relations
and corpus statistics, and its performance depends largely on the quality of the lexical
dictionary and the corpus.

CONCLUSION AND FUTURE WORK

This paper presents a domain-specific, semantic-based algorithm for


annotating documents in the TPER domain with concepts from the TPER
epistemology for supporting the context-aware IR in the TPER domain. In developing
the SA algorithm, both syntactic-based and semantic-based algorithms were tested in
terms of MAP and average precision at the top 50 documents. For the syntactic-based
algorithms, the effects of syntactic concept expansion and filtering were investigated.
Syntactic concept expansion brings a lot of noise, but syntactic concept filtering is
effective in enhancing the performance by reducing the noise and expanding
the concept index with domain-specific relations. For the semantic-based algorithms,
three different SS measures were tested. The best performance was achieved through
a semantic-based algorithm that used Leacock and Chodorow (1998) SS measure as
the relevance factor.
In future work, the authors will (1) explore the use of the proposed SA
algorithm for annotating other concepts in the TPER epistemology, such as concepts
from the project context and resource context branches; (2) collect more relevant
documents in the TPER domain, such as project environmental reports, project
website, and public opinions; and (3) work on improving the performance of the
proposed method by investigating other SS measures.

ACKNOWLEDGEMENT

This material is based upon work supported by the Strategic Research


Initiatives (SRI) Program by the College of Engineering at the University of Illinois
at Urbana-Champaign.


REFERENCES

Al-Mubaid, H., and Nguyen, H. A. (2006). “A combination-based semantic similarity


measure using multiple information sources.” IEEE Intl. Conf. on Information
Reuse and Integration, IEEE, Piscataway, NJ, 617-621.
Barberio, G., Barolsky, R., Culp, M., and Ritter, R. (2008). “PEL – A Path to
Streamlining And Stewardship.” Public Roads, <http://
www.fhwa.dot.gov/publications/publicroads/08mar/01.cfm>(March 23, 2014).
Castells, P., Fernandez, M., and Vallet, D. (2007). “An adaptation of the vector-space
model for ontology-based information retrieval.” IEEE Trans. Knowl. Data
Eng., 19(2), 261-272.
Clark, E. R., and Canter, L. W. (1997). Environmental Policy and NEPA: Past,
Present, and Future, CRC Press, Boca Raton, FL, 47-62.
Ceri, S., Bozzon, A., Brambilla, M., Valle, E. D., Fraternali, P., and Quarteroni, S.
(2013). Web Information Retrieval, Springer, Berlin, Heidelberg, 9-11.
Demian, P. and Balatsoukas, P. (2012). ”Information retrieval from civil engineering
repositories: importance of context and granularity.” J. Comput. Civ.
Eng., 26(6), 727–740.
Fernandez, M., Cantador, I., Lopez, V., Vallet, D., Castells, P., and Motta, E. (2011).
“Semantically enhanced information retrieval: an ontology-based approach.” J.
Web. Semant., 9(4), 434–452.
Fan, H., Xue, F., and Li, H. (2015). ”Project-based as-needed information retrieval
from unstructured AEC documents.” J. Manage. Eng., 31(1), 1-12.
Illinois Center for Transportation (ICT). (2014). “Guidelines Developed to Streamline
NEPA and IDOT/MPO Transportation Planning Processes.”
<http://ict.illinois.edu/2014/07/23/guidelines-developed-to-streamline-nepa-
and-idotmpo-transportation-planning-processes-2/> (June 23, 2014).
Kiryakov, A., Popov, B., Terziev. I., Manov, D., and Ognyanoff, D. (2004).”Semantic
annotation, indexing and retrieval.” J. Web. Semant., 2(1), 49-79.
Leacock, C., and Chodorow, M. (1998). “Combining local context and WordNet
similarity for word sense identification.” WordNet: An Electronic Lexical
Database. The MIT Press, Cambridge, MA, 265-283.
Mao, W., and Chu, W. W. (2007). “The phrase-based vector space model for automatic
retrieval of free-text medical documents.” J. Data Knowl. Eng., 61(1), 76-92.
Manning, C., Raghavan, P., and Shutze, H. (2009). Introduction to information
retrieval, Cambridge University Press, Cambridge, England, 32-34.
Soibelman, L., Wu, J., Caldas, C., Brilakis, I., and Lin, K. Y. (2008). “Management
and analysis of unstructured construction data types.” J. Adv. Eng. Inform.,
22(1), 15-27.
Venner Consulting, Institute for Natural Resources, Oregon State University, and
Parametrix, Inc. (2012). Expedited Planning and Environmental Review of
Highway Projects, TRB, Washington, D.C., 13-14.
Wu, Z., and Palmer, M. (1994). “Verb semantics and lexical selection.” Proc., 32nd
Annual Meeting on Association for Computational Linguistics, ACM, New
York, NY, 133-138.
Zhang, L., and El-Gohary, N. M. (2014). “A context-aware epistemic model for
sustainable construction practices.” J. Constr. Eng. Manage., submitted.


Automated Extraction of Information from Building Information Models into a


Semantic Logic-Based Representation

J. Zhang1 and N. M. El-Gohary2


1
Graduate Student, Department of Civil and Environmental Engineering, University
of Illinois at Urbana-Champaign, 205 North Mathews Ave., Urbana, IL 61801.
E-mail: [email protected]
2
Assistant Professor, Department of Civil and Environmental Engineering, University
of Illinois at Urbana-Champaign, 205 North Mathews Ave., Urbana, IL 61801.
E-mail: [email protected]

Abstract

One of the major goals of building information modeling is to support automated compliance checking (ACC). To support ACC, building design information needs to be extracted from building information models (BIMs) and transformed into a representation that allows automated reasoning about that design information in combination with information from regulatory documents. However, existing BIM information extraction (IE) efforts are limited in supporting complete automation of ACC. Complete automation of ACC requires (1) automating both the extraction of information from BIMs and the extraction of regulatory information from regulatory documents and (2) aligning the instances of information concepts and relations extracted from a BIM with those extracted from regulatory documents, in order to facilitate direct automated reasoning about both types of information for compliance assessment. To address this gap, this paper proposes an automated BIM IE method for extracting design information from industry foundation classes (IFC)-based BIMs into a semantic logic-based representation that is aligned with a matching semantic logic-based representation of regulatory information. The proposed BIM IE method utilizes semantic natural language processing (NLP) techniques and Java standard data access interface (JSDAI) techniques to automatically extract project information from IFC-based BIMs and transform it into a logic format (logic facts) that is ready to be automatically checked against logic-represented regulatory rules (logic rules). The BIM IE method was tested on extracting design information from a Duplex Apartment BIM model. Compared to a manually developed gold standard, the testing results showed 100% precision and a short processing time of 15.02 seconds for 38,898 lines of data.

INTRODUCTION

Construction projects are governed by various regulations. The manual


process of checking compliance with regulations is time consuming, costly, and error-
prone (Zhang and El-Gohary 2013). Automated compliance checking (ACC) of


construction projects against various regulations could save time, cost, and reduce
human errors (Zhong et al. 2012). To facilitate ACC, a computer-interpretable, user-
understandable, and unambiguous representation is needed for construction
regulations (Garrett and Palmer 2014). To address this need, a semantic logic-based
information representation (IRep) and compliance reasoning (CR) schema for
representing regulatory information and design information was proposed in prior
work (Zhang and El-Gohary 2014b). A challenge is then how to extract design
information from building information models (BIMs) and transform the extracted
design information into this semantic logic-based representation. This extraction and
transformation method needs to support reliable information transfer between the
BIMs and this semantic logic-based representation in an automated way. This paper
aims to address this challenge by proposing a new BIM information extraction (IE)
method. The proposed method utilizes java standard data access interface (JSDAI)
and semantic natural language processing (NLP) techniques. The remaining sections
of this paper present the details of the proposed method and its preliminary testing on
extracting information from a Duplex Apartment BIM model.

BACKGROUND

Semantic NLP. NLP aims to enable computers to process natural language


text and speech in a human-like manner (Cherpas 1992). NLP has many application
domains such as automated natural language translation (Marquez 2000), text
classification (Zhou and El-Gohary 2014), and information extraction (Zhang and El-
Gohary 2013). Techniques in NLP utilize two main types of features: syntactic
features and semantic features. Syntactic features are related to the grammatical
structure of the text, such as part of speech (POS) tags that depict lexical and
functional categories of words and phrasal tags that depict lexical and functional
categories of phrases. Semantic features are related to the meaning of text, such as
concepts and relations from a semantic model. An ontology is a widely used semantic model that captures domain knowledge in a structured manner through concepts,
relations, and axioms (El-Gohary and El-Diraby 2010). Semantic NLP utilizes both
syntactic and semantic features in conducting NLP tasks.

JSDAI. JSDAI is a standard data access interface (SDAI) application


programming interface (API) for accessing and processing information in EXPRESS-written models. EXPRESS is an ISO standard product data modeling language (ISO 2004). The industry foundation classes (IFC) specification, the most popular data schema for BIMs, is written in the EXPRESS language. There are two types of information access methods in JSDAI: early binding and late binding. Early binding requires the EXPRESS model to be available at program compile time and accesses each entity of the known EXPRESS model through entity-specific "set" and "get" access methods. Late binding does not require the EXPRESS model at compile time and accesses each entity and attribute through generic "set" and "get" access methods. Late binding is more complex to use than early binding, but is independent of any specific EXPRESS model.
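
As a rough illustration of this distinction (the class and interface names below are hypothetical stand-ins, not the actual JSDAI API), an early-binding accessor is entity-specific and typed, whereas a late-binding accessor is generic and looks attributes up by name at run time:

import java.util.*;

public class BindingDemo {
    // Hypothetical early-binding style: entity-specific, typed accessors generated
    // from a known EXPRESS schema at compile time.
    static class IfcBeamEarly {
        private final String globalId;
        IfcBeamEarly(String globalId) { this.globalId = globalId; }
        String getGlobalId() { return globalId; }
    }

    // Hypothetical late-binding style: one generic interface for any entity of any
    // schema version; attributes are looked up by name at run time.
    interface GenericEntity {
        String entityName();              // e.g., "IFCBEAM"
        Object get(String attributeName); // e.g., get("GlobalId")
    }

    public static void main(String[] args) {
        IfcBeamEarly beam = new IfcBeamEarly("2OrWItJ6zAwBNp0OUxK_l8");
        System.out.println("early binding: " + beam.getGlobalId());

        GenericEntity generic = new GenericEntity() {
            public String entityName() { return "IFCBEAM"; }
            public Object get(String attr) {
                return "GlobalId".equals(attr) ? "2OrWItJ6zAwBNp0OUxK_l8" : null;
            }
        };
        // The same generic access code can serve any EXPRESS-based IFC release
        // whose schema is loaded at run time.
        System.out.println("late binding: " + generic.entityName() + ".GlobalId = "
                + generic.get("GlobalId"));
    }
}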


Proposed Information Representation (IRep) and Compliance Reasoning


(CR) Schema. The previously proposed semantic logic-based IRep and CR schema
(presented in Zhang and El-Gohary 2014a) is based on first order logic (FOL) – a
formally defined logic that is expressive and effective in supporting automated
reasoning. Among the different types of FOL statements, Horn clauses (HCs) were selected because they are the most effective in supporting automated reasoning. An HC is a type of FOL statement that is composed of a disjunction of literals of which at most one is positive. The Prolog logic programming language and reasoner was selected to represent the HC-based design information, because Prolog is the most widely used
logic programming language. There are three types of clauses in Prolog – rules, facts,
and queries. A rule has the form: “H :- B1, B2, …, Bn. (n>0).” H is called the head
and B1 to Bn are called the body, where all are atomic formulas. The rule means “if
B1, B2, …, and Bn, then H.” A fact is a special type of rule whose body is always
true (Zhou 2012). In the proposed schema, regulatory information and design
information are represented as logic rules and logic facts, respectively.
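
Read as first-order logic, the rule form above corresponds to a Horn clause in the standard way (a notational restatement only, not additional content from the schema):

H \;\text{:-}\; B_1, B_2, \ldots, B_n
\;\equiv\; (B_1 \wedge B_2 \wedge \cdots \wedge B_n) \rightarrow H
\;\equiv\; H \vee \neg B_1 \vee \neg B_2 \vee \cdots \vee \neg B_n

that is, a disjunction of literals with exactly one positive literal (the head H); a fact is the special case in which the body is always true, leaving only the positive literal.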

PROPOSED METHOD

The authors propose a two-step method (Figure 1) to automatically extract


design information from IFC-based BIMs and transform the extracted information
into logic facts following their previously proposed semantic logic-based IRep and
CR schema. The information extraction (IE) step utilizes JSDAI techniques to access
and extract the entities and attributes in the IFC-based BIMs. The information
transformation (ITr) step utilizes semantic NLP techniques to map and transform the
entities and attributes into logic facts.
[Figure 1 diagram: Input (IFC-based BIMs) → Information Extraction (JSDAI) → entity and attribute information tuples → Information Transformation (semantic NLP) → Output (logic facts).]
Figure 1. Proposed method.

Input: IFC-Based BIMs. The proposed method aims to process BIMs in IFC
format (i.e., files having the extension name of “.ifc”, referred to as IFC files
hereafter) based on the IFC schema. The IFC schema is the main data model schema
to describe data in the building and construction industry, which is registered with
ISO as ISO 16739. The IFC format is neutral and platform independent
(BuildingSMART 2014). The BIMs in IFC format use “STEP physical file” format
defined as ISO10303-21 (BuildingSMART 2014). In a “STEP physical file”, each
line is assigned a line number. Each line represents an entity. An entity in IFC format
represents either a concept or relation. For example, “IFCBUILDINGSTOREY” is an
entity representing a concept “building storey”, and “IFCRELVOIDSELEMENT” is
an entity representing a relation “voids element” that defines the relation between an
“opening element” and the “void” made by the “opening element”.


Output: Logic Facts. The proposed method outputs text files carrying
processed information represented as logic facts, following the previously proposed
IRep and CR schema (presented in Zhang and El-Gohary 2014a). In the proposed
IRep and CR schema, two types of facts are used: concept facts and relation facts. A
concept fact defines a constant as an instance of a certain concept. For example,
“door(door6652)” defines the constant “door6652” as an instance of the “door”
concept. A relation fact defines a relationship between an instance of a concept and
an instance of another concept or a value. For example, has(project34, site38274)
defines the association relation between an instance of project “project34” and an
instance of site “site38274.” The number in an instance is the line number of the
instance in its source IFC file. The use of these line numbers satisfies three purposes:
(1) identifying instances, (2) distinguishing instances, and (3) establishing links
between the logic facts and lines in their IFC source file. Both concept facts and
relation facts are represented as predicates. A predicate is the building block of a
logic clause. A predicate consists of a predicate symbol and one or more arguments
(i.e., constants or variables) in parentheses following the predicate symbol [e.g., the
predicate “door(door6652)” has one predicate symbol “door” and one argument
“door6652”, where “door6652” is a constant].

Information Extraction. In the IE step, JSDAI is used to access the entities


and attributes in the IFC file. Between the two access types of early binding and late binding, late binding was selected, because the proposed method is designed to support BIMs based on different versions of the IFC schema (e.g., IFC 2x3, IFC2x3-TC1, IFC4). In this manner, even future BIMs based on future IFC schema releases (which are not yet published or known) could still be supported by the proposed method, provided that future IFC releases continue to use the EXPRESS language. The proposed IE step processes information according to the metadata at the EXPRESS data schema level (i.e., the schema of the schema of BIMs). The algorithm for
this IE step is shown in Figure 2. The algorithm first initializes all variables to be
used and compiles the version of IFC schema to use, then processes the lines in an
IFC file one by one. The processing of each line uses a subroutine S1. In S1, if the
entity represented by the line being processed is of aggregate type, then S1 is
recursively called on each sub-entity of the aggregate entity. Otherwise, the names of
all attributes of the entity being processed are looked up in the compiled IFC schema,
and the values of all attributes of the entity being processed are then accessed. At the end of S1, the following information for the processed entity is stored into a tuple:
(1) the name of the entity; (2) the line number of the entity; (3) the list of attribute
names of the entity; and (4) the values of the attributes of the entity. Figure 3 shows
an example of the processing, where the entity in line “#36686” from Part I generates
the corresponding beam tuple in Part II.
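
To make the data flow concrete, the following minimal Java sketch parses one STEP line into such a tuple; it is a simplified stand-in that uses plain string handling and a hard-coded attribute-name map in place of JSDAI late binding and the compiled IFC schema, and it ignores aggregates (subroutine S1) and entity references.

import java.util.*;

public class TupleSketch {
    // Stand-in for the compiled IFC schema: entity name -> ordered attribute names.
    static final Map<String, List<String>> SCHEMA = new HashMap<String, List<String>>();
    static {
        SCHEMA.put("IFCLOCALPLACEMENT", Arrays.asList("placementrelto", "relativeplacement"));
    }

    // Parse one STEP line, e.g. "#36605=IFCLOCALPLACEMENT(#38,#36604);", into the
    // tuple (entity name, line number, attribute names, attribute values).
    static String toTuple(String stepLine) {
        int eq = stepLine.indexOf('='), par = stepLine.indexOf('(');
        String lineNum = stepLine.substring(1, eq);
        String entity = stepLine.substring(eq + 1, par);
        String args = stepLine.substring(par + 1, stepLine.lastIndexOf(')'));
        List<String> values = Arrays.asList(args.split(","));  // ignores nested aggregates
        return "(" + entity + ", " + lineNum + ", " + SCHEMA.get(entity) + ", " + values + ")";
    }

    public static void main(String[] args) {
        System.out.println(toTuple("#36605=IFCLOCALPLACEMENT(#38,#36604);"));
    }
}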

Information Transformation. In the ITr step, semantic NLP is used to


transform the extracted tuple-represented entities and attributes into logic facts. This
step includes two main processing subtasks: (1) semantic look up of entity names and
attribute names; and (2) transformation of entities and attributes into concept facts
and relation facts.


[Figure 2 flowchart: initialize variables and compile the IFC schema to use; read in each line of the input until the end of lines; for each line (subroutine S1), look up the name, line number, and type of the entity; if the entity is of aggregate type, iterate through each sub-entity and process it with subroutine S1; otherwise, look up the entity in the compiled IFC schema to get the names of its attributes A1, A2, ..., An, get the value of each attribute as V1, V2, ..., Vn, and store (E, Lnum, [A1, A2, ..., An], [V1, V2, ..., Vn]) as a tuple.]

Figure 2. Proposed IE algorithm.

In the semantic look up subtask, each extracted entity name, attribute name,
and attribute value (i.e., if the attribute value is of entity type or enumeration type) is
looked up in the used IFC schema version. The matched name or enumeration type
value in the IFC schema is then used to convert the extracted name/value into
underscore-connected terms. For example in Figure 3, an entity
“OWNERHISTORY” is looked up in the IFC schema to find the matched entity
name “IfcOwnerHistory”. Then the term boundary information in “IfcOwnerHistory”
(i.e., represented by capitalization) is used to convert the extracted
“OWNERHISTORY” to “owner_history.” This is needed because the output of this ITr step is used to instantiate logic rules that are based on the semantic logic-based representation.
To enable that instantiation, the semantic information of each term in an entity or
attribute name needs to be semantically matched with terms in concepts and relations
from regulatory requirements (represented in logic rules). In the entities/attributes
transformation subtask, a rule-based NLP approach is selected (Zhang and El-Gohary
2014a). Three main NLP-based transformation rules are used: (1) an entity is
transformed into a concept fact (i.e., a predicate) by using the name of the entity as
the name of the predicate and using the name of the entity concatenated with the line
number as the argument (i.e., an entity constant) of the predicate. For example, in
Figure 3, the beam entity is transformed into a concept fact “beam(beam36686),”
with the name of the entity “beam” being the predicate name and the concatenation of
the entity name and the line number “beam36686” as the predicate argument; (2) an
attribute of an entity is transformed into a relation fact (i.e., a predicate) by using the
name of the attribute preceded by “has_” as the name of the predicate, using the


corresponding entity constant as the first argument of the predicate, and using the
value of the attribute as the second argument of the predicate (if the value is not a
reference to another entity). For example, in Figure 3, the attribute “global_id” for the
beam entity is transformed into a relation fact “has_global_id(beam36686,
2OrWItJ6zAwBNp0OUxK_l8);” and (3) if the value of an attribute is a reference to
another entity, then the referred entity constant is used as the second argument of the
predicate. For example, in Figure 3, the attribute “owner_history” for the beam entity
is transformed into a relation fact with the referred entity constant owner_history33 as
the second argument: “has_owner_history(beam36686,owner_history33).”
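
A minimal Java sketch of this ITr step is shown below for the beam example of Figure 3; the simplified tuple, the hard-coded reference table, and the abridged attribute list are illustrative assumptions, and only the underscore conversion and the three transformation rules described above are mirrored.

import java.util.*;

public class ItrSketch {
    // Capitalization-based conversion of an IFC schema name into underscore-connected
    // terms, e.g. "IfcOwnerHistory" -> "owner_history" (the "Ifc" prefix is dropped).
    static String toTerms(String ifcName) {
        String s = ifcName.startsWith("Ifc") ? ifcName.substring(3) : ifcName;
        return s.replaceAll("(?<=[a-z])(?=[A-Z])", "_").toLowerCase();
    }

    public static void main(String[] args) {
        // Illustrative inputs: an abridged beam tuple from Figure 3 and a lookup table
        // mapping referenced line numbers to the converted names of the referenced entities.
        String entity = "beam";
        int lineNum = 36686;
        Map<String, String> attributes = new LinkedHashMap<String, String>();
        attributes.put("global_id", "2OrWItJ6zAwBNp0OUxK_l8");
        attributes.put("owner_history", "#33");
        attributes.put("tag", "207325");
        Map<String, String> referenced = new HashMap<String, String>();
        referenced.put("#33", "owner_history");

        // Rule 1: an entity becomes a concept fact, e.g. beam(beam36686).
        String constant = entity + lineNum;
        System.out.println(entity + "(" + constant + ").");

        for (Map.Entry<String, String> a : attributes.entrySet()) {
            String value = a.getValue();
            if (value.startsWith("#")) {
                // Rule 3: a reference value is replaced by the referred entity constant,
                // e.g. "#33" -> owner_history33.
                value = referenced.get(value) + value.substring(1);
            }
            // Rule 2: an attribute becomes a relation fact named has_<attribute>.
            System.out.println("has_" + a.getKey() + "(" + constant + "," + value.toLowerCase() + ").");
        }

        System.out.println(toTerms("IfcOwnerHistory"));  // owner_history
    }
}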

Part I (lines in IFC-based BIMs):
#33=IFCOWNERHISTORY(#32,#2,$,.NOCHANGE.,$,$,$,0);
#36605=IFCLOCALPLACEMENT(#38,#36604);
#36685=IFCPRODUCTDEFINITIONSHAPE($,$,(#36602,#36684));
#36686=IFCBEAM('2OrWItJ6zAwBNp0OUxK_l8',#33,'M_W-Wide Flange:W310X60:W310X60:207325',$,'M_W-Wide Flange:W310X60:208816',#36605,#36685,'207325');

Part II (tuples as output of IE):
(OWNERHISTORY, 33, [owninguser,owningapplication,state,changeaction,lastmodifieddate,lastmodifyinguser,lastmodifyingapplication,creationdate], [#32,#2,none,NOCHANGE,none,none,none,0]);
(LOCALPLACEMENT, 36605, [placementrelto,relativeplacement], [#38,#36604]);
(PRODUCTDEFINITIONSHAPE, 36685, [name,description,representations], [none,none,(#36602,#36684)]);
(BEAM, 36686, [globalid,ownerhistory,name,description,objecttype,objectplacement,representation,tag], ['2OrWItJ6zAwBNp0OUxK_l8',#33,'M_W-Wide Flange:W310X60:W310X60:207325',$,'M_W-Wide Flange:W310X60:208816',#36605,#36685,'207325']);

Part III (tuples after semantic look up):
(owner_history, 33, [owning_user,owning_application,state,change_action,last_modified_date,last_modifying_user,last_modifying_application,creation_date], [#32,#2,none,NOCHANGE,none,none,none,0]);
(local_placement, 36605, [placement_rel_to,relative_placement], [#38,#36604]);
(product_definition_shape, 36685, [name,description,representations], [none,none,(#36602,#36684)]);
(beam, 36686, [global_id,owner_history,name,description,object_type,object_placement,representation,tag], ['2OrWItJ6zAwBNp0OUxK_l8',#33,'M_W-Wide Flange:W310X60:W310X60:207325',$,'M_W-Wide Flange:W310X60:208816',#36605,#36685,'207325']);

Part IV (logic facts as output of ITr):
owner_history(owner_history33).
local_placement(local_placement36605).
product_definition_shape(product_definition_shape36685).
beam(beam36686).
has_global_id(beam36686,2orwitj6zawbnp0ouxk_l8).
has_owner_history(beam36686,owner_history33).
has_name(beam36686,m_w-wide_flange:w310x60:w310x60:207325).
has_object_type(beam36686,m_w-wide_flange:w310x60:208816).
has_object_placement(beam36686,local_placement36605).
has_representation(beam36686,product_definition_shape36685).
has_tag(beam36686,207325).

Figure 3. An example illustrating the processes of the proposed method.

PRELIMINARY EXPERIMENTAL RESULTS AND ANALYSIS

For experimental purposes, the proposed IE and ITr algorithms were


implemented in JAVA Standard Edition Development Kit jdk1.7.0_40. The JSDAI
v4 was used in the experiment to access IFC-based BIMs. The Duplex Apartment
Project from buildingSMARTalliance of the National Institute of Building Sciences
(East 2015) was selected as the source of data. The IFC file
“Duplex_A_20110907.ifc” was selected for testing; it includes 38,898 lines of
data. Out of the 38,898 lines of data, 100 lines were randomly selected as a testing
sample. A gold standard was developed by manually interpreting the entities and


attributes in these 100 lines and generating the target logic facts for them. The
proposed automated IE and ITr algorithms were applied to the
“Duplex_A_20110907.ifc” file, and the output results were compared with the
manually developed gold standard. The experimental results are summarized in Table
1. For the 100 concept facts and 328 relation facts corresponding to the 100 lines of
data, 100% precision was achieved. In addition, it took only 15.02 seconds to process
all the 38,898 lines of data. Yet, two limitations of the proposed method are
identified: (1) some output logic facts are not interpretation-friendly. For example, the
universal pre-fix for relation facts (i.e., “has_”) does not semantically fit in cases like
the following predicate P1; (2) the relations represented in the IFC file are not
perfectly aligned with the authors’ proposed semantic logic-based representation. For
example, an explicit relation entity in IFC is typically represented by two predicates
(e.g., P2 and P3) whereas in the semantic logic-based representation it is represented
by only one predicate (e.g., P4).

P1: has_for_layer_set(material_layer_set_usage21369,material_layer_set21320).
P2: has_relating_space(rel_space_boundary38711,space514).
P3: has_related_building_element(rel_space_boundary38711,covering23992).
P4: has_space_boundary(space514, covering23992).

Table 1. Preliminary Experimental Results.


Number/Measure Concept Facts Relation Facts
In Gold Standard 100 328
Extracted 100 328
Correctly Extracted 100 328
Precision 100% 100%

CONCLUSION AND FUTURE WORK

This paper presents a new BIM IE method for automatically extracting design
information from IFC-based BIMs and transforming the extracted information into
logic facts. The proposed method is intended to support automated reasoning for
automated compliance checking. Beyond this application, the proposed method could
be further used to assist other analysis and reasoning applications that use IFC-based
BIMs, because it provides semantic logic-represented information that is human-interpretable and computer-processable. The method utilizes JSDAI and semantic
NLP techniques. It could process BIMs based on different versions of IFC schema.
The method was tested on processing information in the Duplex Apartment Project
from buildingSMARTalliance of the National Institute of Building Sciences.
Compared to a manually developed gold standard, the experimental results showed 100% precision. In addition, the processing of 38,898 lines of data took only 15.02
seconds. In their future work, the authors will further refine the proposed method to
make its output logic facts better aligned with regulatory requirements represented as
logic rules.


ACKNOWLEDGEMENT

This material is based upon work supported by the National Science


Foundation under Grant No. 1201170. Any opinions, findings, and conclusions or
recommendations expressed in this material are those of the authors and do not
necessarily reflect the views of the National Science Foundation.

REFERENCES

ISO. (2004). “ISO 10303-11:2004 - Part 11: Description methods: The EXPRESS
language reference manual.”
<http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csn
umber=38047> (Dec. 05, 2014).
BuildingSMART. (2014). “Industry Foundation Classes (IFC) data model.” <
http://www.buildingsmart-tech.org/specifications/ifc-overview> (Dec. 06,
2014).
Cherpas, C. (1992). “Natural language processing, pragmatics, and verbal behavior.”
Anal. Verbal Behav., 10(1992), 135–147.
El-Gohary, N. M., and El-Diraby, T. E. (2010). “Domain ontology for processes in
infrastructure and construction.” J. Constr. Eng. Manage., 136(7), 730–744.
Garrett, Jr., J.H., and Palmer, M.E. (2014). “Delivering the infrastructure for digital
building regulations.” J. Comput. in Civ. Eng., 28, 167-169.
Marquez, L. (2000). “Machine learning and natural language processing.” Proc.,
“Aprendizaje automatico aplicado al procesamiento del lenguaje natural”.
East, E.W. (2015). “Common building information model files and tools.” <
http://www.nibs.org/?page=bsa_commonbimfiles> (Mar. 03, 2015).
Zhang, J., and El-Gohary, N.M. (2014a). “Automated information transformation for
automated regulatory compliance checking in construction.” J. Comput. in Civ.
Eng., accepted.
Zhang, J., and El-Gohary, N.M. (2014b). “Logic-based automated reasoning for
construction regulatory compliance checking.” J. Comput. in Civ. Eng.,
submitted.
Zhang, J., and El-Gohary, N.M. (2013). “Semantic NLP-based information extraction
from construction regulatory documents for automated compliance checking.”
J. Comput. in Civ. Eng., published ahead of print.
Zhong, B. T., Ding, L. Y., Luo, H. B., Zhou, Y., Hu, Y. Z., and Hu, H. M. (2012).
“Ontology-based semantic modeling of regulation constraint for automated
construction quality compliance checking.” Autom. Constr., 28(2012), 58–70.
Zhou, N. (2012). “B-Prolog user’s manual (version 7.8): Prolog, agent, and constraint
programming.” Afany Software.
<http://www.probp.com/manual/manual.html> (Dec. 28, 2013).
Zhou, P., and El-Gohary, N. (2014). “Ontology-based multi-label text classification
for enhanced information retrieval for supporting automated environmental
compliance checking.” Computing in Civil and Building Engineering (2014),
ASCE, Reston, VA, 2238-2245.


Electrical Contractors' Safety Risk Management: An Attribute-Based Analysis

Pouya Gholizadeh1 and Behzad Esmaeili2

1Ph.D. Student, Durham School of Architectural Engineering and Construction, University of Nebraska–Lincoln, 242 The Peter Kiewit Institute, 1110 S. 67th Street, Omaha, NE 68182. E-mail: [email protected]
2Assistant Professor, Durham School of Architectural Engineering and Construction, University of Nebraska–Lincoln, 113 Nebraska Hall, Lincoln, NE 68588. E-mail: [email protected]

Abstract
Given the critical role of specialty contractors in the construction industry, who are involved in manual construction tasks and are therefore exposed to hazardous attributes more than others, finding innovative ways to enhance the safety performance of these contractors is promising. To address this challenge, the objective of this study is to utilize a recently developed attribute-based risk assessment method and create a safety risk database, which will then be used to develop a safety-risk assessment tool (a mobile application) that can be operated by designers, project managers, and particularly specialty contractors to identify and assess the risk of construction activities. The study focuses on electrical work, as this sector is one of the most hazardous trades in the construction industry (e.g., contact with electricity is the fourth main cause of fatalities in the industry). To build upon a reliable dataset, we obtained 323 accident reports from the OSHA IMIS database to create attribute-based safety risk models. The attributes of construction objects and work tasks that cause injuries to electrical contractors were identified through a content analysis of the injury reports, and the relative risk associated with each attribute was quantified by recording the outcomes of the injuries. To predict the potential outcome of injuries, a principal component analysis (PCA) and a generalized linear model (GLM) were conducted on the attribute-based database, and the predictive power of the developed models was calculated using a ranked probability skill score (RPSS). Ultimately, a mobile application was developed to disseminate the findings.

Keywords: Construction safety; Risk management; Electrical contractors

INTRODUCTION
The construction industry presents unique challenges for occupational health
research and prevention because it involves a large number of relatively small
employers, multiemployer work sites, many hazardous exposures, and a highly mobile
workforce. This diversity complicates worker safety, especially within arenas that
frequently interact with electricity. Contact with electric current is a major cause of
injury and death among construction workers (Janicak, 2008). In 2012, the Census of
Fatal Occupational Injuries (CFOI) data produced by the Bureau of Labor Statistics
(BLS) indicated that contact with electric current was the fourth leading cause of work-related deaths, after falls, transportation incidents, and contact with objects and
equipment (BLS, 2012). Therefore, finding innovative ways to identify, assess, and


mitigate electrocution hazards in early stages of a project would save lives and prevent
injury.
For a safe completion of a project, safety hazards should be predicted before
starting activities. This enables safety managers to employ proactive measures to avoid
or reduce the hazards. The objective of safety predictive models is to establish a
relationship between safety performance (dependent variable) and some measurable
factors (independent variables) that may contribute to predict safety-related outcomes
(see Figure 1). There are several methods to measure safety performance (i.e., the
dependent variable) such as accident statistics, accident control charts, attitude scales,
severity sustained by the workers, safe behavior, and identifiable hazards (Brauer 1994,
Gillen et al. 2002; Cooper and Phillips 2004, Esmaeili and Hallowell 2012). A variety
of factors have been used to measure the predictor variables such as safety attitudes,
practices and characteristics of construction firms, safety program elements employed,
and construction trades and activities. Safety predictive models vary according to the
nature of these different types of predictive variables.

Figure 1. Independent and dependent variables in safety predictive models

An extensive literature review of previous studies indicated that there are three
limitations of the existing safety predictive models (Tam and Fung 1998; Gillen et al.
2002; Chen and Yang 2004; Fang et al. 2006; Rozenfeld et al. 2010): (1) most of the
previous models are based on subjective data obtained from field personnel; (2) these
models focus on unsafe behavior and ignore the importance of physical unsafe
conditions; and (3) the proposed models cannot be integrated into the preconstruction
safety activities.
To address these limitations, the objective of our study is to develop safety
predictive models to forecast the probability of different injury outcomes and integrate
the results into a mobile risk assessment tool. This study departs from the current body
of knowledge by developing a novel mathematical model to predict hazardous situations in the early stages of a project. It is expected that the employed approach and the resulting models will significantly improve proactive safety management. Specifically, the predictive models can help practitioners consider safety during design, choose alternative means and methods of construction, identify high-risk periods of a project, and select injury prevention practices more strategically.

RESEARCH DESIGN
The objectives of this research are to identify construction safety risk attributes
specific to electric work, develop regression models to predict potential severity of
accidents, and create a mobile application that performs the risk-assessment and
provides mitigation plans. The specific research protocol for each objective and the


contribution to be made is discussed in detail below. For clarification, the different steps
conducted in the study are summarized in Figure 2.

Figure 2 – Research Framework

Predictive models and risk assessment techniques that are based on empirical data provide higher validity for the users. Therefore, we decided to use the Integrated Management Information System (IMIS) database of accidents because this database is publicly available and includes a wide range of incidents described by OSHA
compliance officers. We limited the scope of study to industry group 1731 (electrical
work) from the OSHA IMIS database and collected 325 accident reports which were
reported from January 2009 to October 2012.

Identifying Attributes
The corresponding research objective is to identify and classify the attributes of
construction objects and work tasks that cause injuries through a content analysis of
accident reports. Content analysis is an empirically grounded, scientific method that helps researchers gain insights into specific issues and quantify the frequency and distribution of content in textual data (Krippendorff, 2004). As safety-risk attributes are latent in accident reports and identifying them requires recognizing patterns in those reports, we decided to conduct a content analysis of the accident reports obtained from the IMIS database.

Developing Predictive Models


In general, regression techniques aim to model the relationships among variables by quantifying the magnitude to which a response variable is related to a set of explanatory variables. In this study, the independent variables are the safety attributes and the dependent variables are the outcomes of injuries. The results of the content analysis were used as the main
dataset for creating predictive models. To reduce the dimension of the data and remove
collinearity among variables, we conducted principal component analysis (PCA). This
technique helped us to reduce the dimensionality of the dataset which consisted of a
large number of interrelated variables by retaining the maximum possible variance. The
algorithm of PCA is implemented through the “prcomp” function in R (Stats Package),
which is an open source statistical program.
As the response variables in this study are categorical (e.g. severe/mild/non
severe), we used Generalized Linear Models (GLM). This modeling technique provides


a very flexible approach for exploring the relationships among a variety of variables
(discrete, categorical, continuous and positive, extreme value) as compared to
traditional regression (McCullagh and Nelder, 1989). Model parameters in GLM were
determined in an iterative process called iterated weighted least squares (IWLS). This
algorithm was implemented by default through R’s standard GLM libraries, such as
“MASS,” “VGAM,” and “nnet.” To avoid over-fitting of the data and find a “best
model” that contains the right quantity of variables, we adopted a stepwise regression
approach that minimizes the Akaike Information Criterion (AIC) instead of a likelihood
function to evaluate goodness of fit in the stepwise search. This helped us to create
predictive models that reproduce the variance of the observations with the fewest
number of parameters (Wilks, 1995).
It is necessary to measure the predictive power of the models. In this study, the performance of the model will be measured against the observed data through a ranked probability skill score (RPSS), which indicates the degree to which the model predicts the
observed data. This method has been used in various climatological contexts to compare
the model’s skill in predicting categorical rainfall and stream flow quantities (Regonda
et al., 2006). A detailed description of the RPSS method has been provided by Wilks
(1995).

Developing Mobile Application Tool


Empowered with an easy risk assessment tool that automatically evaluates
alternative work plans against quantifiable safety-risk considerations, managers may
choose the plans that reduce safety hazards the most. Therefore, we also developed a
mobile application using Adobe® Captivate® 6 software (Adobe, 2013). The mobile
application enables designers to select hazardous attributes from a predefined checklist
and calculate the probability of fatality or severe injuries in the site. Alongside
predictive models, the requirements provided in OSHA’s Code of Federal Regulations
(CFR) 1926 Subpart K as well as referenced standards 1910.333 and 1910.334 are
incorporated into the program to help designers modify the construction features
according to regulatory guidelines.

RESULTS AND ANALYSIS


A content analysis was conducted on the accident reports to identify safety attributes and quantify their frequencies. In total, 22 attributes were identified; their frequencies are summarized in Table 1. In addition to recording the occurrence of attributes, we recorded the outcome of each incident as the dependent variable. It was expected that fatal and severe accidents would dominate the accident outcomes, since the IMIS database includes OSHA-recordable injuries that have severe consequences. In total, 21 different kinds of incident outcomes were identified from the accident reports. However, this characteristic can cause problems for predictive models because some of these outcomes have a very low frequency rate and would be difficult to predict accurately. To resolve this challenge, we categorized the response variables into three main groups: not severe, mild, and severe. This helped us balance the number of responses across relatively equal groups.


Table 1. Safety risk attributes and their frequencies


Attributes Frequency
1 Working on/near live wiring or energized circuit 18%
2 Working on ladders 17%
3 Possibility of arc 8%
4 Working on aerial platform (lift, boomed vehicles) 6%
5 Working on/near other components (transformers/powerline) 5%
6 Working with electric light fixture 5%
7 Working on unsecured or unstable surface 4%
8 Working in a hanging position or on structural frames 4%
9 Struck by moving equipment 4%
10 Struck by falling or flying objects 4%
11 Working near unprotected edge/opening 3%
12 Working with electric tools 3%
13 Working on/near transformers 3%
14 Using and maintaining heavy equipment 3%
15 Working near active roadway - railway 3%
16 Lifting or transporting heavy materials 2%
17 Working on scaffold 2%
18 Working in swing area of a boomed vehicle 2%
19 Working near overhead powerline 2%
20 Using cranes or boomed vehicles near energized powerline 1%
21 High temperature (dehydration/heat exhaustion) 1%
22 Working on a piece of equipment 1%

To explore the possibility of reducing the dimensionality of the potential


predictor variables (attributes), principal component analysis (PCA) was conducted on
the dataset. Then, the number of PCs to be used in the GLM was selected by visually investigating a scree plot of the variance captured. We selected the first three PCs, which capture 42.73% of the variance. As shown in Table 2, the PCA successfully reduced the number of variables that should be entered into the regression model from twenty-two to five.
Table 2. Principal component analysis results
Variables Loading Variance (%)
PC #1: Working near energized circuit on the ground
Possibility of arc 0.369
Working on/near live wiring or energized circuit 0.844
Working on ladders -0.337 18.51
PC #2: Working near energized circuit on ladder
Working on/near live wiring or energized circuit -0.313
Working on ladders -0.871 15.73
PC #3: Working near energized circuit on aerial platform
Possibility of arc 0.726
Working on/near live wiring or energized circuit -0.279
Working on/near other components (transformers/powerline) 0.417
Working on aerial platform (lift, boomed vehicles) -0.260 8.30
Cumulative variance (%) - 42.54


After selecting the PCs, GLMs with a logit link function were fit to the selected
PCs to predict the probability of not severe, mild, and severe injuries. In addition, to
find the parameter set that minimized the model AIC, a stepwise regression approach
was employed. Therefore, the number of variables used in the model was reduced once more. Then, the logit link function was used to relate the linear predictors to the response probabilities. The results of the stepwise generalized linear models are summarized in Table 3. This table presents the values of the parameters that can be entered into the simultaneous equations to predict the probability of various types of injuries based on the existence of certain attributes.
Table 3. Overall results of stepwise generalized linear models
Predictor Estimate Std. Error
Intercept-1 0.669 0.209
Intercept-2 0.810 0.201
PC1-1 -2.720 0.493
PC1-2 -1.725 0.426
PC3-1 -3.537 0.757
PC3-2 -3.997 0.718
* All parameters are significant to p< 0.1.
By estimating the parameters (β), the two link functions g1(x) and g2(x) can be calculated, and by back-transforming the link functions with the inverse logit, the probabilities of not severe, mild, and severe injuries are obtained. As one can see, there are two values for each parameter in Table 3: the first values are the parameters of g1(x), and the second values are the parameters of g2(x). The probabilities of the response variables can be calculated by solving the simultaneous equations that relate g1(x) and g2(x) to the three injury-severity probabilities (Equations 1–3).
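
The exact forms of Equations 1–3 are defined by the fitted model; purely as a hedged illustration, assuming a cumulative (ordinal) logit formulation in which the Table 3 estimates enter two linear predictors additively, the equations would take a form such as:

g_1(x) = 0.669 - 2.720\,PC_1 - 3.537\,PC_3, \qquad g_2(x) = 0.810 - 1.725\,PC_1 - 3.997\,PC_3

P(\text{not severe}) = \frac{1}{1 + e^{-g_1(x)}}

P(\text{not severe}) + P(\text{mild}) = \frac{1}{1 + e^{-g_2(x)}}

P(\text{severe}) = 1 - \frac{1}{1 + e^{-g_2(x)}}

A multinomial (baseline-category) logit with the same two linear predictors would be an alternative reading; in either case, the three probabilities are obtained by back-transforming g1(x) and g2(x).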
While the mathematics behind the models is complicated, the findings can be easily used in practice. To find the probability of severe, mild, and not severe injuries for an activity, one should select the potential attributes that workers will be exposed to while conducting the activity. Then, by solving the above-mentioned equations, the probability of not severe, mild, and severe accidents can be calculated.
The RPSS of the model is calculated as 0.185, which represents strong model performance. One should note that the expected value of the RPSS for an unskilled forecast is less than zero (Mason 2004), which means that any value greater than zero indicates performance superior to the reference forecast. A diagnostic test was also conducted, and the residual plots (actual Y minus predicted Y) versus predicted Y show a random distribution, which confirms that the normality assumption is valid. There was no need to analyze multicollinearity among the variables, because the PCs are orthogonal and PCA removes any multicollinearity.
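
For reference, the standard definitions of the ranked probability score and its skill score (e.g., Wilks 1995) over the J = 3 ordered severity categories, with Y_m and O_m the cumulative forecast and observed probabilities up to category m, are:

\mathrm{RPS} = \sum_{m=1}^{J} \left( Y_m - O_m \right)^2, \qquad \mathrm{RPSS} = 1 - \frac{\overline{\mathrm{RPS}}}{\overline{\mathrm{RPS}}_{\mathrm{reference}}}, \qquad Y_m = \sum_{k \le m} y_k, \; O_m = \sum_{k \le m} o_k

so RPSS values greater than zero indicate that the model outperforms the reference forecast.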


PRACTICAL APPLICATIONS
There are several practical implications of the developed risk database and predictive models; for example, they can be integrated into building information modeling (BIM) software. One of the major barriers to a wide implementation of BIM-based safety management programs is the lack of valid safety risk data. The results of this study address this challenge by presenting a limited number of attributes that can be used to measure safety risk and predict the severity of possible injuries. Several safety-related feedback items and reports can be created by a BIM-based safety management program using the proposed attribute-based safety risk management. After assigning the attributes to tasks and objects, hazard analysis can be conducted in the planning phase of the project. The hazards identified in this study can also be used by designers to modify their designs to improve safety. If the hazards cannot be prevented during design, more attention should be paid to mitigating them during the construction phase. In addition, a project manager can compare alternative means and methods to see which ones pose more hazards to the workers. Furthermore, a supervisor can identify hazardous activities or situations to highlight during job hazard analyses or toolbox meetings. The database can also be used to develop safety risk profiles. Using risk profiles provides an opportunity for managers to allocate safety resources in accordance with risk fluctuations in the project schedule. As mentioned, to facilitate dissemination of the results, a safety risk assessment mobile application was developed. Screenshots of the tool are provided in Figure 3. In future steps of this study, the mobile application will be validated by safety managers.

Figure 3. (a) Main page; (b) Overview; (c) Risk assessment

CONCLUSIONS
Predicting hazardous situations and the probability of injuries before the start of a project is an essential step towards adopting preconstruction safety activities and mitigating risks. The current safety predictive models are not based on objective data and can be implemented only in the construction phase of a project. To address this limitation and facilitate the adoption of preconstruction safety activities, predictive models were developed that forecast the severity of possible injuries using fundamental safety attributes. The models have several advantages over previous studies. First, the models are based on a reliable empirical database developed by conducting a content analysis of a large number of accident reports obtained from the OSHA IMIS database. For the first time, there was a dataset of sufficient size and quality to apply statistical techniques and create mathematical models. Second, the predictive model shows which attributes are more critical in causing accidents. Therefore, project managers can mitigate the risk of


injuries by focusing on a limited number of attributes. Third, the results of the study and the developed predictive models can be integrated into building information modeling to enhance preconstruction safety practices. Ultimately, it is expected that the predictive models could drastically change the way potential injuries are considered during planning, project financing, and safety controls.

REFERENCES
Adobe® Captivate® 6. (2013). Adobe Systems, 345 Park Avenue, San Jose, CA 95110-
2704. Available from: http://www.adobe.com/products/captivate/.
Brauer, R. L. (1994). Risk management and assessment. Safety and Health for
Engineers, Van Nostrand Reinhold, New York, 572-543.
Bureau of Labor Statistics (BLS). (2012). Census of Fatal Occupational Injuries (CFOI).
http://www.bls.gov/iif/oshwc/cfoi/cftb0268.pdf.
Chen, J. R., & Yang, Y. T. (2004). A predictive risk index for safety performance in
process industries. Journal of Loss Prevention in the Process Industries, Vol.
17, pp. 233–242.
Cooper, M. D., & Phillips, R. A. (2004). Exploratory analysis of the safety climate and
safety behavior relationship. Journal of Safety Research, Vol. 35, pp. 497–512.
Esmaeili, B. (2012). Identifying and quantifying construction safety risks at the attribute
level. Unpublished thesis (PhD), University of Colorado at Boulder.
Esmaeili, B., Hallowell, M. R., (2011). Using network analysis to model fall hazards on
construction projects. Safety and Health in Construction, CIB W099, August 24-
26, 2011, Washington DC.
Esmaeili, B., Hallowell, M. R., (2012). Attribute safety risk model for measuring safety
risk of struck-by accidents. In The 2012 Construction Research Congress
(CRC), May 21-23, West Lafayette.
Fang, D.P., Chen, Y., Louisa, W., (2006). Safety climate in construction industry: a case
study in Hong Kong. Journal of Construction Engineering and Management.
Vol. 132(6), pp. 573–584.
Gillen, M., Baltz, D., Gassel, M., Kirch, L., & Vaccaro, D. (2002). Perceived safety
climate, job demands, and coworker support among union and nonunion injured
construction workers. Journal of Safety Research, Vol. 33, pp. 33–51.
Hinze, J., Huang, X., and Terry, L. (2005). The nature of struck-by accidents. Journal of
Construction Engineering and Management., Vol. 131 (2), pp. 262-268.
Janicak, C. A. (2008). “Occupational fatalities due to electrocutions in the construction
industry.” Journal of Safety Research, 39: 617-621.
Joliffe, I. T., (2002). Principal Component Analysis. (2nd edition), Springer-Verlag.
Krippendorff, K. (2004). Content analysis: An introduction to its methodology.
Thousand Oaks, CA: Sage.
Mason, S. J. (2004). On Using Climatology as a Reference Strategy in the Brier and
Ranked Probability Skill Scores. Monthly Weather Review, Vol. 132, 1891-
1895.
McCullagh, P. and J.A. Nelder (1989). Generalized linear models. Chapman and Hall,
London.
Regonda, S., Rajagopalan B., Clark M. (2006). “A new method to produce categorical
streamflow forecasts.” Water Resources Research, 42.


Rozenfeld, O., Sacks, R., Rosenfeld, Y., Baum, H., (2010). Construction job safety
analysis. Safety Science, Vol. 48, pp. 491–498.
Tam, C.M., Fung, I.W.H., (1998). Effectiveness of safety management strategies on
safety performance in Hong Kong. Construction Management and Economics.
Vol. 16(1), pp. 49–55.
Wight, P.C., Blacklow, B. and Tepordei, G.M. (1995). An epidemiologic assessment of
serious injuries and fatalities among highway and street construction workers.
Department of Transportation Grant No. DTFH61-93-X-00024.
Wilks D. S. (1995). Statistical methods in the atmospheric sciences. Academic Press.


Ontology-Based Information Extraction from Environmental Regulations for Supporting Environmental Compliance Checking

Peng Zhou1 and Nora El-Gohary2

1Graduate Student, Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, 205 North Mathews Avenue, Urbana, IL 61801. E-mail: [email protected]
2Assistant Professor, Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, 205 North Mathews Avenue, Urbana, IL 61801. E-mail: [email protected]

Abstract

Automated environmental regulatory compliance checking requires


automated extraction of regulatory requirements/rules from environmental
regulatory textual documents, such as energy conservation codes and
Environmental Protection Agency (EPA) regulations. Natural language processing
(NLP) aims to enable computers to analyze and process natural text in a
human-like manner. Information extraction (IE) is an application of NLP that
aims to automatically extract specific information from text to support a specific
computational task. In the proposed automated compliance checking (ACC)
approach, after classifying the text for filtering out irrelevant regulatory
provisions, pattern-matching-based IE techniques are used for extracting
regulatory information, from the classified text, into certain predefined semantic
patterns. In their previous work, the authors have proposed a semantic, rule-based
methodology and algorithm for extracting information from building codes. This
paper builds on the authors’ previous work in three main ways. First, the proposed
IE algorithm is used in combination with text classification (TC) algorithms to
enhance the efficiency (by avoiding unnecessary computational processing of
irrelevant text) and performance (by avoiding potential noise and errors resulting
from processing irrelevant text) of IE. Second, the IE algorithm is adapted to
environmental regulatory text, which is different from building codes in terms of
its syntactic and semantic features. Third, to enhance performance, a deeper (more
detailed) ontology is used and a conceptual dependency structure is built to
capture dependency information to reduce text ambiguities. The proposed IE
algorithm was tested in extracting regulatory requirements from the 2012
International Energy Conservation Code, and the testing results showed 99.85%
recall and 99.55% precision.


INTRODUCTION

Environmental compliance checking helps projects comply with


regulations and legislations, such as International Energy Conservation Code
(IECC), Clean Water Act, etc. Since manual compliance checking is
time-consuming (Eastman et al. 2009; Tan et al. 2010), much research effort has been devoted to automating the compliance checking process to reduce the checking
time. Some state-of-the-art research efforts include: (1) modeling regulatory
constraints using ontology to support construction quality compliance checking
(Zhong et al. 2012); and (2) encoding regulatory knowledge to support building
design compliance checking (Dimyadi et al. 2014; Tan et al. 2010). Despite their significance, these efforts still require intensive manual effort to extract regulatory requirements and encode them for computer processing. Therefore, compliance checking remains not fully automated.
To address this limitation, an automated compliance checking (ACC)
approach that uses deontic modeling and natural language processing (NLP)
techniques (Salama and El-Gohary 2013) was proposed. Deontic modeling aims to
model the obligations that are defined in regulations (Salama and El-Gohary 2013),
and NLP aims to facilitate computer processing and analysis of the text in the
regulatory documents (Zhang and El-Gohary 2013). Information extraction (IE)
applies NLP techniques to identify, extract, and structure information from natural
language text (Moens 2006). To achieve full ACC, in their previous work, the
authors have proposed an IE methodology and algorithm for extracting
information from building codes (Zhang and El-Gohary 2013).
This paper presents an IE algorithm for extracting regulatory requirements
from environmental regulations. This algorithm builds on the authors’ prior work
in three main ways: (1) it extracts regulatory requirements from
pre-classified text rather than unclassified text, which aims to improve the
efficiency (by avoiding unnecessary computational processing of irrelevant text)
and performance (by avoiding potential noise and errors resulting from processing
irrelevant text) of IE; (2) it adapts the IE method/algorithm to the environmental
regulatory domain, which is necessary because of the different nature of the text in
terms of its syntactic and semantic features; and (3) it uses a deeper (more detailed)
ontology and applies conceptual dependency theory to build a conceptual
dependency structure to capture and utilize dependency information, which aims
to reduce text ambiguities and improve the performance of IE. The proposed IE
algorithm was tested in extracting regulatory requirements from the 2012 IECC.

BACKGROUND

NLP is a subdiscipline of artificial intelligence that aims to enable


computers to understand human language (Manning and Schutze 1999). IE is an
application of NLP techniques [e.g., part-of-speech (POS) tagging, morphological
analysis] that aims to identify and extract information according to a predefined


template or schema (Manning and Schutze 1999). There are two common
approaches to IE (Moreno et al. 2013; Moens 2006): (1) a rule-based approach
that requires manually defining a series of patterns based on syntactic features
and/or semantic features to guide the extraction; and (2) a machine learning (ML)
approach that applies algorithms such as Support Vector Machines, Hidden Markov Models, and Conditional Random Fields to automatically learn those patterns from training data. Although ML methods can save human effort in pattern recognition and rule development, rule-based methods are commonly used because of their expected higher performance (Moens 2006).
Ontology-based IE (OBIE) is a subfield of IE that aims to use an ontology
to assist in extracting semantic information that is specific to a domain
(Wimalasuriya and Dou 2010). A domain ontology represents domain knowledge
in terms of concepts, relationships, and axioms (El-Gohary and El-Diraby 2010).
Compared to general IE, which depends only on the syntactic information of the text, OBIE further relies on semantic, domain-specific information to extract information of domain interest. OBIE has been well explored across different domains, such as the business (Saggion et al. 2007), legal (Moens 2006), and biology (Moreno et al. 2013) domains. However, OBIE efforts are still limited in the construction domain: existing efforts rely either mostly on syntactic features (Al Qady and Kandil 2010) or on simple document structure features (HTML tags) (Abuzir and Abuzir 2002). In comparison, this research integrates both syntactic
features (e.g., POS tags) and semantic features (e.g., concepts from an ontology)
to assist the extraction process.

PROPOSED INFORMATION EXTRACTION ALGORITHM

The proposed IE algorithm is composed of seven primary steps (see Figure


1). In step 3, the syntactic and semantic features were selected in parallel.

Figure 1. Proposed information extraction algorithm.


Step 1: Text Classification

Prior to extracting rules from regulatory documents, the documents were


first classified to filter out irrelevant text. A topic hierarchy was developed to


identify the labels used in classification, and a sub-ontology was built for each
topic (label) in the hierarchy. After preprocessing the documents using
tokenization, stemming, and stopword removal, a document was assigned zero, one, or multiple labels by measuring the semantic similarity between the document and each sub-ontology using a deep learning technique. The documents assigned zero labels are filtered out. For further details on the classification
methodology, the readers are referred to Zhou and El-Gohary (2014).

Step 2: Preprocessing

Preprocessing aims to prepare the raw classified (from Step 1) text for the
following analysis and processing steps. Preprocessing involves three primary
NLP techniques (Manning and Schutze 1999): (1) Tokenization: Tokenization
splits the raw text into tokens (e.g., words, numbers, punctuations, symbols,
whitespace); (2) Sentence splitting: This task splits text into sentences by
detecting sentence boundary indicators like question mark, exclamation mark, and
period; and (3) Morphological analysis: This task collapses different derivational
(e.g., affixes like “ly”, “ion”) and inflectional forms (e.g., plural, progressive) of a
word to its base form. For example, “realizations”, “realizing”, “realistically”, and
“unreality” are all mapped to “real”.

Step 3: Feature Selection

After preprocessing the text, syntactic features and semantic features were selected for further extraction rule development (Step 5).
Syntactic features were selected using POS tagging and a set of gazetteers. POS tagging assigns a tag to each word based on its syntactic word class (e.g., noun, verb, adjective) (Moens 2006). For example, the tag "NN" was assigned to each singular noun in a sentence. A gazetteer refers to a list of words that share a common category (e.g., countries of the world) (Wimalasuriya and Dou 2010). For example, in this paper, a measurement gazetteer list was built to collect all words/symbols that act as measurement units (e.g., "square feet", "cfm/sf2"), and these words/symbols were assigned a tag named "unit". Each tag was used as a syntactic feature.
Semantic features were selected using an ontology. In this paper, an ontology was built to capture the concepts of building energy conservation, which is a subdomain of the environmental domain. The ontology-building methodology described in El-Gohary and El-Diraby (2010) was followed. Each concept from the ontology was used as a semantic feature.
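The sketch below illustrates how such syntactic and semantic features could be gathered: POS tags from an off-the-shelf tagger, a small measurement-unit gazetteer, and a short concept list standing in for the building-energy-conservation ontology. The gazetteer and concept lists are hypothetical stand-ins, not the resources used in the study.

```python
# Illustrative feature-selection sketch: POS tags as syntactic features, plus a
# measurement-unit gazetteer and ontology-concept lookups as semantic features.
# Requires NLTK's 'punkt' and 'averaged_perceptron_tagger' data.
import nltk

UNIT_GAZETTEER = {"cfm", "cfm/sf2", "square feet"}          # measurement units (hypothetical)
ONTOLOGY_CONCEPTS = {"supply air system", "duct"}           # ontology concepts (hypothetical)

def select_features(sentence):
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens)                           # e.g., ('system', 'NN')
    token_features = []
    for word, pos in tagged:
        feats = {"pos": pos}                                # syntactic feature
        if word.lower() in UNIT_GAZETTEER:
            feats["unit"] = True                            # gazetteer-based "unit" tag
        token_features.append((word, feats))
    lowered = sentence.lower()
    concept_features = [c for c in ONTOLOGY_CONCEPTS if c in lowered]  # semantic features
    return token_features, concept_features

print(select_features("The supply air system shall provide no less than 400 cfm."))
```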

Step 4: Identification of Target Semantic Information Elements (SIEs) and their Conceptual Dependency Structure


Before developing the extraction rules, the target information to be extracted should be identified according to the specific requirements of the
application and domain. Five essential types of target semantic information
elements (SIEs) for representing quantitative requirements were used (Zhang and
El-Gohary 2013): “Subject”, “Attribute”, “Comparative Relation”, “Quantity
Value”, and “Quantity Unit/Reference”. These SIEs correspond to concepts and
relations in the ontology. For example, a “supply air system” (a concept in the
ontology) corresponds to a “Subject” (a SIE).
The conceptual dependency structure of the SIEs was developed based on
conceptual dependency theory. According to conceptual dependency theory, any
two linguistic structures of identical meaning should have the same conceptual
dependency structure (Moens 2006). In this research, the proposed IE algorithm is
used to extract quantitative requirements of energy conservation. Since all
quantitative requirements express the same meaning in terms of requirements
expressed in numerical values on an entity, these quantitative requirements could
be represented by the same conceptual dependency structure. The conceptual
dependency structure is composed of inter-dependent primary concepts and
relations (Moens 2006). Since a sentence is usually composed of multiple
concepts and relations, the sentence was analyzed to identify and extract those
primary concepts and relations that correspond to the target SIEs. After analyzing
the dependencies among the target SIEs, the conceptual dependency structure of
the SIEs was built, as per Figure 2. The conceptual dependency structure indicates that: (1) there is an extraction sequence in which an SIE should be extracted only after all of its preceding SIEs have been extracted; and (2) the extraction rules for an SIE may use the preceding SIEs to reduce ambiguities/errors. Compared to extracting an SIE in isolation (i.e., compiling extraction rules without using dependency information), the use of dependency information in compiling extraction rules imposes more stringent conditions on matching information, thus ruling out information that does not satisfy the conditions.

Figure 2. Conceptual dependency structure.


Step 5: Extraction Rule Development

After identifying the target SIEs, extraction rules were manually developed to extract the instances of the target SIEs. The left side of an extraction rule models the pattern of the text in terms of syntactic features (e.g., POS tags) and/or semantic features (i.e., concepts), while the right side defines the information that should be extracted when this pattern is matched. In developing the rules, the dependency information among the SIEs assisted in defining the patterns. For example, the following rule was developed to extract the instances of "Comparative Relation" (a target SIE): (JJR IN):cr + QuantityValue → cr.ComparativeRelation. "JJR" and "IN" are the POS tags for comparative adjective and preposition, respectively. "JJR IN" is a pattern that matches information like "less than". When "JJR IN" is followed by "QuantityValue", which is the dependency information, the information matching "JJR IN" should probably be an instance of "Comparative Relation". Therefore, a pointer "cr" was set to the pattern "JJR IN", and the information (which the pointer refers to) matching this pattern was extracted as an instance of "ComparativeRelation". Similarly, Rule 7 (sj + VBZ + ComparativeRelation → sj.Subject) was used to extract "supply air system" as an instance of "Subject", where "supply air system" is a concept defined in the ontology.
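The following sketch is a simplified, regex-style analogue of such a rule in Python; the actual rules were encoded as JAPE grammars in GATE (Step 6), and the POS-tagged fragment below is hypothetical.

```python
# Simplified, regex-style analogue of the rule "(JJR IN):cr + QuantityValue ->
# cr.ComparativeRelation"; not the JAPE implementation described in Step 6.
import re

def as_string(tagged_tokens):
    """Encode a tagged sentence as 'word/TAG word/TAG ...' for pattern matching."""
    return " ".join(f"{word}/{tag}" for word, tag in tagged_tokens)

# hypothetical tagged fragment; CD stands in for the quantity value
tagged = [("no", "DT"), ("less", "JJR"), ("than", "IN"), ("400", "CD"), ("cfm", "unit")]

# a comparative adjective followed by a preposition, only when a quantity value follows
rule = re.compile(r"(?P<cr>\S+/JJR \S+/IN)(?= \S+/CD)")
match = rule.search(as_string(tagged))
if match:
    words = [pair.split("/")[0] for pair in match.group("cr").split()]
    print("ComparativeRelation instance:", " ".join(words))   # -> less than
```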

Step 6: Extraction Implementation

The IE algorithm was implemented in the General Architecture for Text Engineering (GATE) (Cunningham et al. 2011). The ontology was built using the
ontology editor of GATE. The extraction rules were encoded as Java Annotation
Patterns Engine (JAPE) rules (Cunningham et al. 2011). The IE algorithm was
tested in extracting quantitative requirements from Chapter 4 of 2012 IECC.

Step 7: Extraction Results Evaluation

A gold standard was manually built to evaluate the extraction results. The
performance was measured in terms of recall and precision (Moens 2006). Recall
refers to the percentage of the total number of correctly extracted instances out of
the total number of correct instances in the gold standard. Precision refers to the
percentage of the total number of correctly extracted instances out of the total
number of extracted instances.
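As a minimal illustration of these two metrics, the sketch below computes them directly from their definitions; the counts used are made up and are not the paper's results.

```python
# Recall and precision as defined above; the counts below are illustrative only.
def recall(correct_extracted, gold_total):
    return correct_extracted / gold_total

def precision(correct_extracted, extracted_total):
    return correct_extracted / extracted_total

# e.g., 150 correctly extracted instances, 153 gold-standard instances, 151 extractions
print(recall(150, 153), precision(150, 151))
```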

PRELIMINARY EXPERIMENTAL RESULTS AND ANALYSIS

The preliminary experimental results are summarized in Table 1. The numbers of patterns used to extract "Subject", "Compliance Checking Attribute", "Comparative Relation", "Quantity Value", and "Quantity Unit/Reference" are 21, 6, 9, 14, and 14, respectively. The gold standard includes 153, 64, 153, 153, and 135 instances of "Subject", "Compliance Checking Attribute", "Comparative Relation", "Quantity Value", and "Quantity Unit/Reference". A performance of 99.85% recall and 99.55% precision was achieved, which indicates that the proposed IE algorithm is effective in extracting regulatory requirements from environmental regulations. The results also showed that the use of dependency information was effective in reducing semantic ambiguities. For example, "above" could be followed by either a "Quantity Value" or a location, but only in the former case should "above" be extracted as the "Comparative Relation". The proposed algorithm was able to avoid such semantic ambiguities by utilizing dependency information. However, sometimes there is not enough dependency information to address all ambiguities, especially when extracting a target information element at the top of the conceptual dependency structure. For example, ("Quantity Value" + "Quantity Unit") may appear as a quantitative restriction of an existing ("Quantity Value" + "Quantity Unit"), which may result in mistakenly extracting both instead of extracting only the target one (the second one, in this case).

Table 1. Preliminary Experimental Results.

CONCLUSION AND FUTURE WORK

In this paper, a semantic, rule-based IE algorithm for automatically extracting environmental regulatory requirements from textual regulatory
documents was presented. The proposed algorithm captures and uses dependency
information to reduce the semantic ambiguities of the text for enhancing the
performance of extraction. A conceptual dependency structure was built to
identify target SIEs and the dependency information among the target SIEs. Both
syntactic features (e.g., POS tags) and semantic features (i.e., concepts from an
ontology) were used in the extraction rules to define the patterns of the text. The
dependency information was used to assist in constructing the patterns in the
extraction rules.
The IE algorithm was tested in extracting quantitative requirements from
Chapter 4 of the 2012 IECC. Compared to the performance (94.4% recall and 96.9% precision) of previous IE work on building codes (Zhang and El-Gohary 2013), 99.85% recall and 99.55% precision were achieved on the testing data. The
performance indicates that the proposed IE algorithm is effective in extracting
requirements from environmental regulations.
In future work, the IE algorithm: (1) will be tested in extracting more
complex target SIEs like restrictions (i.e., SIEs that are composed of multiple
concepts and relations); and (2) will be tested on more environmental regulations
(e.g., EPA regulations).

ACKNOWLEDGEMENT


The authors would like to thank the National Science Foundation (NSF).
This material is based upon work supported by NSF under Grant No. 1201170.
Any opinions, findings, and conclusions or recommendations expressed in this
material are those of the authors and do not necessarily reflect the views of NSF.
REFERENCES

Abuzir, Y., and Abuzir, M. O. (2002). "Constructing the civil engineering thesaurus (CET) using ThesWB." Inform. Tech. in Civ. Eng. Int. Workshop,
ASCE, Reston, VA, 400-412.
Al Qady, M.A., and Kandil, A. (2010). “Concept relation extraction from
construction documents using natural language processing.” J. Constr. Eng.
Manage., 136(3), 294-302.
Cunningham, H., Maynard, D., and Bontcheva, K. (2011). Text processing with
GATE (Version 6). University of Sheffield Department of Computer Science.
Dimyadi, J., Clifton, C., Spearpoint, M., and Amor, R. (2014). “Regulatory
knowledge encoding guidelines for automated compliance audit of building
engineering design.” 2014 Int. Conf. on Comput. Civ. and Build. Eng., ASCE,
Reston, VA, 536-543.
Eastman, C., Lee, J., Jeong, Y., and Lee, J. (2009). “Automatic rule-based
checking of building designs.” Autom. Constr., 18(8), 1011-1033.
El-Gohary, N., and El-Diraby, T. (2010). “Domain ontology for processes in
infrastructure and construction.” J. Constr. Eng. Manage., 136(7),
730-744.
Manning, C. D., and Schutze, H. (1999). Foundations of statistical natural
language processing, MIT Press, Cambridge, Massachusetts.
Moens, M. F. (2006). Information extraction: Algorithms and prospects in a
retrieval context. Springer-Verlag New York, Secaucus, New Jersey.
Moreno, A., Isern, D., and López Fuentes, A. C. (2013). “Ontology-based
information extraction of regulatory networks from scientific articles with
case studies for Escherichia coli.” Expert Syst. Appl., 40(8), 3266-3281.
Salama, D., and El-Gohary, N. (2013). “Automated compliance checking of
construction operation plans using a deontology for the construction domain.”
J. Comput. Civ. Eng., 10.1061/(ASCE)CP.1943-5487.0000298, 681-698.
Saggion, H., Funk, A., Maynard, D., and Bontcheva, K. (2007). “Ontology-based
information extraction for business intelligence.” The Semantic Web, Lecture
Notes in Computer Science, Springer Berlin, Heidelberg, 843-856.
Tan, X., Hammad, A., and Fazio, P. (2010). “Automated code compliance
checking for building envelope design.” J. Comput. Civ. Eng., 24(2),
203-211.
Wimalasuriya, D. C., and Dou, D. (2010). “Ontology-based information extraction:
An introduction and a survey of current approaches.” J. Inf. Sci., 36(3),
306-323.
Zhang, J., and El-Gohary, N. (2013). “Semantic NLP-based information extraction
from construction regulatory documents for automated compliance checking.”
J. Comput. Civ. Eng., 10.1061/(ASCE)CP.1943-5487.0000346, 04015014.


Zhong, B. T., Ding, L. Y., Luo, H. B., Zhou, Y., Hu, Y. Z., and Hu, H. M. (2012).
“Ontology-based semantic modeling of regulation constraint for automated
construction quality compliance checking.” Autom. Constr., 28(2012), 58-70.
Zhou, P., and El-Gohary, N. (2014). “Ontology-based, multi-label text
classification for enhanced information retrieval for supporting automated
environmental compliance checking.” 2014 Int. Conf. on Comput. Civ. and
Build. Eng., ASCE, Reston, VA, 2238-2245.


Laser Scanning Intensity Analysis for Automated Building Wind Damage Detection

A. G. Kashani1, M.J. Olsen2, and A. J. Graettinger3

1School of Civil and Construction Engineering, Oregon State University, 101 Kearney Hall, 1491 SW Campus Way, Corvallis, OR 97331; PH (541) 737-4934; email: [email protected]
2School of Civil and Construction Engineering, Oregon State University, 101 Kearney Hall, 1491 SW Campus Way, Corvallis, OR 97331; PH (541) 737-9237; email: [email protected]
3Department of Civil, Construction and Environmental Engineering, University of Alabama, Box 870205, Tuscaloosa, AL 35487; PH (205) 348-6550; email: [email protected]
ABSTRACT

Spatial and spectral information provided by laser scanning or light detection and ranging (lidar) systems has been applied to various applications in the geospatial
and civil engineering domains. In addition to precise 3D coordinates, most laser
scanning systems also record intensity information, which has proven beneficial in
point cloud segmentation and object extraction. In previous work, the authors
demonstrated within controlled laboratory conditions that intensity values are an
appropriate means for detecting wind-induced roof damage. However to be
implemented successfully in real-world settings, the influence of scanning parameters
such as the scanning angle of incidence should be investigated. The impact of these
factors on the associated intensity values becomes increasingly important in terrestrial
laser scanning when the point cloud data comprises registered scans that were
captured from different locations to provide consistency. This paper presents
examples of laboratory and field tests to investigate the influence of the scanning
angle of incidence on lidar intensity measurement and classification for automated
building wind damage detection. The tests indicated that captured lidar intensity
information with a Leica C10 scanner can be used to classify and detect damaged
areas on roofs when the scanning angle of incidence is less than 70 degrees.

Keywords: Terrestrial Laser Scanning, Lidar, Intensity, Segmentation, Damage Detection

INTRODUCTION

Spatial and spectral information provided by laser scanning or light detection and ranging (lidar) systems has been applied to various applications in the geospatial and civil engineering domains. Laser scanners measure and deliver the 3D coordinates of millions of points (known as a point cloud) from the surveyed environment. Different data processing algorithms have been developed for segmentation of lidar
points into different classes and extraction of meaningful objects from lidar point
clouds. Initial point cloud segmentation and object extraction efforts relied on
geometric features of points such as position, normal vector, planarity, etc. In addition
to precise 3D coordinates, most laser scanning systems also record intensity
information, which have proven beneficial in point cloud segmentation and object
extraction. To date, researchers continue to find new and innovative uses of these data.
Some examples of currently studied applications of lidar intensity include target
recognition (Coren and Sterzai, 2006), point cloud registration (González et al., 2009),
land cover classification (Antonarakis et al., 2008) and cultural heritage preservation
(González et al., 2010).
The study presented in this paper is a part of our recent research efforts for
developing an automated damage analysis approach based on ground based lidar data
collected after extreme wind events. In previous research efforts, we developed an
automated lidar based technique to detect and quantify roof and wall sheathing loss
(Kashani et al., 2014). Also, benefits of the proposed lidar-based analyses in tornado
wind speed estimation and structural fragility assessments were demonstrated
(Kashani et al., 2014). In recent work, we investigated the use of lidar intensity and
color information for automated detection of roof covering loss which is another
common type of wind induced damage in typical pitched roof buildings (Kashani and
Graettinger, 2015). This study indicated that the primary benefit of lidar intensity is
that it is related to surface reflectance and unlike color data is not influenced by the
damage size, roof color, and lighting conditions. Therefore, lidar intensity data can be
used to separate different materials (with different reflectance) in scans.
However, some factors besides the surface reflectance can affect lidar intensity measurements and should be taken into consideration. Theoretically, the factors affecting lidar intensity measurements can be described by the radar range equation (Jelalian, 1992). This equation relates the transmitted and received signal powers. Under some simplifying assumptions (see Höfle and Pfeifer, 2007), the radar range equation reduces to Equation 1:

I_r = I_t · ρ · cos(θ) · η_atm · C / R²   (Equation 1)

where I_r is the received signal power; I_t is the transmitted signal power; ρ is the target reflectance; R is the range from the scanner sensor to the target; η_atm is the atmospheric attenuation constant; θ is the incident angle, which is the angle between the direction of the laser beam and the surface normal at the target; and C is the sensor system constant factor (Ding et al., 2013). The impact of the scanning angle of incidence becomes increasingly important in terrestrial laser scanning when the point cloud data comprises registered scans that were captured from different locations.
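As an illustration of how Equation 1 can be used, the sketch below normalizes a recorded intensity value by the incidence-angle and range terms so that, with the atmospheric attenuation and sensor constants ignored, the result is proportional to surface reflectance; the intensity readings and geometry shown are hypothetical.

```python
# Illustrative use of the simplified radar range equation (Equation 1): dividing the
# recorded intensity by cos(theta)/R^2 yields a value proportional to surface
# reflectance when the atmospheric and sensor constants are ignored.
import numpy as np

def reflectance_proxy(raw_intensity, range_m, incidence_angle_deg):
    theta = np.radians(incidence_angle_deg)
    return raw_intensity * range_m ** 2 / np.cos(theta)

# hypothetical intensity readings of one material at two scan geometries
print(reflectance_proxy(0.52, 10.0, 50.0))
print(reflectance_proxy(0.31, 10.0, 80.0))
```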
This paper presents experiments to investigate the influence of the scanning angle of incidence on lidar intensity measurements. The study was designed specifically to investigate the use of lidar intensity data for automated detection of wind-induced roof covering loss following windstorm events.
The research presented in this paper includes tests with two separate datasets collected in laboratory and real-world conditions. In the first test, a panel consisting of areas made from roof shingles, roof felt paper, and plywood (typical visible materials on a damaged roof) was built and scanned at different angles of incidence. In the second test, a building damaged by the 2013 Moore, OK tornado was scanned from different locations. Using these lidar datasets, the influence of the incident angle on lidar intensity measurements and wind-induced damage detection was investigated.

LABORATORY TEST

The laboratory test was run by scanning a model of a damaged roof panel at varying angles of incidence. Figure 1 shows the laboratory test settings. The roof panel was constructed by nailing shingles on a 1.22 m x 2.44 m (4 ft x 8 ft) sheet of plywood. To represent wind-induced defects, roof felt paper and plywood pieces were cut and placed on the roof panel (see the close-up view in Figure 1). Then, as shown in Figure 1, the roof panel was hung on the laboratory wall at a height of 4 meters off the ground. Five different scans were run, and a crane system was used to change the roof panel angle for each scan. A Leica C10 scanner was used, and all scans were collected from 10 meters away from the roof panel at a medium scanning resolution (i.e., 1 centimeter point spacing at a 10 meter distance).

Figure 1. Laboratory scans

In each scan, the mean angle of incidence for the points captured on the roof
panel was determined. The RANSAC plane fitting algorithm was used to extract the
normal vector of the roof panel. Next, the mean angle of incidence was determined by
calculating the angle between the panel normal vector and the vector connecting the
scanner position to the panel centroid point. The points captured on shingles, felt
paper, and plywood were then manually separated and their intensity values were
extracted to be analyzed.
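The sketch below illustrates this incidence-angle computation; it substitutes a simple least-squares (SVD) plane fit for the RANSAC fitting used in the study, and the panel points and scanner location shown are hypothetical.

```python
# Sketch of the incidence-angle computation described above: fit a plane to the panel
# points (least-squares via SVD here, in place of the RANSAC fit used in the study) and
# measure the angle between the panel normal and the scanner-to-centroid vector.
import numpy as np

def mean_incidence_angle(panel_points, scanner_position):
    centroid = panel_points.mean(axis=0)
    _, _, vt = np.linalg.svd(panel_points - centroid)
    normal = vt[-1]                                  # direction of least variance
    view_dir = centroid - scanner_position
    view_dir = view_dir / np.linalg.norm(view_dir)
    cos_angle = abs(np.dot(normal, view_dir))        # sign of the normal is ambiguous
    return np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0)))

# hypothetical panel corner points (N x 3, meters) and scanner location
panel = np.array([[0.0, 0.0, 4.0], [1.2, 0.0, 4.3], [0.0, 2.4, 4.1], [1.2, 2.4, 4.4]])
print(mean_incidence_angle(panel, np.array([0.5, -10.0, 1.5])))
```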


Analysis of the test results indicated that an increase in the angle of incidence decreases the reliability of separating shingle and plywood areas based on intensity information alone. Figure 2 illustrates how the lidar intensity measurements of each material changed with the angle of incidence. As shown in Figure 2, unlike the intensity values obtained from the shingles and felt paper, the intensity values obtained from the plywood areas decreased with increasing angle of incidence. Therefore, when the angle of incidence was above 70 degrees, the intensity values obtained from the shingles and plywood were very similar, making it difficult to properly classify shingle and plywood points based only on the intensity information.

Figure 2. Intensity vs. angle of incidence

Figure 3 illustrates this degradation with increasing angle of incidence in another way. Figures 3(a) and 3(b) show the normal distributions fitted to the lidar intensity values obtained from scans with angles of incidence of 50 and 80 degrees, respectively. The intensity values captured from the three materials fell within clearly different ranges when the angle of incidence was 50 degrees, but when the angle of incidence reached 80 degrees, the intensity values obtained from the shingle and plywood areas converged with significant overlap.


Figure 3. Comparison of intensity value distributions at: (a) angle of incidence = 50; (b) angle of incidence = 80.

FIELD TEST

The field test was performed by scanning a building damaged by the 2013 Moore, OK tornado. Figure 4 shows the point cloud data and scanning locations. Scans were collected with the same scanner and resolution used in the laboratory test. The two roof surfaces shown by the red polygons in Figure 4 were selected for the analysis. The lidar points obtained on the target roof polygons comprised four registered scans captured from the locations shown in Figure 4. The angle of incidence was determined for each point in the cloud. Points measured on shingle, felt paper, and plywood areas were then manually separated, and their intensity values were extracted for the analysis. Figure 5 shows a scatter plot of intensity values versus the angle of incidence. In Figure 5, points related to different materials are shown in different colors. Regression lines were also added to illustrate the general trend of change in the intensity values of each material.


Figure 4. Field test data and scanning locations

The field test results supported the laboratory test results and indicated that when the angle of incidence is close to or above 70 degrees, it is hard to separate shingle and plywood areas based on intensity information alone. As shown in Figure 5, the scanning angles of incidence for the target roof points were between 65 and 75 degrees, and the intensity values obtained from the shingle and plywood areas were very close. Similar to the observations in the laboratory test, Figure 5 shows that the trends of change in the intensity values obtained from the shingle and plywood areas (i.e., the slopes of the regression lines) are different. Therefore, it can be expected that in scans with lower angles of incidence, shingle and plywood points can be separated based on their intensity.

Figure 5. Scatter plot of intensity vs. angle of incidence

CONCLUSIONS

This paper presented laboratory and field tests to investigate the influence of the scanning angle of incidence on lidar intensity measurements and its implications for automated building wind damage detection. These tests indicated that in close-range scanning, when the scanning angle of incidence is under 70 degrees, the lidar intensity information captured with a Leica C10 scanner can be used reliably to detect and separate the three exposed materials of roof shingle, felt paper, and plywood on damaged building roofs. However, when the scanning angle of incidence approaches or exceeds 70 degrees, distinguishing the shingle and plywood areas based only on intensity measurements becomes more difficult.
Based on the results of this study, we suggest the following strategies for employing lidar intensity measurements in building wind damage detection: (1) the position and height of the scanners during data collection should be chosen to reduce the scanning angle of incidence as much as possible; (2) when the angle of incidence is above 70 degrees, other spectral information, such as color data, should be used in conjunction with the intensity data; and (3) scanning should be completed at close range to minimize the influence of the angle of incidence.
In order to properly employ lidar intensity information in point cloud segmentation and object extraction techniques, further studies are needed to comprehensively analyze the factors affecting intensity measurements. In this study, we focused on a specific application of lidar intensity in wind damage detection and studied the impact of the angle of incidence. Also, we used only one type of scanner (Leica C10), and our scans were run at close range. Lidar intensity analysis is an active research area, and further efforts are needed to investigate other influential factors such as the scanning range, internal sensor system parameters, etc. Normalization and calibration methods should be developed to reduce the impacts of these factors on lidar intensity values and extract true reflectance information.

REFERENCES

Antonarakis, A. S., Richards, K. S., & Brasington, J. (2008). Object-based land cover
classification using airborne LiDAR. Remote Sensing of Environment, 112(6),
2988-2998.
Coren, F., & Sterzai, P. (2006). Radiometric correction in laser
scanning.International Journal of Remote Sensing, 27(15), 3097-3104.
Ding, Q., Chen, W., King, B., Liu, Y., & Liu, G. (2013). Combination of overlap-
driven adjustment and Phong model for LiDAR intensity correction. ISPRS
Journal of Photogrammetry and Remote Sensing, 75, 40-47.
González, D., Rodríguez-Gonzálvez, P., & Gómez-Lahoz, J. (2009). An automatic
procedure for co-registration of terrestrial laser scanners and digital
cameras. ISPRS Journal of Photogrammetry and Remote Sensing, 64(3), 308-
316.
González, J., Riveiro-Rodríguez, B., González-Aguilera, D., & Rivas-Brea, M. T.
(2010). Terrestrial laser scanning intensity data applied to damage detection
for historical buildings. Journal of Archaeological Science, 37(12), 3037-3047.
Höfle, B., & Pfeifer, N. (2007). Correction of laser scanning intensity data: Data and
model-driven approaches. ISPRS Journal of Photogrammetry and Remote
Sensing, 62(6), 415-433.
Jelalian, A. V. (1992). Laser radar systems. Artech House.
Kashani, A. G., Crawford, P., Biswas, S., Graettinger, A., Grau, D. (2014).
Automated Tornado Damage Assessment and Wind Speed Estimation Based
on Terrestrial Laser Scanning. Journal of Computing in Civil Engineering,
ASCE.
Kashani, A. G., Graettinger, A. (expected 2015) Cluster-Based Roof Covering
Damage Detection in Lidar Point Clouds. Submitted to Journal of Automation
in Construction, Elsevier (Under Review)


Recognition and 3D Localization of Traffic Signs via Image-Based Point Cloud Models

Vahid Balali1 and Mani Golparvar-Fard2


1Ph.D. Candidate, Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, 205 N Mathews Ave., Urbana, IL 61801. E-mail: [email protected]
2Assistant Professor and NCSA Faculty Fellow, Department of Civil and Environmental Engineering and Department of Computer Science, University of Illinois at Urbana-Champaign, 205 N Mathews Ave., Urbana, IL 61801. E-mail: [email protected]

Abstract

Recently, the US Departments of Transportation have proactively looked into videotaping roadway assets. Using inspection vehicles equipped with high resolution
cameras, accurate information on location and condition of high quantity and low cost
roadway assets are being collected. While many efforts have focused on streamlining
the data collection, the analysis is still manual and involves painstaking and
subjective processes. Their high cost has also limited the scope of the visual
assessments to critical roadways only. To address current limitations, this paper
presents an automated method to detect, classify, and accurately localize traffic signs
in 3D using existing visual data. Using a discriminative learning method based on
Histograms of Oriented Gradients and Color, traffic signs are detected and classified
into multiple categories. Then, a Structure from Motion procedure creates a 3D point
cloud from the street level images, and triangulates the location of the detected signs
in 3D. The experimental results show that the method reliably detects and localizes
traffic signs and demonstrate a strong potential in improving assessments and
lowering cost in practical applications.

INTRODUCTION

Recognizing and localizing traffic signs are among the important components
of a roadway asset management system. Such data collection and analysis has to be
done for millions of miles of roads and the practice needs to be repeated every so
often. Nevertheless, the significant number of traffic signs can negatively impact the
quality of any manual data collection and analysis method. The time-consuming
process involved in manual data collection practices can further create potential
safety hazards for the inspectors.
To streamline current processes, many state Departments of Transportation (DOTs) and local agencies have proactively looked into a variety of methods for inventorying roadside assets. These methods vary based on equipment type and the time requirements for data collection and data reduction, and can be categorized into integrated GPS/GIS mapping systems, aerial/satellite photography, street-level photography using camera-mounted vehicles, terrestrial laser scanners, and mobile mapping systems (i.e., vehicle-based LiDAR and airborne LiDAR) (Balali et al. 2012). These techniques have their own benefits and limitations. For example, vehicle-mounted LiDAR, a relatively new type of mobile mapping system, is capable of rapidly collecting large amounts of detailed highway inventory data in 3D, but it is expensive and involves significant data reduction processes to extract the desired highway inventory. Among all techniques, collecting high-resolution street-level images using inspection vehicles equipped with high resolution cameras has received the most attention from the DOTs. These images can provide detailed and dependable information on both the location and condition of existing traffic signs, yet analyzing them is still done manually and involves painstaking and subjective processes.
To address the current limitations associated with manual analysis, many computer vision methods have been developed that can detect and classify one or a few types of traffic signs from large collections of street-level images (Balali and Golparvar-Fard 2015; Hu and Tsai 2011; Huang et al. 2012). These methods have the potential to minimize the subjectivity of the current processes (Balali and Golparvar-Fard 2015). However, the task remains challenging. To make these detection and classification methods useful, additional research is needed to minimize False Positive (FP) and False Negative (FN) rates and also to localize the detected signs in 3D. The task is particularly challenging due to the intra-class variability of the traffic signs and expected changes in illumination, occlusion, and sign position/orientation (Figure 1). To this end, this paper presents and validates an efficient pipeline for detecting and classifying traffic signs from 2D street-level images and triangulating their locations in a 3D point cloud environment. In the following, an overview of the state-of-the-art methods is provided. Next, the new method and the experimental results are presented.

Figure 1. Examples of traffic sign intra-class variability in existing visual datasets taken under different illumination and occlusion conditions.

RELATED WORK

Today, the development of object detection algorithms in computer vision has enabled unique methods for the detection and classification of traffic signs. As a result, several new algorithms have been developed that can automatically detect and classify traffic signs from visual data in a discrete fashion. Most research in this area is motivated by the focus on autonomous vehicle navigation, e.g., projects funded by DARPA (Defense Advanced Research Projects Agency) in the US. Google has also focused on the joint application of laser scanners and cameras for the purpose of autonomous vehicle navigation (Ali et al. 2014; Cimpoi 2014). Consequently, several high-end vehicles are already equipped with driver assistance systems that offer automated detection and classification for a few traffic sign categories (Brkic 2013).


Scaling these methods to actual conditions is challenging. There are hundreds of types of traffic signs, which come in different variations of dimension, color, text, and font (Balali et al. 2015; Brkic 2013). Ai and Tsai (2011) presented a sign recognition algorithm that primarily focuses on speed limit signs. Their algorithm benefits from a probabilistic color model and the likelihood of sign locations in the images, along with traditional sign features (shape, color, and content features). Others focus on features such as color (Maldonado-Bascon et al. 2007), shape (Balali and Golparvar-Fard 2014; Kim et al. 2005), combined color and shape features (Zhu et al. 2006), or geometrical and physical features along with text (Yangxing and Ikenaga 2007). In fact, for feature detection there is a wealth of options, and the choice is typically made in conjunction with the learning/inference methods. So far, the most popular features have been edges and gradients, but the application of advanced features such as HOG and Haar wavelets has also been investigated recently (Balali and Golparvar-Fard 2015).
Another category of research has focused on methods for generating large-scale 3D models from collections of street-level images. In particular, (Furukawa et al. 2010; Gallup et al. 2010) demonstrate high-density and accurate image-based 3D reconstruction results. In the context of infrastructure projects, (Golparvar-Fard et al. 2012; Brilakis et al. 2011; Oskouie et al. 2014; Rashidi et al. 2014) present methods for image-based 3D reconstruction and for filtering the models for specific engineering tasks. In particular, Golparvar-Fard et al. (2012) present a method for highway asset detection and recognition based on semantic texton forests to simultaneously segment images and localize roadway assets in 3D point clouds. Timofte et al. (2014) also focus on 3D localization of traffic signs via 2D detection and classification from overlapping images. Compared to these methods, the method presented in this paper detects and classifies a large variety of US traffic signs from street-level images, is computationally efficient, and can localize the detected signs in 3D.

METHOD

The new method for traffic sign recognition and localization from street-level images involves several steps: (1) detecting and classifying traffic signs in each 2D image; (2) reconstructing 3D point clouds from overlapping images; and (3) localizing the detected traffic signs in the generated 3D environment. In particular, using a standard Structure from Motion (SfM) based procedure, the method generates both sparse and dense 3D point cloud models of the roadway assets. Each detected traffic sign is localized in 2D using a bounding box. From multiple detections of the same traffic sign, the locations of the detection bounding boxes are triangulated in 3D using linear and non-linear methods, and the corresponding 3D points are labeled with the category of the detected sign. In the following, each step is discussed in detail.

Detection and Classification

Different from the state-of-the-art methods, the proposed method does not make any prior assumption on the 2D location of traffic signs in the images. Rather, a 2D template at multiple scales slides across the entirety of each image and extracts candidates for traffic sign detection. While the aspect ratio of the detection window is kept constant (1:1), three different scale factors (0.75, 1.00, 1.25) are considered to account for different scales of traffic signs. To balance accuracy in 2D localization against computational efficiency, the overlap between the sliding windows is 6.67%.
[Figure 2 flowchart: sliding-window candidate extraction and HOG+Color histogram features feed trained linear SVM classifiers and non-maxima suppression for 2D sign detection; Structure from Motion (GPU-based feature detection, multi-core feature matching and bundle adjustment, camera calibration) and multi-view stereo produce sparse and dense point clouds into which detected signs are projected.]

Figure 2. An overview of proposed pipeline for recognition and 3D localization.

Inspired by (Balali and Golparvar-Fard 2015), for each candidate the gradient orientations and color information are locally histogrammed and concatenated into HOG+Color descriptors. These descriptors are then fed into multiple one-vs.-all linear Support Vector Machine (SVM) classifiers, which are trained in an offline process, to classify the detected signs into multiple categories. Finally, a non-maxima suppression step removes false positives and keeps high-score detections for accurate localization. As shown in (Balali and Golparvar-Fard 2015), the performance of the method is independent of image resolution and is robust to noise and changes in illumination.
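A minimal sketch of such a HOG+Color descriptor with one-vs.-all linear SVM scoring is shown below, using scikit-image and scikit-learn; the window size, histogram bin counts, and random training data are illustrative assumptions, not the settings used in (Balali and Golparvar-Fard 2015).

```python
# Sketch of a HOG+Color descriptor with one-vs.-all linear SVM scoring; parameters and
# training data below are illustrative assumptions only.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_color_descriptor(window_rgb):
    """Concatenate a HOG descriptor of the grayscale window with per-channel color histograms."""
    gray = window_rgb.mean(axis=2)
    hog_feat = hog(gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    color_feat = np.concatenate(
        [np.histogram(window_rgb[..., c], bins=16, range=(0, 1))[0] for c in range(3)])
    return np.concatenate([hog_feat, color_feat])

# hypothetical 64x64 candidate windows and sign-category labels
windows = np.random.rand(40, 64, 64, 3)
labels = np.random.randint(0, 4, size=40)
X = np.array([hog_color_descriptor(w) for w in windows])
classifier = LinearSVC().fit(X, labels)          # one-vs.-rest by default
print(classifier.decision_function(X[:1]))       # per-category detection scores
```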

Image-Based 3D Reconstruction

In this paper, the 3D image-based reconstruction module builds upon the newly proposed Bundler (Snavely 2014) and is tested in the context of sequentially captured images of traffic signs. In the SfM algorithm, visual features are first extracted independently for each image. Next, using a new multi-core implementation, the detected SIFT features per image are matched in pairs for a given sequence of overlapping images. Those images whose matching features provide a well-conditioned estimate of their locations are incrementally added to the bundle adjustment optimization process, until no remaining camera observes any reconstructed 3D point. This process results in a sparse point cloud model and outputs intrinsic and extrinsic camera parameters for each street-level image. Then, a Multi-view Stereo (MVS) procedure takes the camera calibration parameters, generates a sparse set of 2D patches using Harris and DoG (Difference of Gaussian) features, filters false matches, and eventually expands them to make the patches dense. The default parameters used for 3D reconstruction are as follows:
a) Images per segment: The maximum number of images to be clustered together for
simultaneous matching. This number is limited by availability of memory and
varies based on the density of the reconstruction.
b) Reconstruction level and Voxel size: These parameters vary the density at which
the matching is carried out. Denser matching requires more memory allocation
and is much slower than less dense matching. The level determines the number of
times images are decimated before matching. Finally, the voxel size determines
how often a match is attempted – i.e., every nth pixel in the x and y directions of
the sampled images.
c) Reconstruction threshold: A threshold that is related to correlation values in the
matching process and is used here to filter out bad matches. Larger (up to 1.0)
values mean fewer but more reliable points, smaller values retain more points, but
the quality can be lower.

Feature Extraction and Triangulation

The above SfM pipeline outputs camera projection matrices and a dense 3D point cloud model from the street-level images. Assuming that the intrinsic and extrinsic camera matrices are accurate, the Direct Linear Transform method of (Hartley and Zisserman 2003) is used to triangulate the detected traffic signs in 3D. To do so, the camera matrices are extracted from the Bundler output file. The view list begins with the length of the list and is followed by quadruplets <camera> <key> <x> <y>, where <camera> is a camera index, <key> is the index of the SIFT keypoint where the point is detected in that camera, and <x> and <y> are the positions of the detected keypoint. The pixel positions are floating point numbers in a coordinate system where the origin is the center of the image.
For each detected traffic sign, the SIFT features within the 2D detection bounding boxes are extracted. Among these features, those whose feature tracks pass through at least four images are chosen to achieve reasonably accurate triangulation. On average, ten SIFT features are triangulated per traffic sign detection per image. The corresponding features are individually projected onto the 3D coordinates of the generated point cloud model by solving a system of equations involving the following transformation per SIFT feature:

s = P S    (1)

where s refers to the point in the image (x, y, 1), S refers to the point in 3D (X, Y, Z, 1), and P is the projection matrix defined as K * [ R | T ], where K is the intrinsic camera matrix, R is the rotation of the camera with respect to world coordinates, and T is the translation with respect to world coordinates.
The triangulation solving for these equations using a direct Linear
Transform (DLT)  is followed by Levenberg-Marquardt non-linear algorithm. Once
all feature points relevant to a particular traffic signs are triangulated in 3D, a 3D box
is placed over these points and is texture-mapped with an images that represents the
category of the detected traffic signs. Figure 3 illustrates the process of feature

©  ASCE 5
Computing  in  Civil  Engineering  2015 211

detection and matching followed by triangulating the location of the matched features
in 3D.
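The sketch below illustrates the linear (DLT) triangulation step for one matched feature observed in several views, following Hartley and Zisserman (2003); in the described pipeline this linear estimate would then be refined with Levenberg-Marquardt. The camera matrices and pixel coordinates used here are hypothetical stand-ins for the SfM output.

```python
# Sketch of DLT triangulation for one matched feature seen in several registered views;
# the projection matrices and pixel coordinates below are hypothetical.
import numpy as np

def triangulate_dlt(projection_matrices, image_points):
    """Linearly solve for the homogeneous 3D point observed in several calibrated views."""
    rows = []
    for P, (x, y) in zip(projection_matrices, image_points):
        rows.append(x * P[2] - P[0])     # x * (third row of P) - (first row of P)
        rows.append(y * P[2] - P[1])     # y * (third row of P) - (second row of P)
    _, _, vt = np.linalg.svd(np.vstack(rows))
    X = vt[-1]
    return X[:3] / X[3]                  # dehomogenize to (X, Y, Z)

# two hypothetical 3x4 camera matrices and the feature's coordinates in each view
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate_dlt([P1, P2], [(0.25, 0.10), (0.00, 0.10)]))
```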


Figure 3. 3D localization of the detected visual features.

DATA COLLECTION AND SETUP

To evaluate the performance of the proposed method, the multi-class traffic sign recognition method of (Balali and Golparvar-Fard 2015) is trained using images collected from several highways and many secondary roadways in the US. This dataset, shown in part in Figure 4, contains different shapes and colors of traffic signs and exhibits various viewpoints, scales, illumination conditions, and intra-class variability. The ground truth is created manually, and the annotations are cropped so that each contains only one single traffic sign. The dataset is split into different types of signs based on shape, with corresponding numbers of positive and negative annotations.

Figure 4. Example of positive samples for each category of traffic signs.

RESULTS AND DISCUSSION

The performance of the method was validated for both accuracy in detection
and classification and also 3D reconstruction. In particular, precision and recall
metrics are used to measure the accuracy of classification for different types of traffic
signs. The average precision and recall among all types of traffic signs are 90.15%
and 99% respectively. Figure 5 shows example results for multi-class traffic sign
detection and classification on a part of the dataset that was collected on a street on
campus of the University of Illinois. This dataset contained 138 images.
The performance of the proposed 3D reconstruction and localization methods was also tested. In these experiments, the level had the biggest impact on computational time, and the cell size had the biggest impact on the quality of the generated 3D model. The minimum number of images had no major impact on time. The best combination of parameters is shown in Table 1. The new GPU-based implementation has significantly reduced the computation time (10-fold), making it feasible to reconstruct large areas that are typical in the case of highway infrastructure assets. 89% of the points of interest were successfully projected into the point cloud.
Table 1. 3D Reconstruction Parameters.
Parameter                 Value   Description
Cell size                 1       Optimal image quality
Min. number of images     4       Optimal image quality
Level                     2       Optimal computational time

The results of the 3D reconstruction for the detected signs in Figure 5 are shown in Figure 6. Here, based on the procedure described in the method section, the locations of the visual features are triangulated in 3D, and their associated bounding boxes are labeled with the traffic sign categories derived from the detection method. In Figure 6, the locations of the two detected traffic signs (the regulatory sign and the warning sign, shown by the red and yellow boxes in Figure 5) are texture-mapped using images of regulatory and warning signs, respectively. Such visualizations enable users to select an asset category of interest and review the corresponding locations in 3D. Users can also navigate through the geo-registered images or conduct visual observations in the 3D point cloud.

Figure 5. Multi-class traffic sign detection.

Figure 6. Triangulation and localization of detected traffic sign into 3D point cloud.

CONCLUSION

This paper presented an image-based method for recognition and localization of multiple classes of traffic signs, which has the potential to provide quick and inexpensive access to information about the location and condition of roadway assets. The application of the proposed method to detecting traffic signs using Google Street View images and then localizing them on Google Maps is being carried out as part of ongoing research.

REFERENCES

Ai, C., and Tsai, Y. J. (2011). "Hybrid Active Contour–Incorporated Sign Detection
Algorithm." Journal of Computing in Civil Engineering, 26(1), 28-36.


Ali, N. M., Sobran, N. M. M., Ghazaly, M., Shukor, S., and Ibrahim, A. T. "Traffic Sign
Detection and Classification for Driver Assistant System." Proc., The 8th Int. Conf.
on Robotic, Vision, Signal Processing & Power Applications, Springer, 277-283.
Balali, V., Depwe, E., and Golparvar-Fard, M. (2015). "Multiclass Traffic Sign Detection and
Classification Using Google Street View Images." Transportation Research Board
94th Annual Meeting, TRB, Washington, DC, USA.
Balali, V., and Golparvar-Fard, M. (2014). "Video-Based Detection and Classification of US
Traffic Signs and Mile Markers using Color Candidate Extraction and Feature-Based
Recognition." International Conference on Computing in Civil and Building
Engineering, ASCE, Orlando, FL, USA, 858-866.
Balali, V., and Golparvar-Fard, M. (2015). "Evaluation of Multi-Class Traffic Sign Detection
and Classification Methods for U.S. Roadway Asset Inventory Management."
Journal of Computing in Civil Engineering.
Balali, V., Golparvar-Fard, M., and de la Garza, J. M. (2012). "Video-based highway asset
recognition and 3D localization." International Workshop on Computing in Civil
Engineering, ASCE, Los Angeles, CA, USA, 379-386.
Brilakis, I., Fathi, H., and Rashidi, A. (2011). "Progressive 3D reconstruction of
infrastructure with videogrammetry." Automation in Construction, 20(7), 884-895.
Brkic, K. (2013). "An overview of traffic sign detection methods." Department of Electronics,
Microelectronics, Computer and Intelligent Systems, Unska, 3, 10000.
Cimpoi, M. (2014). "Traffic sign detection and classification in video mode."
Furukawa, Y., Curless, B., Seitz, S. M., and Szeliski, R. "Towards internet-scale multi-view
stereo." Proc., Computer Vision and Pattern Recognition, 1434-1441.
Gallup, D., Frahm, J.-M., and Pollefeys, M. "A heightmap model for efficient 3d reconstruction from street-level video." Int. Conf. on 3D Data Proc., Visualization and Transmission.
Golparvar-Fard, M., Balali, V., and de la Garza, J. M. (2012). "Segmentation and recognition
of highway assets using image-based 3D point clouds and semantic Texton forests."
Journal of Computing in Civil Engineering, 04014023.
Hartley, R., and Zisserman, A. (2003). Multiple view geometry in computer vision,
Cambridge university press.
Hu, Z., and Tsai, Y. (2011). "Generalized image recognition algorithm for sign inventory."
Journal of Computing in Civil Engineering, 25(2), 149-158.
Huang, Y.-S., Le, Y.-S., and Cheng, F.-H. "A Method of Detecting and Recognizing Speed-
limit Signs." Proc., Intelligent Information Hiding and Multimedia Signal Processing
(IIH-MSP), 2012 Eighth International Conference on, IEEE, 371-374.
Kim, J. W., Jung, K. H., and Hyun, C. C. (2005). "A study on an efficient sign recognition
algorithm for a ubiquitous traffic system on DSP." Computational Science and Its
Applications–ICCSA 2005, Springer, 1177-1186.
Maldonado-Bascon, S., Lafuente-Arroyo, S., Gil-Jimenez, P., Gomez-Moreno, H., and
López-Ferreras, F. (2007). "Road-sign detection and recognition based on support
vector machines." Intelligent Transportation Systems, IEEE Transactions on, 8(2),
264-278.
Oskouie, P., Becerik-Gerber, B., and Soibelman, L. "Automated Cleaning of Point Clouds for
Highway Retaining Wall Condition Assessment." Proc., 2014 International
Conference on Computing in Civil and Building Engineering.
Rashidi, A., Brilakis, I., and Vela, P. (2014). "Generating Absolute-Scale Point Cloud Data of
Built Infrastructure Scenes Using a Monocular Camera Setting." Journal of
Computing in Civil Engineering, 04014089.
Snavely, N. (2014). "Bundler: Structure from Motion (SfM) for Unordered Image
Collections.", <http://www.cs.cornell.edu/~snavely/bundler/>.


Timofte, R., Zimmermann, K., and Gool, L. V. (2014). "Multi-view traffic sign detection,
recognition, and 3D localisation." Mach. Vision Appl., 25(3), 633-647.
Yangxing, L., and Ikenaga, T. (2007). "Geometrical, physical and text/symbol analysis based
approach of traffic sign detection system." IEICE, 90(1), 208-216.
Zhu, S.-d., Zhang, Y., and Lu, X.-f. (2006). "Detection for triangle traffic sign based on
neural network." Advances in Neural Networks-ISNN 2006, Springer, 40-45.


Automated Cycle Time Measurement and Analysis of Excavator's Loading Operation Using Smart Phone-Embedded IMU Sensors

N. Mathur1; S. S. Aria2; T. Adams3; C. R. Ahn4; and S. Lee5


1Tishman Construction Management Program, Department of Civil and Environmental Engineering, University of Michigan, 2350 Hayward St., G.G. Brown Bldg, Ann Arbor, MI 48109. E-mail: [email protected]
2Charles Durham School of Architectural Engineering and Construction, University of Nebraska, W145 Nebraska Hall, Lincoln, NE 68588. E-mail: [email protected]
3Charles Durham School of Architectural Engineering and Construction, University of Nebraska, W145 Nebraska Hall, Lincoln, NE 68588. E-mail: [email protected]
4Assistant Professor, Charles Durham School of Architectural Engineering and Construction, University of Nebraska, W145 Nebraska Hall, Lincoln, NE 68588. E-mail: [email protected]
5Associate Professor, Department of Civil and Environmental Engineering, University of Michigan, 2350 Hayward St., G.G. Brown Bldg., Ann Arbor, MI 48109. E-mail: [email protected]

Abstract

Measurement and analysis of the duty cycle of construction equipment is essential from the perspective of making decisions with regard to controlling idle time and realizing productivity improvements. However, current monitoring techniques like Vehicle Health Monitoring Systems (VHMS) are either too expensive and/or not compatible with outdated equipment fleets and equipment across different manufacturers. To address these issues, we aim to develop a non-invasive technique of using a smart phone to measure the various activity modes (e.g., wheel base motion, cabin rotation, and arm movement for an excavator) and subsequently the duty cycle of construction equipment. The smart phone is mounted inside the cabin of the construction equipment to automatically capture engine vibration signatures in the form of three-dimensional acceleration. Various time and frequency domain features are extracted from this raw data and are tested and classified into different equipment actions using machine learning algorithms from the WEKA (Waikato Environment for Knowledge Analysis) data mining suite. The classification accuracy on a random sample generated from various experiments on a hydraulic excavator (CAT 330CL) turned out to be between 72% and 86%. The average cycle time measurement accuracy based on the predicted labels for equipment actions was around 88.5%. This result demonstrates the potential use of the proposed technique as an affordable system for automated and real-time measurement of construction equipment duty cycles to facilitate detailed productivity analysis.


INTRODUCTION

The duty cycle of any construction equipment is defined as a sequence of tasks that is repeated to produce a unit of output (for example, a cubic yard for an excavator) (Abolhasani et al. 2008). The time taken to complete a duty cycle is defined as the cycle time for that particular set of tasks. Measurement of cycle time is very important from the perspective of making decisions related to equipment fleet configuration and productivity improvement. In general, shorter cycles increase production and reduce cost (Bennink 2011). For example, in an excavation and hauling operation, an excavator's maximum hourly production depends on the number of digging cycles achievable per hour. The question that still remains for estimators and planners is what value of cycle time they should consider while calculating the productivity of construction equipment. Common practice is to refer to standard guides like RS Means and use deterministic methods for calculation. Despite the wide use of these methods, one acknowledged problem is that they ignore the effects of varying site conditions, which might lead to inaccurate results. Therefore, to better model the randomness in construction operations, researchers (Chao 1999) have suggested stochastic simulation methods for calculating the probabilistic distribution of cycle times, including important statistical parameters such as the mean and standard deviation.

Monitoring of day-to-day operations, especially measurement and analysis of the various activity modes/equipment actions, is necessary to understand the duty cycle of construction equipment. The most common monitoring system in the industry is the Vehicle Health Monitoring System (VHMS), which makes use of the vehicle's on-board diagnostic (OBD) support to track fuel use and engine status. The problem with this technology is that it can be invasive, as it requires installation directly on the engine block. There are also compatibility issues with older equipment and equipment across different manufacturers.

Similarly, in academia, various researchers are working on developing methods and guidelines to measure the idle time and activity modes of construction equipment. Heydarian et al. (2012), Gong and Caldas (2011), and Zou et al. (2007) have implemented image processing and computer vision based methods to measure the idle time and various activity modes of construction equipment. Although their approach is very useful, such methods perform poorly in recognizing equipment activity when the target activities do not have notable visual differences. Moreover, the accuracy of their results is susceptible to camera resolution, lighting conditions, and other variables. Another stream of researchers, including Pradhananga et al. (2013), El-Omari and Moselhi (2009), Ergen et al. (2007), and Chen et al. (2005), has utilized Radio-Frequency Identification (RFID) and Global Positioning System (GPS) based methods for the automated location and tracking of construction equipment. However, this approach is incapable of tracking the stationary operation of construction equipment. Therefore, there is a need for a low-cost, robust, and automated method that can be applied to track construction operations in real time and under varying field conditions with reliability and certainty.


INERTIAL MEASUREMENT UNIT AND RELATED WORK

The recent advent of Inertial Measurement Unit (IMU) sensors (a sensor unit with a built-in accelerometer and gyroscope used to measure tilts, acceleration, etc.) has resulted in their utilization for various applications. Within the construction industry, the components of IMU sensors (accelerometer and gyroscope) have so far been used for activity analysis of construction workers (Cheng et al. 2013; Joshua et al. 2011), construction vehicle tracking (Lu et al. 2007), and detecting the 3-D orientation of construction equipment (Akhavian et al. 2012). Most of these studies were aimed at resource (equipment and material) tracking for construction planning and control. Although their findings can be useful for performance monitoring, they were mainly focused on tracking rather than analyzing operations at the activity level to make decisions regarding productivity.

Ahn et al. (2013) were the first to employ a MEMS accelerometer for estimating the working and idle modes of construction equipment. They relied on the assumption that any engine operation of construction equipment, including idling, creates patterns of acceleration signals distinguishable from the background noise generated during the engine-off mode. Similarly, stationary operation (e.g., of an excavator) creates acceleration patterns distinguishable from idling. They tested this approach on the operation of an excavator, classifying the equipment actions into working and idle modes. The activity recognition accuracy was found to be around 93%, which resulted in less than 2% error in the measurement of idle time. However, as discussed above, this outcome is not adequate for detailed productivity analysis, and therefore the various activity modes and the duty cycle of construction equipment need to be studied in depth.

RESEARCH OBJECTIVE AND FRAMEWORK

The objective of this research is to test the hypothesis that signals captured by a smartphone (equipped with IMU sensors) mounted on construction equipment can be used to classify the operation into various activity modes (for example, boom movement, cabin rotation, and wheel base motion for an excavator) and to determine the duty cycle of the operation. We are extending the work of Ahn et al. (2013) on broad-based equipment action classification (i.e., working, idle, and engine-off modes only) to the multi-mode classification discussed above. The underlying assumption behind this hypothesis is that every operation of construction equipment (for example, boom movement, cabin rotation, and wheel base motion for an excavator) generates unique and distinguishable signal patterns compared to the idling and engine-off modes.

So far, we have tested this approach on excavators, using data collected in numerous experiments under varying site conditions. We selected the excavator for our initial analysis because it is one of the most frequently used pieces of construction equipment and because its duty cycle is easier to represent in terms of basic tasks (i.e., arm movement, cabin rotation, wheel base motion) than that of other equipment such as cranes. The experiments were performed on a CAT 330CL hydraulic excavator working on actual project sites to test the feasibility of this approach in real-world scenarios.
The research framework is illustrated in Figure 1. A smartphone with built-in sensors (accelerometer and gyroscope) was mounted on a rigid block inside the cabin of the excavator. The accelerometer and gyroscope collected three-dimensional data at fixed intervals determined by the sampling frequency. The raw data were segmented into windows with 50% overlap between consecutive windows. The second-by-second operation of the excavator was videotaped, and the starting time stamps of the sensor data and the video were synchronized. We used detrending to remove the gravity component from the raw data before FFT (Fast Fourier Transform) processing.
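
A minimal sketch of this preprocessing step is shown below; the sampling rate, window length, and input file are assumptions for illustration, as the paper does not state them.

import numpy as np
from scipy.signal import detrend

def segment_windows(signal, window_len, overlap=0.5):
    """Split a 1-D signal into fixed-length windows with the given fractional overlap."""
    step = int(window_len * (1 - overlap))
    return np.array([signal[i:i + window_len]
                     for i in range(0, len(signal) - window_len + 1, step)])

# Assumed values: 100 Hz sampling and 1-second windows (the paper does not state them).
fs, window_len = 100, 100
raw_x = np.loadtxt("accel_x.csv")              # hypothetical raw x-axis acceleration file
gravity_free = detrend(raw_x)                  # remove the (near-constant) gravity trend
windows = segment_windows(gravity_free, window_len, overlap=0.5)
spectra = np.abs(np.fft.rfft(windows, axis=1)) # per-window FFT magnitudes for frequency features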

Various time- and frequency-domain features were extracted from the data. The time-domain features included resultant acceleration, mean acceleration (3 axes), standard deviation of acceleration (3 axes), peak acceleration (3 axes), correlation (3 axes), zero crossing rate (3 axes), kurtosis (3 axes), skewness (3 axes), and interquartile range (3 axes). The frequency-domain features included spectral entropy (3 axes), spectral centroid (3 axes), short-time energy (3 axes), and spectral roll-off (3 axes). The same set of features was extracted from both the accelerometer and the gyroscope data.
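
The following sketch illustrates how a subset of these features could be computed per window using standard definitions; it is an illustrative implementation, not the authors' feature-extraction code.

import numpy as np
from scipy.stats import iqr, kurtosis, skew

def window_features(w, fs):
    """Compute a subset of the listed time- and frequency-domain features for one window."""
    feats = {
        "mean": w.mean(),
        "std": w.std(),
        "peak": np.abs(w).max(),
        "zero_crossing_rate": np.mean(np.diff(np.signbit(w).astype(int)) != 0),
        "kurtosis": kurtosis(w),
        "skewness": skew(w),
        "iqr": iqr(w),
    }
    power = np.abs(np.fft.rfft(w)) ** 2
    p = power / power.sum()                               # normalized power spectrum
    freqs = np.fft.rfftfreq(len(w), d=1.0 / fs)
    feats["spectral_entropy"] = -np.sum(p * np.log2(p + 1e-12))
    feats["spectral_centroid"] = np.sum(freqs * p)
    feats["short_time_energy"] = np.sum(w ** 2) / len(w)
    return feats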

Figure 1. Research Framework

DATA CLASSIFICATION

An excavator has various degrees of freedom, but a typical operation can be broadly classified into the following four categories for duty cycle estimation purposes:
1. Idle: Engine is on but no component of the excavator is moving
2. Wheel base motion: Excavator translating on its wheels/tracks
3. Bucket/arm movement: Only bucket/arm movement
4. Cabin rotation: Only the cabin is rotating, with no other movement
The raw acceleration data for these four classes are plotted in Figure 2. It is apparent that the four classes have different ranges of acceleration, which supports differentiating them with a sensor-based approach. When labelling, preference was given to the predominant motion in cases where several actions occurred simultaneously. We considered the predominant motion to be the one likely to cause a higher amplitude of acceleration than the accompanying motion in a particular scenario. Accordingly, arm movement plus cabin rotation together was labelled as cabin rotation only; arm movement plus wheel base motion together was labelled as wheel base motion only; and cabin rotation plus wheel base motion together was labelled as wheel base motion only. Features were extracted from the raw data collected in multiple experiments and combined into a comprehensive dataset (3,752 data points/62.53 min). A training set was then randomly sampled from these data in such a way that all classes had approximately the same number of data points. We used the WEKA (Waikato Environment for Knowledge Analysis) machine learning and data mining toolkit for data classification. The classifiers selected from the available set were Multilayer Perceptron, J48, Sequential Minimal Optimization (SMO), Random Forest, Logistic, Multi Class Classifier, Classification via Regression, and Bayes Net. The results of the data classification are presented in Table 1.
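
The classification in the study was performed in WEKA; purely as an illustration of the same workflow (scikit-learn and the file and column names below are assumptions, not the authors' toolchain), the following sketch balances the classes by random sampling, applies an 80-20 split, and compares two of the listed classifiers.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neural_network import MLPClassifier

df = pd.read_csv("excavator_features.csv")     # hypothetical feature table with a 'label' column

# Balance the classes by randomly sampling the same number of points from each class.
n_per_class = df["label"].value_counts().min()
balanced = df.groupby("label", group_keys=False).sample(n=n_per_class, random_state=0)

X, y = balanced.drop(columns="label"), balanced["label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

for name, clf in [("Random Forest", RandomForestClassifier(random_state=0)),
                  ("Multi-Layer Perceptron", MLPClassifier(max_iter=1000, random_state=0))]:
    clf.fit(X_tr, y_tr)
    cv_acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name}: 10-fold CV = {cv_acc:.3f}, 80-20 test = {clf.score(X_te, y_te):.3f}")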

Figure 2. Raw Acceleration Data (x-axis) for the four classes: (a) Idle, (b) Wheel Base Motion, (c) Arm/Bucket Movement, and (d) Cabin Rotation. Each panel plots acceleration (m/sec²) against time (sec).


The data classification accuracy was between 72% and 86% for the 80-20 train-test split and between 68% and 83% for the 60-40 train-test split. We observed from our analysis that random sampling of the data helped reduce confusion among the different classes. Among the classifiers, Multi-Layer Perceptron, Decision Tree J48, Random Forest, and Classification via Regression performed particularly well on both the training and testing data. Almost all the classifiers properly distinguished idle (Class a) and wheel base motion (Class b) from the rest of the classes, but some confusion was observed in differentiating between arm/bucket movement and cabin rotation. Selecting appropriate features is key to classification, since using too many features can cause overfitting. In the future, we plan to work extensively on feature engineering in WEKA, using filter and ranker methods to refine the accuracies.

Table 1. Data Classification Accuracies (in %)

Classifier                       On Training   10-fold Cross   60-40 Train-   80-20 Train-
                                 Set           Validation      Test Split     Test Split
Multi-Layer Perceptron           91.14         87.06           77.89          85.71
Decision Tree J48                97.28         83.79           76.19          82.31
SMO                              70.98         68.12           66.33          68.03
Random Forest                    99.59         89.23           83.67          86.39
Logistic                         83.38         76.57           76.19          79.59
Multi Class Classifier           80.93         74.93           75.17          75.51
Classification via Regression    96.32         82.56           76.87          83.67
Bayes Net                        82.01         73.30           68.70          72.11

DUTY CYCLE ANALYSIS

We used the results obtained from the classification phase to calculate the duty cycle of the operation based on the predicted activity labels. The typical duty cycle of an excavator involves the following operations:
1. Start digging (Class 3: Bucket Motion)
2. Cabin rotation (Class 4: Cabin Rotation)
3. Dumping (Class 3: Bucket Motion)
4. Cabin rotation (clockwise/anticlockwise) (Class 4: Cabin Rotation)
5. Ready to dig (Class 3: Bucket Motion)

A video approximately 34.48 min long (2,069 data points) was selected to demonstrate the applicability of our approach. The features extracted from the raw data were divided into training and testing sets (60-40 split), and predictions of the activity labels were made on the testing set. The classification accuracy with the Random Forest algorithm was 74.7%. The whole duration of the operation corresponding to the testing set was divided into small time segments. To compute the cycle times from the predicted labels, we first calculated the number of cycles in a particular segment by identifying and enumerating the points in time at which the labels change from cabin rotation to bucket/arm movement and vice versa. This can be achieved by calculating the difference between consecutively occurring labels in time; the system returns a flag value for motions irrelevant to the cycle (e.g., wheel base motion). A particular set of label differences always occurs in a pattern, and one such pattern represents one duty cycle, so the start and end of each cycle can be estimated by tracking this pattern. Once the number of cycles is known, the cycle time is calculated by dividing the duration of the time segment by the number of cycles. A comparison between the predicted and actual labels is shown in Table 2.
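
A minimal sketch of this counting logic is given below (the class codes, sampling period, and the number of transitions per cycle are assumptions made for illustration).

import numpy as np

# Assumed class codes, following the duty-cycle mapping above: 3 = bucket/arm, 4 = cabin rotation.
BUCKET, CABIN, IRRELEVANT = 3, 4, -1

def cycle_time_for_segment(labels, sample_period=1.0, transitions_per_cycle=4):
    """Estimate the mean cycle time (sec) in one segment from per-sample predicted labels."""
    labels = np.asarray(labels)
    # Flag motions irrelevant to the digging cycle (e.g., wheel base motion) so they are ignored.
    labels = np.where(np.isin(labels, [BUCKET, CABIN]), labels, IRRELEVANT)
    relevant = labels[labels != IRRELEVANT]
    # A non-zero difference between consecutive labels marks a bucket<->cabin transition;
    # one dig-rotate-dump-rotate-dig pattern contains a fixed number of such transitions.
    n_transitions = np.count_nonzero(np.diff(relevant))
    n_cycles = n_transitions / transitions_per_cycle
    if n_cycles == 0:
        return None
    return len(labels) * sample_period / n_cycles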


Table 2. Comparison between cycle times obtained from classification results and from the video

Time Segment          Actual Cycle Time (sec)   Predicted Cycle Time (sec)
1                     16.00                     16.00
2                     11.43                     10.00
3                     10.00                      7.27
4                     13.33                      7.27
5                     11.43                      8.00
6                     13.33                      8.00
7                     11.43                     13.33
8                     10.00                     11.43
9                     13.33                      8.00
10                    16.00                     11.43
11                    10.00                     10.00
12                    11.43                     16.00
13                    13.33                     10.00
14                    13.33                     13.33
15                    15.39                      9.52
16                    13.33                     18.18
17                    11.11                     11.77
Mean                  12.60                     11.15
Standard Deviation     1.91                      3.19

For most of the cycles, the cycle time from the predicted labels matched exactly (or very closely) that from the video, but for some others the variation was significant. After analyzing the video, we concluded that this variation is mainly due to frequent mixed motions (e.g., bucket/arm movement and cabin rotation occurring simultaneously) in these duty cycles, which resulted in incorrect label predictions by the classification algorithm and subsequently affected the cycle time measurement. The higher standard deviation for the duty cycle measurement is probably caused by the confusion between classes such as arm/bucket movement and cabin rotation. We are exploring other features that will help us better distinguish between cabin rotation and bucket/arm movement, which is expected to improve the cycle time measurement accuracy.

CONCLUSION

An attempt has been made in this paper to classify the various actions of a hydraulic excavator (CAT 330CL) using a smartphone sensor-based approach. A novel method is presented for calculating the cycle time of the excavator from the data classification results for various cycles. We observed that the estimate of the mean cycle time is more accurate than the underlying classification, which is promising because estimating the cycle time is what we ultimately aim to achieve for improving the performance of equipment fleet operations. In the future, we plan to conduct a comprehensive duty cycle analysis for excavators to obtain probability distributions for cycle times. We are also interested in testing the feasibility of the smartphone-based approach for other frequently used construction equipment such as loaders and dozers. Our ultimate goal is to create an easy-to-use, non-invasive, and cost-effective system for monitoring the real-time operations of construction equipment, so that important decisions can be made regarding controlling idle time and improving productivity.

REFERENCES

Abolhasani, S., Frey, H.C., Kim, K., Rasdorf, W., Lewis, P., and Pang, S.H. (2008). "Real-world in-use activity, fuel use, and emissions for nonroad construction vehicles: a case study for excavators." J. Air Waste Manag. Assoc., 58(8), 1033-1046.
Ahn, C. R., Lee, S.H., and Peña-Mora, F. (2013). “Accelerometer-based measurement
of construction Equipment operating efficiency for monitoring Environmental
performance.” Computing in Civil Engineering (2013).
Ahn, C. R., Lee, S.H., and Peña-Mora, F. (2012). “Monitoring System for
Operational Efficiency and Environmental Performance of Construction
Operations, Using Vibration Signal Analysis.” Construction Research
Congress 2012, West Lafayette, Indiana, May 21–23.
Akhavian, R., and Behzadan, A. H. (2012). "Remote Monitoring of Dynamic Construction Processes Using Automated Equipment Tracking." Construction Research Congress 2012, pp. 1360-1369.
Bennink, C. (2011). "Dig Into Excavator Productivity." Equipment Today, <http://www.forconstructionpros.com/article/10212146/dig-into-excavator-productivity> (Nov. 15, 2014).
Chao, L.C. (1999). "Simulation of construction operation with direct inputs of physical factors." Construction Informatics Digital Library, paper w78-1999-2287.
Chen, J., Ahn, C. R., and Han, S. (2012). “Detecting the Hazards of Lifting and
Carrying in Construction through a Coupled 3D Sensing and IMUs Sensing
System." Computing in Civil Engineering (2014).
Gong, J., and Caldas, C. H. (2011). "An object recognition, tracking, and contextual
reasoning-based video interpretation method for rapid productivity analysis of
construction operations." Automation in Construction, 20(8), 1211-1226.
Heydarian, A., and Golparvar-Fard, M. (2012). "Automated Visual Recognition of Construction Equipment Actions Using Spatio-Temporal Features and Multiple Binary Support Vector Machines." Construction Research Congress 2012.
Joshua, L., and Varghese, K. (2011). "Accelerometer-Based Activity Recognition in Construction." Journal of Computing in Civil Engineering, 25, 370-379.
Pradhananga, N. and Teizer, J. (2013). “Automatic spatio-temporal analysis of
construction site equipment operations using GPS data”, Automation in
Construction, 29, p.107-122.
Zou, J., and Kim, H. (2007). "Using Hue, Saturation, and Value Color Space for Hydraulic Excavator Idle Time Analysis." Journal of Computing in Civil Engineering, 10.1061/(ASCE)0887-3801(2007)21:4(238).


Deviation Analysis of the Design Intent and Implemented Controls of HVAC Systems Using Sensor Data: A Case Study

R. Sunnam1,2; S. Ergan3; and B. Akinci1

1Department of Civil and Environmental Engineering, Carnegie Mellon University, Pittsburgh, PA 15213-3890. E-mail: [email protected]; [email protected]
2Baumann Consulting, 1424 K Street NW, Suite 500, Washington, DC 20005.
3Department of Civil and Urban Engineering, NYU Polytechnic School of Engineering, Brooklyn, NY 11201. E-mail: [email protected]

Abstract
Sustainable building system design techniques aim to find an optimal balance between occupant comfort and the energy performance of HVAC systems. The design and implementation of effective heating, ventilating, and air conditioning (HVAC) controls is the key to achieving these optimal design conditions. Any anomaly in the functioning of a system component or a control system results in occupant discomfort and/or energy waste. While occupant discomfort can be sensed directly by occupants, measuring wasted energy requires additional sensing and analysis infrastructure. One way of identifying such waste is to compare the as-designed system requirements with the actual performance of the systems. This paper presents an analysis of an air handling unit (AHU) in a five-story office building and compares the design requirements against the sensor data corresponding to the AHU parameters. One year of sensor data for the AHU parameters was analyzed to assess the correctness of the implementation of the design intent. The design intent was interpreted from the sequence of operations (SOOs) and confirmed with a commissioning engineer who worked on the project. The design intent was then graphically represented as a pattern that the sensor data corresponding to the controls is expected to follow if the controls are implemented correctly. Any deviation of the sensor data from this expected operation pattern indicated incorrect operation of the system with incorrectly implemented controls. The findings in this paper substantiate the need to formally define the sequence of operations and point to the need to verify the implemented controls in a given project to detect deviations from the actual design intent.

Keywords: Sustainable building design; Sensing in buildings; Sequence of operations; HVAC control systems; Deviation analysis


INTRODUCTION
According to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) handbook, the primary purpose of HVAC equipment and systems is to provide the desired indoor environment to the occupants of a facility, as it greatly impacts their productivity. HVAC design engineers calculate the thermal loads based on the internal loads and the thermal resistance of the building envelope. They select an appropriate HVAC system design that meets the standard design guidelines as well as the goals related to the indoor environment and energy performance requirements (ASHRAE, 2012). The controls necessary for the optimal functioning of these systems are conveyed to the other project team members through the construction documentation, which contains textual narratives called the Sequence of Operations (SOOs) that specify the HVAC controls. Research by the National Building Controls Information Program (NBCIP) highlighted the frequency of occurrence of control problems and their energy impact (Barwig et al., 2002). However, the NBCIP study identified control-related problems in retrospect, without looking at the design intent of the controls; these issues could have been introduced as part of the designed controls for these buildings. This paper presents a case study of the controls for an AHU in a newly constructed office building.

The designed and implemented controls for the AHU were compared to identify possible deviations. In current practice, the sequence of operations (SOOs) written by mechanical engineers is the main medium for sharing the design intent, so the SOOs and design drawings for the studied AHU were examined to understand the designer's intent for controlling the system. The design intent interpreted from the SOOs was then verified with the commissioning authority (CxA) for the project. The sensor data from the AHU was collected over a period of one year and compared graphically against the design intent of the mechanical designer. This comparison of the design intent and the actually implemented controls revealed inefficiencies in the present approach to exchanging information related to HVAC controls. The findings substantiate the need for a formal representation of SOOs to foster the exchange of information related to HVAC controls.

OVERVIEW OF THE CASE STUDY


An AHU from a five-story LEED™ Platinum certified office building was chosen for this study. The building is located in Nürnberg, Germany, a city with mild summers and cold winters. Figure 1a shows a view of the case-study building. The building is a new construction and has been in operation since January 2013. A roof-top packaged AHU was designed to condition 1,300 m² of floor area covering the following space types: corridors, restrooms, tea kitchens, and two conference rooms. The schematic drawing and the SOOs for the AHU were obtained from the project team. The AHU supplies 100% outside air to the spaces at a flow rate of 8,800 m³/h, with an exhaust air flow rate of 9,100 m³/h. The air velocity for both the supply and exhaust air streams is 1.5 m/s. Figure 1b shows the AHU installed on the roof of the building. The rest of the building is conditioned by natural ventilation, radiators, and high-mass radiant heating and cooling systems (hot/cold water pipes through the concrete slab). District heating is the primary heating source, and cooling is provided by an air-cooled chiller in the building. Energy efficiency was of prime importance for this project, as the owner aimed for a LEED™ Platinum rating. Apart from an efficient HVAC system, the design also incorporates a high-quality envelope, automated solar shading devices on all windows, and occupancy and daylight sensors to achieve the energy efficiency goals.

Figure 1. Case-study photographs: (a) view of the case-study building; (b) view of the roof-top packaged AHU (Image courtesy: Baumann Consulting)


AHU Components: The AHU was not provided with cooling components, as the design takes advantage of the mild summer temperatures and supplies outside air directly during summer for cooling. During winter, outside air is pre-heated by a cross-flow heat exchanger and then further heated, as required, by a hot water heating coil before being supplied to the zones. Figure 2 shows the schematic diagram of the AHU that was analyzed.

Figure 2. Schematic diagram of the AHU from the case-study


ANALYSIS OF THE DESIGNED CONTROLS


The AHU is controlled by a direct digital control (DDC) system in the building. A DDC system contains several microprocessor-based controllers that take input signals from sensors and send output signals to actuating devices. The input and output (I/O) signals are either binary (on/off), such as the fan status and the pump status, or analog (variable), such as temperature and pressure measurements. The signals to and from the controller are transmitted by a change in voltage, current, or resistance. The controllers interact with both hardware and software points to implement the controls, and they also interact with virtual objects such as set point values or operation schedules (ASHRAE, 2004).

The controls of the studied AHU were described in the SOOs by the mechanical designer through six control sequences corresponding to the return air temperature, supply air pressure, return air pressure, return water temperature, CO2 level, and system operation. The SOOs were thoroughly analyzed to interpret the controls, and any missing or unclear information was acquired through discussion with the CxA involved in commissioning the AHU. The interpreted controls related to the supply air temperature are presented as pseudo-code in Table 1. The following variables are used in the pseudo-code: outside air temperature (OAT), return air temperature (RAT), return air temperature set point (RAT:SP), supply air temperature (SAT), supply air temperature set point (SAT:SP), hot water return temperature (HWRT), hot water return temperature set point (HWRT:SP), supply air pressure (SAP), supply air pressure set point (SAP:SP), return air pressure (RAP), and return air pressure set point (RAP:SP).

ANALYSIS OF THE TEMPERATURE CONTROLS


The primary objective of this control loop is to maintain the required supply air temperature. Since the AHU is placed on the roof-top, the safety feature of freeze protection during the heating period (winter) is also an important design consideration. The control of the supply air temperature in the AHU is an example of a closed-loop controller: the functioning of the AHU components has a direct impact on the controlled variable (supply air temperature), and the closed-loop controller compares the measured value of the controlled variable with its set point and signals the controlled devices to take corrective action (ASHRAE, 2013). The pseudo-code for the control of the temperature-controlling components in the AHU, given in Table 1, was developed using the system schematic drawing and the SOOs.

Although it is not explicitly stated in the SOOs, the role of the cross-flow heat exchanger was interpreted through discussions with the commissioning engineer of the project. The system actively uses energy to heat the supply air with the heating coil, and it uses the cross-flow heat exchanger to reduce the load on the heating coil by passively pre-heating the supply air. The heat exchanger by-pass damper modulates toward the closed position as the heat exchanger ramps up, while the heating coil valve ramps up to enable the heating coil to start functioning. Based on the designed control logic, the heating coil valve should start modulating from 0% to 100% only when the heat exchanger by-pass damper is at 0%. This allows the heat exchanger to reach its maximum capacity before the heating coil starts functioning. Conversely, the heat exchanger by-pass damper should modulate between 0% and 100% only before the heating coil valve starts to modulate (i.e., while the valve is at 0%). If the sensor measurements of the by-pass damper position and the heating coil valve position are plotted against each other, the plot can be compared with the expected operation pattern to assess whether the control follows the design intent.

Table 1. Pseudo code for control of supply temperature


If (RAT-RAT:SP)>0 and (HWRT-HWRT:SP)>0 ; //typical heating period
Heat recovery:Enabled & Heat recovery damper:Ramp up & By-pass damper:Ramp down;
If (SAT-SAT:SP)<0; // additional heating not required from heating coil
Heating coil valve:Ramp down & Hot water pump:Off;
If (SAT-SAT:SP) >0; //additional heating required from heating coil
Heating coil valve:Ramp up & Hot water pump:On;
If (RAT-RAT:SP)>0 and (HWRT-HWRT:SP)<0; //heating period, heating coil freeze protection
Heating coil valve:Ramp up & Hot water pump:On;
Heat recovery:Enabled & Heat recovery damper:Ramp up & Bypass damper:Ramp down;
If (SAT-SAT:SP)>0; //heat recovery ramped down to reduce heating
Heat recovery damper:Ramp down & Bypass damper:Ramp up;
If (SAT-SAT:SP)<0; //heat recovery ramped up to provide more heating
Heat recovery damper:Ramp up & Bypass damper:Ramp down;
If ((RAT-RAT:SP)<0 and (HWRT-HWRT:SP)<0) or ((RAT-RAT:SP)<0 and (HWRT-HWRT:SP)>0); //Heat exchanger and heating coil freeze protection, or heat exchanger freeze protection only
Heat recovery:Disabled & Heat recovery damper:Closed & By-pass
damper:Open;
If (SAT-SAT:SP)>0; //excessive heating from the heating coil
Heating coil valve:Ramp down & Hot water pump:On;
If (SAT-SAT:SP)<0; //inadequate heating from the heating coil
Heating coil valve:Ramp up & Hot water pump:On;

If (RAT-RAT:SP)>0 and (HWRT-HWRT:SP)>0; // cooling period
Heat recovery:Disabled & Heat recovery damper:Closed & By-pass damper:Open;
Heating coil valve:Closed & Hot water pump:Off;

This can be graphically represented as shown in Figure 3a. Any point falling in the green section of the graph is predicted to follow the designed controls, and any point falling in the red region is predicted to deviate from them.
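
As a concrete reading of this expected operation pattern, the following minimal sketch (our illustration; the tolerance parameter is an assumption, since the paper defines the pattern only graphically in Figure 3a) labels a pair of measured positions as following the design intent only when at least one of the two devices is fully closed.

def follows_design_intent(bypass_damper_pct, heating_valve_pct, tol=0.0):
    """
    'Green' region: the heating coil valve modulates only while the by-pass damper is fully
    closed, or the damper modulates only while the valve is fully closed. Both devices open
    above 0% at the same time is a deviation ('red' region). `tol` is an assumed tolerance
    (in percent) for sensor noise.
    """
    return bypass_damper_pct <= tol or heating_valve_pct <= tol

Applying this rule to each 15-minute sample yields the green/red coloring used in the deviation analysis below.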


Figure 3. Deviation analysis of the implemented control against the design intent
DEVIATION ANALYSIS OF IMPLEMENTED CONTROLS
The Pia tool in MATLAB can be used for deviation analysis of the implemented controls by visualizing the corresponding sensor data. The tool allows the creation of a matrix of scatter plots and provides an option for coloring the data points interactively within the scatter plot matrix (Isakson, 2002). Trend data collected from April 2013 to March 2014 at a 15-minute interval were used in this analysis for the following parameters: by-pass damper position, hot water valve position, hot water pump status, supply fan status, hot water return temperature, hot water return temperature set point, return air temperature, and return air temperature set point. The heat exchanger was provided to reduce the heating load on the heating coil and thereby save energy in the AHU.

The deviation analysis presented here examines the implemented control to verify whether the design intent of saving energy is being achieved during system operation. The data collected at the 15-minute interval are classified as "Follows design intent" (green area) or "Does not follow design intent" (red area) based on the interpreted designed controls. The expected operation pattern of the by-pass damper and heating coil valve positions based on the interpreted controls is graphically presented in Figure 3a. The hot water valve is expected to modulate only when the by-pass damper is completely closed, during occupied heating hours, and when neither the heating coil nor the heat exchanger is in freeze protection mode.
The sensor trend data points were plotted against one another in a scatter plot matrix. The plot of the heating coil valve position vs. the by-pass damper position (Figure 3b) was chosen from the matrix, and the data points that corresponded to the green region of the operation pattern (as shown in Figure 3a) were colored green; the remaining points were colored red. Further, all instances corresponding to unoccupied hours (supply fan off), no hot water flow (hot water pump off), freeze protection of the heating coil (HWRT - HWRT:SP < 0), and freeze protection of the heat exchanger (RAT - RAT:SP < 0) were filtered out; these filtered instances are shown as black points in Figure 3b. The remaining red points indicate the instances when the implemented control deviated from the design intent. In the particular example discussed here, the system loses opportunities to save energy at the instances marked in red. According to the design intent, the heating coil valve should modulate only when the heat recovery by-pass damper is completely closed; however, both are open above 0% for the instances marked in red. The majority of these instances occurred in September and October 2013, when the room temperatures varied between 20°C and 26°C.
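
A sketch of this filtering and classification step is shown below; the column names and file name are assumptions, and the study performed the equivalent interactively with the Pia tool in MATLAB.

import pandas as pd

trend = pd.read_csv("ahu_trend_15min.csv")     # hypothetical BAS trend export at 15-minute intervals

# Filter out unoccupied hours, no hot-water flow, and freeze-protection instances.
operating = trend[
    (trend["supply_fan_status"] == 1) &
    (trend["hw_pump_status"] == 1) &
    (trend["hwrt"] - trend["hwrt_sp"] >= 0) &
    (trend["rat"] - trend["rat_sp"] >= 0)
]

# Deviation from the design intent: heating coil valve and by-pass damper both open above 0%.
deviates = (operating["heating_valve_pct"] > 0) & (operating["bypass_damper_pct"] > 0)
print(f"{deviates.mean():.1%} of occupied heating instances deviate from the design intent")
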
It is impossible to assess why the heat exchanger does not reach its maximum capacity by looking only at the room temperatures, because the heating requirement would be compensated by the heating coil. Hence it is important to examine the control relationship between the heating coil valve and the by-pass damper, which may not always be explicitly described in the SOOs. In the case example studied here, the control relationship was determined from the pseudo-code developed based on the discussion with the commissioning engineer for the project. A formal representation of SOOs could be used to clearly interpret all the control relationships required to implement the design intent.

LESSONS LEARNED FROM THE DEVIATION ANALYSIS


Identifying anomalies in the implemented controls ensures that HVAC systems perform according to their designed controls and can help save energy (Baumann, 2003). Salsbury and Singhal (2003) described two testing approaches for commissioning HVAC control systems:
(1) Passive tests: testing approaches that evaluate the implemented controls by only observing normal system functioning. The relevant control variables are selected and the corresponding sensor data are recorded for analysis every day;
(2) Active tests: testing approaches in which the normal functioning of the system is disturbed to verify the responses of various system components to the disturbances. The component responses are verified against the expected behavior based on the designed controls.
For both testing approaches, it is essential to interpret the design intent from the SOOs so that it can be compared against the actual system functioning. Previous work in this research identified the challenges in interpreting the design intent from the SOOs due to the absence of a guideline stating the information items that should be included in the SOOs (Sunnam et al., 2013). For example, the sensor corresponding to the parameter directly associated with the controller of the heating coil valve is not explicitly stated in the following extract from the SOOs for the studied AHU:
"The return temperature of the heating coil is controlled to a minimum temperature set point. When the system is not functioning, the controller has direct access to the heating coil valve. During the operation of the system, a maximum value is selected from the control values of the heating coil return water temperature controller and the supply air temperature controller."
Hence, the challenges in interpreting the design intent of the controls also impede the controls testing tasks during commissioning. A formal representation of SOOs can help to clearly interpret the design intent so that it can be compared to the implemented controls.
The deviation analysis presented in this paper is an example of a passive test: it utilizes the trend data from the building automation system (BAS) and visually analyzes the system functioning over twelve months. Both the active and passive testing approaches help identify differences between the implemented and designed controls. However, these diagnostic approaches give little information about the changes required to the implemented controls to eliminate the deviations. Formally associating the sensor data points with the respective control parameters can further help identify the changes that need to be made to the implemented controls when deviations are found.
CONCLUSION
This paper highlights the need for a formal representation of SOOs and for formal approaches to comparing the design intent from the SOOs with the implemented controls. Presently, several challenges are associated with interpreting the design intent of HVAC controls from the SOOs, such as missing information items, missing set points, or insufficient descriptions. Hence the implemented controls cannot always be compared to the design intent during commissioning. The deviation analysis from the case study presented in this paper shows the importance of testing the implemented controls, as such tests help identify control issues that may lead to energy waste. A formal representation of SOOs can greatly improve this controls testing process by enabling a clear interpretation of the design intent. Also, the testing approaches presently used for commissioning HVAC control systems do not clearly indicate the changes required to resolve identified deviations of the implemented controls from the design intent. The analysis of the case study shows that associating the trend data points from the BAS with the control parameters can be used to effectively identify deviations between the designed and implemented controls. Once the correct implementation of the controls is established, the controls can be further optimized to achieve energy savings of up to 35% (Wang et al., 2011). Future work will focus on formalizing the deviation analysis of the designed and implemented controls to identify exactly what changes need to be made in the BAS programming in the event of deviations.
ACKNOWLEDGEMENT
The authors thank the team at Baumann Consulting for their expert advice and for supporting this research.

REFERENCES
Barwig, F. E., House, J. M., Klaassen, C. J., Ardehali, M. M., & Smith, T. F. (2002).
The national building controls information program. In Proc. ACEEE
Summer Study on Energy Efficiency in Buildings, Washington D.C., August
18 – 23, 2002.
Baumann, O. (2003). Operation Diagnostics – Verification and Optimization of Building and System Operation by Multi-Dimensional Visualization of BEMS Data. ICEBO – International Conference for Enhanced Building Operations, Berkeley, CA, October 13-15, 2003.
ASHRAE. (2004). Guideline 13-2000: Specifying Direct Digital Control Systems. ASHRAE Publications, Atlanta, GA.
ASHRAE. (2012). ASHRAE Handbook: HVAC Systems and Equipment. American Society of Heating, Refrigerating, and Air Conditioning Engineers, Atlanta, GA.


ASHRAE. (2013). ASHRAE Handbook: Fundamentals. American Society of Heating, Refrigerating, and Air Conditioning Engineers, Atlanta, GA.
Isakson, P. (2004). Pia-Manuals, Department of Building Sciences, Royal Institute of
Technology Stockholm (KTH), KTH Publications, Sweden.
Salsbury, T. I., & Singhal, A. (2003). Control System Commissioning for Enhanced
Building Operations. In Proceedings of the 3rd International Conference for
Enhanced Building Operations, Berkeley, CA, October 13-15, 2003.
Wang, W., Katipamula, S., Huang, Y., & Brambley, M. R. (2011). Energy Savings
and Economics of Advanced Control Strategies for Packaged Air-
Conditioning Units with Gas Heat (No. PNNL-20955). Pacific Northwest
National Laboratory (PNNL), Richland, WA (US).


Implications of Micro-Management in Construction Projects: An Agent Based Modeling Approach

Jing Du1 and Mohamed El-Gafy2

1Assistant Professor, Department of Construction Science, University of Texas at San Antonio. E-mail: [email protected]
2Associate Professor, School of Planning, Design and Construction, Michigan State University. E-mail: [email protected]

Abstract

Micro-management refers to a management style whereby managers closely observe and control the work details of their subordinates or employees. Although micro-management generally has a negative connotation, the implications of adopting micro-management in construction projects remain unclear. This paper proposes the use of Agent Based Modeling (ABM) to investigate the impacts of micro-management on efficiency, effectiveness, quality, and employee stress levels in construction projects. A comprehensive simulation platform, Virtual Organizational Imitation for Construction Enterprises (VOICE), has been developed to simulate the proposal development of an EPC (Engineering, Procurement and Construction) project. The simulation results show that micro-management has complex effects in the studied project, whereby decisional, behavioral, technical, and institutional factors are interdependent. Micro-management in certain cases improves the efficiency and quality of proposal development. This paper contributes to simulation studies investigating social and behavioral problems in construction.

INTRODUCTION

Management styles can be grouped into two categories according to how coordinators involve themselves in decision making and managerial actions: micro-management and its opposite (Burton et al. 1998). Micro-management is the practice of being heavily engaged in the daily affairs and specific tasks of subordinates, whereas the opposite is giving a degree of autonomy to subordinates. The organizational literature often refers to micro-management as a "bad management" practice (Alvesson and Sveningsson 2003). In general, "it takes away the decisions from the people that should take the decisions" (Alvesson and Sveningsson 2003), and it interferes with the productivity of people and the efficiency of projects and processes (Chambers 2009). Despite the evidence from the general organizational literature, the implications of adopting micro-management in construction projects remain unclear. This study proposes the use of Agent Based Modeling (ABM) to investigate the implications of micro-management in construction projects.


LITERATURE REVIEW

ABM is an emerging tool in social research for studying human and organizational issues in a diversity of areas (North and Macal 2007). It is a computational method that builds a common environment for heterogeneous and autonomous agents to share, and allows the agents to interact with each other simultaneously in their own self-interest (Ligmann-Zielinska and Jankowski 2007). Unlike top-down modeling approaches (e.g., System Dynamics), in ABM the collective behavior of the simulated system is not predefined but emerges from individual agents who act based on what they perceive to be their own interests. Thus, ABM is capable of reproducing the emergent properties of the studied systems (Macal and North 2007). ABM has been utilized by a small but growing community of scholars to tackle a range of difficult problems in the construction area, including engineering design (Soibelman and Pena-Mora 2000), project organizations and networks (Du and El-Gafy 2010, 2012; Horii et al. 2005; Jin and Levitt 1996; Taylor and Levitt 2007), construction operations (Kim 2010; Mohamed and AbouRizk 2005; Watkins et al. 2009), project management (Christodoulou 2010), supply chains (Xue et al. 2005), and construction safety (Walsh and Sawhney 2004).

METHODOLOGY

An ABM model, named Virtual Organizational Imitation for Construction Enterprises (VOICE), has been developed. VOICE tailors Robbins' model of organizational behavior (Robbins 2005) to suit construction organizations, with three main components modeled (Figure 1): (1) Work: construction organizations are project-based organizations (PBOs), and thus projects and their corresponding tasks are modeled as the sole input, as in Robbins' model; (2) Actors: project tasks are performed by the individuals in a construction organization, whose personalities, values, and attitudinal factors affect their perceptions of the tasks, leading to diverse micro-level behaviors directly related to work performance; and (3) Organization: a variety of organizational structures that arrange lines of authority, work, and communications, and allocate rights and duties. In addition, key performance indicators of project team performance are modeled as the main output. The architecture illustrated in Figure 1 reflects the bottom-up process of organizational behavior (input, individual-level process, group process, organizational process, output) suggested by Robbins (2005). VOICE conceptualizes and integrates all components into a comprehensive and integral model. Table 1 summarizes the model rules of VOICE.

Figure 1. Model architecture of VOICE


Table 1. Summary of VOICE model rules

Work:
• A project can be divided into a sequence of executable work efforts called tasks;
• Multiple projects can be handled by a team simultaneously;
• A task is the most basic executable work effort for a project team member;
• Task amount is measured in "hours", i.e., how many hours it takes to finish a task by a team member with average competence;
• Some tasks need approval from managers or the president, or additional information, before processing; if a task depends on another task that has not been finished, it cannot be processed.

Actors: There are three major roles of actors in a construction project team: president, manager, and staff member. In VOICE, an actor first examines the situation; then, based on its judgment of the situation and its preferences, a certain behavioral module is triggered.
• Prioritizing: An actor can only process one task at a time; therefore, prior to further actions, an actor may order all tasks in hand based on their priorities;
• Processing: Processing a task means reducing a certain amount from the task every simulation tick. The amount reduced depends on the task difficulty and the competence of the actor. During this process, actors may commit mistakes, shown as a mistake percentage of the task;
• Submission: Once a task is finished and there are no successive actors (according to the work process), the actor submits the task to his/her superior;
• Assigning: A manager may assign a task to his/her subordinates based on the assigning preference (e.g., speed driven or quality driven);
• Requesting/approving: Some tasks require approval from superiors; in this case, the actor renders the task to a superior, who approves the task or renders it again to his/her own superior based on the technical information of the task and the authority level of the actor;
• Conflict management: If a conflict cannot be solved by staff members, it is raised to the manager or coordinator for further action;
• Information exchange: If the available information for a task is less than the required information, the actor sends the task to another actor. After a while, the task is returned to the requestor with the necessary information;
• Meeting: If the number of all exceptions in a team is bigger than the threshold of the president, a meeting is held. After a meeting, all tasks are approved, information is provided, and exceptions are cleared;
• Monitoring: If the mistake percentage of a task is bigger than the threshold of a staff member or a manager, it is returned to the original actor, or it is corrected at a cost of additional time, depending on the preference;
• Correction/rework: If an actor receives a returned task marked as unqualified, he/she redoes it to improve quality. The time spent on correcting/redoing a task depends on the mistake percentage of the task and the competence of the actor;
• Stress-coping: An actor sums up the total amount of tasks (burden) in hand; if this number is bigger than his/her capacity, he/she suspends working and returns new tasks to the manager, who reassigns them to a staff member with a smaller burden.

Organization:
• Reporting structure: It is assumed that a construction project team has a three-level hierarchical organizational structure;
• Work process: The procedure for processing a task; it shows the sequence of delivering a task among team members and always starts from a manager;
• Information flow: The channel connecting information requestors and providers. Information only refers to task-related information, i.e., information needed for processing a task.

Performance:
• Efficiency: Finished task amount per unit time (hour);
• Effectiveness: Ratio of productive time to total time, where productive time is defined as time directly spent on processing tasks;
• Quality: The mistake percentage of a project, which equals the weighted average of the mistake percentages of all its tasks;
• Work pressure: Total work amount of tasks in hand for an actor.
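
To make the bottom-up character of these rules concrete, the following minimal sketch (an illustrative simplification, not the VOICE implementation; all class names and parameter values are invented for this example) shows an actor agent that prioritizes the tasks in hand, processes the top task each simulation tick at a rate set by its competence, and suspends work when its burden exceeds its capacity.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    hours_remaining: float     # task amount measured in hours
    priority: int = 0
    mistake_pct: float = 0.0

@dataclass
class Actor:
    competence: float          # hours of work completed per simulation tick
    capacity: float            # maximum total burden (hours) before suspending work
    tasks: list = field(default_factory=list)

    def burden(self) -> float:
        return sum(t.hours_remaining for t in self.tasks)

    def step(self):
        """One simulation tick: prioritize, then process the top task if not overloaded."""
        if not self.tasks or self.burden() > self.capacity:
            return                                    # idle, or stress-coping (suspend work)
        self.tasks.sort(key=lambda t: t.priority, reverse=True)
        top = self.tasks[0]
        top.hours_remaining -= self.competence
        if top.hours_remaining <= 0:
            self.tasks.pop(0)                         # finished; would be submitted upward

# Example: a staff member with two tasks works until both are finished.
staff = Actor(competence=1.0, capacity=40.0,
              tasks=[Task("quantity take-off", 3, priority=2), Task("pricing", 2, priority=1)])
for _ in range(6):
    staff.step()
print(len(staff.tasks), "tasks remaining")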

SIMULATION ANALYSIS

A case study was conducted with a large EPC company, denoted A. In order to enhance its competitive power in the EPC market, Company A acquired an engineering design firm several years ago to design all of A's new EPC jobs. Proposal development is the responsibility of A's project proposal team, but because of the specialty of the work, the proposal team relies heavily on the technical and quantity information from the engineering team to develop proposals. This study utilized VOICE to explore the implications of micro-management in the proposal development at A, especially in the interdepartmental cooperation between the engineering and proposal teams (Figure 2). The magnitude of micro-management was measured by the acceptable number of iterations for information exchange before raising the issue to coordinators (Kristof-Brown and Stevens 2001); a smaller acceptable number of iterations means the coordinators prefer micro-management. In addition, two other sociotechnical factors have been found to affect the implications of micro-management: (1) goal congruence, i.e., aligned perceptions of behavioral standards and rankings of management criteria among stakeholders (Thomsen et al. 2005); and (2) task dependence, i.e., the relationships among tasks that determine the order in which activities need to be performed (Jin and Levitt 1996). In the simulation, goal congruence was quantified as a value from 0 to 1, where 1 means the best goal congruence. As for task dependence, a probability was used to determine whether a newly generated task can be processed while preceding tasks are ongoing. Monte Carlo simulation was performed to explore the entire uncertainty space, with uniform distributions used to simulate the changes of micro-management, goal congruence, and task dependence.
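
As an illustration of this experimental design, the sketch below (a simplified stand-in; the run count, parameter ranges, and the run_voice call are placeholders, not part of the actual VOICE code) draws the three factors from uniform distributions for each Monte Carlo run.

import numpy as np

rng = np.random.default_rng(7)
n_runs = 1_000                                  # placeholder; the study reports 52,800 runs

samples = {
    # A smaller acceptable number of iterations means a stronger micro-management preference.
    "acceptable_iterations": rng.integers(1, 11, n_runs),
    "goal_congruence": rng.uniform(0.0, 1.0, n_runs),    # 1 = perfectly aligned goals
    "task_dependence": rng.uniform(0.0, 1.0, n_runs),    # probability that a new task must wait
}

runs = []
for i in range(n_runs):
    # Each run draws one parameter set; run_voice(params) would execute the VOICE model and
    # return the KPIs (efficiency, effectiveness, quality, work pressure) - omitted here.
    runs.append({name: values[i] for name, values in samples.items()})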

Figure 2. Snapshot of the VOICE simulation, showing the reporting structure, work process, and information flow linking the projects, the engineering team vice president, the proposal team vice president, the engineering coordinator, the proposal coordinator, and the engineer and proposal staff.

Influences of micro-management and goal congruence. Figure 3 illustrates the combined influences of goal congruence and micro-management preference on performance. Micro-management and goal congruence between teams together can alter the shape of the performance landscapes. A further ANOVA analysis (Table 2) found that the influence of micro-management shows different features under different levels of goal congruence:
• Efficiency: The effects of micro-management on efficiency vary depending on the level of goal congruence. When goals are less congruent between the two teams (0.1 and 0.2), micro-management can help improve efficiency; but when goals are highly congruent (0.8 and 0.9), too much micro-management hurts efficiency.
• Effectiveness: The effect of micro-management on effectiveness also depends on the level of goal congruence. When goals are less congruent, such as at a level of 0.1 or 0.2, micro-management improves effectiveness; otherwise, micro-management sacrifices effectiveness. This indicates that micro-management helps with effectiveness only when goals are incongruent.
• Quality: The results show that autonomy sacrifices quality in most cases; micro-management generally helps reduce mistakes. However, this is not true when the goals of the two teams are highly congruent (e.g., greater than 0.9), in which case micro-management slightly increases the chance of committing more mistakes. This indicates that when teams share the same goals, micro-management leads to mistakes.
• Work-related pressure: The ANOVA indicates a significant relationship (p-value < 0.0001) between micro-management and work-related pressure at every level of goal congruence: less micro-management, or a higher level of autonomy for the staff, means a higher level of work-related pressure.

Figure 3. Influences of goal congruence and micro-management


Table 2. p-values of micro-management’s influence under levels of goal congruence
Congruence Efficiency Effectiveness Quality Pressure
0.1 <0.0001* <0.0001* <0.0001* <0.0001*
0.2 0.0002* <0.0001* <0.0001* <0.0001*
0.3 0.0064 <0.0001* <0.0001* <0.0001*
0.4 0.3628 <0.0001* <0.0001* <0.0001*


0.5 0.3477 <0.0001* <0.0001* <0.0001*
0.6 0.479 <0.0001* 0.0134* <0.0001*
0.7 0.1075 <0.0001* 0.8255 <0.0001*
0.8 0.1533 <0.0001* 0.3698 <0.0001*
0.9 0.0003* <0.0001* 0.2294 <0.0001*
1.0 0.0024* <0.0001* 0.0001* <0.0001*
Influences of micro-management and task dependence. The experiment also examined the combined impacts of micro-management and task dependence on performance. Figure 4 shows the results based on 52,800 simulations.

Figure 4. Influences of task dependence and micro-management


An ANOVA revealed the differing effects of micro-management under different levels of task
dependence (Table 3).

• Efficiency: The influence of micro-management becomes less noticeable when task dependence is considered. Only when tasks are very independent (task dependence of 0 through 0.2) is micro-management able to improve efficiency; otherwise, it exerts no influence. This indicates that micro-management is beneficial only when tasks are highly independent.
• Effectiveness: Similar to efficiency, the influence of micro-management on the effectiveness of proposal development is not significant when task dependence is considered.
• Quality: Autonomy sacrifices quality. When coordinators prefer the autonomy of team members, the team commits more mistakes. Worth noting, however, is that the opposite trend occurs when task dependence equals 0, which is due to abnormal data points in the simulation.
• Work pressure: The ANOVA does not show a significant relationship between micro-management and work-related pressure at most task dependence levels. Only when tasks are highly independent (dependence smaller than 0.4) do the results show that micro-management can reduce work-related pressure.
Table 3. p-values of micro-management’s influence under levels of task dependence
Dependence Efficiency Effectiveness Quality Pressure
0.0 <0.0001* <0.0001* 0.003* <0.0001*
0.1 0.0027 <0.0001* <0.0001* <0.0001*
0.2 0.0343 0.0238* <0.0001* <0.0001*
0.3 0.0789 0.0862 <0.0001* 0.0253*
0.4 0.1237 0.1658 <0.0001* 0.0448*
0.5 0.3364 0.0822 <0.0001* 0.1864
0.6 0.6238 0.1561 0.0134* 0.3762
0.7 0.1595 0.1231 0.0013* 0.4579
0.8 0.4423 0.0684 0.0002* 0.2688
0.9 0.5864 0.0402* 0.0002* 0.3357
1.0 0.3522 0.0355* 0.0052 0.4238

DISCUSSION AND CONCLUSION

The general organizational science literature often refers to micro-management as a bad practice. However, the implications of adopting micro-management in construction projects remain unclear. This study investigated micro-management and its implications in project proposal development from a behavioral perspective. Unlike previous efforts, it also highlights the importance of considering the diverse human behaviors relevant to proposal development in a comprehensive manner, rather than just one or several critical behaviors, because the interactions of various human behaviors set the foundation for understanding complex institutional and behavioral phenomena. An ABM model, called VOICE, was built to perform a series of simulation experiments on the impacts of micro-management, considering the implications of goal congruence and task interdependence. The results indicate that the impacts of micro-management are complex and depend on a variety of factors. For example, when team members share congruent goals, micro-management hurts performance, but it improves performance when team members have incongruent goals. Admittedly, this work is in its infancy. Future work will focus on expanding the factors and processes modeled by VOICE to capture a wider range of organizational behaviors. More real data from different companies will be collected to define behaviors, work processes, and interactions, which will result in more realistic results.
APPENDIX: SUPPLEMENTARY INFORMATION

The behavior rules in VOICE were based on a survey conducted in 2011 and a meta-analysis. For summaries, please refer to https://sites.google.com/site/dujresearch/working-papers.
REFERENCES

Alvesson, M., and Sveningsson, S. (2003). "Good visions, bad micro-management and ugly ambiguity:
contradictions of (non-) leadership in a knowledge-intensive organization." Organization Studies,
24(6), 961-988.
Burton, R. M., Obel, B., Hunter, S., Søndergaard, M., and Døjbak, D. (1998). Strategic organizational
diagnosis and design: Developing theory for application, Kluwer Academic Pub.
Chambers, H. E. (2009). My Way Or the Highway: The Micromanagement Survival Guide: Easyread
Super Large 18pt Edition, ReadHowYouWant. com.


Christodoulou, S. (2010). "Scheduling Resource-Constrained Projects with Ant Colony Optimization


Artificial Agents." Journal of Computing in Civil Engineering, 24, 45.
Du, J. and El-Gafy, M. (2010) “Virtual Organizational Imitation for Construction Enterprises (VOICE):
Managing Business Complexity Using Agent Based Modeling.” Construction Research Congress
2010: pp. 398-408.
Du, J. and El-Gafy, M. (2012). ”Virtual Organizational Imitation for Construction Enterprises: Agent-
Based Simulation Framework for Exploring Human and Organizational Implications in
Construction Management.” J. Comput. Civ. Eng., 26(3), 282–297.
Horii, T., Jin, Y., and Levitt, R. (2005). "Modeling and analyzing cultural influences on project team
performance." Computational & Mathematical Organization Theory, 10(4), 305-321.
Jin, Y., and Levitt, R. (1996). "The virtual design team: A computational model of project organizations."
Computational & Mathematical Organization Theory, 2(3), 171-195.
Kim, K. (2010). "Case Study on the Evaluation of Equipment Flow at a Construction Site." Journal of
Computing in Civil Engineering, 1, 28.
Kristof-Brown, A. L., and Stevens, C. K. (2001). "Goal congruence in project teams: Does the fit between
members' personal mastery and performance goals matter?" Journal of Applied Psychology, 86(6),
1083.
Ligmann-Zielinska, A., and Jankowski, P. (2007). "Agent-based models as laboratories for spatially
explicit planning policies." Environment and Planning B: Planning and Design, 34(2), 316-335.
Macal, C., and North, M. "Agent-based modeling and simulation: desktop ABMS." IEEE Press
Piscataway, NJ, USA, 95-106.
Mohamed, Y., and AbouRizk, S. (2005). "Framework for building intelligent simulation models of
construction operations." Journal of Computing in Civil Engineering, 19, 277.
North, M., and Macal, C. (2007). Managing business complexity: discovering strategic solutions with
agent-based modeling and simulation, Oxford University Press, USA.
Robbins, S. (2005). "Organizational Behavior (11th)." Prentice Hill.
Soibelman, L., and Pena-Mora, F. (2000). "Distributed multi-reasoning mechanism to support conceptual
structural design." Journal of Structural Engineering, 126, 733.
Taylor, J., and Levitt, R. (2007). "Innovation alignment and project network dynamics: An integrative
model for change." Project Management Journal, 38(3), 22-35.
Thomsen, J., Levitt, R. E., and Nass, C. I. (2005). "The Virtual Team Alliance (VTA): Extending
Galbraith’s Information-Processing Model to Account for Goal Incongruency." Computational &
Mathematical Organization Theory, 10(4), 349-372.
Walsh, K., and Sawhney, A. "Agent-Based Modeling of Worker Safety Behavior at the Construction
Workface." 779-792.
Watkins, M., Mukherjee, A., Onder, N., and Mattila, K. (2009). "Using Agent-Based Modeling to Study
Construction Labor Productivity as an Emergent Property of Individual and Crew Interactions."
Journal of Construction Engineering and Management, 135, 657.
Xue, X., Li, X., Shen, Q., and Wang, Y. (2005). "An agent-based framework for supply chain
coordination in construction." Automation in Construction, 14(3), 413-430.


A Data Quality-Driven Framework for Asset Condition Assessment Using LiDAR and Image Data
Pedram Oskouie1; Burcin Becerik-Gerber2; and Lucio Soibelman3
1Ph.D. Student, Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, Los Angeles, CA. E-mail: [email protected]
2Assistant Professor, Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, Los Angeles, CA. E-mail: [email protected]
3Professor and Chair, Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, Los Angeles, CA. E-mail: [email protected]

Abstract
Laser scanners provide high-precision geometrical information; however, due
to various scan errors, the generated point clouds often do not meet the data quality
criteria set by project stakeholders for accurate data processing. Although several
studies in the literature address the identification of scan errors, there is limited
research on defining a data quality-driven scan plan for accurate detection of geometrical
features. The authors propose a novel framework that integrates image-processing
methods with point cloud processing techniques to define a data quality-driven
approach for scan planning. The framework includes the following steps: 1) capturing
images of a target using a commercially available unmanned aerial vehicle (UAV)
and generating a 3-D point cloud, 2) recognizing the project's geometrical
information using the point cloud, 3) extracting the features of interest (FOI) using
the point cloud, 4) generating multiple scanning scenarios based on the data
processing requirements and the extracted features, 5) identifying the best scan plan
through iterative simulation of different scanning scenarios, and 6) validating the scan
plan using real-life data. The framework was evaluated using preliminary results of a
case study. The results showed that the data quality requirements of the case study
were met using the proposed framework.
INTRODUCTION AND BACKGROUND
Visual remote sensing technologies, such as Light Detection and Ranging
(LiDAR) systems and digital cameras, are prevalently used for generating as-built/as-
is 3-D models. Reconstructing accurate 3-D models using such technologies has
been extensively studied in recent years (Anil et al. 2011; Balali and Golparvar-Fard
2014; Balali and Golparvar-Fard 2015; Tang et al. 2010). The performances of 3-D laser
scanners and digital cameras in creating realistic 3-D models have also been
compared in many studies (Dai et al. 2012; Zhu and Brilakis 2009). According to Dai
et al. (2012), the performance of laser scanners is more consistent and their accuracy is
10 times higher than that of image and video-based methods. Researchers have identified
factors that influence scan-data accuracy. For example, Shen et al. (2013) found that
scan resolution, the scanner's distance, and the color and intensity of the scanned objects are the


factors that contribute most to scan errors. In order to obtain an accurate 3-D point
cloud and to minimize errors for different purposes, such as Scan-to-BIM, it is
essential to have an accurate scan plan that takes into account all parameters that could affect a
scan's accuracy. Most scan planning studies focus on analyzing
visibility requirements and do not consider data quality requirements (e.g., data
density and tolerance). The use of a uniform scan plan can result in a redundant level of
detail (LOD) in parts of a point cloud and a lack of the required detail in other parts. A
scan plan should also consider the project-specific LOD requirements, which may
vary within a project. The LOD requirements of a geometrical feature of a project could
be initially defined based on the asset condition assessment (ACA) goals. For
instance, an ACA goal for a bridge could be the inspection of concrete columns for
crack detection. In order to monitor the columns using a laser scanner, point clouds
should provide detailed data points on the columns so the data processor can
analyze the severity of cracks or damages. If the rest of the bridge is scanned with
settings similar to the columns, a scan plan based solely on visibility requirements
could potentially result in a time-consuming and costly scan process.
Recently, researchers have developed new scan planning approaches focusing
on scan data quality. Pradhan and Moon (2013) introduced a simulation-based
framework to identify scan locations for better capturing the critical components of a
bridge, such as piers and girders. Based on their findings, capturing geometrical
measurements of certain parts of a structure is more critical than others, since
different portions of a structure have different levels of importance in terms of
performance/response metrics. Song et al. (2014) proposed a novel approach for scan
planning, which integrated 3-D data quality analysis and clustering methods to group
the geometrical features based on their LOD requirements. Their study showed that
the automatic scan planning algorithm results in a denser point cloud without the
need to increase the data collection time. However, their study is based on the assumption
that a BIM model of the project is available, which is not always the case,
especially when the as-is condition of an infrastructure differs from the archived
as-built/designed models. Moreover, the line-of-sight and portability limitations of
terrestrial laser scanners have to be considered in the scan plan. For instance, a scan
plan has to provide a solution for capturing features that are located in the blind
spots of a laser scanner. Therefore, a hybrid scan plan including different LiDAR
equipment, such as long-range scanners and aerial imaging sensors, may be required to
scan infrastructure systems with accessibility limitations.
FRAMEWORK
In order to define a high-quality scan plan, there is a need for a holistic
approach that takes into account project-specific characteristics and data quality
requirements. The framework proposed in this paper was designed to enable project
stakeholders to define an integrated scan plan centered on data quality
requirements (Figure 1). The input data of the proposed framework comprise
project-specific information. The main information that drives the scan plan is
the realization of the condition assessment goals. The LOD requirements for different
geometrical features of the project are defined by project stakeholders based on these
goals. LOD requirements are usually not constant and may change throughout the


project. The next input data to be derived is the list of all available LiDAR sensors
and their parameters. This information is directly tied to the project’s constraints,
such as time and budget, weather conditions, accessibility, etc. For instance, in order
to overcome the project’s accessibility constraints, an alternative solution would be
using long range terrestrial laser scanners, aerial LiDAR sensors, or UAV imaging.
The last input data for the framework is the 3-D model of the project (if available). In
this paper, the authors propose using an image-based 3-D model for planning the scan
when the updated BIM model is not available.

Figure 1. An overview of data quality-driven scan planning framework


Image-based 3-D models can provide an acceptable representation of a
project's layout. However, compared to 3-D laser scanners, they might not offer
sufficiently accurate data for the detailed analysis required for an ACA. Nonetheless, compared to
other remote sensing technologies, and given their low cost, UAVs have fewer
mobility constraints and can therefore complement the scanning of a project by
covering the laser scanners' blind spots. It is worth noting that using a UAV
for image collection has its own challenges. For instance, the UAV's flight trajectory and
elevation have to be determined prior to flight, as they can affect the quality of the
collected images.
Once the image collection process is completed, a 3-D model is reconstructed
using the structure from motion (SfM) technique. In order for the 3-D model to be
used for scan planning, it has to be augmented with semantics about the FOIs'
geometrical information in the project. Hence, two principal types of information are extracted
from the 3-D model: 1) the project's layout, which includes the boundary coordinates
and is then integrated with the project's accessibility constraints, and 2) the
FOIs' (points, planes, or 3-D objects in general) geometrical and RGB data. The
project's layout information assists the scanning crew in identifying the feasible areas for
scanner circulation and establishing scan positions. Using the extracted information,
the features are classified based on their locations, orientations in space, and LOD
requirements. The classification criteria are selected based on their direct
relationships with the scanner's position and data quality. The interactions of the feature
classification criteria and sensor parameters (range, accuracy, and angular resolution)
pinpoint the best scan position. In the next step, the features are prioritized based on
their classes, and a simulation-based decision making process determines the best scan


position across different sides of the project. The output of the decision making
process is a scan scenario, which is then evaluated using a sensor simulation
software (i.e., Blensor). The sensor simulation output enables evaluation of the proposed scan
scenario by measuring the data quality for different features. If the data quality is not
satisfactory, the decision making process is repeated with the new information from
the sensor simulation output. The other module of the proposed framework focuses
on integrating LiDAR and UAV-based imaging data to generate a single coherent 3-D
point cloud, in which all the blind spots of the target are covered. Because
point clouds from different sensors might have unequal spatial resolution,
precision, and accuracy, their registration is challenging. In the case of
point clouds from multiple sensors, an iterative registration process using
multiple common features/control points could improve the accuracy of the final point
cloud. Once the registration process is completed, if there are missing data in the final
point cloud, the 3-D model will need to be augmented by extrapolating the
missing data using the surrounding points' coordinates and RGB values.
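As a minimal illustration of control-point-based registration (not the authors' full iterative procedure), the following Python sketch estimates the rigid transform that maps matched control points from a UAV-derived cloud onto the same points in a LiDAR cloud using the standard Kabsch/Procrustes solution; all coordinates below are hypothetical.

```python
import numpy as np

def rigid_align(source_pts, target_pts):
    """Estimate a rigid transform (R, t) mapping matched control points in the
    source cloud onto the target cloud (Kabsch/Procrustes solution)."""
    src_c = source_pts.mean(axis=0)                     # centroids
    tgt_c = target_pts.mean(axis=0)
    H = (source_pts - src_c).T @ (target_pts - tgt_c)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Hypothetical matched control points (e.g., surveyed GCPs seen in both clouds)
uav_gcps = np.array([[0.0, 0.0, 0.0], [1.0, 0.2, 0.0], [0.1, 1.1, 0.3], [1.2, 1.0, 0.4]])
true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
lidar_gcps = uav_gcps @ true_R.T + np.array([5.0, 2.0, 0.1])

R, t = rigid_align(uav_gcps, lidar_gcps)
residual = np.linalg.norm(uav_gcps @ R.T + t - lidar_gcps, axis=1).max()
print("max residual after alignment:", residual)   # ~0 for this synthetic case
```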
CASE STUDY
In order to provide a preliminary evaluation of the proposed framework, the
authors selected the Mudd Hall building's courtyard, located on the University of
Southern California campus. The Mudd Hall building has been named one of the
Historic-Cultural Monuments by the Los Angeles City Council, and the fact that it is
a feature-rich building makes it a suitable case study for the purposes of this research.
Figure 2b shows some of the architectural features in the building's courtyard.
UAV Image Collection. 3-D scene reconstruction using image sequences has
been a growing field of inquiry across multiple disciplines, such as cultural heritage
preservation (Stanco et al. 2011), archaeology (Kersten and Lindstaedt 2012), and construction (Golparvar-Fard et al. 2009; Ham and Golparvar-Fard 2013; Rodriguez-
Gonzalvez et al. 2014). Generating 3-D models from 2-D images follows the SfM
technique, which includes: 1) feature extraction and description using the Scale-Invariant
Feature Transform (SIFT) algorithm, 2) pairwise matching of images using SIFT
descriptors, 3) estimation of motion and structure using the matched images, 4)
refining the estimates using bundle adjustment, and 5) creating surface meshes using
image-based triangulation.
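A minimal sketch of the first two SfM steps (SIFT extraction and pairwise matching) using OpenCV is shown below; the image file names and the 0.75 ratio-test threshold are illustrative assumptions, and the sketch assumes an OpenCV build in which SIFT is available (it ships in the main module in recent versions).

```python
import cv2

# Illustrative file names; the paper's UAV images are not distributed with it.
img1 = cv2.imread("courtyard_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("courtyard_002.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                      # step 1: SIFT keypoints + descriptors
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)          # step 2: pairwise descriptor matching
matches = matcher.knnMatch(des1, des2, k=2)

# Lowe's ratio test keeps only distinctive correspondences for the later steps
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(kp1)} / {len(kp2)} keypoints, {len(good)} putative correspondences")
```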
The authors used a DJI Phantom 2 Vision Plus drone to take images of the
case study building. The drone comes with a 14-megapixel built-in camera, which is
mounted on a gimbal, making it stable during flight. The camera has a large field
of view (FOV = 125°); therefore, the images are highly distorted. We selected the
lowest available FOV option (85°), but even then the collected images were slightly
distorted. We then calibrated the camera to circumvent the distortion effect and to
rectify the images. We installed 10 ground control points (GCP) and scanned them
with a Leica TS06-plus Total Station to be able to geo-reference the 3-D
reconstructed model (Figure 2a). A total of 236 images were taken of the
building's courtyard using the drone.
Image-based 3-D Reconstruction. We used commercially available and
open source tools, such as VisualSfM (Wu 2011), Autodesk Recap 360, and Agisoft
Photoscan Pro to generate a dense point cloud and a 3-D mesh of the courtyard. After
visual inspection of the reconstructed models, we decided to use Agisoft’s output as it


provided a denser point cloud compared to VisualSfM, and because Autodesk Recap
only provides a 3-D mesh. Note that the selection of the software tools might affect
the results; however, optimum selection of the tools will be part of the
future directions of this research. We then geo-referenced the model by assigning
surveyed coordinates to the GCPs in the point cloud. The 3-D reconstruction process was
completed in 155 minutes using a Microsoft Windows workstation laptop with an Intel
Core i7 processor, 32 GB of RAM, and an NVIDIA K2000M graphics card.

Figure 2. Case Study-Mudd Hall Courtyard

DATA PROCESSING AND RESULTS


Geometrical Features Recognition. Following the steps in our proposed
framework, we used the images and the corresponding point cloud to detect, extract, and
study the FOIs. For the purpose of this study, the west bound of the courtyard was
examined, where four large windows were selected as the FOIs (Figure 2). The
authors used Haar-like features to identify the FOIs from the images. Previous
research has shown that improved Haar-like features yield lower rates of false
positives (FP) and false negatives (FN) compared with other prevalent image-based
object detection methods, such as HOG (Zhang et al. 2013). Haar-like
features were introduced by Viola and Jones (2001), who proposed convolving images
with Haar wavelet sequences. Haar-like features are found by calculating the
difference between the sums of pixel intensities in adjacent rectangles at a location in
an image. The Haar-like classifier slides a window over an image and looks for
objects of interest of the kind previously used for training the classifier. Considering
that using sums of pixel intensities as the only feature would result in a weak
learner, a cascade classifier, which combines a large number of Haar-like features, is
used in the Viola and Jones framework.
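A minimal sketch of applying a trained cascade with OpenCV is given below; the cascade file name, input image, and detection parameters are illustrative, and the training itself (producing the XML file from positive and negative samples) is assumed to be done offline with OpenCV's cascade training tools.

```python
import cv2

# Illustrative: "window_cascade.xml" stands in for a cascade trained offline on
# positive window crops and negative background images.
cascade = cv2.CascadeClassifier("window_cascade.xml")

image = cv2.imread("courtyard_west_facade.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Slide the detection window over the image at several scales and return
# bounding boxes of candidate FOIs (the courtyard windows in this case study).
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4,
                                      minSize=(128, 128))

for (x, y, w, h) in detections:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"{len(detections)} candidate FOIs detected")
```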
We ran pilot tests to determine the optimum input parameters for training
the Haar-like cascade classifier. The parameters were determined based on the number of
pictures and their quality: number of training stages = 4, sliding window size =
128*128, false positive rate = 0.2, and true positive rate = 0.99. We trained the
cascade classifier using different numbers of positive images (images that only
include objects of interest with different poses (position/orientation)) to evaluate the
performance of the Haar-like object detector. Note that we selected a low FP rate due
to the low number of positive images. Table 1a shows the results for different
training set sizes. The precision, recall, and accuracy are computed as
TP/(TP+FP), TP/(TP+FN), and (TP+TN)/(TP+TN+FP+FN), respectively, where TP and TN denote
true positives and true negatives. Note that the maximum desired number of true
positives in each image was four (there are only four large windows in the courtyard),


which is a low number and has therefore resulted in large jumps in the precision and
recall values. Once the FOIs were detected in the images, the corresponding points
should be localized in the point cloud. A reverse engineering method using SfM
principles could be employed to address the localization problem. Figure 3 illustrates
the different steps to match 2-D pixels in the images to 3-D points in the point cloud.
The images were previously pairwise matched using similar SIFT features during the
SfM process, and a tree data structure was built for the paired images. Therefore,
a tree-based search can identify co-visible images that contain a particular FOI. Using
the previously estimated camera poses of the co-visible images, fiducial planes (G1
and G2) of the two viewpoints, along with the rotation matrix (R1,2), are computed and
used to localize the detected object in the 3-D model. For this preliminary study, we
manually localized the FOIs in the point cloud to ensure the accuracy of the feature
classification and the following steps. As part of the future work, we will examine the
proposed automated localization approach.
Table 1. Results and LOD definition
a) Haar-Like Results b) Scan Results c) LOD Definition

Geometrical Features Classification. In this step, the features (the windows,
including their trim) were classified based on their location, orientation, and
LOD requirements. The process of identifying an FOI's orientation begins with
estimating the feature's surface normals. Two methods could be used for this
purpose. The first is generating a mesh from the point cloud and estimating the
triangular planes' normals. The second method, which is more accurate because it
studies points rather than approximated planes, is estimating the normal at a point
by solving a least-squares plane fitting problem. The overall steps for the latter method
are: obtaining the k nearest neighbors of the point P in the point cloud, computing the
surface normal, and finally determining the sign of the normal. Determining the sign
of a surface normal in point clouds that are created from multiple views is a
complicated problem; however, it can be simplified by determining the correct
normal sign based on the neighboring surfaces' normals. Due to the space limits of this
paper, the detailed mathematics of the problem will be discussed in our future work.
The surface normals of the points on the FOIs were computed using the second
method, and the dominating normal was used to represent the orientation of the
feature with respect to the point cloud.
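A minimal sketch of the least-squares (PCA-based) normal estimate described above is shown below, assuming NumPy and SciPy are available; the normal-sign disambiguation step is omitted, consistent with its details being deferred to future work, and the synthetic point cloud is purely illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normal(points, query_idx, k=20):
    """Least-squares plane fit: the normal at a point is the eigenvector of the
    neighborhood covariance matrix associated with the smallest eigenvalue."""
    tree = cKDTree(points)
    _, idx = tree.query(points[query_idx], k=k)   # k nearest neighbors of point P
    nbrs = points[idx]
    centered = nbrs - nbrs.mean(axis=0)
    cov = centered.T @ centered / k
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns ascending eigenvalues
    normal = eigvecs[:, 0]                        # smallest-eigenvalue direction
    return normal / np.linalg.norm(normal)

# Synthetic example: a noisy horizontal plane should yield a normal close to +/- z
rng = np.random.default_rng(0)
pts = np.c_[rng.uniform(0, 1, 500), rng.uniform(0, 1, 500), rng.normal(0, 0.002, 500)]
print(estimate_normal(pts, query_idx=0))
```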


Figure 3. Steps for Localization of Features of Interest in 3-D Point Cloud


Laser Scanner Position Determination. We propose the new notion
of a Pose-Quality (PQ) vector to represent 3-D objects. In order to decrease the
computational complexity of determining the best scan position, we simplify the
study of 3-D objects by representing each of them as a PQ vector. The PQ vector's
magnitude represents the LOD requirement of the feature, its direction is defined based on the
orientation of the feature, and it is located at the center of the feature. Our current
method computes the resultant of all the PQ vectors for every bound of the scan
target to determine the best scanning position for the laser scanner. The optimal
number of scan positions for capturing the required detail of the FOIs is computed
through an iterative simulation process, using the overlap between the scan positions
identified from different bounds. The PQ vectors for all the features on the west bound of
the courtyard (north side in Figure 4) were computed and the resultant was derived.
The west bound of the courtyard has four large windows (the case study's FOIs) and
a door on the right corner. Standardized LOD requirements were acquired from US-GSA
(2009) (Table 1c). According to US GSA, historical documentation of
buildings requires LOD 1 or 2. We assigned the LOD 1 requirement to the windows.
Also, in order to have different data quality requirements within the west bound, we
assumed the LOD 2 requirement for the door. Figure 4 shows the scan
position determined using the proposed framework. Finally, we scanned the courtyard from the
determined scan position with a RIEGL VZ-400 terrestrial laser scanner. We set the
vertical and horizontal angular resolutions to 0.06 (0.01 m * 0.01 m * 100 m), which
is known as a medium resolution setting. The lengths of the windows were manually
measured using a Total Station and were used as the ground truth data. We then
measured the same lengths using the scan data to evaluate the results (Table 1b). In
addition, we took point cloud samples of 50 mm * 50 mm from the four windows to
verify that the required data density was met. The results indicated that the scan data
successfully satisfied the LOD 1 requirements for all the windows. As can be seen in
Table 1c, the resolution and accuracy decrease for the windows that are farther away
and have a greater angle of incidence. The comparison of the proposed scan planning
results with common industry practices will be part of the future work.
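The PQ-vector algebra is not spelled out in the paper; the following purely illustrative sketch shows one possible reading, in which each feature contributes a vector along its dominant surface normal scaled by an LOD weight, and the resultant of these vectors is computed for one bound. All centers, normals, and weights below are hypothetical.

```python
import numpy as np

# Hypothetical FOIs on one bound: (center, unit surface normal, LOD weight).
# A higher weight stands in for a stricter LOD requirement.
features = [
    (np.array([0.0, 0.0, 2.0]), np.array([0.0, -1.0, 0.0]), 2.0),   # window, LOD 1
    (np.array([3.0, 0.0, 2.0]), np.array([0.0, -1.0, 0.0]), 2.0),   # window, LOD 1
    (np.array([6.0, 0.2, 1.5]), np.array([-0.2, -0.98, 0.0]), 1.0), # door, LOD 2
]

pq_vectors = [weight * normal for _, normal, weight in features]
resultant = np.sum(pq_vectors, axis=0)
resultant_dir = resultant / np.linalg.norm(resultant)
centroid = np.mean([center for center, _, _ in features], axis=0)

print("feature centroid         :", centroid)
print("resultant PQ direction   :", resultant_dir)
```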


Figure 4. Determination of Scanner Position based on Features Orientations


CONCLUSION
This paper presented a data quality-driven scan planning framework and a
pilot case study for preliminary evaluation of the framework. The authors did not
study all the existing geometrical features of the case study and the laser scanner
position was determined through a semi-automated approach based on limited
parameters. In addition, the reconstructed point cloud had some missing data due to
the distortion in the images. The future work will focus on: 1) improving the accuracy
of 3-D reconstructed model using the UAV sequence images as well as exploring
alternative imaging methods such as stereo vision camera systems, 2) creating an
ontology of all parameters that have to be considered for a data quality-driven scan
planning, 3) studying the interactions between different scan planning parameters, 4)
evaluating existing image-based and 3-D point cloud-based feature detection
techniques for extracting FOI, and 5) proposing an optimization algorithm to define
the scan plan based on the feature map of the project. Finally, the computational
complexity of the proposed framework will be evaluated and necessary modifications
will be made.
ACKNOWLEDGEMENTS
The authors would like to thank Mrs. Eloisa Dezen-Kempter and Mr. Meida
Chen for their support, particularly in data collection for this research.
REFERENCES
Anil, E., Akinci, B., and Huber, D. (2011). "Representation requirements of as-is building information
models generated from laser scanned point cloud data." ISARC.
Balali, V., and Golparvar-Fard, M. (2014). "Video-Based Detection and Classification of US Traffic
Signs and Mile Markers using Color Candidate Extraction and Feature-Based Recognition."
Computing in Civil and Building Engineering (2014), 858-866.
Balali, V., and Golparvar-Fard, M. (2015). "Segmentation and recognition of roadway assets from car-
mounted camera video streams using a scalable non-parametric image parsing method."
Automation in Construction, 49, Part A(0), 27-39.
Dai, F., Rashidi, A., Brilakis, I., and Vela, P. (2012). "Comparison of Image-Based and Time-of-
Flight-Based Technologies for 3D Reconstruction of Infrastructure." Construction Research
Congress 2012, 929-939.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2009). "Application of D4AR–A 4-Dimensional
augmented reality model for automating construction progress monitoring data collection,
processing and communication." ITcon, 14, 129-153.
Ham, Y., and Golparvar-Fard, M. (2013). "EPAR: Energy Performance Augmented Reality models for
identification of building energy performance deviations between actual measurements and
simulation results." Energy and Buildings, 63(0), 15-28.


Kersten, T., and Lindstaedt, M. (2012). "Image-Based Low-Cost Systems for Automatic 3D Recording
and Modelling of Archaeological Finds and Objects." Progress in Cultural Heritage
Preservation, Springer Berlin Heidelberg, 1-10.
Pradhan, A., and Moon, F. (2013). "Formalized Approach for Accurate Geometry Capture through
Laser Scanning." Computing in Civil Engineering (2013), 597-604.
Rodriguez-Gonzalvez, P., Gonzalez-Aguilera, D., Lopez-Jimenez, G., and Picon-Cabrera, I. (2014).
"Image-based modeling of built environment from an unmanned aerial system." Automation
in Construction, 48(0), 44-52.
Shen, Z., Tang, P., Kanaan, O., and Cho, Y. (2013). "As-Built Error Modeling for Effective 3D Laser
Scanning on Construction Sites." Computing in Civil Eng., 533-540.
Song, M., Shen, Z., and Tang, P. (2014). "Data Quality-oriented 3D Laser Scan Planning."
Construction Research Congress 2014, 984-993.
Tang, P., Huber, D., Akinci, B., Lipman, R., and Lytle, A. (2010). "Automatic reconstruction of as-
built building information models from laser-scanned point clouds: A review of related
techniques." Automation in Construction, 19(7), 829-843.
US-GSA (2009). "BIM Guide for 3D Imaging." http://www.gsa.gov.
Viola, P., and Jones, M. "Rapid object detection using a boosted cascade of simple features." Proc.,
Computer Vision and Pattern Recognition, 2001. CVPR 2001.
Wu, C. (2011). "VisualSFM: A Visual Structure from Motion System."
Zhu, Z., and Brilakis, I. (2009). "Comparison of optical sensor-based spatial data collection techniques
for civil infrastructure modeling." Journal of Computing in Civil Engineering, 23(3), 170-
177.


Efficient Management of Big Datasets Using HDF and SQLite: A Comparative Study Based on Building Simulation Data
Wael M. Elhaddad1,2 and Bulent N. Alemdar1,3
1Bentley Systems, Inc., 2744 Loker Avenue West, Suite 103, Carlsbad, CA, 92009
2Ph.D. Candidate, Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, Los Angeles, CA 90089; Associate Software Engineer, Bentley Systems; email: [email protected]
3Senior Manager, Software Research Engineer; email: [email protected]

ABSTRACT

In recent years, structural engineering practice has started adopting
performance-based building design, a concept that produces a more resilient structural
design as it accounts for the probabilistic nature of natural hazards such as earthquakes.
Oftentimes, a large number of dynamic building simulations have to be explored,
producing results datasets that might typically be tens or hundreds of gigabytes. It is
imperative to manage these datasets efficiently, to enable engineers to run a larger number
of simulations, which helps produce more robust designs. The Hierarchical Data Format (HDF)
is a scientific data management library that allows storing large amounts of data in
hierarchical form on different platforms. SQLite is a relational database library based on
the SQL language that can manage data efficiently on the local disk, without a server.
In this study, the two libraries (HDF and SQLite) are explored for efficient dataset
management by comparing their efficiency using different performance metrics for
storing and retrieving data in different scenarios. Two reading performance metrics
were considered, one based on data being read directly from disk (cold cache),
and another for data being cached in memory (warm cache). Based on different
building design simulations, the performances of both libraries are compared for writing
and reading with both warm and cold caches. According to the comparative study, it is
concluded that the HDF library is faster in storing and retrieving big chunks of data
and also requires less storage space. Although the study was performed on structural
simulation datasets, the conclusions are likely applicable to different kinds of datasets
often encountered in many different engineering disciplines.

INTRODUCTION AND MOTIVATION

In the past few years, performance-based building design (PBD) has been adopted by
the structural engineering community, especially for tall buildings [1, 2]. The main
concept of PBD is to define multiple performance objectives that a building has to
satisfy in order for the design to be accepted [1]. The method helps stakeholders make
better decisions about building design, as it takes into account the uncertainties involved
in predicting the building performance, resulting in a more robust design procedure [3].


In performance-based design, analyses with at least three different ground motions are
generally required. Several design guidelines (such as the Tall Buildings Initiative [2])
mandate at least 7 ground motions. The Los Angeles Tall Buildings Structural Design
Council (LATBSDC) recommends using 7 ground motions or more [4]. If fewer than 7
ground motion simulations are used, the absolute maximum of the evaluated responses
from these simulations shall be used to evaluate the performance. However, if 7 ground
motions or more are used, then the average of the responses from the different simulations
can be used to evaluate the performance; using more simulations produces a design
that is more robust against uncertainties in the ground motions [2]. Dynamic earthquake
simulations of buildings require a large amount of data to be analyzed, processed, and
stored, which can easily amount to tens of gigabytes of storage. For instance, a
simulation of the El Centro 1940 earthquake requires evaluating responses at
1501 time steps (30 seconds with a 0.02-second time step), that is, 1501 times the amount
of storage of a static load case simulation. For a model that consists of 5000 nodes,
2000 frame elements, and 200 shell elements, a single dynamic simulation would
require around 750 megabytes (MB) of storage. It can easily be seen that the multiple
dynamic simulations required for PBD would consequently consume gigabytes of
storage. For some models, it might also be necessary to evaluate nonlinear response
time histories, which normally require substantially more storage than linear time
histories, as smaller time steps are often required to accurately evaluate nonlinearities [5].
It is imperative for design engineers to be able to handle these big datasets efficiently,
in order to make better decisions by exploring different design alternatives.
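As a back-of-the-envelope check of the storage figure above, assuming the per-element record sizes used later in Table 1 (6, 12, and 24 double-precision values per node, frame, and shell, respectively), a single 1501-step simulation of the stated model works out to roughly 0.7 GB, consistent with the 750 MB estimate; the script is purely illustrative.

```python
# Rough storage estimate for one dynamic simulation (El Centro record, 1501 steps)
nodes, frames, shells = 5000, 2000, 200
steps = 1501
bytes_per_value = 8                      # double precision

values_per_step = nodes * 6 + frames * 12 + shells * 24
total_bytes = values_per_step * bytes_per_value * steps
print(f"{total_bytes / 1e6:.0f} MB")     # ~706 MB, i.e. on the order of 750 MB
```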

BIG DATA LIBRARIES

In this section, the two big data libraries used in this study are discussed. First,
the format of the simulation data to be stored is described; then, an
explanation of how the two libraries can be adapted to store the data is presented. In
addition, the performance metrics used later for testing are defined.

Results Data Format


This study investigates the performance of storing/retrieving simulation results for
dynamic earthquake analysis of building models. The data format that is assumed to be
used for different elements in the structural model is illustrated in Table 1. It should be
noted in the table that each value refers to 1 double precision floating point number (8
bytes).

Table 1. Data formats of different kinds of elements used in structural simulations

Element Type   Results Data Format
Nodes          6 values per node (3 translations and 3 rotations)
Frames         12 values per frame (3 forces and 3 moments at each end)
Shells         24 values per shell (3 forces and 3 moments at each corner)

Hierarchical Data Format (HDF)


HDF is a scientific data management library that was originally developed at the
University of Illinois at Urbana-Champaign. Currently, it is developed and maintained by


the HDF Group [6]. The library is used for handling large, complex data very efficiently
in different scientific domains. For instance, NASA has developed its own variant of
HDF that is used to store data from the Earth Observing System, which is a
collection of satellites gathering data about the Earth [7].
HDF uses a hierarchy to store data that is very similar to the directory/file structure of
modern computers. The library relies on three main entities: groups, datasets, and
attributes. A group can contain other groups and/or datasets (i.e., subsets). A dataset is
a table that contains data. Finally, attributes can be attached to groups and datasets in
order to store additional metadata about the contents.
In order to accommodate the data format mentioned in Table 1, an upper layer of
groups is used to define the different load cases, then an intermediate layer of groups is
used to define the different kinds of elements (e.g., nodes, frames, etc.). Inside each
elements group, datasets are used to store results in table form (Figure 1). Finally,
attributes are used to store additional metadata about the load cases or elements. For
instance, an attribute can be added to the nodes group to define the units of the
displacements stored for each node.
Figure 1. Hierarchy used for HDF files to store simulation results data (Model → Load Case 1, Load Case 2, ... → Nodes/Frames/Shells groups → one dataset per node or element)
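A minimal sketch of this hierarchy using the h5py Python bindings is shown below; the group and dataset names mirror Figure 1, but the code itself is illustrative and not the implementation used in the study.

```python
import numpy as np
import h5py

n_steps = 1501     # time steps of the El Centro record
demo_nodes = 10    # small number for the sketch; the study uses thousands of nodes

with h5py.File("results.h5", "w") as f:
    lc = f.create_group("Load Case 1")
    nodes = lc.create_group("Nodes")
    nodes.attrs["units"] = "m, rad"        # metadata attached as an attribute

    # One dataset per node: a (time steps x 6) table of translations/rotations
    for node_id in range(1, demo_nodes + 1):
        nodes.create_dataset(f"Node {node_id}", data=np.random.rand(n_steps, 6))

with h5py.File("results.h5", "r") as f:
    disp = f["Load Case 1/Nodes/Node 3"][:]        # read one node's full history
    print(disp.shape, f["Load Case 1/Nodes"].attrs["units"])
```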

SQLite
SQLite is an open-source, lightweight library available in the public domain [8]. The
library is a relational database that does not require a server and uses SQL as its query
language. It is popular for being very efficient in handling big data and for its very
small footprint, as its size can be less than 500 kilobytes (KB).

In order to accommodate the data formats mentioned in Table 1, the SQLite database
file includes two basic tables: one to hold information about the different load cases and
another to hold information about the different elements and the type of quantities
stored for each of them (Figure 2). It should be noted that these two tables can
also hold other metadata, such as the units used for each of the stored quantities or the number
of steps for each load case. In addition, the file has three tables to hold element results,
one table for each kind of element. Each row in the element tables represents the
results of one element for a specific load case and a specific time step, so each table has
two additional columns to hold indices defining these two values, in addition to a
column that contains the index (identification number) of the element.


Figure 2. Schemas of the Groups, Subgroups, and Nodes tables used for the SQLite database:
  Groups:    ID INTEGER (PK), Name VARCHAR(256), Step REAL, StepOrder INTEGER, RefName VARCHAR(256)
  Subgroups: Name VARCHAR(256) (PK), Dataset_Template VARCHAR(256), TemplateOrder INTEGER, Units VARCHAR(256)
  Nodes:     ID INTEGER (PK), GroupID INTEGER, RecordNo INTEGER, D1 REAL, D2 REAL, D3 REAL, D4 REAL, D5 REAL, D6 REAL
Note: For SQLite, "PK" indicates the primary key column in the table, REAL indicates a double precision floating point number, and VARCHAR(n) is a character array (string) with a variable length that is less than or equal to n.
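A minimal sketch of creating and querying the Nodes results table with Python's built-in sqlite3 module is shown below; the column names follow Figure 2, while the reading of ID, GroupID, and RecordNo as the node number, load case, and time step is an assumption, and all inserted values are illustrative.

```python
import sqlite3

conn = sqlite3.connect("results.db")
cur = conn.cursor()

# One row per node result: ID is read here as the node number, GroupID as the
# load case, and RecordNo as the time step (one plausible reading of Figure 2).
cur.execute("""
    CREATE TABLE IF NOT EXISTS Nodes (
        ID       INTEGER,
        GroupID  INTEGER,
        RecordNo INTEGER,
        D1 REAL, D2 REAL, D3 REAL, D4 REAL, D5 REAL, D6 REAL
    )""")

row = (3, 1, 250, 0.0012, -0.0003, 0.0048, 0.0, 0.0001, 0.0)   # illustrative values
cur.execute("INSERT INTO Nodes VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)", row)
conn.commit()

# Displacement history of node 3 under load case 1
cur.execute("SELECT RecordNo, D1, D2, D3 FROM Nodes "
            "WHERE ID = ? AND GroupID = ? ORDER BY RecordNo", (3, 1))
print(cur.fetchall())
conn.close()
```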

Caching and Performance Metrics


Operating systems, such as Microsoft Windows 8, perform automatic caching in
order to speed up the processes of reading and writing data to storage drives. This can
significantly affect the performance of read/write operations, especially when the data
being transferred can fit in the system memory. Figure 3 illustrates this process of
automatic data caching; readers interested in this process can find further details in the
Windows documentation [9]. Based on the caching process, the following
performance metrics are defined and used for the rest of this study:
- Writing data to storage
  1. Disk performance: data is written directly to disk without using disk write caching.
  2. Cache performance: data is written to memory (buffered), then to disk using the disk write caching process.
- Reading data from storage
  1. Cold cache: data is read directly from disk, as all the previously cached buffers of the file were forced to be flushed from memory.
  2. Warm cache: data is read from memory, as the file has been cached during a previous read or write cycle and it can fit in the available memory.

Figure 3. Schematic diagram of (A) reading/writing files without caching, and (B) the automatic caching process for reading/writing files (adapted from the Microsoft Windows documentation [9])
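A simplified sketch of the kind of timing harness such measurements require is shown below; it times a single HDF5 write and an immediately following (warm-cache) read, and deliberately omits the cold-cache case, which requires flushing the operating system's file cache and is platform specific. The same pattern applies to the SQLite writer sketched earlier.

```python
import time
import numpy as np
import h5py

data = np.random.rand(10000, 6)          # one step of results for 10,000 nodes

t0 = time.perf_counter()
with h5py.File("bench.h5", "w") as f:
    nodes = f.create_group("Load Case 1").create_group("Nodes")
    nodes.create_dataset("step_0", data=data)
write_s = time.perf_counter() - t0

t0 = time.perf_counter()
with h5py.File("bench.h5", "r") as f:
    _ = f["Load Case 1/Nodes/step_0"][:]
read_s = time.perf_counter() - t0

print(f"HDF5 write: {write_s:.4f} s, warm-cache read: {read_s:.4f} s")
```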


RESULTS AND DISCUSSION

The main purpose of this study is to compare the performance of the two considered
big data libraries, HDF and SQLite. Two different performance tests were performed
for each data library: one for a hypothetical model with nodes only, and the other for
a typical structural model of a building that includes results for 5000 nodes, 2000 frame
elements, and 1000 shell elements, based on two dynamic load cases. All the tests were
based on result values obtained from a pseudorandom number generator.
All the tests in this section were carried out on a Lenovo ThinkPad W510 mobile
workstation, with an Intel i7 Q820 processor clocked at 1.73 GHz, 16 GB of memory,
and a 7200 RPM hard drive, running a 64-bit Windows 8.1 operating system.

The first test was conducted for reading and writing results for nodes based on different
caching schemes (warm or cold). Tables 2 and 3 compare the performances of both
libraries for writing and reading node results, respectively. In addition, a closer
comparison for writing and reading the results of 10,000 nodes is portrayed graphically
in Figures 5 and 6, respectively.

Based on the test to write results for 10,000 nodes (Figure 5), HDF was 2.5 times faster
when writing directly to disk, whereas when using caching it was about 4 times faster.
Similarly, HDF is also significantly faster than SQLite for reading with both cold and
warm caches. When the results for 10,000 nodes are read directly from the disk, HDF
was almost twice as fast as SQLite, while if the file is completely cached in memory,
HDF5 is about 18 times faster than SQLite (Table 3 and Figure 6). In addition, it was
found that HDF was also more efficient in terms of storage requirements, producing a
data file that is 30% smaller in size than the SQLite data file, as shown in Figure 4.

Figure 4. Size of the resulting data files for the model with 10,000 nodes (HDF5: 733 MB; SQLite: 1045 MB)

Table 2. Time to write nodes results to the data file, in seconds, both when writing to disk and when using the cache
Nodes    HDF Disk   HDF Cache   SQLite Disk   SQLite Cache
10       0.11       0.075       0.22          0.21
100      0.34       0.311       1.49          1.33
1000     5.3        3.14        13.25         13.23
10000    55         35.9        141.1         137.6

Table 3. Time to read nodes results from the data file, in seconds, for both libraries with cold and warm caches
Nodes    HDF Cold   HDF Warm   SQLite Cold   SQLite Warm
10       0.04       0.0013     0.065         0.026
100      0.45       0.014      0.55          0.23
1000     2.42       0.13       7.73          2.3
10000    26         1.23       47.5          22.9


Figure 5. Time to write results for 10,000 nodes, in seconds

Figure 6. Time to read results of 10,000 nodes, in seconds

For the typical structural model described earlier (i.e., the model composed of nodes,
frames, and shells), the same reading/writing tests of results data were performed. The
results of these tests show a very similar performance comparison. For writing data,
HDF was significantly faster for each kind of element (Table 4); the overall speedup of
HDF compared to SQLite is about 3 times when writing directly to the disk without
caching, and about 7.5 times when caching is used (Figure 8). HDF also produced a
significantly smaller data file in this case, 32% smaller in size than the SQLite data file,
as shown in Figure 7. Similarly, for reading performance, HDF was almost twice as fast
as SQLite with cold-cache reading. For warm-cache reading, the speedup was even
higher, with HDF more than 20 times as fast as SQLite (Table 5).

Figure 7. Size of the resulting data files for the typical model (HDF5: 1.75 GB; SQLite: 2.55 GB)

Table 4. Time to write the typical model results to the data file, in seconds, both when writing to disk and when writing to cache
Elements   HDF Disk   HDF Cache   SQLite Disk   SQLite Cache
Nodes      60.2       4.14        142           29.02
Frames     20.7       6.12        84.6          43.8
Shells     14         1.60        58.7          18.5
Total      95         12.04       285.6         91.5

Table 5. Time to read the typical model results from the data file, in seconds, for both libraries with cold and warm caches
Elements   HDF5 Cold   HDF5 Warm   SQLite Cold   SQLite Warm
Nodes      26.1        1.27        46.5          22.8
Frames     13.6        0.59        31            14.2
Shells     11.6        0.43        19.6          11.9
Total      51.3        2.3         97.2          48.9


Figure 8. Time to write results for the typical structural model, in seconds

Figure 9. Time to read results of the typical structural model, in seconds

CONCLUSIONS

Based on the results discussed in this paper, it is found that HDF is faster than SQLite
in writing and reading simulation results, especially when a large amount of data is
being transferred in a single process. In addition, it is observed that HDF is more
efficient in terms of storage requirements, as it produced data files that were about 30%
smaller in size than the SQLite databases. Although the tests were performed on building
simulation data, the previous conclusions might also be applicable to applications in
other disciplines with data of a format similar to the data described in this paper.
Additional comparative studies are needed to evaluate the merits of these libraries in
other disciplines or applications with different data formats.

Further studies might also be needed to fully understand the performance of big data
libraries. The current study was carried out on a single hardware configuration.
Performance testing using different hardware configurations may be needed; in
particular, performance testing on solid-state drives (SSD) is necessary. Both the
HDF and SQLite libraries offer data compression capabilities that also need to be
investigated. Although compression can save storage space, it can significantly hinder
reading/writing performance. Additional tests are required to investigate
performance with different compression algorithms. More advanced performance
optimizations, such as exploiting concurrency or parallelism, can also be investigated.
Finally, it has to be noted that SQLite supports advanced queries (e.g., Max, Min, Count,
Sum, etc.) through the SQL query language, whereas HDF does not provide
queries. To perform queries on HDF data, custom code has to be written that first reads
the data from the file and then computes the result of the query. Further research needs
to be performed to compare the performance of both libraries in such scenarios.


REFERENCES

[1] A. Ghobarah, "Performance-based design in earthquake engineering: state of
development," Engineering Structures, vol. 23, pp. 878-884, 2001.
[2] Tall Buildings Initiative, "Guidelines of Performance-Based Seismic Design of
Tall Buildings," Pacific Earthquake Engineering Research Center (PEER), 2010.
[3] Applied Technology Council, "Next-Generation Performance-Based Seismic
Design Guidelines," Federal Emergency Management Agency (FEMA-445),
2006.
[4] Los Angeles Tall Buildings Structural Design Council, "An Alternative
Procedure for Seismic Analysis and Design of Tall Buildings Located in the Los
Angeles Region," LATBSDC, Los Angeles, 2014.
[5] A. Whittaker, M. Constantinou and P. Tsopelas, "Displacement Estimates for
Performance-Based Seismic Design," ASCE Journal of Structural Engineering,
vol. 124, no. 8, p. 905–912, 1998.
[6] "Hierarchical Data Format, version 5," The HDF Group, 1997-2014. [Online].
Available: http://www.hdfgroup.org/HDF5/. [Accessed December 2014].
[7] L. Klein and A. Taaheri, "The HDF-EOS5 Data Model, File Format and
Library," NASA, 2007.
[8] "SQLite Library," [Online]. Available: http://www.sqlite.org/. [Accessed
December 2014].
[9] "File Caching," Microsoft Corporation, [Online]. Available:
http://msdn.microsoft.com/en-
us/library/windows/desktop/aa364218%28v=vs.85%29.aspx. [Accessed
December 2014].


Trip Characteristics Study through Social Media Data


Chuan-Heng Lin1,* and Albert Y. Chen2
1Graduate Research Assistant, Department of Civil Engineering, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan; PH +886-2-33664255; FAX +886-2-23639990; email: [email protected] (*Corresponding Author)
2Assistant Professor, Department of Civil Engineering, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan; PH +886-2-33664255; FAX +886-2-23639990; email: [email protected]

ABSTRACT
Sentiment analysis of social media has become a popular approach in many
research areas. Most existing work is based on data collected from Twitter, a data
source composed of text and geographic information. Many transportation
researchers have attempted to enrich and provide more accurate transport statistics through
this approach. However, text data alone cannot fully explain human behavior. In this
paper, Instagram, a data source containing not only text and geographic information
but also photographs, is utilized for the analysis. With computer vision techniques,
there is the potential to better understand human travel behavior.
Keywords: Data Mining, Spatial analysis, Temporal Pattern, Image processing,
Social Media, Instagram

INTRODUCTION

The Origin-Destination (OD) table is a common type of transportation data. It is
mostly generated by public transportation systems, and reflects the number of trips
between places in the network from ticket and card records. However, this
commonly used data cannot show traveler behavior or trip purpose. To collect
trip purpose data, transportation agencies need to spend a great amount of time and money
on public surveys. With the improvement of mobile devices and the increasing number
of social media users, travelers' behavior and tendencies can be inferred from their
posts and geospatial references. Social media platforms such as Twitter, Facebook, and
Foursquare provide rich textual and geo-referenced user data, which can be useful for
understanding or estimating trip purposes. In addition, a relatively new form of
social media called Instagram provides not only the data mentioned above but also
photographs. It is an application through which users share their lives by uploading


photographs. These photographs, together with text and geographic data, can be considered
important factors for estimating users' trip purposes.
The objective of this paper is to use aggregated Instagram data, extract features
from photographs, and discover spatio-temporal patterns using machine learning
and computer vision. In this paper, a Mass Rapid Transit (MRT) line, the Songshan line
in Taipei, is selected as a demonstrative case. Two weeks of data (2014.11.10 –
2014.11.21) were crawled through the Instagram API. In addition to the original
data crawled from Instagram, the social network of users is also considered. Computer
vision approaches, such as the Scale-Invariant Feature Transform (SIFT) and
Speeded Up Robust Features (SURF), are compared. The remainder of this
paper is organized as follows. The Literature Review Section reviews the
current literature, the Methodology Section explains the dataset, the data mining
techniques applied to the Instagram data, and the visualization, and the Conclusion Section
summarizes this work and future directions.

LITERATURE REVIEW

Instagram. Instagram is a social media application for mobile phones. Users of
Instagram take pictures, tag people, and add geo-references. Based on Instagram's
official site, it had 200 million users and 20 billion posts in 2014
(http://blog.instagram.com). Hochman, N., and Manovich, L. (2013) used Instagram
for cultural analysis of different cities. Manikonda et al. (2014) analyzed and
compared Instagram to other social networks such as Twitter and Flickr, and showed
that Instagram has a higher percentage of geographic information. Hu et al. (2014) conducted a
general survey on the content and basic background of Instagram. They used image
features such as SIFT to categorize photographs into 5 categories derived by
clustering and adjusted by human editing. In addition, five user types were also
defined in that research, and the purpose was to analyze photographs by unsupervised
learning with image features. Spatial information was not considered in that research.
To the best of the authors' knowledge, and unlike for Twitter and other social networks,
relatively little research has been conducted using Instagram.

Spatial-Temporal Information. Transportation researchers have analyzed spatial and
temporal social media data for travel demand modeling and activity patterns (Collins
et al., 2013; Hasan, 2013; Hasan et al., 2013; Hasan and Ukkusuri, 2014; Jin et al.,
2013; Jin et al., 2014; Yang et al., 2014; Ni et al., 2014; Alesiani et al., 2014).
However, none of them used Instagram as the data source. Taking advantage of
Instagram's photographs can provide more information about individuals' behavior.

Image Features. Feature selection and extraction are important factors for pattern
recognition. The basic task of feature extraction is to find the most significant
features. Two feature selection methods are reviewed in this paper. The first is SURF,
a speeded-up local feature selection method (Bay et al. 2008). The second is SIFT, a
method invariant to uniform scaling and orientation, and partially invariant to affine
distortion and illumination changes.

Table 1 is based on research conducted by Luo and Gwun (2009). SURF has
better performance in terms of time and on blurred images. SIFT finds similar descriptors in
rotated and scaled images.


Table 1. Comparison of SURF and SIFT

Criterion        Time    Scale   Rotation   Blur    Illumination
Better Method    SURF    SIFT    SIFT       SURF    SURF

METHODOLOGY

The crawled Instagram dataset used in this study contains 147,269 posts and 37,912
user IDs. The time period is from 2014-11-10 to 2014-11-21. The dataset was collected in
the proximity of each MRT station using a 2 km buffer. The data structure of the
Instagram data is shown in Figure 1.

Figure 1. Data Structure of Instagram Data

The data contain the user ID, location, photo link, comments, captions,
users tagged in the photo, the number of likes, and the relationships between users.
However, some private users' information is not included. In this paper, tagged users
are considered friends of the user who posted the photo. Since the case study is an
MRT line in Taipei City, a description of each station was added to the dataset.

Figure 2. Individual data visualization

Given the Instagram data, an individual's movement can be observed. The number


of geo-tagged posts in a certain area is much higher than in other locations; in other
words, a user posts in this area much more often. There is a good chance that the area is
either the user's workplace or home.

Figure 3. Time series data visualization

As shown in Figure 3, the dataset is aggregated in time with hourly granularity.
The weekday pattern is clear, and the peak on each weekday occurs during
17:00-18:00. In this figure, the number of people at Zhongshan station is always the
highest, which reveals that this station has more activities and that most users may
commute to this station. On the weekend, the number of users increases
dramatically. The data reflect that there are more activities on weekends than on weekdays.

Figure 4. Spatial-Temporal data visualization

Stations are plotted with different colors in Figure 4. This figure presents the
distribution of the spatial and temporal data. The visualization is constructed on
CartoDB (cartodb.com), which is an online Geographic Information System platform.


Figure 5. SURF features

Figure 6. SIFT features

The photographs in Figure 5 and Figure 6 were randomly picked to showcase the
computer vision feature representation. There are 1470 (left) and 2803 (right) SURF
features in Figure 5, and 2153 (left) and 2499 (right) SIFT features in Figure 6. The
number of detected descriptors influences the performance of feature extraction.
From these pictures, the SURF features take brightness into account, and highlighted
spots are easily detected. When the picture is illuminated, SURF can easily detect the
outline of objects. The SIFT features in Figure 6 include almost the same features as
those detected by SURF. From this random case, the performance is almost the same.
In large-scale image detection, SURF is more often used because it requires less
computational time.
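A minimal sketch of extracting and counting both kinds of descriptors with OpenCV is shown below; the image file name is illustrative, and SURF is only available in builds that include the non-free opencv-contrib modules, so its use here is an assumption about the local OpenCV installation.

```python
import cv2

img = cv2.imread("instagram_post.jpg", cv2.IMREAD_GRAYSCALE)  # illustrative file name

sift = cv2.SIFT_create()
sift_kp, sift_desc = sift.detectAndCompute(img, None)

# SURF requires an OpenCV build with the non-free contrib modules enabled
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
surf_kp, surf_desc = surf.detectAndCompute(img, None)

print(f"SIFT keypoints: {len(sift_kp)}, SURF keypoints: {len(surf_kp)}")

# Keypoints can be visualized in the same spirit as the paper's Figures 5 and 6
vis = cv2.drawKeypoints(img, surf_kp, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("surf_features.jpg", vis)
```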

CONCLUSION & FUTURE WORK

Visualization and data mining are both important. Visualization provides a more
comprehensive representation of the data; for example, the patterns of aggregated
traveler data and individual traveler behavior are visualized in this study. The presented
topic is ongoing work. Currently, we have acquired several datasets on public
transport volume and related demographic data. The main future direction of this
work is to use the aggregated Instagram dataset as input features for more accurate
traffic demand forecasting. He et al. (2013) used tweet semantics to improve the
estimation of traffic flow volume, and showed the possibility of estimating traffic flow
volume by incorporating social media. Modeling of the data will also be conducted for
purposes such as capturing system behavior for transportation.


REFERENCE

Alesiani, F., Gkiotsalitis, K., & Baldessari, R. (2014). “A Probabilistic Activity Model
for Predicting the Mobility Patterns of Homogeneous Social Groups Based on
Social Network Data.” The 93rd Annual Meeting of Transportation Research
Board.
Bay, H., Ess, A., Tuytelaars, T., and Van Gool, L. (2008). “Speeded-up robust features
(SURF).” Computer vision and image understanding, 110(3), 346-359.
Collins, C., Hasan, S., and Ukkusuri, S. V. (2013). “A Novel Transit Rider
Satisfaction Metric: Rider Sentiments Measured from Online Social Media
Data.” Journal of Public Transportation, 16(2).
Hasan, S. (2013). “Modeling urban mobility dynamics using geo-location data.”
(Doctoral dissertation, PURDUE UNIVERSITY).
Hasan, S., Zhan, X., and Ukkusuri, S. V. (2013). “Understanding urban human
activity and mobility patterns using large-scale location-based data from online
social media.” Proceedings of the 2nd ACM SIGKDD International Workshop
on Urban Computing. ACM(p. 6).
Hasan, S., and Ukkusuri, S. V. (2014). “Urban activity pattern classification using
topic models from online geo-location data.” Transportation Research Part C:
Emerging Technologies, 44, 363-381.
He, J., Shen, W., Divakaruni, P., Wynter, L., and Lawrence, R. (2013). “Improving
traffic prediction with tweet semantics.” In Proceedings of the Twenty-Third
international joint conference on Artificial Intelligence, AAAI Press, pp.
1387-1393
Hu, Y., Manikonda, L., and Kambhampati, S. (2014). “What We Instagram: A First
Analysis of Instagram Photo Content and User Types.” The International AAAI
Conference on Weblogs and Social Media
Hochman, N., and Manovich, L. (2013). Zooming into an instagram city: Reading the
local through social media. First Monday.
Jin, P. J., Cebelak, M., Yang, F., Ran, B., and Walton, C. M. (2013). “Urban Travel Demand Analysis for Austin TX USA using Location-Based Social Networking Data.” Transportation Research Board 92nd Annual Meeting (No. 13-2374).
Jin, P. J., Cebelak, M., Yang, F., Ran, B., and Walton, C. M. (2014). “Location-Based
Social Networking Data: An Exploration into the Use of a Doubly-Constrained
Gravity Model for Origin-Destination Estimation.” Transportation Research
Board 93rd Annual Meeting (No. 14-5314).
Luo, J. and Gwun, O. (2009). “A comparison of sift, pca-sift and surf.” International
Journal of Image Processing (IJIP), 3(4), 143-152.
Lowe, D. G. (1999). “Object recognition from local scale-invariant
features.”Computer vision, 1999. The proceedings of the seventh IEEE
international conference on (Vol. 2, pp. 1150-1157). IEEE
Manikonda, L., Hu, Y., and Kambhampati, S. (2014). Analyzing User Activities,
Demographics, Social Network Structure and User-Generated Content on
Instagram. arXiv preprint arXiv:1410.8099.
Ni, M., He, Q., & Gao, J. (2013). “Using Social Media to Predict Traffic Flow under
Special Event Conditions.”(Doctoral dissertation, State University of New York
at Buffalo).
Yang, F., Jin, P. J., Wan, X., Li, R., and Ran, B. (2014). “Dynamic Origin-Destination
Travel Demand Estimation Using Location Based Social Networking Data.”
Transportation Research Board 93rd Annual Meeting (No. 14-5509).


Visual Complexity Analysis of Sparse Imageries for Automatic Laser Scan Planning in Dynamic Environments

Cheng Zhang1 and Pingbo Tang1

1 School of Sustainable Engineering and the Built Environment, Arizona State University, 651 E. University Drive, Tempe, AZ 85287-0204. E-mail: [email protected]; [email protected]

Abstract

Laser scanning technologies have significantly improved the efficiency of


spatial data collection to deliver the needed as-built information of job sites. However, many imageries contain unneeded dense data that waste data collection and processing time, or miss data that are actually needed. Targeted laser scan data collection is pivotal to avoiding such problems. For example, engineers could avoid densely sampling simple geometries (e.g., flat walls) to save time for focusing on edges and openings.
Scan planning algorithms can produce data collection plans based on targeted objects
marked by users. Unfortunately, manually defining objects and their needed level of
detail (LOD) is impractical in dynamic environments, such as construction sites. This
research proposes an approach that identifies visually complex regions through
discontinuity analysis in rapidly captured sparse imageries for guiding the imaging
planning. First, we fuse 3D point cloud data with sparse laser-scan data. A visual
complexity analysis algorithm then detects locations in sparse imageries that contain
discontinuous 2D and 3D patterns (e.g., color change) for identifying parts deserving
detailed laser scan. A frequency analysis for each location would then estimate the
LOD necessary for assessing each targeted region. Finally, a sensor-planning
algorithm generates laser scan plans based on visually complex regions and LOD
requirements.

INTRODUCTION

Effective management of construction projects requires a closed-loop control of jobsites using timely collected field data as feedback. Figure 1 depicts such a job site observation and control process from an “nD data and model” point of view. From the jobsite, a data collection process gathers n-dimensional (nD) jobsite data that capture as-built geometries, material layouts, and varying productivities. Data processing will then generate an nD model that integrates a 4D as-built BIM (space plus time), site conditions, labor performance, and material consumption across job sites. According to this nD model, civil engineers can adjust the jobsite accordingly.
Being a vital step in this control loop, data collection determines the jobsite data quality as well as the accuracy of the nD model. Missing, inaccurate, or untimely data is the cause of a major portion of the problems observed on construction sites (Taneja, Akinci, Garrett, & Soibelman, 2012). To acquire quality data, construction engineers have applied new measurement technologies to replace or assist traditional manual approaches to site monitoring. These measurement technologies include LiDAR, RFID, photographs, video clips, etc. Among these technologies, laser scanning is able to acquire real-time, high-quality jobsite information and has therefore become popular recently.
Generally, areas of the jobsite that have complex geometries or change frequently may require detailed, accurate, and frequent data collection. As the job site geometries vary and projects progress, laser-scan data collection requirements may vary across the site. However, uncertain environmental changes and variations, such as changing site layouts, also make data collection needs unpredictable. Without prior knowledge, field engineers can only make data collection plans based on their observations and experience, which may not optimize the data coverage, levels of accuracy (LOA) and detail (LOD), and data collection time. As a result, the engineers may collect redundant data, causing unnecessary time and labor waste, or miss data needed for decision-making. Some studies use as-designed models or drawings to guide the planning of data collection, but they must then handle objects that exist in the field while missing from the models. The dilemma becomes how to acquire reliable site conditions to guide effective and efficient laser scanning in the field.
This paper proposes a computational framework of visual complexity analysis (step 1 in Figure 2) for resolving the dilemma of laser scan planning. The concept is to use rapidly acquired sparse imageries to identify visually complex areas that deserve detailed 3D data collection, and thus guide laser scan planning. Figure 2 shows the procedure of the proposed approach, which plans detailed data collection based on sparse imageries. The algorithm automatically identifies visually complex regions in sparse imageries through discontinuity analysis, and derives the LOD requirements of these regions through frequency analysis of visual patterns. The detected visually complex areas, along with their LOD requirements, then guide the data collection process for capturing comprehensive data with sufficient details on visually complex areas.

RELATED STUDIES

The importance of timely and accurate jobsite information in construction projects is widely discussed in previous studies. Taneja et al. (2012) show that the collection of accurate, complete, and reliable field data is essential not only for active management of construction projects, but also for facility and civil infrastructure management. In the facility management area, Lee and Akin (2009) found that missing jobsite information, caused by poor quality data from manual documentation, could lead to huge fieldwork inefficiency.
Laser scanning, as an efficient and effective measuring method, addresses this desire for quality field data and has many applications in construction. Park, Lee, Adeli, and Lee (2007) use a laser scanner to measure the deformation of structures. In the domain of civil engineering, Haas et al. investigated the technical feasibility of integrating a CAD model and laser scanned data for automated construction progress monitoring (Bosche & Haas, 2008; Turkan, Bosche, Haas, & Haas, 2012). As another method of acquiring 3D imaging data, photogrammetry is gaining the interest of researchers. Hwang, Weng, and Tsae (2008) and Yang and Kang (2014) use digital photos to reconstruct as-built BIM and to assess the accuracy of point clouds; Gore, Song, and Eldin (2012) use photogrammetric techniques to assist space planning on complex and congested construction sites.
Although laser scan technologies are becoming popular in the construction industry, few research projects have studied how to acquire laser scanning data of unknown site conditions with a specified data quality. The process of visual complexity analysis and data collection planning shown in Figure 2 will fill this gap of acquiring quality laser-scan data in dynamic and unpredictable environments. This paper focuses on visual complexity, while data collection planning will be the focus of a case study.

OVERVIEW OF VISUAL COMPLEXITY ANALYSIS

The purpose of visual complexity analysis is to use sparse imagery data collected beforehand to find areas of the jobsite with visual complexity, and then determine the data quality needed for ensuring a sufficient confidence level about the data. Figure 3 shows an IDEF0 process model describing the computational framework of visual complexity analysis. The input is sparse 2D and 3D imagery data (photos and low-resolution laser-scan data) for rapid visual complexity analysis. The controls include three categories of domain requirements: time limits, schedule limits, and labor and budget limits. The reason for using sparse imageries is to ensure that the visual complexity analysis does not add time requirements that outweigh the benefits of using it to guide more effective laser-scan data collection.


Figure 3. IDEF0 process model showing the computational framework of visual complexity analysis
Under the control of a sensor model and domain requirements, a three-step algorithm forms the main mechanism of visual complexity analysis using the idea of discontinuity detection of visual information, as detailed below. The output of this visual complexity analysis is a set of visually complex locations, representing areas whose visual complexity warrants further inspection and detailed analysis. Specifically, each output visually complex location has the following attributes: 1) x, y, z coordinates indicating the position of the visually complex part; 2) a normal vector representing the facing direction of the local surface, which will influence its visibility; and 3) the corresponding LOD requirement of each point of complexity, which indicates the data density needed for capturing the geometric details.
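For concreteness, these output attributes can be captured in a small data structure. The sketch below is only an illustration of the record described above (the field names and the points-per-area reading of LOD are assumptions), not the authors' implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VisuallyComplexLocation:
    """One output record of the visual complexity analysis (illustrative)."""
    position: np.ndarray   # (x, y, z) coordinates of the visually complex part
    normal: np.ndarray     # facing direction of the local surface
    lod: float             # required point density (assumed: points per unit area)

# Example record; the values are illustrative only.
loc = VisuallyComplexLocation(
    position=np.array([12.3, 4.5, 2.1]),
    normal=np.array([0.0, 1.0, 0.0]),
    lod=400.0,
)
```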

3D POINT CLOUD GENERATION FOR VISUAL COMPLEXITY ANALYSIS FROM SPARSE IMAGERIES

In this approach, we use sparse imageries, which are jobsite photos and a low-resolution laser-scanned point cloud, to generate a 3D point cloud for visual complexity analysis. Photography has the advantages of ease of use, fast data collection, portable devices, and wide availability. Sparse photos allow fast detection of potential changes and spatial complexities across a job site. On the other hand, the disadvantage of photos is that they do not contain sufficient information about the absolute geometries of objects. Therefore, we fuse the point cloud generated from jobsite photos with a sparse laser-scanned point cloud for scaling and for more detail in certain areas.
First, we generate 3D point clouds from the jobsite images using Scale-Invariant Feature Transform (SIFT) feature points, because SIFT feature points are generally salient features containing potential visual complexities. The SIFT algorithm is used for feature extraction and image matching (Lowe, 2004). SIFT feature points are robust and invariant to spatial scale, rotation angle, and image brightness, which means SIFT can detect the same object in different images by matching feature points. As a result, SIFT detects all the distinguishable features of an object, so we can perform the visual complexity analysis based on SIFT feature points instead of the original images for time and space efficiency.
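The kind of cross-image SIFT matching described here can be sketched with OpenCV. The snippet below matches feature points between two hypothetical jobsite photos using a brute-force matcher with Lowe's ratio test; it is illustrative only and does not reproduce the Photosynth pipeline used in this work.

```python
import cv2

# Illustrative only: file names are hypothetical stand-ins for two jobsite photos.
img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test keeps only distinctive matches;
# such matches are the raw material for 3D reconstruction and registration.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} matched SIFT feature points")
```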
In practice, we used Microsoft Photosynth, a software package based on SIFT feature point detection and matching, to accomplish the 3D point cloud generation from the jobsite images. This point cloud keeps all of the matched SIFT feature points of the images, which are the source information for the visual complexity analysis. Figure 4 shows the point cloud generated from photos of a campus building named College Avenue Commons (CAVC) at Arizona State University (ASU).

Second, the authors integrate the 3D point clouds generated from photos with sparse laser-scan data to obtain absolute geometric information of job sites and more details of complicated areas. The laser scanner used in this research integrates a camera so that it can produce 3D laser scanning point clouds along with a panoramic photo aligned with the point cloud. Through SIFT, we match images taken by the LiDAR with the images used for point cloud generation. Using three pairs of matching SIFT feature points, we can conduct a point-to-point registration between the point clouds from photos and from the laser scan. After this process, we have a fused point cloud with a real-world scale and more details about the complicated parts of the jobsite.

We could also use other methods to scale the point cloud from images (e.g., setting artificial targets). In this approach, we just use point clouds from photos. We are in the process of testing the use of laser scanning data for scaling photo-based spatial information, and will present the results in future publications.

VISUAL COMPLEXITY ANALYSIS: USING DISCONTINUITY TO QUANTIFY VISUAL COMPLEXITY

In this step, we perform the visual complexity analysis using the 3D point clouds generated from sparse imageries. We quantify the concept of visual complexity using discontinuity analysis in three dimensions: discontinuity of density, discontinuity of normals, and discontinuity of color. An area in the point cloud is visually complex if it shows a discontinuity in any of the three dimensions that satisfies a certain condition. Figure 5 shows visual complexity caused by different kinds of discontinuities. Usually, a discontinuity of normals indicates a corner of a wall, and a discontinuity of density indicates the edge of a wall. Discontinuities of color indicate different objects, different areas, or different materials. The goal of this research is to detect visually complex locations showing discontinuities of visual information in the point cloud of the jobsite.

Figure 4. Point cloud from photos          Figure 5. Three different discontinuities


Noticing two features of the point cloud in civil engineering applications, we can start by analyzing the visual discontinuity on the X-Y plane, and then analyze the discontinuity in 3D space to get the coordinates of visually complex locations. These two features include: 1) the z-axis of the point cloud always points in the zenith orientation, for every picture used will provide precise zenith orientation information because of the gravity sensor in the devices; 2) the actual jobsite often consists of walls that are perpendicular to the ground, which is different from the natural landscape.

VISUAL COMPLEXITY ANALYSIS: IN X-Y PLANE

Considering the two features of the point cloud in civil engineering applications, visual complexity analysis on the X-Y plane consists of four steps. First, we acquire the top view of the point cloud, Fig. 6(a). The top view of the point cloud clearly shows areas having densely located SIFT feature points and thus reconstructed 3D points, indicating the general layout of the jobsite. Second, we recolor the top-view image into a greyscale image, shown in Fig. 6(b). Instead of directly transferring the RGB color into a grayscale color, we relate the brightness of any pixel in the greyscale image to the number of feature points at the corresponding x-y coordinate in the point cloud. The brighter the pixels are, the more feature points share the same x-y coordinates, which means features of this area are frequently captured by more pictures. Third, we apply a low-pass noise filter and a Gaussian blur to the image to mitigate the influence of noise, shown in Fig. 6(c). The local maxima in this blurred image indicate concentrations of SIFT feature points in the point cloud, such as the corner of a wall, the framework structure, etc. So the last step is detecting the local maxima in this blurred image, from which we get the x-y coordinates of SIFT points containing possible visual complexity, as shown in Fig. 6(d).

Figure 6. Four steps of 2D visual complexity analysis: (a) top view; (b) greyscale recoloring; (c) noise filtering & blurring; (d) visually complex locations in the x-y plane
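A minimal sketch of these four steps is shown below, assuming the reconstructed feature points are available as an (N, 3) NumPy array; the grid size, blur width, and peak threshold are illustrative assumptions rather than the values used in this study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def xy_complexity(points, cell=0.1, blur_sigma=2.0, min_count=3):
    """Detect candidate visually complex x-y locations from feature points."""
    x, y = points[:, 0], points[:, 1]

    # Steps 1-2: top view as a 2D histogram; pixel brightness = feature-point count.
    xbins = np.arange(x.min(), x.max() + cell, cell)
    ybins = np.arange(y.min(), y.max() + cell, cell)
    counts, _, _ = np.histogram2d(x, y, bins=[xbins, ybins])

    # Step 3: suppress noise with a Gaussian blur (low-pass filtering).
    blurred = gaussian_filter(counts, sigma=blur_sigma)

    # Step 4: local maxima of the blurred image mark candidate complex areas.
    is_peak = (blurred == maximum_filter(blurred, size=5)) & (blurred >= min_count)
    ix, iy = np.nonzero(is_peak)
    return np.column_stack([xbins[ix] + cell / 2, ybins[iy] + cell / 2])

# complex_xy = xy_complexity(points)   # x-y coordinates of candidate locations
```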


VISUAL COMPLEXITY ANALYSIS: IN 3D SPACE

This step of the algorithm determines the z coordinate corresponding to every known x-y coordinate of the visually complex locations. For any possible z coordinate at a given x-y location, the algorithm calculates the discontinuity of all three attributes (density, normal, color) using the following function of the z parameter:

D_a(z) = | avg_a(z+) - avg_a(z-) |

where D_a(z) is the discontinuity of an attribute a (e.g., density, normal, color); avg_a(z+) is the average value of that attribute in the neighborhood along the z+ direction, and avg_a(z-) is defined analogously along the z- direction. Figure 7 shows the calculation of the discontinuity of density at a point along the z direction.

The last step is to pick the local maxima of this function to get the z coordinate, because a local maximum marks a variation of discontinuity, and such variation often indicates visual complexity, such as the end of an edge, the rim of an area with a different color, etc. Figure 8 shows the visually complex locations detected in the point cloud of CAVC.

Figure 7. 3D discontinuity detection (z+ & z- direction example)          Figure 8. Visually complex locations of CAVC building

LOD DETERMINATION OF VISUALLY COMPLEX LOCATIONS AND LASER SCAN PLANNING

In this step, the authors use the detected visually complex locations and frequency analysis of photos to derive the data quality requirement, namely the level of detail (LOD), and then guide a comprehensive laser-scan planning (determination of scanning positions and resolutions). We will discuss the mathematical definition of the LOD of a point cloud in future publications. Here we use LOD to represent the point density of the neighborhood of a certain point of visual complexity.

LOD determination is to determine the sampling rate of 3D-imaging data collection for reconstructing the original signal (the real-world jobsite, represented by the original photos) with acceptable detail loss. From the detected visually complex locations, we can find the corresponding points in the original images. By applying 2D discrete frequency analysis (Fourier transform or wavelet transform) around the neighborhood of these points, we obtain the frequency-domain information of a certain visually complex location. Then we can apply the Nyquist-Shannon sampling

theorem (Shannon, 1949) to determine the sampling rate of the image, with which the original image can be reconstructed from the sampled points with acceptable detail loss. If the algorithm detects more details in the neighborhood of a feature point, that area needs a higher data density of 3D imaging measurements, and vice versa. The last step is translating the sampling rate of the image to the real-world scale. If the LOD requirement of each visually complex location is satisfied, the collected point cloud will have approximately the same amount of detailed information as the 2D jobsite image at each point of complexity. Future publications will give a detailed description of LOD determination.
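A minimal sketch of this Nyquist-based estimate is shown below. It assumes a square greyscale patch around a feature point and uses an energy-fraction cutoff to define the highest significant frequency; both choices are assumptions made for illustration rather than the paper's definition of LOD.

```python
import numpy as np

def required_sampling_rate(patch, pixel_size, energy_fraction=0.99):
    """Estimate the sampling rate (samples per meter) needed around one feature point.

    patch: 2-D greyscale array around a visually complex location.
    pixel_size: real-world size of one pixel (meters per pixel).
    """
    n = min(patch.shape)
    patch = np.asarray(patch, dtype=float)[:n, :n]          # square crop for simplicity

    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_size))  # cycles per meter
    fy, fx = np.meshgrid(freqs, freqs, indexing="ij")
    radial = np.hypot(fx, fy)

    # Smallest radial frequency that contains the requested fraction of spectral energy.
    order = np.argsort(radial.ravel())
    cumulative = np.cumsum(spectrum.ravel()[order]) / spectrum.sum()
    f_max = radial.ravel()[order][np.searchsorted(cumulative, energy_fraction)]

    return 2.0 * f_max   # Nyquist: sample at least twice the highest frequency
```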
Finally, the scan planning algorithm uses the visually complex locations and LOD requirements as inputs, and calculates the optimal data collection plan. In dynamic construction environments, the data collection plan would guide civil engineers to adjust scanning parameters (resolution, scanning locations, etc.) to ensure the data quality needed to support proactive construction progress monitoring, safety analysis, and quality control. Due to space limits, details of the scan planning algorithm will appear in related publications. Real-world experiments validate that the proposed complexity analysis and scan planning would ensure quality jobsite data.

SUMMARY AND FUTURE RESEARCH

Using discontinuity as a way of quantifying visual complexity, this research examined a visual complexity analysis on sparse jobsite imageries to guide further detailed imagery data collection and ensure the satisfaction of LOD requirements in changing construction environments. Evaluation results on a campus building show the potential of this approach for adequately identifying the visually complex parts of the environment (e.g., geometric and color discontinuities). Visual complexity information will not only guide targeted data collection, but also targeted data processing, e.g., registration and 3D modeling based on 3D imagery data (laser scan data, photos, etc.).
On the other hand, we identified several challenges for further development of this visual complexity analysis approach: 1) the current speed of image feature matching and point cloud generation is relatively slow (more than 10 minutes), so faster feature detection and complexity analysis methods are needed; 2) to satisfy other data quality requirements, such as level of accuracy requirements, corresponding visual complexity analysis is also necessary; 3) the integration of visual complexity analysis with other procedures in the construction data-model cycle (e.g., model reconstruction) still needs further exploration. The authors will address these challenges in future studies.

REFERENCES

Bosche, F., & Haas, C. T. (2008). Automated 3D data collection (A3dDC) for 3D
building information modeling. The 25th International Symposium on
Automation and Robotics in Construction. ISARC-2008, 279–285.
Gore, S., Song, L., & Eldin, N. (2012). Photo-modeling for Construction Site Space Planning, 1350–1359.


Hwang, J., Weng, J., & Tsae, Y. (2008). 3D Modeling and Accuracy Assessment - A Case Study of Photosynth, 3–8.
Lee, S., & Akin, Ö. (2009). Shadowing tradespeople: Inefficiency in maintenance
fieldwork. Automation in Construction, 18(5), 536–546.
Lowe, D. G. (2004). Distinctive Image Features from Scale-Invariant Keypoints.
International Journal of Computer Vision, 60(2), 91–110.
Park, H. S., Lee, H. M., Adeli, H., & Lee, I. (2007). A New Approach for Health
Monitoring of Structures: Terrestrial Laser Scanning. Computer-Aided Civil and
Infrastructure Engineering, 22(1), 19–30.
Shannon, C. E. (1949). Communication In The Presence Of Noise. Proceedings of the
IRE, 86(2), 447–457.
Taneja, S., Akinci, B., Garrett, J. H., & Soibelman, L. (2012). Sensing and Field Data Capture for Construction and Facility Operations, 137(10), 870–881.
Turkan, Y., Bosche, F., Haas, C. T., & Haas, R. (2012). Automated progress tracking
using 4D schedule and 3D sensing technologies. Automation in Construction, 22,
414–421.
Yang, L., & Kang, J. (2014). Application of Photogrammetry: 3D Modeling of a
Historical Building (pp. 219–228).


Automated Post-Production Quality Control for Prefabricated Pipe-Spools

Mahdi Safa1,*; Arash Shahi2; Mohammad Nahangi3; Carl Haas4; and Majeed Safa5

1,2,3,4 Department of Civil and Environmental Engineering, University of Waterloo, 200 University Avenue West, Waterloo, ON, Canada N2L 3G1.
5 Department of Agricultural Management and Property Studies, Lincoln University, New Zealand.

Abstract

Prefabrication has been gaining popularity in the construction industry over the past decade as it provides safer and more sustainable operations as well as higher quality and cheaper construction components, due to its controlled conditions during fabrication, in-house quality assurance systems, and lower amounts of waste. Despite the advantages of prefabrication methods, the current quality management processes, particularly in piping fabrication, are labour-intensive, time-consuming, expensive, and rather inaccurate. This paper investigates
automated solutions for improving the quality management system associated with the
prefabrication of piping assemblies in the industrial sector of the construction industry. The
scope of this research is the quality management processes conducted at the post-production
stage of the fabrication process. The findings of this paper indicated that 3D laser scanning and
photogrammetry techniques can both be successfully used in quality assurance systems for post-
fabrication of pipe-spools. It was concluded that these methods exceeded the accuracy
requirements of the current systems, while substantially improving the efficiencies of the quality
assurance processes for prefabrication operations.
INTRODUCTION AND BACKGROUND
Within the context of the construction industry, a quality assurance (QA) system refers to the
method by which owners and contractors use systematic quantitative and qualitative
measurements to ensure, with adequate confidence level, that a product, process, or service
conforms to design specifications or contract requirements (Burati et al., 1992). Given the
dynamic construction environment, failure to achieve adequate quality levels in construction
processes has long been an obstacle to the delivery of projects on time and on budget. Current
QA processes primarily involve paper forms and manual human operations, which are
inaccurate, time-consuming, expensive, and labour-intensive (Fredericks et al., 2005). The associated
problems may cause delays in completion of the project and may trigger claims by the owner and
other parties. Effective improvements in the quality assurance systems associated with
construction processes therefore offer significant promise (Rounds and Chi, 1985). Also, the
construction industry has been exploring alternatives to its traditional operations in order to deal
with challenges inherent in working in a dynamic, unique, and continuously evolving
construction environment.
The application of pre-fabrication techniques, as an alternative to traditional construction
practices, has resulted in a profound change in the construction industry worldwide (Tucker,
1982; Safa et al., 2014). Prefabrication can be defined as "a manufacturing process, generally
taking place at a specialized facility, in which various materials are joined to form a component

part of a final installation" (Tatum et al., 1987). Any component that is manufactured offsite and
is not a complete system can be considered prefabricated, including pipe spools and pipe
modules. Benefits include improved quality, enhanced design, reduced project time, and less
reliance on site labour. For some projects, these benefits come with increased costs, which could,
however, be minimized in the future, as the construction industry becomes more familiar with
the technology (Yeung et al., 2002). In general, modularization and prefabrication represent
aspects of a trend toward more efficient construction that has been developing in the construction
industry in Canada and the US over the past few decades. Improved productivity is a key driver
for the use of prefabrication (Eastman and Sacks, 2008). In addition, supply chain requirements
in Canada and abroad necessitate the use of modular design and prefabricated systems across
nationally and often globally distributed supply chains.
Due to logistical challenges of global supply chains, successful transportation and delivery of
prefabricated materials has always been a key challenge. Despite recent substantial advances in
modularization and prefabrication and the use of an extra 10 % to 20 % of structural material for
bracing and supporting modules, significant damage still occurs during shipment, requiring
rework after arrival at the site: an undesirable secondary effect. Other factors that lead to rework
and increased costs are fabrication errors and inaccuracies, which are due mostly to human
interaction and challenging material behaviours in the fabrication process. The Construction
Industry Institute (CII) has reported that in 2001 the cost of rework across the entire construction
industry was approximately US $ 15 billion (CII, 2002). Defects frequently become evident
during the construction phase, which is costly for both the contractors and the owners. It has
been estimated that approximately 10 % of the cost of construction rework is caused by delays in
detecting the defects (Akinci et al., 2006). Timely and accurate quality management practices
can hence save money and expedite project schedules. Current approaches for quality control
measurements on piping fabrication and installation are not as effective as they could be in
identifying defects early in the construction process. As a result, defects can go undetected until
later phases of construction or even to the maintenance phase. It was reported by the
Construction Industry Institute in 2003 that 13.3% of all fabricated pipe spools in the industrial
sector required rework (CII, 2003; Safa et al., 2011).
Addressing deviations due to fabrication errors and shipping damage requires an automated,
integrated, and continuous inspection and quality control management system. Combining 3D
imaging with 3D design information opens up a wide range of potential solutions. Applying this
technique for automating the measurement process of QA provides project management with an
outstanding opportunity for visualization of the as-built point-cloud data, which needs to be
properly registered with the as-planned point-cloud (Golparvar-Fard et al., 2011). The feasibility
of the use of these technologies has been the subject of numerous research studies involving the
analysis of construction progress and other computer vision and construction management
applications (Brilakis et al, 2011; Shahi et al. 2014). Such tools make it possible to automate
tasks related to quality control and quality assessment, including (1) the automated quality
assessment of fabricated assemblies, (2) the remote identification of exceeded tolerances, and (3)
the remote and continuous quality control of assemblies as they are being fabricated. Solving
these problems and automating these tasks will reduce the risk of fabrication errors and thus
decrease project cost as well as enhance schedule and productivity performance.


While the use of prefabrication is increasing in all construction sectors, the scope of the research
presented in this paper was limited to the development of a QA model of the built dimensional
quality of prefabricated pipe spools and pipe modules that have been produced using a
prefabrication process. However, a comprehensive quality control system would need to be able
to identify and prevent production, post-production, and on-site defects. Detecting on-site defects
was beyond the scope of this research. Instead, this paper focuses on using automated systems to
assist with post-production quality control processes. Historically, this is the stage where the
majority of the quality issues can be identified, while providing sufficient warning for resolving
the issues before the prefabricated elements are shipped to the construction site.
AUTOMATED QUALITY ASSURANCE FOR PREFABRICATED SPOOLS
In current QA practice, pipe spools are tested and measured only when the production
department places them at the end of the production line. At this post-production stage, the pipe
spools are considered end products of the fab-shop if they pass the QA tests. The automated
post-production QA process was developed in this research with the potential to improve on the
current practice.
Pipe spools should be assembled in a way that avoids forcing, cutting, or otherwise weakening
the structural members. Even one pipe-spool with small deficiencies and defects could cause
multiple problems and challenges: rework costs, process or service audits, supplier surveillance,
client complaints, etc. Given the QA requirements surrounding piping construction, it ranks
ahead of other construction QA categories with respect to the need for technological
advancement (Kim et al., 2013). Unfortunately, the construction industry as a whole is about 20-
30 years behind some other industries, including manufacturing, when it comes to adopting new
technological tools (Shahi et al., 2012). While several new technologies could be applied as a means of improving QA in piping construction, in this research the use of 3D laser scanning and photogrammetric techniques was investigated.

For the use of 3D laser scanning, three suitable laser scanner locations were identified in the
corners of the quality control room at the end of the production process, shown in Figure 1. The
appropriate location of the scanners minimized occlusion and enabled as many sides of the
spools as possible to be covered in the scans. Separate scans were merged together by finding the
common points among the separate point clouds. The ambient temperature was about 15ºC to 20
ºC, which would not affect the scanner results. One of the advantages of laser scanners is that
they can tolerate small temperature changes (0-5°C), without the need for lengthy calibrations.
Of course, large temperature changes could negatively affect the accuracy (Ahmed et al., 2011).


Figure 1: Location of Scanners for Conducting Post-Production Measurement

A FARO laser scanner LS 840 HE was used in order to capture the data for this study. This technology is commonly known as time of flight (TOF) and is used in most available and affordable laser scanners. FARO Scene scanner software was used for merging the captured point-clouds. The scanning time for each scan depends on the quality of the scan as well as the area to be scanned.

For this research, photogrammetry was used as a secondary quality control data collection tool. Typically, laser scanning can retrieve more points than photogrammetry: millions of points versus thousands of points. However, it has not been shown that better accuracy can be obtained through laser scanning in any given situation (Ahmed et al., 2011). Photogrammetric techniques support the automatic stitching of photos through the detection of common features in a sequence of images, and this can be performed using several commercial software packages (e.g., Autodesk Photofly and Microsoft Photosynth) (Dai and Lu, 2010). Recent advances in generating 3D environments using laser scanning and photogrammetry technologies create an opportunity to explore the technological feasibility of frequently gathering complete and accurate 3D information of as-built data.

Photogrammetry has been suggested and validated for particular applications in construction engineering, such as: (1) inspecting the settling displacements of control points on existing infrastructure, (2) managing and visualizing the progress of a project, (3) providing a digital record for decision making and asset management purposes, and (4) controlling the geometric dimensions of as-built building products through efficient dimension takeoffs (Dai and Lu, 2010; Golparvar-Fard et al., 2011). The photogrammetry process can be summarized as follows: (1) common features are selected in more than two images; (2) the positions and orientations of the cameras are calculated; and (3) the locations of intersecting feature points enable the reconstruction of 3D information. Theoretically, two images are sufficient for the generation of a 3D point cloud using photogrammetry, but for better results, 10 digital camera locations were identified in the quality control room, as shown in Figure 2. The images taken were then analyzed using PhotoModeler software, and then the 3D point clouds of the pipe spools were generated.


Figure 2: Location of Cameras for Conducting Post-Production Measurement

The calibration elements for the camera used for photogrammetry include focal length, resolution, format size, and lens parameters. It should be noted that extracting dimensions from poorly reconstructed or non-reconstructed parts is inaccurate and unreliable for further investigations. However, in order to overcome this limitation and to improve the accuracy of the reconstructed model, a sufficient number of images needs to be taken, and from different perspectives. Although processing time increases as more images are taken of the assembly, this extra effort is needed to lower the probability of having poorly reconstructed or non-reconstructed regions within the assembly. For the particular assemblies tested in this paper, all regions were robustly reconstructed. The camera used in this research was a Canon SX 40HS, and the parameters were extracted according to the procedure indicated in the PhotoModeler(TM) software. Figure 3 shows the final point clouds generated by each method for one of the pipe spools. As can be seen, the point cloud generated by the laser scanner is denser and more continuous than the one generated by photogrammetry.

Figure 3: (a) Original 3D CAD Drawing; (b) Point Cloud Generated by Laser Scanner; (c) Point Cloud Generated Using the Photogrammetry Approach

In order to have the 3D CAD model and the scanned as-built status aligned, a coarse registration is performed using Principal Component Analysis (PCA), as PCA is quick and robust. The registration process is programmed in MATLAB and then applied to sample spools. A computer with a 3.7 GHz processor and 32 GB of RAM can process the registration in 5 to 6 seconds.

While the application of photogrammetry techniques has known limitations when it comes to flat and featureless surfaces, such as pipe spools, 3D laser scanning is very powerful in this context and its accuracy levels do not deteriorate in such conditions. The results of the laser scanning process depend on a number of factors such as the object's distance from the scanner and

measurement angle. Laser scanners can output extremely high resolution models compared to photogrammetry results in both laboratory and actual field experiments, while both approaches allow the as-built environment to be visualized from different viewpoints. In general, photogrammetry offers a good alternative to laser scanning, considering its much cheaper start-up cost, if the accuracy level desired for a certain application is moderate. For this study, the results obtained by implementing photogrammetry are as accurate as the results of using laser scanning technologies for objects of this size and indoor settings.
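The coarse PCA registration mentioned above was programmed in MATLAB by the authors; the snippet below only sketches the general idea (align centroids and principal axes) under the assumption that the principal axes of the two clouds correspond, and a fine registration step (e.g., ICP) would normally follow.

```python
import numpy as np

def pca_coarse_registration(source, target):
    """Coarsely align `source` (N, 3) points to `target` (M, 3) points.

    Matches the centroids and principal axes of the two clouds; the axis sign
    ambiguity is resolved only by forcing a right-handed frame.
    """
    def principal_frame(pts):
        centered = pts - pts.mean(axis=0)
        # Eigenvectors of the covariance matrix, ordered by decreasing variance.
        _, vecs = np.linalg.eigh(np.cov(centered.T))
        frame = vecs[:, ::-1]
        if np.linalg.det(frame) < 0:          # keep a right-handed frame
            frame[:, -1] *= -1
        return frame

    R = principal_frame(target) @ principal_frame(source).T
    return (source - source.mean(axis=0)) @ R.T + target.mean(axis=0)
```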
The proposed automated measurement system for the post-production QA process was
performed for several pipe spools. Sample results for three pipe spools are shown in Table 1 for
both the laser scan and the photogrammetry data. The measurement results are the diameters and lengths of the various sections of the pipe spools, which are determined by the QA department as the critical areas. There were five critical points identified on pipe-spools 1 and 2, and four points on pipe-spool 3, to be controlled as part of the quality assurance program. The QA department ensures that the fabricated spools meet final assembly requirements by determining the critical test areas and dimensional tolerances for completed fabricated piping. There is no standard or specification for determining the critical areas, due to the one-off nature of these projects. As a basic rule, the standard parts of the pipe spools, such as fittings, pipes, and flanges, should typically not be considered as critical areas.
Table 1: Comparison of the Dimensions; Dap: As-planned Dimension from the Original 3D
drawing; DL: Dimension Measured in the 3D Point Cloud from the Laser Scanner; DP:
Dimension Measured from the 3D Point Cloud from Photogrammetry (mm)
Pipe     Dap (mm)   DL (mm)   Dp (mm)          ΔL =       %ΔL       ΔP =       %ΔP
Spool #  Original   Laser     Photogrammetry   Dap-DL                Dap-DP
         Value      Scan
1            14        15        13             -1      -7.14%       1       7.69%
            374       378       379             -4      -1.07%      -5      -1.32%
            117       120       122             -3      -2.56%      -5      -4.10%
             38        38        38              0       0.00%       0       0.00%
            168       174       176             -6      -3.57%      -8      -4.55%
2           133       131       132              2       1.50%       1       0.76%
           1881      1879      1881              2       0.11%       0       0.00%
            214       216       218             -2      -0.93%      -4      -1.83%
            165       175       174            -10      -6.06%      -9      -5.17%
             89        90        93             -1      -1.12%      -4      -4.30%
3            38        43        44             -5     -13.16%      -6     -13.64%
            191       189       187              2       1.05%       4       2.14%
             23        23        22              0       0.00%       1       4.55%
             14        15        13             -1      -7.14%       1       7.69%

Based on the QA department procedure, the %ΔL or %ΔP must be below 5% for a specific pipe-spool to be accepted. If the result is between 5% and 7%, it is referred to the QA department for a decision. If it is above 7%, the pipe spool is deemed unacceptable. Based on these criteria, spool 2 would be sent to the QA department for further review, and spools 1 and 3 would be rejected. These results matched exactly the results reported by the manual inspection of the spools. However, there are a few distinct advantages to using these automated systems. First, while there are general rules for identifying the critical areas, there are no set and proven standards. Therefore, many localized defects may go undetected at this point, simply because their exact positions were not chosen as critical points. Instead, with the automated systems the complete length of the spool can be checked against its as-planned dimensions, through the use of the 3D model as a priori knowledge, and therefore the reliability of the QA will be substantially improved.
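As a simple illustration of these acceptance rules, the helper below computes the percent deviation for one critical dimension and applies the 5% and 7% thresholds described above; the function name and the use of the as-planned value as the denominator are assumptions made for the example.

```python
def classify_measurement(as_planned_mm, measured_mm):
    """Apply the post-production QA thresholds to one critical dimension."""
    deviation = as_planned_mm - measured_mm
    percent = 100.0 * deviation / as_planned_mm   # assumed denominator
    if abs(percent) < 5.0:
        decision = "accept"
    elif abs(percent) <= 7.0:
        decision = "refer to QA department"
    else:
        decision = "reject"
    return percent, decision

# Example: first critical dimension of spool 1 as measured by the laser scan.
print(classify_measurement(14, 15))   # (-7.14..., 'reject')
```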
The second advantage is the elimination of the human factor and the associated errors in obtaining the dimensional measurements. The accuracy of the systems used, through both photogrammetry and 3D scanning, is reported to be between 1% and 2% in controlled conditions similar to those used in this research (Ahmed et al., 2011; Nahangi et al., 2015). Given the many sources of error that exist in the current system of measuring the dimensions manually with a measuring tape, including worker fatigue, eye-sight errors, and even expansion and contraction of the tape due to temperature changes, the results obtained through the automated system are both more accurate and more reliable. Even though the decisions made through the manual system matched the ones made through both laser scanning and photogrammetry, it is possible that this was due to the particular attention paid to the manual process in this investigation. In normal operations, with hundreds of spools to inspect, the accuracy of the manual system is expected to drop substantially, while the automated system can continue to function with the same consistency and accuracy levels.
CONCLUSION
In industrial construction, piping activities account for a significant portion of the construction
projects. The current QA processes for piping activities are characterized by numerous
limitations due to their inherent complexity and labour-intensive procedures. This research
investigated 3D laser scanning and photogrammetry as automated tools for post-production QA
for the particular application of prefabricated pipe-spools. The implementation of these
automated tools and the associated field study results indicate that the system has the potential to
be a valuable tool for QA tasks related to prefabricated pipe-spools, by improving accuracy,
consistency and reliability of the QA process. It was further concluded that either
photogrammetry or 3D laser scanning techniques would be suitable for this application. With the
promising results of this research in automating the post-production quality assurance process, it
is recommended for future research to consider the entire spectrum of QA activities in order to
provide a comprehensive automation system for QA processes related to prefabricated
construction elements.
REFERENCES
Ahmed, M., Guillemet, A., Shahi, A., Haas, C.T., West, J.S., and Haas, R.C.G. (2011).
“Comparison of Point-Cloud Acquisition from Laser-Scanning and Photogrammetry
Based on Field Experimentation.” CSCE 3rd International/9th Construction Specialty
Conference. Ottawa, Ontario, June 14-17.
Akinci, B., Boukamp, F., Gordon, C., Huber, D., Lyons, C., Park, K. (2006). "A Formalism for
Utilization of Sensor Systems and Integrated Project Models for Active Construction
Quality Control." J. Autom. Constr., 15(2), 124-138.


Brilakis, I., Fathi, H., Rashidi, A. (2011). Progressive 3D reconstruction of infrastructure with
videogrammetry, J. Autom. Constr. 20 (7) 884–895.
Burati, J. L., Farrington, J. J., and Ledbetter, W. B. (1992). Causes of quality deviations in design
and construction. J. Constr. Eng. Manage. ASCE. 118(1), 34–49.
Construction Industry Institute. (2003). New Joining Technology for Metal Pipe in the
Construction Industry, Breakthrough Strategy Committee, BTSC Document.
Construction Industry Institute (2002), New Joining Technology for Metal Pipe in the
Construction Industry, University of Texas at Austin, Austin, TX.
Dai, F., and Lu, M. (2010). Assessing the accuracy of applying photogrammetry to take
geometric measurements on building products. J. Constr. Eng. Manage. ASCE. 136(2),
242-250.
Eastman, C. M., and Sacks, R. (2008). Relative Productivity in the AEC Industries in the United
States for on-Site and Off-Site Activities. J. Constr. Eng. Manage. ASCE. 134(7), 517-
526.
Fredericks, T., Abudayyeh, O., Choi, S., Wiersma, M., Charles, M. (2005). Occupational injuries
and fatalities in the roofing contracting industry, J. Constr. Eng. Manage. ASCE. 131
(11) 1233– 1240.
Golparvar-Fard, M., Bohn, J., Teizer, J., Savarese, S., and Peña-Mora, F. (2011). Evaluation of
image-based modeling and laser scanning accuracy for emerging automated performance
monitoring techniques. J. Autom. Constr., 20(8), 1143-1155.
Nahangi, M., Haas, C., West, J., and Walbridge, S. (2015). "Automatic Realignment of Defective
Assemblies Using an Inverse Kinematics Analogy." J. Comput. Civ. Eng. ,
10.1061/(ASCE)CP.
Son, H., Kim, C., and Kim, C. (2014). "Fully Automated As-Built 3D Pipeline Extraction
Method from Laser-Scanned Data Based on Curvature Computation." J. Comput. Civ.
Eng., 10.1061/(ASCE)CP
Rounds, J. and Chi, N. (1985). Total Quality Management for Construction. J. Constr. Eng.
Manage. ASCE. 111(2), 117–128.
Safa, M., Gouett, M.C., Haas, C.T., Goodrum, P.M., Caldas, C.H. (2011), “Improvement of
Weld-less Innovation on Construction Project,” 3rd International/9th Construction
Specialty Conference. Ottawa, Ontario, Canada.
Safa, M., Shahi, A., Haas, C. T., & Hipel, K. W. (2014). Supplier selection process in an
integrated construction materials management model. Automation in Construction, 48,
64-73.
Shahi, A., Aryan, A., West, J. S., Haas, C. T., & Haas, R. C. G. (2012). Deterioration of UWB
positioning during construction. J. Autom. Constr., 24, 72-80.
Shahi, A., Safa, M., Haas, C.T., and West, J.S. (2014) Workflow-Driven Data Fusion Framework
for Automated Construction Management Applications. J. Comput Civil Eng. ASCE.
Tatum, C. B., Vanegas, J. A., and Williams, J. M. (1987). Constructability Improvement Using
Prefabrication, Preassembly, and Modularization, Construction Industry Institute, The
University of Texas at Austin.
Tucker, R. (1982), Construction Technology Needs and Priorities - Construction Industry Cost
Effectiveness Project Report, The University of Texas at Austin, Austin, TX.
Yeung, N. S. Y., Chan, P.C.& Chan, D. W. M. (2002). “Application of prefabrication in
construction – a new research agenda for reform by CII-HK”. Conference on Precast
concrete Building System, Hong Kong.


Characterizing Travel Time Distributions in Earthmoving Operations Using GPS Data

Sanghyung Ahn1, Jiwon Kim2, Phillip S. Dunston3, Amr Kandil4 and Julio C. Martinez5*

1 Postdoctoral Research Fellow, School of Civil Engineering, The University of Queensland, Brisbane St Lucia, QLD 4072, Australia; PH +61-7-3365-3568; FAX +61-7-3365-4599; email: [email protected]
2 Assistant Professor, School of Civil Engineering, The University of Queensland, Brisbane St Lucia, QLD 4072, Australia; PH +61-7-3346-3008; FAX +61-7-3365-4599; email: [email protected]
3 Professor, School of Civil Engineering, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47907-2051; PH (765) 494-0640; FAX (765) 494-0644; email: [email protected]
4 Associate Professor, School of Civil Engineering, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47907-2051; PH (765) 494-2246; FAX (765) 494-0644; email: [email protected]
5* (Posthumously, deceased on June 4th, 2013) Professor, School of Civil Engineering, Purdue University

ABSTRACT

Recent advances in sensor technology have led to enhanced data acquisition


capabilities in construction sites. A wealth of data are being collected from GPS-
equipped heavy vehicles for a wide range of monitoring, management, and analysis
purposes. The availability of detailed GPS trajectory data has opened up new
opportunities for modeling and simulation of real-world construction operations. One
of the emerging areas in this regard is data-driven modeling and simulation, which is
a modeling framework that attempts to automatically generate discrete-event
simulation (DES) models based on a rich set of observed data as well as dynamically
adapt the generated model to changes in data.
Within the overall framework of automatically generating a DES simulation model for earthmoving operations, this paper focuses on developing methods to convert complex movement data collected from scrapers into a modeling element of an activity-cycle diagram and activity scanning modeling paradigm-based DES system. A scraper changes travel routes at every cycle, and its trip patterns (e.g., travel path and speed) are very difficult to generalize using a known parametric model (e.g., a theoretical probability distribution), which in turn complicates the problem of automatic model generation. To deal with this issue, this paper proposes the use of the relation between travel time and travel distance, with regard to coefficient of variation measures, expressed in two separate distributions, to capture the information needed to construct speed and path scenarios.


INTRODUCTION

When modeling a real-world system using discrete-event simulation (DES),


the following steps are generally required: (i) establish the modeling scope and
abstraction level (level of detail), (ii) select essential elements needed to represent the
intended real system (e.g., resources, activities), and (iii) capture the logic and
behavior of each element appropriately.
When we define a set of activities to model an earthwork operation, activities that represent earthmovers' (e.g., dump trucks' and scrapers') travels are identified based on their origin and destination (OD) pairs. For instance, given the areas where soil is to be loaded and unloaded, Activities Haul and Return are specified to capture the trips of earthmovers between the loading and dumping areas, respectively. These activities are further differentiated based on different travel routes for a given OD pair, such that each activity describes a particular path (i.e., a different travel distance or road profile, including slope and material). For a given path, which is represented by a separate Activity element in the DES model, the duration (travel time) of the activity is modeled using a probability distribution, assuming that travel times across cycles are independently and identically distributed (IID).
Challenges arise, however, when we model scrapers using this framework. Unlike a dump truck, which follows a fixed route and only varies its travel time, a scraper changes its travel route and trip pattern at every cycle in an open remote area (Figure 1). This implies that we would need to create a hundred Activity elements to model a hundred different routes. This approach is not feasible, however, because we typically have only one travel time observation for each path, which does not allow us to specify the travel time distribution for the associated activity element. A better way might be to create a single Activity element that can model all these variations in both travel path and travel time. Thus, the objective of this study is to develop a methodology to characterize scraper travel time distributions and represent various travel routes as one activity element in the DES model, as illustrated in Figure 1.
The proposed approach employs the relation between travel time and travel distance, with regard to coefficient of variation measures, expressed in two separate distributions, to capture the information needed to construct speed and path scenarios. This research found that the Autoregressive Integrated Moving Average (ARIMA) model provides a suitable option for modeling the observed non-IID and non-stationary travel distance data after achieving stationarity with lag-1 differencing (Puri & Martinez 2013). Moreover, a result of the analysis shows that each operator tends to sustain his or her particular speed pattern. This indicates that we need to model travel time per individual machine to capture each driver's driving behavior. DES computer systems that are capable of sampling data from time series models (Puri 2012), such as STROBOSCOPE (Martinez 1996), are needed to incorporate the proposed methodology in DES studies.


Figure 1. Travel time characterization problem in push-loading scraper operation for DES study

METHODOLOGY

Data collection and synthesis. This study is based on data that the authors
collected during an afternoon shift for over 5 hours using inexpensive GPS loggers
from a push-loading scraper operation with three scrapers, a pusher (i.e., dozer) and a
grader. The observed operation was performed as a part of the US-231 road construction
project in 2011 near Purdue University. Travel duration data shown in this paper are
acquired by executing an activity recognition algorithm that the authors implemented.
The identified activity durations are thoroughly validated by using video that
recorded the operation and a 3D-visualization tool that can re-create the operation
based on GPS data as a 3D animation. The 3D-visualization tool is a byproduct of
another research project by the authors. In the remainder of this paper, the term "travel"
in a scraper operation includes the sequential Activities Haul, Spread, and Return. Scraper
travel times thus start at the end of Activity Load and end when Activity Return
terminates in that cycle.
Most distribution fitting techniques are based on the assumption that the
observed data are IID (Martinez 2010). However, in many instances the data as
observed are not IID. Given the GPS observation data, travel distance and speed data were
fitted to distributions (assuming that the data are IID) using @Risk. The fits shown
in Figure 2 are very good (the Chi-Square, Kolmogorov-Smirnov, and Anderson-
Darling tests do not reject the hypothesis at the 0.10 level of significance), so it may
seem a good idea to use the fitted distributions in a model.
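As an illustrative sketch of this kind of goodness-of-fit check (the authors used @Risk; the Python/scipy code below is only a hedged stand-in, and the file name and the lognormal candidate are assumptions rather than the paper's choices):

# Illustrative sketch only: fit a candidate distribution to observed travel
# distances and run a Kolmogorov-Smirnov goodness-of-fit test.
import numpy as np
from scipy import stats

distances = np.loadtxt("scraper1_travel_distance.csv")  # hypothetical observation file

# Fit a candidate distribution (lognormal chosen here only as an example).
shape, loc, scale = stats.lognorm.fit(distances)

# KS test of the fitted distribution against the observations.
d_stat, p_value = stats.kstest(distances, "lognorm", args=(shape, loc, scale))
print(f"KS statistic = {d_stat:.3f}, p-value = {p_value:.3f}")
# A p-value above 0.10 means the fit is not rejected at the 0.10 level, which,
# as noted above, does not by itself guarantee that the data are IID.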

Travel distance data. However, further inspection by using scatter diagrams
(Figure 3), sequentially ordered plots (Figure 4), and auto-correlation functions
(Figure 5) show that the travel distance data are not IID. In effect, the ordered plots
and auto-correlation functions suggest that data might not be stationary. Thus, an
ARIMA process provides a suitable option to model the travel distance data for
simulation. To achieve stationarity, lag-1 differences were computed for the data. The
autocorrelations for the travel distance data of three scrapers after lag-1 differencing
are shown in Figure 6 (i.e., the differenced series yield stationary processes). On
further analysis of the data, several possible options for modeling the data emerged
and the best option was selected using the Akaike Information Criterion (AIC)
(Akaike 1974). The travel distance data were thus modeled using ARIMA(0,1,1),
ARIMA(0,1,2), and ARIMA(1,1,2) models for Scraper #1, #2, and #3 respectively
and the parameters were estimated using R software.
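The ARIMA fitting and AIC-based selection described above can be sketched as follows. The authors fitted the models in R; the Python/statsmodels version below is only an illustrative equivalent, and the input file name is hypothetical.

# Sketch of the ARIMA fitting step (illustrative Python equivalent of the R workflow).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

distances = np.loadtxt("scraper1_travel_distance.csv")  # hypothetical observed series

# Candidate orders; d = 1 encodes the lag-1 differencing used to achieve stationarity.
candidates = [(0, 1, 1), (0, 1, 2), (1, 1, 2)]
fits = {order: ARIMA(distances, order=order).fit() for order in candidates}

# Select the order with the lowest AIC (Akaike 1974), as done in the paper.
best_order = min(fits, key=lambda order: fits[order].aic)
print(best_order, fits[best_order].aic)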

Figure 2. Histograms and fitted distributions for scraper (a) travel distance and (b) speed

Figure 3. Scatter diagrams for scraper travel distances

Figure 4. Ordered plot for scraper travel distances


Figure 5. Auto-correlation functions for scraper travel distances

Figure 6. Auto-correlation functions for scraper travel distances after lag-1 differencing

Travel time data. The plot of total travel time against travel distance (Figure 7) and the
sequentially ordered plot of speed (Figure 8) indicate that, while operating, drivers
tend to sustain their own speed. Thus, travel time input modeling needs to incorporate
this individual behavior by having different distributions for each operator. To cope
with this perspective in sampling travel distance, time series models were derived for
each scraper to capture this characteristic, as described in the previous section.
Scatter diagrams in Figure 9 show that travel distance and travel time are
correlated. To find a formal relationship, we further explore the coefficient of
variation (CV) of the two parameters, as shown in Figure 10. A sampling step size of
three cycles per sample was used to study the CVs. The step size was
determined under the assumption that operators are likely to decide the travel distance
of the following trip at the current instance based on the previous and the current
routes.

Coefficient of variation. Given the linear relationship between the CV of
travel distance and the CV of travel time, the standard deviation (SD) of travel time
can be derived. The mean and SD of travel distance are known from the ARIMA
model. The mean of travel time can be determined from the mean of travel distance
and the mean of speed which can be retrieved from Figure 7. We can then define a
normal distribution with parameters of the mean and SD of travel time to sample a
travel time which corresponds to the travel distance generated from the ARIMA
model.

Figure 7. Total travel distance vs. total travel time

Figure 8. Ordered plot for scraper speeds

Figure 9. Scatter diagrams for scraper travel distances vs. travel times

Figure 10. Scatter diagrams for coefficients of variation of scraper travel distances vs. travel times


Data sampling. At the first instance of travel distance sampling from the
derived ARIMA model (before the start of the first instance of Activity
HaulSpreadReturn per scraper), we sample two more instances such that the mean
and SD of travel distance can be calculated based on the generated three samples
which represent the travel distances of three sequential instances of the corresponding
machine. From the second activity instance onward, the ARIMA model generates only one
sample per instance, for the travel distance of the activity that will be initiated two
instances later. With the mean and SD of travel distance and the relationship between
the CV of travel time and the CV of travel distance, a travel time can be easily derived
as shown below.
μ_Ti = μ_Di / S    (T: travel time, D: travel distance, S: speed, i: activity instance)

σ_Ti = CV_Ti × μ_Ti    (CV_Ti obtained from its relationship with CV_Di in Figure 10)

Ti ~ N(μ_Ti, σ_Ti)

In this manner, with one activity defined in DES, we can exhibit different
travel paths in terms of travel distance and speed by characterizing individual travel
time per machine.
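A minimal sketch of this sampling chain is given below. The slope and intercept of the CV relationship and the mean speed are hypothetical placeholders, since the paper reports the relationship only graphically (Figure 10); the three-sample window follows the step size discussed earlier.

# Sketch of the travel time sampling described above (placeholder coefficients).
import numpy as np

rng = np.random.default_rng()

def sample_travel_time(distance_window, mean_speed, cv_slope=1.0, cv_intercept=0.0):
    # distance_window: three consecutive travel distances sampled from the ARIMA model.
    mu_d = np.mean(distance_window)
    sd_d = np.std(distance_window, ddof=1)
    cv_t = cv_slope * (sd_d / mu_d) + cv_intercept  # linear CV relationship (Figure 10)
    mu_t = mu_d / mean_speed                        # mean travel time from mean distance and speed
    sd_t = cv_t * mu_t
    return rng.normal(mu_t, sd_t)                   # Ti ~ N(mu_Ti, sigma_Ti)

# Example with made-up numbers: distances in meters, mean speed in meters per minute.
print(sample_travel_time([820.0, 760.0, 905.0], mean_speed=150.0))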

CONCLUSIONS AND FUTURE WORK

The authors have explored the observed travel data collected from a scraper
earthmoving operation. Some challenges were identified in defining model elements
for a DES study and modeling travel time input data when the travel pattern changes at
every cycle, which is common in scraper earthwork operations. Therefore, this study
has proposed a simple but statistically robust methodology to characterize travel time
distributions with one activity element model. The proposed methodology uses the
relation between travel time and travel distance with regard to coefficient of variation
measures, expressed in two separate distributions, to capture information needed to
construct speed and path scenarios. This input modeling strategy will be studied
further with various validation techniques.

ACKNOWLEDGEMENT

The authors would like to acknowledge DJ McQuestion & Sons, Inc.,
in particular Rick McQuestion (Project Manager on the US-231 project), for their
considerable help in collecting data.

REFERENCES

Akaike, H. (1974). "A new look at the statistical model identification." IEEE
Transactions on Automatic Control, 19(6), 716–723.
Law, A. M., and Kelton, W. D. (1991). Simulation modeling and
analysis. McGraw-Hill, New York.
Martinez, J. (2010). “Methodology for Conducting Discrete-Event Simulation Studies
in Construction Engineering and Management.” Journal of Construction
Engineering and Management, 136(1), 3–16.
Martinez, J. C. (1996). “STROBOSCOPE: State and resource based simulation of
construction processes.” Doctoral Dissertation. Department of Civil and
Environmental Engineering, University of Michigan, Ann Arbor, MI.
Martinez, J. C., and Ioannou, P. G. (1999). “General-purpose systems for effective
construction simulation.” Journal of construction engineering and
management, 125(4), 265–276.
Puri, V. (2012). “Incorporation of continuous activities into activity cycle diagram
based discrete event simulation for construction operations." Purdue University.
Puri, V., and Martinez, J. (2013). “Modeling of Simultaneously Continuous and
Stochastic Construction Activities for Simulation.” Journal of Construction
Engineering and Management, 139(8), 1037–1045.
@Risk, Palisade Corporation, 31 Decker Rd, Newfield, NY 14867, USA.


A Qualitative Study on the Impact of Digital Technologies on Building Engineering Design Offices
Graham Hayne; Bimal Kumar; and Billy Hare
School of Engineering and the Built Environment, Glasgow Caledonian University,
Cowcaddens Road, Glasgow, G4 0BA Scotland, U.K.
E-mail: [email protected]
Abstract
The ethos of building design offices has undergone radical transformation following
the widespread adoption of digital technologies. Design calculations that were
formerly carried out by hand over a period of weeks and months are now completed
in hours using highly sophisticated analysis and design packages. Likewise, the
drawing board has been almost universally abandoned in favour of CAD systems and
parametric 3D models. The purpose of this research is to identify the effect that
digital technologies are having on the operations of design offices. A series of semi-
structured interviews was carried out with experienced practitioners (consulting
engineers and a steelwork sub-contractor), all of whom have witnessed the evolution
of the industry to a digital environment. A subjective research philosophy was
adopted using an inductive approach to analyse the data. Not surprisingly, it was
found that the use of digital analysis, drawings and modelling suites has brought
significant benefits to the industry. There are, however, many issues that are
adversely impacting the industry; the over-reliance on computer analysis by younger
engineers was argued to be leading to a loss of understanding of the basic
engineering principles in their designs. There is a tendency to ignore the historical
philosophy of building design and its associated approximations, with inexperienced
engineers focusing on levels of accuracy more akin to manufacturing. With some
modellers being trained in the use of the technology instead of the principles of
engineering and construction, the quality of information being issued to constructors
is often lacking.
Keywords: Training; Experience; Technology

INTRODUCTION
Computers and associated digital technologies are radical inventions having the
ability to transform our lives (Winograd and Flores, 1987). Indeed, there is no doubt
that the working environment of building design engineers has been profoundly
changed following the introduction of computer drawing, modelling and analysis
packages (Hayne et al., 2014). However, it is suggested that computer technologies
'may have unintended as well as intended impacts' (Zhou et al., 2012, p. 103).
The findings of a brief literature review are set out, identifying some of the
negative impacts of computer technologies on the engineering design industry. In an
attempt to verify these findings, a series of semi-structured interviews are conducted
with practising professionals, who have witnessed the changes in working practice.

BACKGROUND
Karl Weick (1985) sets out various psychological procedures that people adopt to
increase learning and efficiency at work including triangulation, affiliation,
deliberating and consolidating. He goes on to question if these are achievable when
working in a digital environment. ‘People using information technologies are
susceptible to cosmology episodes because they act less, compare less, socialize less,
pause less, and consolidate less…As a result, the incidence of senselessness
increases’ (Weick,1985, p56).

Design is defined as the 'interaction between understanding and creation'
(Winograd and Flores, 1987) and 'a creative process of adequate problem solving'
(Wilpert, 2007). Additionally, it is widely accepted that engineering is a combination
of art and science (Blockley, 1980). From these definitions it is evident that
experience and tacit knowledge play a fundamental role in the problem solving of
engineering design. Simon (1982) goes as far as to suggest that design is problem
solving based on trial and error guided by experience.

The issue for software designers is how to code experience and tacit knowledge. It is
argued that tacit knowledge cannot be codified and, therefore, engineering becomes a
combination of art and standardised coding (Henderson, 1999). Henderson further
questions who codes the information. Is it software houses or engineers? If the
former, there is a distinct danger that control of the software is lost with the computer
becoming ‘the bearer of knowledge’ (p20).

The use and outputs of computers needs to be considered in context. For example,
someone using a word processor is not merely producing a document but is writing a
letter or a report … (Winograd and Flores, 1987). Likewise an engineer analysing a
complex structure is in fact undertaking a part of the much wider design process.
It is generally accepted that design is an iterative process (Alexander, 2000; Wilpert,
2007; Whyte, 2013). This process relies upon the use of visual images for evaluation
of ideas (Goldschmidt,1994). This operation was often completed using tracing paper
overlays where changes could be quickly sketched and adjusted with the designers
able to see the interaction of the existing and proposed (Henderson, 1999). The use
of digital models can lead to an over-reliance on a single data source that may not be
the most appropriate (Perrow, 1999, cited in Whyte, 2013). Viewing models and
images on screen can cause visualisation problems as it’s ‘…difficult to see very
much … you have to move the thing around and then you can’t remember what was
on the other side’ (Whyte, 2013. p51). Different perspectives and data sources are
required to validate and ensure accuracy of data. 'The illusion of accuracy can be
created if people avoid comparison (triangulation), but…illusions of accuracy are
short-lived, and they fall apart without warning. Reliance on a single,
uncontradicted data source can give people a feeling of omniscience' (Weick, 1985,
p. 57). To evaluate ideas the designers must challenge the outputs of the computer but
researchers have identified that 'digital systems do not encourage the active
challenging of assumptions’ (Zhou et al,2012).

METHODOLOGY
A purposive sample consisting of practising consulting engineers, who began their
careers in the pre-digital era was identified. Older engineers were selected as, unlike
younger engineers, they were able to provide comparisons of the two eras. The
sample consisted of six engineers and a technical manager from a steelwork
fabricator as set out in Table 1. The initial analysis of the interviews indicated that
saturation had been achieved and additional interviews were not considered
necessary.
The locations of the interviewees were geographically disparate, requiring five of the
interviews to be conducted using Skype, which gave the opportunity to observe body
language and facial expressions (Bryman & Bell, 2007).
The interviews were sound recorded and later transcribed before being coded using
Nvivo software. The common themes were drawn together in memos (Miles &
Huberman, 1994; Dey, 1993), which facilitated a subjective, inductive analysis of the
data.
Table 1. Details of interview sample.
Pseudonym | Role / Position | Years in industry | Route into industry
John | Experience in consulting engineering in the UK, North America and Gulf states. | 24 | HND and BSc Civil Engineering
Alan | Experience in consulting engineering in the UK but with some time in Gulf States. | 40 | BSc Civil Engineering
Phil | Experience in consulting engineering in the UK prior to last 3 years in China. | 27 | BSc Civil Engineering
Paul | Experience in consulting engineering in the UK. | 27 | BSc Civil Engineering
Adam | Experience in consulting engineering in the UK with some time in KSA and USA. | 33 | BSc Civil Engineering
Richard | Experience in consulting engineering in the UK. | 29 | BSc Civil Engineering
Glen | All work experience in steel fabricators in the UK. | 30 | Apprenticeship and ONC

FINDINGS OF INTERVIEWS
Positive aspects of computers


There was an overwhelming belief that computer technology has brought some
distinct advantages to the industry (John, Phil, Adam, Richard). The general view
was that much more complex structures were now being analysed and modelled that
would have been impossible to design in a pre-digital world. ‘The likes of Gehry
buildings and Zaha’s buildings and that is purely enabled by technology’ (Adam).
This view was echoed by Phil who interestingly provided the caveat that ‘it’s very
hard to always bring it back to something that’s real’.
The ability of computers to remove the drudgery of hand calculations was seen as an
improvement in the design process (Phil). Similarly, the ability to understand the
spatial interactions of buildings in a 3D environment was seen as an advantage, not
only for the engineers but also to explain the engineering principles to the architects
(Richard).
Over-reliance on computers
That most graduate engineers are over-reliant upon the use of computers was a point
raised by several of the engineers. As one participant put it: 'In the olden days you
would have liked to rationalise the structure…but now they just tend to put it straight
into the computer and believe the results' (Alan). This sentiment was echoed: 'Their
natural instinct is always to go to the computer' (Richard).
Several of the engineers had concerns that graduates were feeding designs into
computers with little understanding of what the expected output would be (John).
One respondent suggested that ‘…people put it in the computer, don’t really know
what’s going on and then believe the output’ (Alan). This is not a view shared by all
the engineers as Adam believes such concerns are now outdated. He emphasised his
belief that: ‘the computer is the tool that helps you understand it…Modern
technology allows you to challenge the structure more easily and I think that can
give you a better understanding of structural behaviour rather than the old fashioned
way would...' (Adam). Paradoxically, this is not a view shared by Richard, who
explains that ‘it is quite difficult to try and get younger graduate engineers to
actually interrogate what comes out of the computer because they don’t know how to
interrogate it’ (Richard).
The importance of good mentoring was raised by several of the interviewees (John,
Paul, Richard). Richard was quite clear in his belief that it is important for
experienced engineers to pass on their knowledge and experience (Richard).
Repeatedly, Paul highlighted the potential problem that inexperienced engineers
could work for too long without the input of more experienced engineers (Paul). It
was interesting that he also stated that the nature of the questions asked by graduates
has changed as they are ‘… about how to fly the machine which aren’t about
engineering. In the days of hand calcs I suppose it [questions] was more about
engineering’ (Paul).

Precision and accuracy


Issues relating to precision and accuracy were raised by four of the consulting
engineers, three of whom suggested that it was a potential problem for inexperienced
engineers (Alan, Adam, Richard).
Three arguments are put forward by the engineers to explain why the level of
accuracy sought by graduates is not appropriate: (1) site conditions, (2) buildability
and tolerances, and (3) design assumptions.
Alan, who is working as the site team leader for the design team on a large complex
project in the Middle East, raises the issue of site conditions stating that the computer
generated accuracy is inappropriate “…particularly when you see the work carried
out on site” (Alan).
The issue of buildability and tolerances is raised by Richard, who highlights the issue
that technicians constructing 3D models often have “…a lack of understanding of
tolerances. A lack of understanding of how things fit together.” (Richard). He
contends that a lack of understanding also leads to the creation of details that, whilst
buildable in a digital world, are impossible to construct in the real world.
However, a view that differs from the previous thoughts is expressed by
Phil, who suggests that a computer must be accurate: 'It can only be accurate; it
cannot give you a vague answer.' He goes on to argue that checks must ensure the sum
of the resultants equals the loads applied, and these checks must be accurate. Any discrepancy,
however small, could be a symptom of a much larger problem with the analysis
model which must be explored and resolved. Whilst requiring accuracy for the
analysis phase, it is interesting that this point does not in fact contradict the three
main points raised above.
Good Engineering Practice /Engineering Philosophy
There was widespread recognition that the fulcrum of engineering is the ability to
solve problems (Phil, Alan, Adam). Phil went on to state that the ability to complete
a structural analysis using a piece of software was not, in his mind, problem solving.
Indeed, the use of computers in such a way, linked with BIM modellers who were
experts in IT technology as opposed to construction, was perceived to be a potentially
dangerous situation: 'I think we will end up with some very unsafe designs and there
will be a major failure’ (Richard). It is, therefore, important to recognise the role that
experienced engineers have in mentoring and teaching younger engineers (Richard).
The essence of problem solving would be using an engineering mind (Alan) to
identify what should be analysed, how and using what software (Phil).
Adam’s position encompassed these views as he suggested that engineers must be
able to break down complex structures into more manageable pieces or simplify the
structures using approximations. By undertaking this process it should be possible to
produce analysis models that could be constructed in reasonable timeframes but
would give solutions of an acceptable accuracy considering the approximate nature
of building design (Adam).


The persistent use of computers can also affect graduates' ability to understand
the underlying engineering principles within their designs. ‘You see evidence of this
with jobs that come out of the office designed by young guys and you wonder how
much raw engineering has gone into it.’ (Alan). Alan also suggests that in the pre-
digital world engineers would have rationalised a structure using engineering
judgement (Alan). This is a view echoed by Phil, who recalls that complex structures
were previously broken down into smaller more manageable sections allowing
engineers to understand the structure. He suggests that ‘you might be losing the
ability to see what’s going on’ (Phil) by analysing the entire structure digitally. This
is often lost on some younger engineers who ‘feel that they are good engineers
because they can do the software analysis…rather than taking responsibility for the
solution' (John).
Glen believes that the quality of engineering he witnesses as a steel fabricator is
probably at the same level as in the pre-digital era. However, the design is often
poorly transmitted on the drawings, requiring a significant number of questions to be
raised with the engineers to fully understand the design. Worryingly, these omissions
often include critical information germane to temporary stability.
Experience and feel
Not surprisingly, experience was discussed by all the engineers. Whilst experience is
the generally understood phenomenon of gaining knowledge through witnessing or
undertaking specific actions, feel is somewhat more subjective. Feel was raised by
four of the engineers several times in the context that designers can have an almost
innate ability to understand the structural actions of a system and/or the magnitudes
of elements within detailed designs. ‘…you still get that gut feel if they are right…in
some respects it’s feel more than anything else’ (Adam). The ability of younger
engineers to relate a 3D model to reality was also questioned (Richard, Phil, John).
The 3D visualisation of models provides a benefit in understanding how buildings fit
together (Paul). This alone was not considered sufficient for other engineers. Phil
questioned the engineer’s ability to truly understand what the model represents if you
are lacking experience. ‘…if you don’t have the grounding of the nuts and bolts of
how it all fits together it must be quite hard because it's all so abstract.' Richard,
questioning the lack of experience stated that ‘… you need the experience behind it.
You still have to relate that 3D model to reality’ (Richard). He pointed out that ‘Even
though you have a 3D model you wouldn’t necessarily perceive that risk unless you
have that experience’ (Richard). This view was echoed by others who argued that if
you did not have explicit experience of how elements should be connected in theory
and on site, issues were going to be missed (John).
DISCUSSION
As expected, the overwhelming view of the interviewees was that computer
technology has brought significant benefits to the industry by removing the drudgery
of hand calculations and by facilitating the design of geometrically complex
buildings. The latter point raises a wider question as to whether these buildings have
architectural merit, or, are some of them simply designed because they can be?


There is a clear predilection among younger engineers to use digital technologies to
solve engineering problems. However, what is less clear, and questioned by some of
the interviewees, is whether these younger engineers have the ability or knowledge to
apply sound engineering principles. The apparent lack of understanding of tolerances
and the search for accuracy, disproportionate to the world of construction, leads to
the conclusion that they lack this level of experience. This phenomenon is clearly
demonstrated by the roughness of the foundation excavation compared to square
edges shown on the extract of the engineer’s drawing in Figure 1.

Figure 1. Actual foundation excavation and extract of the engineer's drawing


There also appears little willingness to dismantle structures into manageable and
understandable sub-structures that can then be re-assembled to form the complete
structure that is understood.
The need for mentoring by more experienced engineers is now a clear requisite for
younger engineers. It is questionable how much, if any, tacit knowledge is being
transferred via the software suites. Most of the questions asked by younger engineers
relate to the operation of the software. When combined with their belief that they are
good engineers because they can analyse and model a structure, this leads to the
conclusion that they are missing the fundamentals of the profession. Good designs
are often produced by the synergy of engineers having social interaction, sketching,
drawing and modelling, not by individuals working in isolation on a computer.
These problems are compounded by modellers who are IT experts but have little
detailed knowledge of building construction. If inexperienced engineers are responsible
for the information being issued, it is not surprising that the level of detail on the
drawings will sometimes be incomplete or irrelevant.
It should be noted that there are examples of good knowledge transfer being
incorporated into software applications. Cooke et al. (2008) demonstrate the potential
offered by argument trees to impart knowledge during the design process.
Additionally, many businesses are utilising knowledge management software to share
and disseminate knowledge within their organisations. The findings of this study
suggest that these practices should be encouraged and expanded where possible.

CONCLUSION
Digital technologies are now central to the working practices of building design
offices. Whilst major benefits have been manifest, it is also apparent that a significant
number of unintended impacts have surfaced. The default position of younger
engineers to automatically turn to the computer, without the underlying experience to
understand the structural issues, is leading to inappropriate designs being produced.
These designs can lack the application of sound engineering principles and become
over concerned with inappropriate levels of accuracy. The representation of the
design is often being lost due to the lack of experience of the engineers and
technicians who now focus their attention on software operations rather than building
construction.
There is a clear and acknowledged need for mentoring of younger engineers and
technicians to ensure that un-coded tacit knowledge is passed on before dangerous
situations arise on site.
REFERENCES
Alexander, C., (2000), Notes on the synthesis of form, Harvard University Press,
London.
Blockley, D., (1980), The nature of structural design and safety, Ellis Horwood Ltd,
Chichester.
Bryman, A., Bell, E., (2007), Business research methods. Oxford, Oxford University
Press.
Cooke, T., Lingard, H., Blismas, N., (2008), ToolSHeD™: The development and
evaluation of a decision support tool for health and safety in construction
design, Engineering, Construction and Architectural Management, vol.15,
no. 4, pp. 336-351.
Dey, I., (1993), Qualitative data analysis. A user friendly guide for social scientists.
London, Routledge
Goldschmidt, G., (1994), On visual thinking: the viz kids of architecture, Design
Studies, vol.15, no.2 pp. 158-174
Hayne, G., Kumar, B., Hare, B., (2014) The Development of a Framework for a
Design for Safety BIM tool. Computing in Civil Engineering and Building
Engineering pp. 49-56
Henderson, K., (1999), Online and on paper, Visual representations, visual culture,
and computer graphics in design engineering, The MIT Press, Cambridge
MA
Miles, M., Huberman, A., (1994), Qualitative data analysis.London, Sage
Publications Ltd
Perrow, C., (1999 [1984]) Normal Accidents:Living with high-risk technologies.
Princeton University Press, Princeton, New Jersey


Simon, H. A., (1982), The sciences of the artificial. The MIT Press, Cambridge MA
Weick, K., (1985), Cosmos vs Chaos: Sense and nonsense in electronic contexts,
Organisational Dynamics, vol. 14, no. 2, pp51-64
Whyte, J., (2013), Beyond the computer: Changing medium from digital to physical,
Information and organization, vol. 23, pp. 41-57
Wilpert, B., (2007), Psychology and design processes, Safety Science, vol. 45,
pp.293-303.
Winograd, T., Flores, F., (1987), Understanding Computers and Cognition, A New
Foundation for Design, Addison-Wesley Publishing Company Inc,
Reading Massachusetts
Zhou,W., Whyte,J., Sacks,R., (2012), Construction safety and digital design: A
review, Automation in Construction, vol.22, pp.102-111.


Learning about Performance of Building Systems and Facility Operations through a
Capstone Project Course

Miguel Mora1; Semiha Ergan2; Hanzhi Chen1; Hengfang Deng1; An-Lei Huang1;
Jared Maurer1; and Nan Wang1
1Department of Civil and Environmental Engineering, Carnegie Mellon University,
Pittsburgh, PA 15231-3890. E-mail: [email protected];
[email protected]; [email protected]; [email protected];
[email protected]; [email protected]
2Department of Civil and Urban Engineering, NYU Polytechnic School of
Engineering, Brooklyn, NY 11201. E-mail: [email protected]

Abstract

Monitoring and analysis of energy consumption and performance of heating
ventilation air conditioning (HVAC) systems in facilities can give insights about
building behaviors for improving facility operations. However, due to the custom
sensor infrastructure needed to monitor building performance parameters, additional
costs would be incurred by owners to instrument facilities and their systems.
Detailed analysis of building performance in highly sensed facilities can give insights
about the behavior of similar facilities that have no budget to have such sensing
infrastructure. This paper provides the analysis of energy consumption and HVAC
performance data acquired in a minute interval in a highly sensed building through
the comparison of several data mining algorithms, such as weighted least square linear
regression, random forest and K-nearest neighbor. The findings include the
performance of such algorithms in identifying patterns and give insights about the
suitability of such algorithms in predicting the energy use and system performance in
similar buildings with no sensor infrastructure.

Keywords: Building behavior; Data mining algorithms; Data management; Sensor
infrastructure; HVAC

INTRODUCTION
Monitoring and analyzing energy consumption and performance of heating
ventilation air conditioning (HVAC) systems in facilities can give insights about
building behaviors for improving facility operations. However, due to the custom
sensor infrastructure needed to monitor building performance parameters, additional
costs and design changes would be needed to fully instrument facilities. Detailed
analysis of building performance in highly sensed facilities can give insights about
the behavior of similar facilities that have no budget to have such sensing
infrastructure. Through a capstone project course, students had access to sensor data
from a highly sensed facility and they evaluated the performance of several
algorithms in predicting the energy consumption in order to propose a prediction
model that can be used in similar buildings with similar climate zones. The main
objective of the capstone project was to help students to implement the knowledge


they acquired in the core courses of their graduate program and to apply data analysis
approaches, to reinforce their knowledge on sensor and monitoring applications while
learning about energy use in buildings.
The project scope included analysis of energy consumption and HVAC
performance data in a highly sensed building and development of prediction models
for building system performance and energy consumption useful for similar
buildings. Since Air Handling Units (AHUs) are the primary components of HVAC
system for heating and cooling in buildings, the research team focused on their
parameters to predict energy consumption and assess their performance. A set of data
mining algorithms including weighted least square linear regression, random forest
and K-nearest neighbor was applied to the available datasets to develop energy
consumption prediction models. In order to incorporate the parameters related to
heating and cooling, the parameters have been grouped into environmental parameters,
Heating Ventilation and Air-Conditioning (HVAC) system parameters (e.g., air
supply, exhaust and recirculation), AHUs’ performance parameters, and spatial
parameters of the building/system.
The findings include the performance of such algorithms in identifying
patterns and they give insights about suitability of such algorithms in predicting the
energy use and system performance in similar buildings with no sensor infrastructure.
By comparing these algorithms, the team also aims to determine the influence of
building physical and spatial parameters on the predictions.

BACKGROUND RESEARCH
Previous research on this topic shows different ways to predict energy
consumption; some of them are based on physical parameters of the building while
others are data-driven (e.g., Mustafaraj et al., 2011). The former requires a broad
knowledge of a building and its components (e.g., Tariku et al., 2010) while the latter
does not require that knowledge and uses algorithms based on sensed and measured
data to create forecasts.
The capstone project was focused on the data-driven approaches to implement
and consolidate learnt knowledge in previous graduate courses as well as to have
hands-on experience with a large volume of sensor data. Supervised and classification
algorithms as well as machine learning algorithms were applied based on the
knowledge acquired in the graduate classes and from previous research studies.
On the supervised side, because of its simplicity, regression analyses were
mainly utilized in the literature for similar purposes (e.g., Dong et al. 2005). Such
studies try to predict a target value based on a given set of parameters. The simplest
used regression is the multivariate linear regression, which is a generalization of the
linear regression for more than one input parameter. Complex regressions, such as the
weighted least square regression, which apply different weights to each parameter
(Bishop, 2006) are also available and tested for their performance in predicting
energy use in buildings as part of this project.
On the classification side, the algorithms that are available and mainly utilized
in similar studies in the literature were k-Nearest Neighbor (k-NN) and random
forest. The former is an instance-based algorithm, which means it classifies new data
by recombining the defined training dataset (Brown et al., 2011). It is commonly used


in the fields of finance, hydrology and earth science applications, but it is also found
useful in forecasting electricity loads (e.g., Al-Qahtani and Crone, 2013). Random
forest is an ensemble method that can predict class labels for unseen data by
aggregating sets of classifiers (predictors) learned from the training data (Breiman
and Cutler, 2004). The essence of the random forest algorithm is random selection of
independent variables in the tree-growing process, reducing the correlation between
trees (Breiman and Cutler, 2004).
Finally, on the machine learning side, artificial neural network (ANN) was
utilized as it is one of the most commonly utilized data-driven methods for energy use
prediction. It has an average success ratio between 90–99% and it can identify
interconnected patterns from historical data and handle multiple objective
functions/results. It is also a black-box approach, which indicates the fact that it is
hard to know what the generated model looks like (Liu and Garrett, 2014).

AVAILABLE SENSOR DATA

The building is a three-story, hundred-year-old repurposed office building with
around 60,000 square feet of conditioned space. The HVAC system included three
AHUs with fan airflow capacities of 11,340, 21,400, and 20,500 CFM respectively,
where each AHU conditioned a wing of the building (i.e., east, south and north), and
three condensing units (CUs) (Xu, 2012).

Figure 1. HVAC system primary components: Ducts, AHUs, and CUs are colored
according to the zone they serve. The figure only shows the supply air ducts for
simplicity
Data from sensors was collected every one minute, except for the hot water
boiler, which was measured every 15 minutes. Thus, in order to have a consistent
time interval within the datasets, data was adjusted to 15 minute time intervals.

DATA ANALYSIS

In order to perform a systematic data analysis, the capstone project was
divided in three steps, which included the selection of input parameters, development
of the prediction models, and evaluation of the outputs, as shown in Figure 2.
The rationale of the project team and the project progress in each step is detailed in
the following subsections.


[Figure 2 content: input parameters (operational: 27, environmental: 4, physical: 10) feed the
prediction models (multivariate regression, artificial neural network, random forest, k-nearest
neighbor, weighted least square); outputs are energy consumption (kWh) for individual AHUs, all
AHUs, and per zone and total building; model selection uses mean absolute error and root mean
square error.]

Figure 2. A schematic layout of the implemented workflow

Step 1: Selection of input parameters: The first step, “selection of input parameters”,
specified the data that would be used in each algorithm to develop the prediction
models. The parameters have been defined based on the collected data: building
physical, operational, and environmental parameters.
Building physical parameters are information about building layout, exterior
enclosure and mechanical equipment that has a direct impact on the energy use in
buildings. These include: conditioned space area, glazing area, external wall area,
relative compactness, glazing/volume, VAV/space, VAV/volume, VAV
capacity/Volume, external wall area/volume, and AHU capacity/volume. The values
for this set of parameters were calculated using the drawings and the building
specifications. Since these parameters cannot be separated by zone, they were only
used in the prediction models that included all the building parameters. These
parameters were identified from a detailed literature review and detailed previous
research studies that evaluated the impact of each building physical parameter on the
overall energy consumption in buildings (e.g., Tariku et al., 2010).
Operational parameters are the sensed operational data from the major
equipment in the building. There were around 27 sensed operational parameters in the
building depending on the equipment (condensing unit, AHUs, boilers, etc.).
However, not all parameters were consistently measured in each AHU. Hence, in
order to provide a consistent dataset, the team selected the same measured parameters
among the three AHUs, which are: the supply/return air temperature, the supply/
return static pressure, the fan speed, the hot water coil flow rate, the relative humidity
of the supply/ return air, the condensate flow rate, the supply/return hot water
temperature, the damper position, the mixed air temperature and relative humidity,
the static pressure after filter, the fresh air flow rate, the status of the heating valve,
the average fresh air temperature/velocity, the static pressure after heat, the average
supply temperature of the AHU, the average supply velocity of the AHU, the hot
water flow rate for heating, the CO2 concentration of the return air, the static pressure
after cooling, the supply flow rate and the CO2 concentration of the supply air.
Environmental parameters refer to the outside and inside ambient temperature
and humidity measurements that play a key role in energy use in buildings. The
indoor and outdoor temperature and relative humidity parameters were measured
every 15 minutes with sensors located inside and outside of the building.


The environmental and operational parameters were obtained directly from
the sensors, whereas the physical parameters of the building were calculated from the
blueprints, specifications and schedules of the project. As it was described before,
only parameters related to energy consumption of the AHUs were selected among all
the parameters sensed and measured in the building. Nearly one year of data was
selected and grouped into seasons, as shown in Table 1.

Table 1. Data points and seasons


Season Months included # of data points
Spring Mar 2013 to May 2013 2976
Summer Jun 2013 to Aug 2013 11712
Fall Sep 2013 to Nov 2013 5856
Winter Dec 2013 to Feb 2014 10000

All of these parameters were stored as input and output datasets and separated
into training and test datasets. This division was done to be able to cross validate the
models with the test datasets and assess the performance of the forecast (Bishop,
2006). The training set was used to determine the coefficients of the models related to
the parameters and the second set was used to test the model and analyze how the
model fits or is able to predict the test data (Bishop, 2006). The parameters measured
for AHUs every 15 minutes are shown in Table 2. The table also shows instances of
data from spring data set. The data for training and test datasets were selected
randomly in order to fully represent the whole datasets. Several tests were performed
with partitions from 50/50 to 70/30 to find out the optimal splitting ratio. The
splitting ratio of 50/50 gave the best prediction performance and was chosen for the
datasets.
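A minimal sketch of this 50/50 random split, assuming the 15-minute records have been assembled into a single table (the file and column names below are hypothetical):

# Sketch of the 50/50 random train/test split described above.
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("ahu_15min_dataset.csv")   # hypothetical 15-minute dataset
X = data.drop(columns=["energy_kwh"])         # input parameters
y = data["energy_kwh"]                        # energy consumption target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, shuffle=True, random_state=0)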

Table 2. Parameters measured for AHUs, with example values from AHU3
during spring time measurements
Return Air Temp | Supply Air Temp | Mixed Air Temp | Status Heating Valve | Reference Pressure | Return Air CO2 Conc | Supply Air CO2 Conc | Return Static Press | Supply Static Press
71.39 | 56.25 | 72.85 | 20.74 | 0.26 | 516.13 | 517.48 | -0.80 | 1.05
70.60 | 55.36 | 72.17 | 20.74 | 0.26 | 509.65 | 517.44 | -0.52 | 0.86
Static Press after Filter | Static Press after Heat | Static Press after Cool | Return Air RH | Supply Air RH | Mixed Air RH | Fan Speed | Condensate Flow Rate | HW Coil Flow
-0.88 | -0.93 | -1.22 | 47.88 | 75.38 | 49.67 | 0.00 | 0.00 | 0.00
-0.54 | -0.57 | -0.78 | 46.38 | 83.78 | 48.91 | 758.0 | 0.00 | 0.00
HW Temp Return | HW Temp Supply | HW Heating | Fresh Air Velocity | Fresh Air Temp | Fresh Air Flow | Avg Supply Velocity | Avg Supply Temp | Supply Flow
72.14 | 72.43 | 0.00 | 142.95 | 82.16 | 1786.86 | 433.70 | 72.79 | 16286.46
72.00 | 72.42 | 0.00 | 110.46 | 81.66 | 1380.73 | 356.14 | 72.16 | 13373.87
Outside RH | Outside Temp | Avg. Indoor Temp | Avg. Indoor RH
57.74 | 81.36 | 72.25 | 51.54
59.67 | 80.79 | 72.24 | 48.44

Step 2: Development of the prediction models: There are several approaches/
algorithms to predict energy consumption in buildings. As detailed in the
background section of this paper, among all the available algorithms, five of them
were suitable for the available data set and the analysis. For each one of these
algorithms, 32 models (8 different models for each of the 4 seasons) were trained using the
three groups of parameters described earlier: with only operational and
environmental parameters for each AHU and for all AHUs, and with all parameters for
each AHU and for all AHUs. Due to space limitations, the fitted models will not be
provided but the results of tests will be detailed. For the total energy consumption, the
equipment considered is: AHUs, boilers and condensing units (CUs). Boiler energy
consumption (i.e., cubic-feet of natural gas use) for each zone is calculated by
totaling the heat draw of all components in each zone and dividing each by the total
heat draw of all components in the building.
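The zone allocation rule for boiler gas use can be illustrated with a small sketch; the numbers are made up, and only the proportional heat-draw split described above is taken from the paper.

# Sketch of the proportional heat-draw allocation of boiler gas use to zones.
zone_heat_draw = {"east": 120.0, "south": 150.0, "north": 130.0}  # made-up heat draws per interval
total_heat_draw = sum(zone_heat_draw.values())
boiler_gas_use = 950.0                                            # cubic feet in the same interval

zone_gas_use = {zone: boiler_gas_use * draw / total_heat_draw
                for zone, draw in zone_heat_draw.items()}
print(zone_gas_use)   # e.g., the east zone is assigned 120/400 = 30% of the gas use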
Multivariate linear and weighted least square regression algorithms: These
algorithms were applied to the input datasets trying to find out the relations between
the parameters to predict the energy consumption as an outcome.
K-nearest neighbor (k-NN) algorithm: The model assigns the outcome
(energy consumption) by the majority vote among the k nearest neighbors according to
the Euclidean distance between test and training data (Wu et al., 2008). In this project, k-
values of 1, 4, 5, 6, 7, 9, 11, 21, and 31 were experimented with to fit models, and it was
identified that k = 5 was the best value, avoiding over- and under-fitting.
Random forest algorithm: The algorithm follows the principle of tree bagging
in order to aggregate many binary decision trees built on bootstrap samples,
representing uniformly drawn groups of data sets from the training data set. In order
to achieve the optimal performance, the number of chosen trees for this project was
200 as suggested in previous research studies (e.g., Breiman and Cutler, 2004).
Artificial neural network (ANN): An important step for generating an ANN
based model is to decide on the number of neurons in the input, hidden and output
layers. The input and output layers contain as many neurons as defined in the
input/output parameters. Based on the rule-of-thumb methods for determining the
correct number of neurons to use in the hidden layer (e.g., Heaton, 2014), three
hidden layers were defined, each one with twenty neurons.
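A compact sketch of these five model families follows, using scikit-learn and statsmodels stand-ins for the tools applied in the project. The training arrays are assumed to come from the split sketched earlier, and the uniform per-observation weights in the weighted least squares example are a placeholder, since the project's exact weighting scheme is not specified.

# Sketch of the five Step 2 model families (scikit-learn / statsmodels stand-ins).
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

models = {
    "multivariate_linear": LinearRegression(),
    "knn": KNeighborsRegressor(n_neighbors=5),                                # k = 5, as selected above
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0), # 200 trees
    "ann": MLPRegressor(hidden_layer_sizes=(20, 20, 20), max_iter=2000),      # 3 hidden layers x 20 neurons
}
# X_train and y_train come from the train/test split sketched earlier.
fitted = {name: model.fit(X_train, y_train) for name, model in models.items()}

# Weighted least squares with placeholder per-observation weights (statsmodels).
wls = sm.WLS(y_train, sm.add_constant(X_train), weights=np.ones(len(y_train))).fit()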

Step 3: Model comparison and the evaluation of the outputs: The models’
performance evaluation was done based on the mean absolute error (MAE) and root
mean square error (RMSE) when models were evaluated using the test datasets
(Bishop, 2006). They are used together to assess errors and variations between the
model predictions and the actual outcomes.
In this step, the high performing models are selected and outputs are evaluated for the
defined zones with and without the impact of physical parameters. The averaged
results of the error calculations for the high performing models are provided in Figure


3. They show that the Random Forest based prediction model has the lowest MAE and
RMSE, and hence it was chosen as the most accurate model to predict the energy
consumption of the building. One possible reason for its highly accurate prediction
results is its random selection of independent variables during the tree-growing
process, which reduces correlation between them. The ANN based prediction model was
found to be the second best, with slightly higher MAE and RMSE values. Its accuracy
can be attributed to the adjustments of weights done by the algorithm to create
connections between neurons. The multivariate linear regression based prediction model
performed well and, because of its simplicity, effectiveness and lower computational
requirements, it can also be considered for energy consumption predictions. Finally,
based on MAE and RMSE values, k-NN and weighted least square regression are not
recommended for energy prediction models.
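Continuing the sketches above, the Step 3 comparison can be expressed as follows (X_test, y_test and the fitted models come from the earlier sketches):

# Sketch of the Step 3 evaluation: MAE and RMSE on the held-out test set.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

for name, model in fitted.items():
    pred = model.predict(X_test)
    mae = mean_absolute_error(y_test, pred)
    rmse = np.sqrt(mean_squared_error(y_test, pred))
    print(f"{name}: MAE = {mae:.3f}, RMSE = {rmse:.3f}")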

Figure 3. Season averages of RMSE and MAE (B: Boiler, CU: Condensing Unit)

Among all AHU units, the prediction performance for AHU2 was found to be
the worst, as its prediction error is considerably higher than that of the other two. However,
the AHU2 unit constantly consumes nearly 20% more energy than AHU3 while the space
volume and other building physical parameters of the zone are nearly the same. It is
also noticeable that the prediction result is consistently lower than the actual
consumption. Based on these results, it can be presumed that there are some energy
inefficiencies contributed by equipment or system losses.
The prediction error using all AHUs with additional physical parameters is
generally higher than the prediction error by using each individual AHU. In addition,
prediction of energy consumption using AHUs with condensing units and boilers is
worse than the energy consumption prediction using only the AHU data. This makes
the team believe that building control systems influences more the energy
consumption than the physical parameters.

CONCLUSIONS
This capstone project provided opportunities for graduate students to
understand the implications of data mining methods and their performance in


predicting the energy use of AHUs. The students had hands-on experience with
sensor data and applied their data analysis knowledge to build prediction models in
order to forecast energy consumption in similar buildings that share the same climate
zone and functionalities with the studied building.
In this capstone project, various prediction models were tested and proved to
be a useful tool for energy consumption prediction. Among the studied models and
based on MAE and RMSE, Random Forest had the best performance and it should be
useful for forecasting energy consumption of AHUs with similar capacities running in
buildings with similar heating and cooling loads and comparable physical parameters.
Further analyses with the well-performing algorithms should be done as cross-
validations with other buildings, with a larger set of physical parameters, with
multiple years of data, and with different building control logic in order to improve the
prediction models.
REFERENCES
Al-Qahtani, F.H., and Crone, S.F. (2013). “Multivariate k-nearest neighbor regression
for time series data — A novel algorithm for forecasting UK electricity
demand.” The 2013 International Joint Conference on Neural Networks
(IJCNN). Dallas, TX, 4-9 August 2013, pp. 1-8.
Bishop, C. (2006). Pattern Recognition and Machine Learning. Cambridge: Springer.
Breiman, L., and Cutler, A. (2004). “Random Forests.”
<https://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm>
(Dec2, 2014).
Brown, M., Barrington-Leigh, C., and Brown, Z. (2011). “Kernel regression for real-
time building energy analysis.” Journal of Building Performance Simulation,
Vol. 5, 263-276.
Dong B., Lee, S.E., and Sapar M.H. (2005). “A holistic utility bill analysis method
for baselining whole commercial building energy consumption in Singapore.”
Energy and Buildings, 37(2), 167-174.
Heaton, J. (2014). “A feed forward neural network.”
<http://www.heatonresearch.com/articles/5/page2.html> (Dec. 2, 2014).
Liu, P., and Garrett, J. (2014). “Computer-based approaches for search & decision
support in civil infrastructure.” Lecture notes, Pittsburgh, PA: Carnegie
Mellon University.
Mustafaraj, G. J., Lowry, G., and Chen, J. (2011). “Prediction of room temperature
and relative humidity by autoregressive linear and nonlinear neural network
models for an open office.” Energy and Buildings, 43(6), 1452-1460.
Tariku, F., Kumaran, K., and Fazio, P. (2010). “Integrated analysis of whole building
heat, air and moisture transfer.” International Journal of Heat and Mass
Transfer, 53(15-16), 3111-3120.
Wu, X., Kumar, V., Quinlan, J., Ghosh, J., Yang, Q., Motoda, H., McLachlan, G.,
Ng, A., Liu, B., Yu, P., Zhou, Z., Steinbach, M., Hand, D., and Steinberg,
D. (2008). “Top 10 algorithms in data mining.” Knowledge and Information
Systems, Vol. 14, 1-37.
Xu, K. (2012). “Assessing the minimum instrumentation to well tune existing
medium sized office building energy models.” PhD. Thesis, Pennsylvania
State University.


Integrated Target Value Approach Engaging Project Teams in an Iterative Process of
Exploration and Decision Making to Provide Clients with the Highest Value
Renate Fruchter1; Flavia Grey2; Norayr Badasyan3; Sarah Russell-Smith2; and Fernando Castillo4
1M.ASCE, Director of PBL Lab, Civil and Environmental Engineering Department, Stanford University.
E-mail: [email protected]
2Graduate Student, Civil and Environmental Engineering Department, Stanford University. E-mail:
[email protected]
3Graduate Student, Construction Economics Department, Bauhaus University, Germany. E-mail:
[email protected]
4Engineer, DPR Construction. E-mail: [email protected]

Abstract
The design and construction industry has the responsibility to provide the client with the product
of highest value. This is a complex task. Project stakeholders typically propose discipline
specific optimal solutions and have many conflicting interests. To provide an overall successful
solution they need to correlate the discipline specific goals and constraints and resolve many
conflicts. These are resolved through a negotiation process in which each party typically takes a
self-interested position. An improved approach is to set discipline specific targets and develop
solutions that meet these targets. This paper presents three project delivery scenarios and their
impact on product value to the client: (1) no set target values; (2) designs that meet domain
specific targets - progressive cost estimating Target Value Design (TVD), Sustainable Target
Value (STV), and Life Cycle Cost (LCC); and (3) a preliminary integrated target value driven
approach, linking TVD, STV, and LCC, to engage all stakeholders in an iterative process of
exploration and decision making to provide the client with the highest product value. The AEC
Global Teamwork course offered at Stanford with partners worldwide is used as a testbed.

INTRODUCTION
The design and construction industry has the responsibility to provide the client with the product
of highest value that they can conceive and create. This is a complex task. Project stakeholders
typically propose discipline specific optimal solutions and have many conflicting interests that
need to be resolved before they can provide and deliver a high quality product. To provide an
overall successful solution they need to correlate the discipline specific goals and constraints and
resolve these conflicts. Owners, architects, engineers, and contractors constantly engage in many
negotiations. Generally, the negotiation process is one in which each party takes a self-interested
position and tries to seek agreement through a series of demands and concessions that often
result in decisions that favor the most powerful or outspoken party and not necessarily the
overall value of the project. An improved approach is to set discipline specific objective targets
and develop solutions that meet these targets. This paper presents three project delivery scenarios
and their impact on product value to the client: (1) no set target values; (2) designs that meet
domain specific targets; and (3) a preliminary integrated target value driven approach. More
specifically, the integrated target value driven approach focuses on linking and maximizing three
key aspects of the design and life cycle of a building - first cost of construction, sustainability,
and life cycle cost. The main objective of this approach is to engage all stakeholders in an
iterative process of exploration and decision making to provide the client with the highest value.


This study builds on practical and research points of departure grounded in three areas related
to progressive first cost Target Value Design (TVD), Sustainable Target Value (STV), and Life
Cycle Cost (LCC). Figure 1 illustrates three formalized project delivery scenarios that will be
further discussed in this paper: (a) The state-of-practice sequential and reactive design,
construction, operation and maintenance process, which does not provide the client with
quantitative evidence about how well the building will perform. (b) The state-of-the-art proactive
project delivery process in which domain specific target values are set. This engages team
members and client to collaborate in order to meet the set targets within each specific domain.
(c) An integrated target value driven project delivery approach that links TVD, STV, and LCC,
which engages all project stakeholders not only to meet the targets, but also to use the targets as a point of departure to further collaborate, explore alternatives, and make informed decisions based on
the correlation of design impacts on TVD, STV, and LCC to maximize the value for the client.

Figure 1: Project delivery scenarios:


(a) no set target values, (b) set domain specific target value design, (c) integrated target value design

We describe the points of departure, developed and integrated tools for TVD, STV, and LCC.
We illustrate the impact of the three project delivery approaches on the quality of the building
products with examples from the AEC Global Teamwork education testbed (Fruchter, 2006).
TESTBED AND METHODOLOGY
We used project data from the AEC Global Teamwork course to develop and test the TVD, STV,
and LCC framework and project delivery scenarios presented in this paper. The AEC student
teams engage in a project-based learning (PBL) experience focused on problem based, project
organized activities that produce a product for a client through a re-engineered process that
engages faculty, practitioners, and students from different disciplines, who are geographically
distributed. It has been offered annually since 1993 – January through May. It engages
architecture, structural engineering, MEP engineering, lifecycle-cost management (LCFM), and
construction management students from universities in the US, Europe, and Asia (Fruchter
2006). The AEC student teams work on a university building project. The project specifications
include: (1) building program requirements for a 30,000 sq. ft. university building; (2) a real-world
university campus site that provides local conditions and challenges for all disciplines, e.g. local
architecture style, climate, and environmental constraints, earthquake, wind and snow loads,
flooding zones, access roads, local materials and labor costs; (3) a budget for the construction of
the building, and (4) a time for construction and delivery. The project progresses from
conceptual development in Winter Quarter to project development in Spring Quarter. The teams
experience a fast track project process with intermediary milestones and deliverables. They
interact with industry mentors who critique and provide constructive feedback (AEC project
gallery http://pbl.stanford.edu/AEC%20projects/projpage.htm). We used the data of AEC global
student teams that we clustered into two groups: (1) control group represented by AEC teams
that participated in the course before the deployment of the TVD, STV, LCC tools, target value


design thinking and project delivery process, and (2) experimental group of AEC teams that had
access to the new tools and new project delivery processes, i.e. LCC deployed in 2006, TVD
deployed in 2011, STV deployed in 2013, and the preliminary integrated target value driven
approach TVD-STV-LCC deployed in 2014. The team members jointly determined the targets
for TVD and LCC for their specific projects; and the location specific targets for STV were
determined by our research team members and given to the 12 experimental AEC teams in 2013
and 2014. Note that in terms of STV, AEC teams prior to 2013 were considered control teams
and had a qualitative sustainability project requirement to design at least a silver LEED certified
“green building.”

TARGET VALUE DESIGN (TVD)


The industry state-of-practice is for architects to design the first concept of the project based on
the client’s specifications and guidelines. Then a cost estimate is generated based on the design
to determine the financial feasibility of the proposed project. In this scenario cost is a dependent
variable to design. This often results in designs that exceed the owner’s budget and requires
numerous redesign iterations until the cost is acceptable to the client. Target value design (TVD)
addresses this problem by making design and cost interdependent variables and engages the
project stakeholders in a continuous collaborative effort to determine target costs and use those
targets as a basis for design. By incorporating cost as a factor in design, TVD effectively
mitigates the risk of generating a design that exceeds the owner’s budget. Target costs have
proven to be an effective method to reduce project costs (Ballard, 2008).
Our approach builds on practical and theoretical points of departure that include: observations
from state-of-the-practice applications of TVD in the AEC industry and the Nine Foundational
Practices for TVD (Macomber and Barberio, 2007). The Lean Construction Institute defines
Target Value Design as a practice which incorporates cost as a factor in design to minimize
waste and create value (Tiwari et al., 2009).
We further expanded the TVD framework and developed a software tool in MS Excel to
address several key aspects: explore the relationship between cost and value; clarify the impact
of TVD for design, construction, and the project’s life-cycle; improve the methods to track and
present key data; incorporate a cost estimating-data reliability tracking system to identify areas
of potential problems or opportunities that need to be addressed by the team (Castillo Cohen and
Fruchter, 2012). In our TVD approach we explored the distinction between cost and value. In all
the observed industry case study applications of TVD, the focus was on minimizing waste to
reduce first cost and increase profit, and not necessarily on maximizing the value for money to the
client. This represents today’s state-of-practice being used by progressive companies in the
industry. In this scenario a target profit, as opposed to a target value, is used to inform design
and construction decisions. To bring value into the equation, the TVD framework and tool we
developed in MS Excel establishes a qualitative relationship between cost and value by
requesting the client to rate the importance of different project aspects. The client rating is
equated into the overall target for the project. Then the team establishes initial reference targets
for each sub-system in the building based on early conceptual estimates and data from previously
completed projects that were similar in scope. The team refines the targets for each key sub-
system in the building based on the client’s rating and the particular goals for the project. More
importantly, the team uses any surplus between the estimated cost and the overall target to
further improve the aspects of top priority for the client. The key difference between our
proposed TVD approach and the current industry practice is that the industry is focused on


maximizing profit for the design and construction team while our proposed method focuses on
maximizing value for the client. Given that maximizing profit is a key goal for design and
construction companies and maximizing value is a key goal for clients, the challenge rests in
aligning the financial incentives so that both goals can be simultaneously realized.
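
To make the target-setting and delta-tracking arithmetic concrete, the following minimal Ruby sketch illustrates the kind of calculation involved. It is not the course tool (which was implemented in MS Excel), and every cluster name, client rating, and dollar figure in it is hypothetical.

# Minimal sketch of TVD target allocation and value-delta tracking.
# Not the actual MS Excel tool; all figures are hypothetical.
overall_target = 12_000_000.0   # overall construction cost target ($)

# Reference estimates from comparable past projects and client importance ratings (1-5)
clusters = {
  "Structure"      => { reference: 3_500_000.0, rating: 3 },
  "Exterior walls" => { reference: 2_800_000.0, rating: 5 },
  "MEP systems"    => { reference: 3_200_000.0, rating: 4 },
  "Interiors"      => { reference: 2_900_000.0, rating: 2 },
}

# Scale the reference estimates so the sub-system targets sum to the overall target.
reference_total = clusters.values.sum { |c| c[:reference] }
clusters.each_value { |c| c[:target] = c[:reference] * overall_target / reference_total }

# Current estimates from the latest design iteration.
estimates = { "Structure" => 3_400_000.0, "Exterior walls" => 3_100_000.0,
              "MEP systems" => 3_000_000.0, "Interiors" => 2_700_000.0 }

surplus = 0.0
clusters.each do |name, c|
  delta = c[:target] - estimates[name]          # positive delta = under target
  surplus += delta
  puts format("%-15s target %11.0f  estimate %11.0f  delta %+10.0f",
              name, c[:target], estimates[name], delta)
end

# Any overall surplus is a candidate for reallocation to the clusters the client rated highest.
best = clusters.max_by { |_, c| c[:rating] }.first
puts format("Overall value delta %+.0f; highest-priority cluster: %s", surplus, best)

In the actual process the reallocation of any surplus is a team decision made with the client, not an automatic step; the sketch only shows where the numbers that drive that discussion come from.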
A critical success factor is how the data is analyzed and used by the project team to inform the
design process, as well as how the team members interact with the data. The data should help the
team members understand the impact of their discipline decisions and help determine the next
steps the team should follow to meet the cost targets set for the overall building and each sub-
system. Figure 2 illustrates the most effective visualization of the updated and synthesized TVD
information that kept the AEC team members focused and engaged in the process. After rapid
prototyping cycles for the TVD graphic user interface we found the most effective way to engage
the team in the TVD process was to provide them with updated, simple, and clear visual
representations of the critical information.

Figure 2. TVD data visualization and TVD-driven exploration of subsystem alternatives


Figure 2 highlights the delta between the estimate and set cost target. The table at the top
shows the data for the overall project. The Value Delta and the “Tracking Target Over Time”
graph captures the attention of the team and initiates discussions to explore design changes to
meet the target. The “TVD – Targets by Cluster” top left bar graph shows similar data for each
of the key sub-systems in the building. This graph was essential to focus the attention on
specific sub-systems that had the most significant deltas and engage the team in further exploring alternatives, e.g., exterior walls and windows that impacted the STV and LCC.

SUSTAINABLE TARGET VALUE (STV)


The building construction industry represents more than 10% of US GDP. Buildings are the
largest consumer of energy and greatest contributor to climate change in the United States. They
consume approximately half of the energy produced and contribute close to half of greenhouse
gas emissions. To date there are few methods to explore alternative solutions and make informed
decisions regarding the life cycle energy and environmental impacts of building designs during
the concept development phase. By establishing site-specific sustainability targets and using life
cycle assessments (LCA), buildings can be designed to perform at higher environmental
standards than those designed without a set target. LCA represents a comprehensive approach to
assess the environmental impacts of an entire building. LCA is an internationally standardized
method that accounts for all inputs, outputs, flows within a process, product, or system boundary
to accurately quantify a comprehensive set of environmental, social, and economic indicators
(Finnveden et al., 2009). It aims to quantify the energy and material flows associated with each life cycle stage, from raw material extraction through material processing, manufacture, distribution, operation & maintenance, and end-of-life, for a product or service (Hunt and Franklin, 1996).
The goal of STV design is to reduce the environmental impacts of buildings throughout their
life cycle by setting targets for environmental indicators during the design phase. We chose as


indicators primary energy and water consumption, as major resources used during building operation, and global warming potential (GWP) and ozone depletion potential (ODP), as global pollutants (Table 1). The hypothesis was that setting site-specific targets for these indicators would lead to reductions of life cycle environmental impact. The site-specific sustainability targets were provided by the research team according to the construction location of each project, e.g., San Francisco and Los Angeles, California; Reno, Nevada; Madison, Wisconsin; San Juan, Puerto Rico; and Weimar, Germany (Russell-Smith et al., 2014a, 2014b). The proposed STV prototype integrates life cycle
assessment into building design and provides a set of environmental targets including greenhouse
gas emissions, water, and energy based on the ecological carrying capacity of the planet and
accounting for the full building life cycle. These targets are set to enable development of
building designs that perform at or below the site-specific STV targets.
Table 1. STV indicators

The objective was to make STV a team effort. The STV tool aims to engage all team members
in a continuous process of exploration and decision-making to design buildings to specified
environmental targets in order to reduce building life cycle impacts to sustainable levels. STV
makes explicit and transparent the design decisions in an iterative process. It does not require a complete design solution to evaluate how the building performs; rather, the team uses the STV tool as an exploration and decision-making tool, comparing different design options (e.g., materials, subassemblies, systems) and their impact on the different sustainability indicators.
The STV tool was developed in MS Excel. The project specific data inputs include information about the location of the construction site, the “Materials and Construction Phase” and “Use Phase,” on-site natural gas and electricity consumption, and on-site electricity generation that may offset power drawn from the grid. The STV tool automatically updates and displays the location specific targets and then calculates and displays the different impact categories by referencing the underlying dataset sheets. Based on the total building floor area, the STV tool calculates yearly water consumption using commercial building water use data. The results are shown on a dashboard that summarizes the building impacts and displays the aggregated impact values - CO2e emissions, Water, and Energy - in a triangle spider diagram. Each triangle ring outside the dashed triangle that represents the STV targets corresponds to an extra 100% above the target. This allows the team to visualize the environmental impacts of a specific building design in comparison to the site specific STV targets - CO2 emissions, Water, and Energy target values.
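
As a rough illustration of the dashboard arithmetic only (the STV tool itself is an MS Excel workbook backed by LCA dataset sheets, and all targets and impact values below are hypothetical), the aggregated impacts can be expressed as ratios of the site-specific targets, so that a ratio above 1.0 corresponds to a ring outside the dashed target triangle:

# Sketch of the STV dashboard comparison; structure assumed, values hypothetical.
floor_area_sqft = 30_000.0
water_use_rate  = 15.0            # gal per sqft per year, stand-in for commercial water-use data

site_targets = {                  # hypothetical site-specific STV targets
  "CO2e (kg/yr)"    => 250_000.0,
  "Water (gal/yr)"  => 400_000.0,
  "Energy (kWh/yr)" => 900_000.0,
}

materials_co2e = 120_000.0        # materials & construction phase (hypothetical)
use_phase_co2e = 160_000.0        # use phase: gas + grid electricity, net of on-site generation

design_impacts = {
  "CO2e (kg/yr)"    => materials_co2e + use_phase_co2e,
  "Water (gal/yr)"  => floor_area_sqft * water_use_rate,
  "Energy (kWh/yr)" => 1_050_000.0,
}

design_impacts.each do |indicator, value|
  ratio = value / site_targets[indicator]
  note  = ratio <= 1.0 ? "within target" : format("%.0f%% above target", (ratio - 1.0) * 100)
  puts format("%-16s %12.0f  (%.2f x target, %s)", indicator, value, ratio, note)
end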
Table 2. STV Project Performance – Experiment and Control Group Examples

12 AEC teams in the experimental group were provided the STV design targets and the STV
tool. 15 AEC teams in the control group did not have access to STV targets or tool but were
given a qualitative target of “green design” LEED silver building. Control and experimental
groups had the same physical building requirements on the same sites. The experimental designs


were within the targets, as were the control designs aiming for net-zero use-phase energy; however, projects with specific energy targets out-performed them. Table 2 illustrates typical project
examples of STV performance results of AEC projects from the experimental and control
groups. The results show that setting specific sustainable targets prior to design and providing
support resources that allow designers to iteratively improve and validate designs, reduces the
impact of the building for each environmental indicator studied as compared to control designs.

LIFE CYCLE COST (LCC)


Nowadays life cycle costing is an important component for the development of a higher quality
building with lower costs. Clients seek efficient and economic solutions covering the entire life
of a facility. Frequently, the public sector is interested in increasing the involvement of the private sector due to its flexibility and financial capacity; the private sector can take on different roles in the value chain of the facility lifecycle or become a shareholder or a provider of capital in different public-private partnership (PPP) models (Weber and Alfen, 2010). A PPP model was introduced in the AEC
global teamwork projects. The AEC student project team played the role of a private building
service provider that offers the client of the future university building better value for money
solutions, where the university is the public partner. The AEC teams were responsible for all the
life cycle services, i.e., design, construction, operation and maintenance (O&M), life cycle cost
(LCC) and financial management. Consequently, the AEC team needs to address targets for the
overall design, construction, sustainability, and set a target for minimum rent for the client that
can lead to a better value for money solution based on evaluating various LCC and financial
alternatives for the project. LCC has a significant role in optimizing the cost of acquiring, owning, and operating the physical assets by identifying and quantifying all the significant costs (Woodward, 1997). LCC can have a central role in the decision making process. It can identify the initial investments needed to optimize future costs of the buildings and find better design solutions that can have an impact during the life of the building, thus evaluating the total economic worth of a project. Sometimes lower construction costs lead to higher costs during the
life cycle of the building and, vice versa, higher investments in the initial phase can lead to a
more efficient O&M management of the building (Schade, 2007). It is important to consider that
higher production costs can decrease the total LCC of the building.
Life Cycle Cost (LCC) Target Value provides a total economic value framework. LCC targets
the possible minimum rent for the owners based on evaluating various life cycle costs and
financial alternatives for the project. The minimum rent is set to provide the owners with the best
value for money as a function of construction costs, energy consumption amount, and financial
characteristics. LCC can be considered as a decision making tool to address interdisciplinary
issues by focusing on construction, MEP, and financial solutions.
To find the appropriate amount of the investments in the initial phase that could influence the
LCC, AEC teams in 2014 used the integrated target value approach that linked TVD, STV, and
LCC, such that the impact of any design change was reflected in all three. This allowed the AEC
teams and client to see whether the targeted rent amount is sufficient to provide strong financial
ground for the AEC team as a private building service provider (e.g. River team 2014). More
specifically, the LCC focused on five indicators – construction cost, operation and maintenance,
replacement cost, risk surcharge, and interest cost. Figure 3 illustrates different energy and service system solutions that the River team explored and their impact on the building performance with respect to the targets, represented by the energy consumption of the building and the sustainability (STV), LCC, and TVD values. This integrated target value exploration for a better solution for the client


provides the justification for the rent. Rent, the annual amount paid by the university to the building provider, is the main source of income. The target rent plays a key role in the calculation of the net operating income in order to compare the efficiency of the building design options with the financial stability of the building service provider (in this case the AEC team) and the client’s desired building functionality. To determine the efficiency of any solution, the TVD, STV, and rent targets and Excel tools were linked to the cash flow model. This allowed for exploration, iterations, and informed decision making that engaged all project team members and the client.
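
The rent-target logic can be sketched in a similarly simplified way. The Ruby fragment below is only a stand-in for the linked Excel and cash flow models: it annualizes a hypothetical first cost with a standard capital recovery factor, adds the other four LCC indicators, and compares a targeted rent against the resulting minimum rent through the net operating income. None of the figures come from an actual project.

# Simplified sketch of a minimum-rent calculation from the five LCC indicators.
# Stand-in for the linked Excel/cash-flow models; all numbers are hypothetical.
construction_cost  = 12_000_000.0   # first cost ($)
annual_om          = 280_000.0      # operation & maintenance ($/yr)
annual_replacement = 90_000.0       # averaged replacement reserve ($/yr)
risk_surcharge     = 0.05           # surcharge on operating costs held against risk
interest_rate      = 0.06           # assumed cost of capital
horizon_years      = 30             # assumed service-provision period

# Capital recovery factor (standard annualization formula for the initial investment).
crf = interest_rate * (1 + interest_rate)**horizon_years /
      ((1 + interest_rate)**horizon_years - 1)
annual_capital = construction_cost * crf

operating    = (annual_om + annual_replacement) * (1 + risk_surcharge)
minimum_rent = annual_capital + operating       # rent that just covers life cycle costs

target_rent = 1_150_000.0                       # rent target set with the client (hypothetical)
noi         = target_rent - operating           # net operating income before debt service

puts format("Minimum rent:       %12.0f $/yr", minimum_rent)
puts format("Target rent:        %12.0f $/yr", target_rent)
puts format("NOI at target rent: %12.0f $/yr (annualized capital %.0f $/yr)", noi, annual_capital)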

Figure 3: Iterative process of exploration and decision making illustrating alternative Services Subsystems
and their impact on STV, LCC, and TVD based on an integrated target value design approach
Industry state-of-practice and the control AEC student projects used the allocated budget from the client as the metric. The architect, structural engineers, and MEP members designed the building; then the construction managers produced the cost estimates, and the LCFM proposed a rent that maximized profits and revenue. Figure 2 illustrates a case study typical of AEC team solutions after we deployed the TVD tool and process in 2011, in which the team pursued the second project delivery approach (Figure 1b), focused on meeting the set overall and sub-system cost targets


by exploring alternative solutions of subsystems. Figure 3 illustrates a case study in which the
team pursued the integrated project delivery approach (Figure 1c) focused on a decision process
that is based on design exploration and correlation of TVD, STV, and LCC.

CONCLUSION
This paper presents three project delivery approaches – design that did not have set targets,
domain specific target design, and integrated target value design. It illustrates the importance of
setting specific targets for construction costs, sustainability, and life cycle cost targets prior to
design by providing and integrating the respective assessment tools. These were developed by
leveraging and expanding LCA, target value design, and life cycle costing methodologies. They
transform the state-of-the-practice delivery process by engaging all stakeholders in determining
the targets, exploring alternatives, and making informed cost-benefit decisions based on
performance indicators for first cost, sustainability, and life cycle cost. Unlike generic metrics
such as a given construction budget or LEED point-rating, TVD, STV, and LCC enabled the
teams to understand the quantified cost and environmental performance improvements associated
with design decisions. Each design assessment iteration generated cost and environmental
performance data that the teams used to compare alternatives and inform decisions. Each of the 20 AEC teams that used these target value design tools produced building designs that performed better than buildings of similar scope and size developed for the same construction sites by the AEC teams in the control group that did not have access to these tools (Russell-Smith et al., 2014a, 2014b; Castillo Cohen and Fruchter, 2012). The value of the proposed integrated target value approach lies in the increased active engagement of all stakeholders in continuous and concurrent exploration, assessment, and decision making, leading to reduced environmental impacts and costs, as illustrated in Figure 3.

REFERENCES
Ballard, G. (2008) The Lean Project Delivery System: an update. Lean Construction Journal, 1-19.
Castillo Cohen, F.J. and Fruchter, R. (2012) “Engaging global multidisciplinary project teams in
target value design” Proc. ICCCBE-XIV, Moscow, Russia.
Fruchter, R., (2006) The Fishbowl: Degrees of Engagement in Global Teamwork. LNAI, 2006:
241-257.
Finnveden, G., Hauschild, M., Ekvall, T., Guinée, J., Heijungs, R., Hellweg, S., Koehler, A.,
Pennington, D., and Suh, S. (2009). Recent developments in LCA. J. Envir. Man., Vol. 91, No.
1, 1-21.
Hunt, R., and Franklin, W. (1996). LCA – How it came about. Int. J. of LCA, Vol. 1, No. 1, 4-7.
Intergov. Panel on Climate Change (IPCC), Climate Change 2007: Synthesis Report, IPCC 4th
Asses. Report.
EU Climate Foundation, Roadmap 2050: A Practical Guide to a Prosperous, Low-Carbon EU,
Vol. 1, 2010.
Energy Policy Act of 2005, Pub. L. No. 109-58, 119 Stat. 1143, 2005.
Bannister, P., Munzinger, M., and Bloomfield, C., (2005) Water Benchmarks for Offices and
Public Buildings, Ed. 1.2, Exergy Australia Pty Limited, Canberra.
Macomber, H., and Barberio, J., (2007). Target-Value Design: Nine Foundational Practices for
Delivering Surprising Client Value. Lean Project Consulting.
Russell-Smith, S., Lepech, M., Fruchter, R., Littman, A. (2014a) Impact of progressive
sustainable target value assessment on building design decisions, Journal of Building and
Environment. In Press.


Russell-Smith, S., Lepech, M., Fruchter, R., Meyer, Y., (2014b). Sustainable Target Value
Design: Integrating Life Cycle Assessment and Target Value Design to Improve Building
Energy and Environmental Performance. Journal of Cleaner Production, In Press.
Schade, J. (2007). Proc. 4th Nordic Conference on Construction Economics and Organisation:
Development Processes in Construction Management, ed. B. Atkin and J. Borgbrant, Luleå.
Research report, Luleå University of Technology 2007:18, 321–329.
Weber, B., and Alfen, H.W. (2010) Infrastructure as an asset class: Investment strategies, project
finance and PPP. Chichester, West Sussex, U.K.: Wiley.
Woodward, D. G. (1997) Life Cycle Costing-theory, information acquisition and application.
International Journal of Project Management 15, no. 6, 335–344
Tiwari, S., Odelson, J., Watt, A., and Khanzode, A., (2009). Model Based Estimating to Inform
Target Value Design. AECbytes “Building the Future.”


Enhancing Spatial and Temporal Cognitive Ability in Construction Education through the Effect of Artificial Visualizations

Ivan Mutis1
1Assistant Professor, Department of Civil, Architectural, and Environmental Engineering, Illinois Institute of Technology, 3201 South Dearborn Street, Chicago, IL 60616-3793. E-mail: [email protected]

Abstract

The Construction Engineering and Management (CEM) workforce requires training on the systematic coordination of the complex relationships of the interdependencies, interactions and constraints among engineering systems. The understanding of these relationships presents a significant challenge to mastering CEM practices in-situ. It is essential to fully develop the learning abilities to effectively incorporate the interdependencies and interactions with constraints to solve problems. Building upon our previous research, where we defined the individual's spatial and temporal cognitive ability as the ability to couple and combine qualitative temporal and spatial abstractions, this research addresses the problem of mastering the complex relationships of the interdependencies, interactions and constraints. We use a layer of artificial visualizations to simulate the contexts where such spatial and temporal constraints are easily and rapidly observed. The superimposition of virtual objects on images is an Augmented Reality (AR) layer that serves as an instructional medium. The assumption is that the enhancement of spatial-temporal constraints enables learners to visualize context and hidden processes in situ.

INTRODUCTION
The lack of exposure to construction processes on the job-site results in
students’ lack of understanding of the dynamic, complex spatial constraints (e.g., how
construction products are related to one another in a particular contextual space) and
the temporal constraints (e.g., the dependencies for coordinating subcontractors’
processes). In fact, the deficiencies in understanding construction products and
processes is widely acknowledged by construction program graduates, including a
lack of experience in applying construction-related concepts to real-world problems
(McCabe, et al. 2008) and the exclusion of important contextual constraints typical
found on jobsites (Sawhney, et al. 2000). Even though these constrains commonly
exist in the management activities of the projects, there is a lack of appropriate
pedagogical materials and media to enable instructors to effectively bring those job
experiences (Jestrab, et al. 2009) into the classrooms that reflect what has been
learned about the dynamics and complexities of such constraints.


Revealing the spatial temporal relationship of construction processes is a


challenging task for educators, in particular, when using traditional instructional
media in a classroom environment. The research question, therefore, is to determine
how educators can bring the experiences of the dynamic, complex spatial and
temporal constraints found on the jobsites into the classroom. In response to this
question, this research uses pedagogical material in a way that leads to the students’
understanding of the complex dynamic and spatial-temporal constraints by using
Augmented Reality Technology (ART).

INSTRUCTIONAL MEDIA LIMITATIONS AND DISADVANTAGES WITHIN CEM PROGRAM OF STUDY

Typically, many existing CM curricula offer field trips or internships to


remedy the instructional media limitations on instructing spatial-temporal and context
conditions. This instructional strategy is an institutional sponsored experiential
learning (Kolb and Kolb 2005). It involves a direct encounter with complex situations
on the construction project under the assumption that the direct experience will lead
to genuine meaningful learning. Students are exposed to the real-world environments
within the projects of the construction firms they intern with. The challenge, however,
is to expose the students to the context that adds value to the learning experiences and
to involve the student in relevant experiences that significantly impact learning
(McClam, et al. 2008). As instructors do not accompany students on the project job-
site during the internships, they do not embrace coaching and debriefing, reflection,
scaffolding and judging, among other teaching strategies. Students are left in a self-
learning environment, which might not lead to the development of the required skills
defined by the learning outcomes of CM courses.
In sum, while experiential learning in internship programs assumes that the
direct experience will lead to genuine meaningful learning, full development of CEM
spatio-temporal skills is limited (Bhattacharjee, et al. 2013, Glick, et al. 2012).
Learners are only exposed to a one-of-a-kind job-site experience, which are built in
unique and constantly evolving contextual environments. This is critical since
internship is typically a requirement for all students in construction management
programs (Moore and Plugge 2008). Learning experiences, however, must integrate
and make students aware of their understanding of the spatio-temporal constraints
(McClam, et al. 2008).

ART USE AS PEDAGOGICAL MATERIAL

ART is a technology-based tool that enables users to perceive artificial layers


of computer-generated information (Azuma, et al. 2001, Haller, et al. 2007),
including a combination of both the real environment and computer-generated
information (Bimber, et al. 2005). This technology includes video, graphics, and
geographical positional systems to mediate and enhance the human’s perception of
reality (Barfield and Caudell 2001). This research demonstrates that the ART
supports CM students in having unlimited access to otherwise limited opportunities to
participate in jobsite experiences. Current CEM traditional instructional media, e.g.,


static diagrams with text instruction and photographs, inhibits the instructors’ ability to demonstrate the complexity of construction processes and ensure conceptual and factual understanding. While new virtual modeling enabling technologies such as Building Information Modeling (BIM) and Google Sketch-up plug-ins provide ways to visualize components and coordinate associated CEM activities, they are fully immersed in the virtual (artificial) environment. They do not provide the full perception of the real-world environment, as opposed to ART, i.e., perception of the physical context as occurs in the real environment. ART supplements the real world, allowing relevant virtual (computer-generated) objects to co-exist in the same space (van Krevelen and Poelman 2010) by augmenting one’s perception of complex spatial information, see for instance Figure 1. ART provides affordances for learning and knowledge-building scaffolds (Yoon and Wang 2014) to impact performance in understanding CEM activities in: (1) simulating construction jobsite environments (Hou, et al. 2013, Yabuki, et al. 2011) by integrating the real-world spatio-temporal features in CEM, (2) enhancing understanding of the purpose and relevance of designs (Jae-Young, et al. 2012, Wang and Dunston 2011, Xiangyu and Dunston 2009) (architecture, structure, mechanics, lighting), and (3) analyzing construction processes (Chen and Huang 2013, Park and Kim 2013) (as they rely heavily on the interpreters’ ability in coupling and combining qualitative temporal and spatial abstractions).

Figure 1. Real image and virtual objects that appear to coexist in the same real environment.

METHODOLOGY AND APPROACH

The approach has three basic steps: (1) Rendering virtual objects and their composition, focusing on the design and implementation of the prototype; (2) Simulations, aimed at developing situated superimposed real-world images using ART; and (3) Evaluation and assessments, which consist of generating the required tests using case studies.
(1) Rendering virtual objects and their composition. The models consist of a set of computer-generated virtual objects designed to meet instructional objectives. The objects themselves have a set of variables that were manipulated by the user. The object designs are aimed at occupying a continuum within construction activity to meet the intended learning aims including the activity context (i.e., the object designs address the specific learning goals, such as the number of virtual objects that represent components and equipment for a particular activity). The set of virtual objects is built using 3D CAD platforms and the available library of geometric objects.

The set of stored video clips and images are composed (connected) to the
virtual model objects. These virtual model objects are superimposed on the selected
images from the video clips and set of stored video clips and images. The composed
virtual objects allow users to contrast real-world images with the virtual objects,
generating the AR experience. Figure 2 shows examples of video frames used by the
researchers to compose the virtual objects. The generation of videos includes aerial images taken from unmanned flying devices to provide visualizations of the jobsite context by involving broader views, since fixed locations limit the ability to capture
such views (see Figure 2). The researchers use an AR platform to coordinate the
images and the virtual objects for each case study. The virtual objects’ designs
occupy a continuum within construction activity to meet the intended learning aims
by including the activity context in the scenario (i.e., the objects’ designs incorporate
specific learning goals, such as the number of overlaid objects that represent
temporary structures to build an assembly for a particular activity).
The focus is placed on what learners need to acquire on spatial and temporal
constraints for CEM activities. The learners-ART interaction is made possible through the development of an AR video composition application. The video captures the
main components and CEM activities of interest and the context (physical and social)
of the construction environment. The video output is a set of video images, which are
edited, tagged and stored using computer applications such as SynthEyes (Anderson-
Technologies 2014). As this research develops, the next step is to develop an AR in-
house computer application, using an AR development platform such as Vuforia®
(Vuforia 2014) over Unity-3d extension (Unity-Technologies 2014) - a game engine
and integrated development environment- for broad compatibility.
The sensory input data is mostly visual; thus, the video captures the main components and CEM activities of interest and the context (physical and social) of the construction environment.

Figure 2: Images captured from an Unmanned Aerial Vehicle (UAV): (a) Exterior Wall Section; (b) Roof Section.
(2) Simulations. In the classroom, students manipulate superimposed real-
world images as they access and retrieve the images through a computer application
in the classroom. At the same time, the instructor uses a multi-display panel to show
the view of the same field images/streaming video with or without the virtual models
(i.e., the instructor will illustrate and assist the learning process, by showing the


composite or non-composite set of images, including the superimposition of the computer generated objects and other auxiliary information).
Since this research uses a video device to capture particular scenes on the jobsite, the device is positioned in a strategic, safe place to allow capturing the main components of the construction project environment. As an example, see the superimposed steel column structure over an in-situ image in Figure 3. The simulation in the figure aims at enhancing the understanding of the location/orientation of the structure on the footings. The data output consists of a set of video clips and images, classified according to the construction process and materials used to execute a particular construction activity (e.g., steel erection in the figure).

Figure 3: Superimposed virtual objects (steel columns) over an in-situ image.

(3) Validation and Assessment. This step assesses the learners’ problem solving capabilities related to CM spatial-temporal constraints. It is aimed at validating the learners’ ability to solve problems when using the ART intervention and when learners use traditional materials and methods. The latter is used as a control group. CM problem solving capability denotes here the learners’ ability to obtain CM background knowledge with existing constraints by articulating the problem through a technical communication to formulate feasible solutions.
The strategy to assess problem solving is based on formulating an ill-defined CM situation (e.g., estimating and scheduling) where the learners’ spatial-temporal abilities are essential. Poorly conceived CM definitions, for instance, consist of poorly defined, incomplete parameters where fundamental CM knowledge is required to anticipate an outcome.
The student assessment strategy also incorporates questions in a semi-structured interview for a more consistent and accurate assessment. The questions will be built according to the following assessment aims (American Society of Civil Engineers, Body of Knowledge Committee 2008, Anderson and Krathwohl 2001):
• CM Knowledge: The student’s ability to remember the CM material. It implies recalling the appropriate specific facts and the learned CM concepts (e.g., fundamental concepts of construction techniques and methods) to contribute to a problem-solving outcome.
• CM concepts comprehension: The learners’ abilities to summarize and explain are characterized by their ability to translate and map from a system of symbols (geometries in the drawings) to another system such as a text-based system. This is a basic level of understanding of the CM problem, where the learner is able to explain and discuss details (e.g., give examples, perform generalizations from details) and describes and distinguishes differences (elements interactions, including properties and qualities).
• CM concepts application: This aim refers to the learner’s ability to use the learned CM concepts, methods, and theoretical principles in concrete

situations, including new or innovative ways. Learners are able to solve (find answer,
explanation), apply (find uses for particular purposes or cases), use tools (to a
particular purpose), plan (articulate, contribute, implement, and provide a solution or
outcome), conduct related activities (carry out reports), organize (order elements in
the construction process), and explain (broad clarification to give reason for or cause).
• CM problem analysis: This aim denotes the CM student’s ability to
separate the components parts of an assembly by separating or breaking down
elements for close examination. The analysis activity relating (mapping) parts to
another part or a group of parts for a better understanding of the structure or
organization as whole (e.g., mapping the assembly parts to understand the
components functionality within the assembly-wall structure). The analysis leads to
the organization of parts into a systematic planning and coordination of the systems
involved and to looking at the parts’ functionality as a whole. To reach the proper
outcome, the learners compare the parts’ quality and properties, by identifying
particular features in order to contrast the differences.
• Synthesis of CM concepts: This aim refers to the learners’ ability to
assemble a new whole set or group (by properly and orderly fitting parts) as an
outcome through interpretations. It is bringing into existence a new plan (arrange
parts, devise the realization of CM concepts), and design or creation according to a
plan. It is putting together factors into consistent and complex settings, to reveal and
develop hidden features, to assemble and fit (adapt by means of changes and
modifications) disparate elements into specified and suitable conditions.
• CM conceptualization evaluation: This aim refers to the learner’s
ability to judge the value of concepts, which are definitions of elements for a given
purpose, by meeting defined criteria (e.g., subcontractors’ required conditions for
execution of a assembly task due to constrains- such as costs- and quality/property
conditions). Learners judge and analyze the significance of CM elements and
concepts by justifying their use and by determining their importance. It involves
critical analyses and self-assessments.

CONCLUSIONS

This research provides a viable solution to bringing field experiences to


classrooms, including an open source, cost-effective mechanism for sharing
educational resources among participating faculty members. Such a capability would
be beneficial to a wide range of CM and engineering education programs, since the
shared challenges and the proposed approaches are common to many other
engineering disciplines. It is anticipated that the ART users will significantly grow as
well as the users’ feedback based on trustful relationships among users (Issa and
Haddad 2008).
Results of the project directly impact the teaching and learning of a wide
variety of construction engineering courses, which rely heavily on students’ spatial-
temporal cognition skills of building systems and construction processes
In particular, the resulting proof of concept and the data of this research might
be used to inform the design of a scalable teaching/learning environment (i.e., the


follow-on ART design environments within the classroom will facilitate ART use and
its implementation within CM programs from other US institutions).

ACKNOWLEDGEMENT

This research has been funded by the National Science Foundation (NSF)
grant No. 1245529.

REFERENCES
American Society of Civil Engineers. Body of Knowledge Committee. (2008). Civil
engineering body of knowledge for the 21st century : preparing the civil
engineer for the future, American Society of Civil Engineers, Reston, Va.
Anderson, L. W., and Krathwohl, D. R. (2001). A taxonomy for learning, teaching,
and assessing : a revision of Bloom's taxonomy of educational objectives,
Longman, New York.
Anderson-Technologies (2014). "SynthEyes. 3-D camera-tracking (match moving)
application.", <http://www.ssontech.com/synovu.html>, (Accessed, April,
2014), Last Update March 2014.
Azuma, R., et al. (2001). "Recent Advances in Augmented Reality." IEEE Comput.
Graph. Appl., 21(6), 34-47.
Barfield, W., and Caudell, T. (2001). "Fundamentals of Wearable Computers and
Augmented Reality." CRC Press, 836.
Bhattacharjee, S., et al. (2013). "Comparison of Industry Expectations and Student
Perceptions of Knowledge and Skills Required for Construction Career
Success." International Journal of Construction Education and Research, 9(1),
19-38.
Bimber, O., et al. (2005). "Spatial augmented reality merging real and virtual worlds."
A K Peters,, Wellesley, Mass.
Chen, H.-M., and Huang, P.-H. (2013). "3D AR-based modeling for discrete-event
simulation of transport operations in construction." Automation in
Construction, 33, 123-136.
Glick, S., et al. (2012). "Student Visualization: Using 3-D Models in Undergraduate
Construction Management Education." International Journal of Construction
Education and Research, 8(1), 26-46.
Haller, M., et al. (2007). "Emerging technologies of augmented reality interfaces and
design." Idea Group Pub.,, Hershey, PA.
Hou, L., et al. (2013). "Using Augmented Reality to Facilitate Piping Assembly: An
Experiment-Based Evaluation." Journal of Computing in Civil Engineering
Issa, R. R. A., and Haddad, J. (2008). "Perceptions of the impacts of organizational
culture and information technology on knowledge sharing in construction."
Construction Innovation: Information, Process, Management, 8(3), 182-201.
Jae-Young, L., et al. "A Study on Construction Defect Management Using Augmented
Reality Technology." Proc., Information Science and Applications (ICISA),
2012 International Conference on, 1-6.
Jestrab, E. M., et al. (2009). "Integrating Industry Experts into Engineering Education:


Case Study." Journal of Professional Issues in Engineering Education and


Practice, 135(1), 7.
Kolb, A. Y., and Kolb, D. A. (2005). "Learning Styles and Learning Spaces:
Enhancing Experiential Learning in Higher Education." Academy of
Management Learning & Education, 4(2), 193-212.
McCabe, B., et al. "Strategy: A Construction Simulation Environment." Proc.,
Construction Congress VI: Building Together for a Better Tomorrow in an
Increasingly Complex World, ASCE, 110-115.
McClam, T., et al. (2008). "An Analysis of a Service-Learning Project: Students'
Expectations, Concerns, and Reflections." Journal of Experiential Education,
30(3), 236-249.
Moore, J. D., and Plugge, P. W. (2008). "Perceptions and Expectations: Implications
for Construction Management Internships." International Journal of
Construction Education and Research, 4(2), 82-96.
Park, C.-S., and Kim, H.-J. (2013). "A framework for construction safety
management and visualization system." Automation in Construction, 33, 95-
103.
Sawhney, A., et al. "Internet Based Interactive Construction Management Learning
System." Proc., Construction Congress VI: Building Together for a Better
Tomorrow in an Increasingly Complex World, ASCE, 280-288.
Unity-Technologies (2014). "Unity3d. Cross-platform Game Engine and Integrated
Development Environment (IDE)." Augmented Reality Software Development
Kit (SDK) for mobile devices, <http://unity3d.com/>, (Accessed, August
2014), Last Update 2014.
van Krevelen, D. W. F., and Poelman, R. (2010). "A Survey of Augmented Reality
Technologies, Applications and Limitations." The International Journal of
Virtual Reality, 9(2), 1-20.
Vuforia (2014). "Qualcomm, AR Development Kit." Augmented Reality Software
Development Kit (SDK) for mobile devices, <https://developer.vuforia.com/>,
(Accessed, September 2014), Last Update 2014.
Wang, X., and Dunston, P. S. (2011). "Comparative Effectiveness of Mixed Reality-
Based Virtual Environments in Collaborative Design." Systems, Man, and
Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, 41(3),
284-296.
Xiangyu, W., and Dunston, P. S. "Comparative effectiveness of Mixed Reality based
virtual environments in collaborative design." Proc., Systems, Man and
Cybernetics, 2009. SMC 2009. IEEE International Conference on, 3569-3574.
Yabuki, N., et al. (2011). "An invisible height evaluation system for building height
regulation to preserve good landscapes using augmented reality." Automation
in Construction, 20(3), 228-235.
Yoon, S., and Wang, J. (2014). "Making the Invisible Visible in Science Museums
Through Augmented Reality Devices." TECHTRENDS TECH TRENDS, 58(1),
49-55.


A Real-Time Building HVAC Model Implemented as a Plug-In for SketchUp


Zaker A. Syed1 and Thomas H. Bradley2
1Graduate Research Assistant, Department of Mechanical Engineering, Colorado State University; 1584 Campus Delivery, Fort Collins, CO 80523-1584 (corresponding author). E-mail: [email protected]
2Associate Professor, Department of Mechanical Engineering, Colorado State University, 1584 Campus Delivery, Fort Collins, CO 80523-1584. E-mail: [email protected]
Abstract: One of the greatest challenges faced by humanity is the impact on climate
due to human activity. Among the most prominent of these activities is the
construction of buildings. As time and technology progress, the need for more
efficient and environmentally-friendly processes and materials is gaining favor
among the general population. In line with this, various government protocols have
been implemented which have led to development of various software applications
that assist in designing better buildings. However, the reach of these applications is
limited owing to the cost and expertise needed to actually use these software
applications. For instance, architects have multiple tools at their disposal for
designing an energy efficient building. But most of these tools are costly to operate,
time-consuming and require extensive and software-specific training. This has a
greater impact in the initial stages of design, where the amount of data is very limited.
The focus of this paper is to develop a plugin for SketchUp®, which is the CAD
design software most commonly used by architects, so as to give insight into building
energy consumption during the initial design stages. The challenge is to develop a
calculation methodology that is sufficiently accurate, requires very little data from the
user, requires no prior training to use, and consumes less time than the state of the art
simulation methods. The main component of energy consumption modeled using the
developed software is the Heating, Ventilation and Air Conditioning [HVAC]
operation. The HVAC system is modeled using design and analysis methods
developed by ASHRAE [Radiant Time Series], coupled with real-life time series
solar data from NSRDB. This toolset will help architects evaluate design features like
the placement and size of fenestration, materials for construction, building
orientation, and more, to enable early design decisions to improve the building’s
lifecycle energy consumption.
Introduction
Construction of commercial buildings is one of the primary activities that impact the
environment and ecosystem. With the objective towards reducing the impact of
buildings on the environment, various government and industrial programs and


incentives have been developed to enable the construction of highly efficient


buildings.
An example of the reach of these reforms was demonstrated by Hasegawa (2002), in which 25% of new office buildings in the UK had obtained an environmental assessment label. Many methods and tools have been developed by researchers and
industry organizations to facilitate engineers and architects in designing energy
efficient buildings. However, there are various barriers that have limited the utility of
these tools within a conventional architecting and engineering design process.
Specifically, these tools are generally high cost, require considerable expertise,
require a considerable time commitment, and are not suited to the early stages of
building design (when detailed design information is not available).
To develop a toolset that would have more utility to the early stages of building
design, we have considered the general design framework presented by Malmqvist
et al. (Malmqvist 2009), and repeated here as Figure 1. The design of a building usually starts with a project that is assigned to a company by a customer. During the initial interactions the customer gives their requirements to the designers, who use them to develop an initial design. Early in the design process, many design options
are available, but the amount of knowledge available to the designer is low.

Figure 1 – General relationship between design options and Knowledge


But as time progresses, the number of design options decreases until it becomes
impractical to make any more changes to the design. On the other hand, the accuracy
of results as well as the available knowledge increases with time. Hence better
decisions could be taken at this point, but available options are reduced because
earlier design decisions have been “locked in.”


One way around this issue is to have tools that can develop knowledge earlier in the
design process. The coincidence of high knowledge and a large number of design
options is hypothesized to lead to better design decisions.
The plugin being developed in this project aims to provide more knowledge to the
architects in the preliminary design phase. By developing an energy modeling toolset
that is compatible with the tools, constraints, and methods of early building
architecting and design, early design decisions can be made effectively to improve
building energy efficiency in the final product.
Design decisions based on HVAC system
Minor selections in a design can have a major impact on the end energy consumption
of a building after its completion. For instance, the appropriate orientation of a
building can reduce energy consumption by 30-40% (Cofaigh et al. 1999). Similarly, proper sizing and positioning of fenestration of a building also have a major impact
on HVAC loads, and therefore total energy consumption. In general practice,
architects use rules of thumb and previous experience to design these features. For
example, it is common knowledge that windows facing east and west have maximum
solar gain. Hence to reduce solar heat gain generally windows are avoided on these
faces, and if used, proper shading is applied. Some of the early design decisions that
architects can take to reduce energy impact include building orientation, wall
insulation material and window properties like size, shading and glazing type.
Previous Tools
Numerous tools have been developed to assist designers in making energy efficient
buildings. But many of these tools require some previous training and technical
know-how about the software and its limitations. SketchUp® is a software package
that is widely used by architects for generating initial CAD models of buildings.
Though SketchUp® has a plugin called OpenStudio®, it requires the entire model to
be constructed again in the plugin and has considerable computational costs to run
each simulation. The computational cost of such programs both increases the time
required for design, and makes these tools incompatible with early stages of
sketching, architecting and design.
One specific tool that follows the same simulation methodology as this plugin is the
Green Building Studio® developed by Autodesk®. This tool has the benefit of being
able to import data from any CAD software using gbXML and export the data into EnergyPlus to carry out a detailed simulation; it also takes into account real-world weather data accumulated from more than a million virtual weather stations (Autodesk®). However, a major concern while using this software is the unreliability of gbXML while converting geometry. A study by Stanford University (Maile et al. 2007) shows that some geometry/wall properties are lost during conversion. In


terms of computing power requirements, the tool is more robust because it utilizes cloud computing, but it depends on an internet connection. In contrast, the tool under development does not require file conversion and gives the preliminary simulation results in the same CAD environment, thereby allowing an architect to make the first few decisions in a more informed way.
This plugin thus aims at changing the general design workflow by integrating the simulation stage within the CAD environment and reducing the time consumed in the numerous detailed iterations needed to reach an optimal solution. Figure 2 compares the usual workflow with the modified workflow. A plugin that uses the existing CAD data to give instantaneous preliminary results within the same environment/computer window would help remove many of the barriers inhibiting the application of energy modeling tools.

Figure 2 - Process Flow of Design


The rest of the paper discusses the modelling procedure developed by ASHRAE® and how it was integrated with the National Solar Radiation Database (NSRDB).
Design inputs
Before selecting the modelling techniques, it is essential to know what inputs are available for the design process. SketchUp® uses the programming language Ruby for its Application Programming Interface (API). Using Ruby scripts, we were able to extract the following data from a CAD model:


• Wall/Window/Floor/Roof areas
• Wall orientation angle with respect to south (azimuth angle)
• GPS location of site from Google® Earth
All of the above data is already generated by default when an architect creates the CAD model. The only extra input that the user has to provide is the building type, as per the 16 DoE classifications of commercial buildings (US DoE 2011). Based on the building type, the material properties are pulled from a database that is hard-coded into the plugin.
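To make the role of these inputs concrete, the sketch below shows one possible in-memory representation of the extracted data; the field names, the Python form, and the example values are purely illustrative assumptions (the plugin itself works with this information through the SketchUp Ruby API).

# Hypothetical, minimal representation of the inputs described above;
# field names and example values are illustrative only.
from dataclasses import dataclass

@dataclass
class Surface:
    kind: str           # "wall", "window", "floor", or "roof"
    area_ft2: float     # surface area extracted from the CAD geometry
    azimuth_deg: float  # orientation angle with respect to south

building_inputs = {
    "location": {"lat": 39.74, "lon": -104.99},  # GPS location of the site
    "building_type": "MediumOffice",             # one of the 16 DoE commercial types
    "surfaces": [
        Surface("wall", 1200.0, 0.0),    # south-facing wall
        Surface("window", 300.0, 0.0),   # glazing on the south face
        Surface("wall", 800.0, 90.0),    # east-facing wall
    ],
}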

Figure 3 - SketchUp CAD model of a prototypical commercial building


With these readily available inputs, we seek to model the HVAC energy consumption
of the building. The ASHRAE Radiant Time Series (RTS) method was selected for
its robustness to limited input information, and because it takes into account the
weather data of a selected location.
Radiant Time Series (RTS)
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has done extensive research in the field of HVAC load calculation and has developed various calculation methods. Of these methods, the Heat Balance (HB) method is the most exhaustive and state-of-the-art approach. It is also the method used by EnergyPlus® to model the energy consumption of buildings. However, due to information and computation time constraints, the Radiant Time Series (RTS) method was selected. The RTS method is a simplified approach derived from the HB method for HVAC peak load calculations. Due to its inherent assumption of steady-periodic conditions (ASHRAE 2013 Handbook), it is unsuitable for annual energy calculations. To counter the effect of this assumption, real data from the NSRDB TMY3 dataset were used in place of peak-load values of the weather conditions.
A general overview of the RTS procedure is depicted in Figure 4 (ASHRAE 2013 Handbook). The first step in the RTS procedure involves calculating the solar intensities for each hour for each exterior surface. The NSRDB TMY3 dataset consists of several hourly property values for a typical year, of which the direct and diffuse irradiance on a horizontal surface are used. By transposing these values for a


vertical wall in each of the standard 8 orientations along the geographic axes, the transmitted and diffuse heat gains were calculated and a database was created for all the TMY3 locations. Table 1 shows the net solar irradiance for the city of Denver for the first 10 daylight hours of the year for the standard orientations.

Figure 4. - Flow chart for RTS method including NSRDB-informed hourly load
calculations
Table 1 – Total Irradiance including direct, diffused and reflected components
for each wall orientation in Btu/h.ft2
Hour N NE E SE S SW W NW
7 0 0 0 0 0 0 0 0
8 1.45 2.67 7.41 8.87 5.83 1.56 1.45 1.45
9 13.99 16.00 41.85 56.34 44.50 17.88 13.99 13.99
10 25.67 26.11 112.06 194.07 176.40 71.95 25.67 25.67
11 31.66 31.66 83.65 195.48 213.08 123.49 32.19 31.66
12 40.40 40.40 47.52 178.67 236.46 176.79 46.18 40.40
13 38.22 38.22 39.14 114.84 187.28 171.02 79.58 38.22
14 28.70 28.70 28.70 80.30 191.06 208.26 119.23 29.09
15 29.34 29.34 29.34 40.55 128.75 164.68 118.32 33.15
16 12.05 12.05 12.05 12.95 91.41 143.63 117.57 31.45
17 1.69 1.69 1.69 1.69 19.48 38.82 36.31 13.59
18 0 0 0 0 0 0 0 0
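To illustrate the transposition step, the sketch below applies the common isotropic-sky model to a vertical wall; the solar-position inputs, the ground reflectance, and the numerical values are assumptions for illustration and are not taken from the plugin or the NSRDB processing described above.

import math

def vertical_surface_irradiance(dni, dhi, ghi, sun_alt_deg, sun_az_deg,
                                wall_az_deg, ground_albedo=0.2):
    """Total irradiance on a vertical wall (same units as the inputs) using a
    simple isotropic-sky model. dni/dhi/ghi are the direct normal, diffuse
    horizontal, and global horizontal irradiance for one hour of TMY3 data.
    Azimuth angles are measured from south, positive toward east."""
    alt = math.radians(sun_alt_deg)
    # Angle of incidence of the beam on a vertical surface facing wall_az_deg
    cos_theta = math.cos(alt) * math.cos(math.radians(sun_az_deg - wall_az_deg))
    beam = dni * max(cos_theta, 0.0)              # direct (beam) component
    sky_diffuse = dhi * 0.5                       # sky view factor of a vertical wall
    ground_reflected = ghi * ground_albedo * 0.5  # assumed ground reflectance
    return beam + sky_diffuse + ground_reflected

# One hour of (made-up) TMY3 values transposed to the 8 standard orientations
orientations = {"N": 180, "NE": 135, "E": 90, "SE": 45,
                "S": 0, "SW": -45, "W": -90, "NW": -135}
for name, az in orientations.items():
    e = vertical_surface_irradiance(dni=220, dhi=40, ghi=150,
                                    sun_alt_deg=25, sun_az_deg=30, wall_az_deg=az)
    print(name, round(e, 1))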

Effect of building orientation on HVAC load


To demonstrate the calculation procedure, consider a medium-sized office located in the city of Denver. The outer shell has dimensions of 120’ by 80’ and consists of 3 floors of 10’ each. Two possible orientations are depicted in Figure 5. As can be seen from Figure 6 and Table 1, the maximum heat gain occurs on the south face and the minimum heat gain on the north (the graph shows only the direct beam component of solar radiation).

Figure 5 - Orientation option one and two


Taking these results into account, an architect can decide to orient the faces with less window area towards the east and west to reduce the solar heat gain and hence the cooling load of the building. Alternatively, he/she may decide to provide proper shading on the windows to achieve the same result if changing the building orientation is not an option.

Figure 6 - Direct Normal Irradiance for 8760 hours of the year


The plugin under development calculates these values and presents them to the user for design consideration. Some of the other inputs that the user may change include window type, wall color, etc.
Working method of the plugin
The plugin retrieves the solar intensities for the various wall orientations (Table 1) and multiplies them by the surface areas extracted from the CAD model. It then applies the radiant time series to calculate the hourly load for the entire year. After


adding the internal heat gains, the plugin displays the result within the same window. Since the calculations consist only of multiplications, the simulation time is on the order of seconds. Though the results may not be as accurate as those obtained using higher-fidelity software packages, they give the designer an indication of whether a change made to the design improves the building or not. He/she can then select the top five designs and send them for further analysis in more sophisticated software packages.
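The sketch below illustrates the core arithmetic just described: per-orientation irradiance is multiplied by surface area to obtain hourly heat gains, and the radiant portion of the gains is spread over 24 hours with the radiant time factors. The radiant time factors, the radiative fraction, and the transmission factor used here are placeholders for illustration, not values from the ASHRAE handbook or the plugin's database.

def rts_hourly_loads(gains_24h, radiant_time_factors, radiative_fraction=0.54):
    """Convert 24 hourly heat gains into 24 hourly cooling loads with the radiant
    time series: the convective part acts immediately, while the radiant part is
    spread over the current and previous 23 hours via the RTS factors."""
    assert len(gains_24h) == 24 and abs(sum(radiant_time_factors) - 1.0) < 1e-6
    loads = []
    for h in range(24):
        convective = (1.0 - radiative_fraction) * gains_24h[h]
        radiant = sum(radiant_time_factors[k] * radiative_fraction * gains_24h[h - k]
                      for k in range(24))   # h - k wraps around (steady-periodic day)
        loads.append(convective + radiant)
    return loads

# Hourly gain for one wall = irradiance (per orientation, as in Table 1) x area x
# an overall transmission factor; all numbers below are placeholders.
irradiance = [0, 0, 0, 0, 0, 0, 0, 1.45, 13.99, 25.67, 31.66, 40.40,
              38.22, 28.70, 29.34, 12.05, 1.69, 0, 0, 0, 0, 0, 0, 0]  # Btu/h.ft2
gains = [e * 960.0 * 0.3 for e in irradiance]      # 960 ft2 of wall, factor 0.3
rtf = [0.20, 0.15, 0.10, 0.08] + [0.47 / 20] * 20  # placeholder radiant time factors
print([round(x) for x in rts_hourly_loads(gains, rtf)])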
Conclusions
This paper has presented the methods by which a light-weight building HVAC energy model is being developed from the ASHRAE RTS method and integrated into a SketchUp® plugin. The goal of the plugin is not to develop a calculation model, but to develop a new workflow for an architect that lets him/her make better-informed design decisions. Once some preliminary designs are selected using this plugin, they can be run through more standardized and accurate software packages to reach a final decision. This toolset has advantages over the current state of the art in building HVAC system modeling tools in that it has fewer information requirements and lower computational time requirements, and it presents the results within the CAD environment.
Future work will concentrate on integrating other aspects of the energy consumption of the entire building lifecycle into the plugin. Through integration of materials' embedded energy calculations, HVAC energy consumption, lighting energy consumption (daylight utilization), water consumption, and end-of-life impacts, a building carbon footprint metric tool can be developed.
References
ASHRAE (2013) Handbook – Fundamentals
Autodesk Green Building Studio Features, http://www.autodesk.com/products/green-building-studio/overview
Cofaigh E. O., Fitzgerald E., Alcock R., McNicholl A., Peltonen V., Marucco A.
(1999) A green Vitruvius - principles and practice of sustainable architecture
design. James & James (Science Publishers) Ltd., London
Hasegawa, T. (2002). “Policy Instruments for Environmentally Sustainable
Buildings,” Proc., CIB/iiSBE Int. Conf. on Sustainable Building, EcoBuild, Oslo,
Norway
Maile T., Fischer M., Bazjanac V. (2007). “Building Energy Performance Simulation
Tools - a Life-Cycle and Interoperable Perspective” - CIFE Working Paper
#WP107 December 2007


Malmqvist, T., et al. (2009) “Life cycle assessment in buildings: The ENSLIC
simplified method and guidelines” Energy, Volume 36, Issue 4, April 2011, Pages
1900-1907
NSRDB TMY3 data compiled by National Renewable Energy Laboratory (NREL)
U.S. Department of Energy Commercial Reference Building Models of the National
Building Stock (2011) Technical report, NREL/TP-5500-46861, February 2011


Visualization of Indoor Thermal Conditions Using Augmented Reality for Improving Thermal Environment

Nobuyoshi Yabuki1; Masahiro Hosokawa2; Tomohiro Fukuda3; and Takashi Michikawa4

1Professor, Ph.D., Division of Sustainable Energy and Environmental Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan. E-mail: [email protected]
2Master Course Student, Division of Sustainable Energy and Environmental Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan. E-mail: [email protected]
3Associate Professor, Dr. Eng., Division of Sustainable Energy and Environmental Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan. E-mail: [email protected]
4Project Assistant Professor, Dr. Eng., Center for Environmental Innovation Design and Sustainability, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan. E-mail: [email protected]

Abstract

Temperature in an air-conditioned room is usually not uniform: it is generally known that “the higher, the warmer” in winter and “the closer to the AC, the cooler” in summer. In order to conceive countermeasures to such unevenness, the distribution of temperature and humidity in the room should be monitored. Typical line charts and tables of sensing data are not efficient for humans to understand the situation as the number of sensors increases. Thus, in this research, a system was developed that monitors the indoor thermal environment using a Wireless Sensor Network (WSN) and visualizes the obtained information in real time with AR technology. The system allows users to observe the thermal environment in 3D space, so that they can intuitively grasp the up-to-date distribution of temperature and humidity. The feasibility of this system as an environmental improvement method was studied through an experiment.

INTRODUCTION

Air conditioners are used for cooling or heating air in buildings and houses. It is generally known that the air temperature is not uniform and varies from place to place in the same office or room. Monitoring the spatial thermal environment would help to conceive effective countermeasures to such temperature variation. For example, if a certain place in a room is hot while others are cool, one countermeasure may be to deploy electric fans to mix air of different temperatures. The thermal environment can be understood by either numerical analysis simulations or monitoring. Although the thermal environment can be predicted for various conditions by changing parameters in numerical analysis, high-level professional knowledge and expensive analysis software are necessary to execute simulations. In addition, it takes


much time and effort to build a 3D model suitable for analysis, and it is difficult to verify the simulated results.
On the other hand, grasping the thermal environment by monitoring temperatures enables us to compile a database of actual conditions, making real-time information available. Recently, monitoring temperatures using a Wireless Sensor Network (WSN) has become easier thanks to technological improvements in wireless communication devices and in miniaturized, power-saving sensors. In the future, it is expected that various types of information can be obtained and utilized by adding more sensors to the WSN. However, sensing data are usually displayed in charts or tables, where users find it difficult to see the correspondence between sensors and their locations. Thus, when a countermeasure is taken to make the temperature in a room uniform, it takes time to check whether the employed countermeasure works or not.
One method to represent sensor data in a 3D environment is to display the temperature sensor data in a 3D model. However, it takes time and effort to create a 3D building model. Meanwhile, Augmented Reality (AR) has been drawing attention recently. AR is a technology that adds information to the actual environment by overlaying digital information, such as Computer Graphics (CG), onto video camera images. AR can keep the model overlaid at the designated position and perspective by automatically estimating the location and orientation of the camera from the video images in real time. The overlaid model can be updated in real time so that temporal consistency with the real world is maintained. One of the advantages of AR is that the video camera can be moved during real-time operation thanks to the registration techniques.
In this research, a system was developed that monitors the indoor thermal environment using a WSN and visualizes the obtained information in real time with AR technology. The system allows users to observe the thermal environment in 3D space, so that they can intuitively grasp the up-to-date distribution of temperature and humidity. As a result, the effectiveness of the measures taken can be seen, and more effective measures than conventional ones can be conducted. The expected users of this system are workers in an office, residents of a house, students in a classroom, etc. The feasibility of this system as an environmental improvement method was studied through an experiment.

PREVIOUS RESEARCH

Visualization of Thermal Environment by VR/AR


The result of a thermal environment analysis is a massive set of numerical data, and it is difficult to understand as it is. Research using VR has recently attracted attention as a way to visualize the thermal environment in a sensory manner.
Sato et al. (2009) developed a system that comprehensively conducts predictive estimation of building performance by superimposing the results of each simulation on the architectural design of the building using VR. Visualization of the results of air flow and thermal analysis using VR was conducted to understand the thermal environment in a server room at a data center. By doing this, it is possible to find wasted air-conditioning energy, take effective measures,


and monitor their resolution based on analysis of the measures. By showing the results of the air flow analysis using VR before and after taking measures, it is possible to correctly understand and easily grasp the 3D behavior.

Registration Techniques for AR


Yabuki et al. (2012) visualized the results of thermal environment analysis obtained by CFD analysis software in an outdoor space with AR, making it possible to evaluate the thermal environment in 3D. Compared to conventional visual representations, the AR representation is more realistic and stereoscopic, and makes it easier to picture the thermal environment.
When using AR, it is important to show virtual objects at the correct location and orientation in the video images by performing accurate registration. There are three principal registration methods: sensor-based methods, artificial marker-based methods, and natural feature point-based methods. When the artificial marker-based method is applied to a large-scale building in an outdoor space, the markers have to be very large. Ota et al. (2010) therefore conducted research to miniaturize the markers by placing four markers at the vertices of a regular tetragon so that together they act as a single, virtually created large marker. The performance was improved, but the method is limited by the requirement that the markers be placed at the vertices of a regular tetragon.
Hatori et al. (2013) conducted research to achieve accurate registration by arranging four or more different markers freely on the same plane. The central coordinates of the four or more detected markers are obtained in screen coordinates, correspondence is established based on the degree of coincidence in pattern matching, and the relative 3D coordinates of each marker are measured with respect to an origin at the center of a previously registered standard marker. Pose estimation is carried out from these reference points, and by minimizing the reprojection error with the least-squares method, high-accuracy registration can be achieved. If five or more markers are set, a certain degree of freedom in viewing perspective can be secured. In this research, registration was performed by this method.

A VISUALIZATION SYSTEM OF THERMAL ENVIRONMENT BY WSN AND AR

Overview of the Proposed System


This system obtains temperature/humidity information at each point by placing a number of wireless temperature/humidity sensors in an indoor space, and visualizes the resulting database in real time using AR technology. This makes it possible to see the data displayed in 3D in the actual space through a camera (Figure 1). Although the set of video camera, PC, and display in Figure 1 looks large and heavy, it may be a laptop PC with a small video camera so that the user can move it easily.
After being set up in an indoor space, the sensor nodes automatically create an ad hoc network. Each node takes a measurement every minute and sends the data. The measured data are transferred to the server computer via the network. The server computer saves


the received data in its own SQLite database and sends them to a client computer via wireless LAN. The server computer and the client computer communicate using XML (Extensible Markup Language). The client computer parses the XML and extracts only the necessary information.
The method described in the previous section (Hatori et al., 2013) was used for registration. The markers are sized so that each marker can be detected, as the registration accuracy does not depend on the relative sizes of the markers. Four or more different artificial markers were set on the same plane. An arbitrary 3D coordinate system is defined in the indoor space to obtain the relative coordinates between the standard marker and each of the other markers. When the system is running and four or more markers are recognized, the model can be overlaid. When the markers reflect light and cannot be detected, the video camera images are converted to binary images and displayed, so that the detection status can be confirmed and the threshold value changed as needed.

Figure 1. System architecture

Representation of Sensing Data Using AR


To represent the measured data, 3D models were used so as to give a sense of perspective. Temperature and humidity are indicated by changing the color of the models according to the data; for example, high temperature is indicated in red and low temperature in blue (Figure 2). Temperature, humidity, and the discomfort index are shown as thermal environmental information. The discomfort index was calculated by Equation (1) (Ishimatsu, 2004) and is also represented by using cubes.

U = 0.81Td + 0.01H(0.99Td - 14.3) + 46.3 (1)


where U is the discomfort index, Td is the temperature, and H is the relative humidity.


Figure 2. Sample image of thermal cubes in AR

The relationship between the R, G, B rates (0.0 - 1.0) and each type of thermal data was defined. The relationship between temperature and the R, G, B rates is shown in Table 1.

Table 1. Temperature and R, G, B rates

Temperature      R                    G                    B
27 or above      1.0                  0.0                  0.0
24.5 - 27        1.0                  1.0 - 0.4(temp-24.5) 0.0
22 - 24.5        0.4(temp-22.0)       1.0                  0.0
19.5 - 22        0.0                  1.0                  1.0 - 0.4(temp-19.5)
17 - 19.5        0.0                  0.4(temp-17.0)       1.0
17 or below      0.0                  0.0                  1.0
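A minimal sketch of how a sensor reading could be turned into a discomfort index value and a cube colour, following Equation (1) and Table 1, is given below; it assumes temperatures in degrees Celsius and is an illustration rather than the code of the developed system.

def discomfort_index(temp, humidity):
    """Discomfort index U from Equation (1); temp in degrees C, humidity in %."""
    return 0.81 * temp + 0.01 * humidity * (0.99 * temp - 14.3) + 46.3

def temperature_to_rgb(temp):
    """Map a temperature to (R, G, B) rates in 0.0-1.0 following Table 1."""
    if temp >= 27.0:
        return (1.0, 0.0, 0.0)                           # red
    if temp >= 24.5:
        return (1.0, 1.0 - 0.4 * (temp - 24.5), 0.0)     # yellow (24.5) to red (27)
    if temp >= 22.0:
        return (0.4 * (temp - 22.0), 1.0, 0.0)           # green (22) to yellow (24.5)
    if temp >= 19.5:
        return (0.0, 1.0, 1.0 - 0.4 * (temp - 19.5))     # cyan (19.5) to green (22)
    if temp >= 17.0:
        return (0.0, 0.4 * (temp - 17.0), 1.0)           # blue (17) to cyan (19.5)
    return (0.0, 0.0, 1.0)                               # blue

print(round(discomfort_index(26.0, 40.0), 1), temperature_to_rgb(26.0))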

EXPERIMENT

The feasibility of the system was examined through an experiment. The experiment was conducted in Room 521 of the S4 building on the Osaka University Suita Campus, where an attempt was made to improve the indoor thermal environment. The target room is an open-ceiling space spanning the 5th and 6th floors; in winter, the perceived temperature was hot in the upper part and very cold in the lower part. The dimensions of the room are W7.8 m x D11.6 m x H5.7 m.

Environment and Devices of the Experiment


To obtain the actual images, a Microsoft LifeCam Studio web camera (HD 1080p, 8 megapixels) was used. The air conditioner was a Super Power Eco R (ACSA 11256A) from Toshiba Corporation, a ceiling-suspended type with a capacity of 4 HP.
When a method with a number of markers is used, it is necessary to detect each marker as a distinct object; therefore, eight markers with different internal patterns were


prepared. In this research, square (regular tetragon) markers were used, with internal patterns consisting of the letters “me,” “so,” “po,” “ta,” “mi,” “a,” “bun,” and “mei,” written in both Japanese Hiragana and Kanji in the Kozuka Gothic Pro font. Each marker was 350 x 350 mm. To improve detection in the actual environment, white frames were created around them.
In this research, NeoMote nodes from Sumitomo Precision Products Co., Ltd. were used to build the WSN. In the experiment, temperature measurements were conducted at 34 different points. The sensors were hung in the air to measure the air temperature within the space.

Procedure of the Experiment


The effects of the countermeasures were compared by obtaining the indoor temperatures with the developed system while observing the changes in the temperature distribution. The aim was to reduce the unevenness in the indoor space and resolve the vertical temperature gap. The setting of the air conditioner at the start of the experiment was: preset temperature of 28°C, strong wind, horizontal air outlet, no swing. This initial condition was named Case 1.
As an improvement measure, air circulation was promoted by changing the outlet direction of the air conditioner and by running electric fans. The outlet direction of the air conditioner was changed from the original horizontal position to the lowermost position in the setting. Three electric fans were set to circulate the air in a forward and upward direction (Figure 3). To compare the improvement measures, the verification cases shown in Table 2 were set up in the experiment.

(a) Plan (b) A-A section


Figure 3. Drawings of the experiment office and locations of AC and fans

Table 2. Experiment cases


Case AC air outlet direction Electric fans operation
1 Horizontal No
2 Max. downward No
3 Horizontal Yes
4 Max. downward Yes

Result of the Experiment


Verification 1: Changing the air outlet direction of the AC


Cases 1 and 2 were compared. The comparison of the AR visualization results is shown in Figure 4. The hot air from the air conditioner was blown toward the floor, and it was confirmed that the high-temperature air reached around floor level.
Verification 2: Air circulation by electric fans
Cases 1 and 3 were compared. The AR visualization results are shown in Figure 4. The fans created convection, which circulated the warm air that had accumulated near the ceiling. It was confirmed that the temperature at the measurement points increased overall.
Verification 3: Changing the air conditioner setting and circulating air with electric fans
Cases 1 and 4 were compared. The AR visualization results are shown in Figure 4. By combining the setting change of the air conditioner with air circulation by the electric fans, a rise in temperature around floor level was confirmed while the rise in temperature at the upper points was restrained.

Figure 4. Results of Cases 1 – 4

Discussion
The developed system made it possible to grasp the temperature distribution in an indoor space intuitively, and also to follow its changes in real time. By observing the process of change, it was possible to understand the results of each measure immediately, which made comparisons between measures easy. It is expected that these results will contribute to creating a better environment, because users can understand the current status at a glance and can come up with various types of measures on the spot.
Obviously, the snapshots in Figure 4 do not show all the sensors; the other sensors are simply outside the field of view in this case. The user of the system can move the video camera to see the condition of the other sensors.


CONCLUSIONS

In this research, a system was developed that allows users to intuitively grasp the thermal environmental information collected by monitoring the indoor thermal environment with a WSN. Improvement measures in an indoor environment were also carried out using this system.
The conclusions of this research are summarized as follows:
• A new visualization system was built that displays, by AR in the actual indoor space, the data measured and collected via the Wireless Sensor Network.
• By observing the indoor thermal environment with this system, the changes resulting from measures aimed at improving the environment could be followed, resulting in an effective environmental improvement method.

REFERENCES

Hatori, F., Yabuki, N., Komori, E., and Fukuda, T. (2013). “Application of the
augmented reality technique using multiple markers to construction sites.”
Journal of Japan Society of Civil Engineers, Ser. F3 (Civil Engineering
Informatics), 69 (2), I_24-I_33.
Ota, N., Yabuki, N., and Fukuda, T. (2010). “Development of an accurate positioning
method for augmented reality using multiple markers.” Proceedings of the
13th International Conference on Computing in Civil and Building
Engineering (ICCCBE), Nottingham, UK, Paper 4, Memory, 1-6.
Sato, Y., Ohguro, M., Nagataki, Y., and Morikawa, Y. (2009). “Development of
advanced VR system Hybridvision.” Technical center report of Taisei Corp.,
No. 42.
Yabuki, N., Furubayashi, S., Hamada, Y., and Fukuda, T. (2012). “Collaborative
Visualization of Environmental Simulation Result and Sensing Data Using
Augmented Reality.” Proceedings of the International Conference on
Cooperative Design, Visualization, and Engineering (CDVE 2012), 227-230.


A Conceptual Framework for Modeling Critical Infrastructure Interdependency: Using a Multilayer Directed Network Model and Targeted Attack-Based Resilience Analysis

Chen Zhao1; Nan Li2; and Dongping Fang3

1Department of Construction Management, Tsinghua University, Beijing 100084, China. E-mail: [email protected]
2Department of Construction Management, Tsinghua University, Beijing 100084, China. E-mail: [email protected]
3Department of Construction Management, Tsinghua University, Beijing 100084, China. E-mail: [email protected]

Abstract

Critical Infrastructure (CI) plays a vital role in sustaining fundamental


functionalities and services in urban systems. As these CI systems are getting
more interdependent, disturbances in one CI system often propagate to other
systems, imposing significant cascading and devastating risks on cities. This paper
introduces a conceptual framework for modeling interdependencies between CI
systems and analyzing the impacts of interdependencies on urban resilience. This
framework is composed of a directed network model and a resilience analysis
approach. In the network model, different types of interdependencies are
integrated through a multilayer network structure. For each layer, the use of
directed network and the PageRank algorithm provides an improved
representation of node criticality. In resilience analysis, urban disturbances are
first classified into two categories based on their paths of propagation. Within
each category, disturbance responses of the network can be observed by
simulating targeted attacks. This conceptual framework provides a quantitative
approach of modeling and analyzing CI interdependencies, which can lead to
better understanding of the mechanism of CI interdependencies and provide
measures to protect CI against possible cascading failures.

INTRODUCTION

Critical Infrastructure (CI), such as electric power and water supply, is


underpinning every aspect of urban life by providing essential services for its
functioning (Johansson and Hassel, 2010; Oh et al., 2012). With the ever-
increasing complexity of urban systems due to rapid global urbanization, different
CI systems have become extensively interdependent on each other, altogether
forming a larger-scale complex system. In this complex system, various
interdependencies, such as functional connection and geographical proximity,
enable disturbances in certain infrastructure assets to propagate to connected or
proximal others, which could lead to a breakdown of the whole system (Buldyrev
et al., 2010; Dueñas-Osorio and Vemuru, 2009). Recent decades have witnessed
quite a few such cascading failures, for instance the 2001 California Power


Outage and the 2008 China Winter Storm. Therefore, the need to build cities with
greater resilience has been widely recognized in both academia and practice.
However, there is no universally accepted definition of the concept of ‘resilience’
in urban studies (Francis and Bekera, 2014). Throughout the remainder of this
paper, ‘resilience’ refers to the overall response of CI systems in a short term after
disturbances, instead of long-term recoverability.
Despite the recognition of the importance of urban resilience and the
active research seen in the past decade, the following seemingly basic questions
still remain daunting: how to model the complex CI interdependency, and how to
systematically analyze the resilience of interdependent CI systems. To address
these challenges, this paper proposes a conceptual framework. The framework is
composed of a multilayer network model for representing technological networks
of CI systems, and a targeted attack-based approach for resilience analysis.

LITERATURE REVIEW

The research on infrastructure interdependency is relatively new, with the


first known paper systematically describing the concept of interdependency
published in 2001 (Wang et al., 2012a). In this paper, Rinaldi et al. (2001)
proposed a conceptual framework that examined interdependencies of CI from six
dimensions. For each of the six dimensions, the researchers put forward issues
that need to be addressed. They emphasized that these issues should be addressed
by proper modeling approaches, which can also be the foundation for further
resilience or vulnerability analysis. Various modeling and simulation approaches
have been proposed ever since.
Network-based modeling and analysis have been actively explored in prior
research. Some studies theoretically researched into modeling and resilience
analysis from the perspective of graph and network theory, such as Buldyrev et al.
(2010). Grubesic et al. (2008) argued that few real CI systems bear resemblance to
these theoretical networks, thus making these theoretical studies less practical in
real world. At the meantime, encouraging progress has been made in modeling
real infrastructure systems through real data. Dueñas-Osorio et al. (2007)
proposed an analysis approach based on graph theory and conditional probability,
and used network topology to represent two real infrastructure systems. They
investigated disturbance impact on the interconnected infrastructure network via
different node removal strategies. Johansson and Hassel (2010) presented an
approach for modeling interdependent infrastructure systems. The approach
consisted of a network model and a functional model. Vulnerability analysis was
carried out based on these two models.
A challenge seen in prior research is that insufficient efforts have been
made to integrate different types of interdependencies that exist between CI
systems. A limited number of models attempted to consider more than one type of
interdependency (Ouyang et al., 2009; Thomas et al., 2003). Further research
efforts are needed in order to comprehensively model and analyze different types
of interdependencies within a single coordinated framework.

CONCEPTUAL FRAMEWORK

As illustrated in the flowchart in Figure 1, the proposed framework is


composed of two main modules, including a multilayer network model, for


statically modeling CI with connectedness and interdependency, and an approach


for dynamically analyzing the resilience of CI based on the network model. The
framework has the following features:
(1) Different types of interdependency are integrated in the network model and
resilience analysis approach;
(2) The multilayer network model is built on several directed networks, in order
to better reflect the supporting-supported relationship between infrastructure
assets;
(3) Criticality of nodes is measured using a PageRank (PR) algorithm;
(4) Urban disturbances are classified into two categories, including functional
disturbances and geographical disturbances, based on their propagation
manner;
(5) Different attack strategies (targeted node removal and targeted zone removal)
are conducted respectively for these two categories of disturbances.

Figure 1. The conceptual framework for CI interdependency modeling and


resilience analysis

The President’s Commission on Critical Infrastructure Protection (Marsh,


1997) listed eight CI systems widely recognized and referenced in prior research.
These CI systems include (1) electric power, (2) water supply, (3) natural gas and
oil, (4) telecommunications, (5) transportation, (6) banking and finance, (7)
government services, and (8) emergency services. The first five CI systems are the


focus of this paper. They are selected because of their typical technological
network structure. Network-based modeling is often used for these networks.

A multilayer network model for CI interdependencies


Multilayer network structure
The network model includes a collection of nodes and links. In this
network, nodes are elementary working units, which represent built facilities that
act autonomously as controllers, managers or coordinators in the network. Pairs of
nodes are connected through links, which are non-autonomous components in the
network. Links transmit certain types of commodities, such as material, energy,
and information, from one end to the other. CI interdependency is represented by ‘inter-links’ in the proposed model, while connections among infrastructure assets in the same layer are represented by ‘intra-links’.
As illustrated in Figure 2, nodes are identified by real CI basic information,
such as general function, system composition, and governance structure; after
node identification, detailed data are also collected, such as geographical position
of nodes and link direction between nodes. Each layer of the network represents
one CI system. As illustrated in Figure 2 using a fictional city as an example,
nodes (red and blue series) maintain their geographical positions as on the geographical layer (the bottom one).

Figure 2. A three-layer network model for a fictional city

Types of interdependency in the multilayer network model


The study presented in this paper adopts the classification of
interdependencies by Rinaldi et al. (2001). The proposed framework focuses on
real connections amongst five technological CI systems and segregates them from
government administration, regulation and legislation, and other factors whereby
logical interdependency might play a role. Therefore, the framework is designed
to cover three types of interdependencies, including physical, cyber and
geographical interdependency. In addition, according to Rinaldi et al. (2001), ‘an
infrastructure has a cyber interdependency if its state depends on information
transmitted through the information infrastructure’. Information transmitted
amongst infrastructure assets via the telecommunications system can be regarded as a ‘commodity’ of the telecommunications infrastructure, just as, in physical interdependency, electricity is a commodity of the electric power infrastructure. Therefore, physical


and cyber interdependency are collectively referred to as ‘functional


interdependency’ in this paper.
Geographical interdependency reflects how facilities are interdependent
because of their geographical proximity. In prior research, geographical
interdependency was mostly modeled separately from functional attributes of
infrastructure assets. Consequently, resilience analysis treats geographical and
functional interdependencies as two different sources of vulnerability; hence two
independent sets of resilience analysis need to be carried out. However, when
there is large-scale disturbance, cascading failures are usually the results of the
coupling effects of both types of interdependencies. It is therefore necessary to
integrate both functional and geographical interdependencies into a coordinated
modeling and analysis approach.
The proposed framework addresses this research gap by a multilayer
structure. When a node (disturbance source) malfunctions for some reason, its
capacity of transmitting ‘commodity’ is lost or reduced. This impact would
propagate to a second layer via inter-links. Moreover, if the disturbance is of a
larger scale, e.g. earthquakes, the impact also spreads to a second layer via
geographical proximity, though the second layer may not be functionally relevant
to the source of the disturbance. In this way, the proposed modeling approach
allows for better integration of functional and geographical interdependency in
resilience analysis.

Criticality measurement
Measuring node criticality is an important step in interdependency
modeling and resilience analysis. The prior research concerning network-based
modeling often uses degree centrality of nodes to measure criticality (Newman,
2003). However, for directed network, a node has both out-degree and in-degree,
and having link directions makes a significant difference to criticality
measurement (Wang et al., 2012b). In terms of CI interdependency, out-degree
represents how many adjacent facilities this facility supports, while in-degree
represents how many adjacent facilities this facility relies on for support. Thus,
out-degree reflects more about the importance of this facility to a network. The
simple sum of out- and in-degrees, in other words degree centrality, cannot fully
reflect the criticality of a node. Therefore, in this network model, PageRank (PR)
algorithm is introduced to calculate node criticality. PR is an iterative algorithm
first used by Google search to measure website criticality. Note that for websites, a link is a hyperlink from one page to another, so the link direction represents ‘being supported’ instead of ‘supporting’. Therefore, in order to apply the PR algorithm to the CI network, all link directions in the CI network should first be reversed. Each node in a directed network can be assigned a numerical weight (PR value). The basic idea of the PR algorithm is that a more important node is
likely to receive more links from other nodes, which is also applicable in the CI
network case. Through the PR algorithm, each node in a layer is assigned a PR
value, which is the criticality, after an iterative computation. The PR value lays
the basis for targeted removal-based resilience analysis discussed later in the
paper.
The implementation of the PR algorithm in calculating node criticality is
explained as follows.


1. Set a matrix M, where M_ij = 1/deg_out(j) if node j points to node i and M_ij = 0 otherwise, and deg_out(j) is the out-degree of node j;
2. Set initial PR values PR_i^(0) (i = 1, 2, …, N) for all nodes in a layer, and make sure that Σ_i PR_i^(0) = 1;
3. Set the damping factor ‘d’ to the recommended value of 0.85 (Brin and Page, 1998). In iteration step t, divide every node’s PR value at step (t−1) equally and give the shares to the nodes it points to. Correct every node’s PR value as the sum of the shares it receives, multiplied by 0.85. The remaining 0.15 is equally distributed to all the nodes. Thus, PR_i^(t) = 0.15/N + 0.85 Σ_j M_ij PR_j^(t−1) (where N is the number of all nodes in a layer);
4. Iterate until convergence.
It should be noted that the selection of initial PR values does not influence
the convergence result, but does influence computational time (Page et al., 1999).
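A minimal sketch of steps 1-4 is given below. It assumes the layer is supplied as a directed adjacency matrix whose link directions have already been reversed as described above; the handling of nodes without outgoing links (spreading their value uniformly) is a common convention adopted here as an assumption, not something specified in the framework.

def pagerank(A, d=0.85, tol=1e-8, max_iter=1000):
    """Iterative PageRank for one directed layer.
    A[i][j] = 1 if node i points to node j (directions already reversed)."""
    n = len(A)
    out_deg = [sum(row) for row in A]
    pr = [1.0 / n] * n                        # initial PR values, summing to 1
    for _ in range(max_iter):
        new = [(1.0 - d) / n] * n             # the 0.15 share, spread equally
        for j in range(n):
            if out_deg[j] == 0:               # dangling node: spread its value uniformly
                for i in range(n):
                    new[i] += d * pr[j] / n
            else:
                share = pr[j] / out_deg[j]    # divide PR equally among pointed-to nodes
                for i in range(n):
                    if A[j][i]:
                        new[i] += d * share
        if sum(abs(a - b) for a, b in zip(new, pr)) < tol:
            return new
        pr = new
    return pr

# Toy 4-node layer: PR values give the criticality ranking of the nodes
A = [[0, 1, 1, 0],
     [0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 1, 0]]
print([round(v, 3) for v in pagerank(A)])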

Resilience analysis
Network response measurement
In prior research, characteristic path length has been widely regarded as an
important metric to measure network response (Dueñas-Osorio et al., 2007;
Ouyang et al., 2009), and changes to its value indicate changes in global
efficiency due to disturbances. Characteristic path length is the harmonic mean of all geodesic paths between any pair of nodes, and can be defined as follows:

L = N(N − 1) / Σ_{i≠j} (1/d_ij),

where N is the number of nodes in a layer, including the two virtual nodes, and d_ij is the geodesic path length from node i to node j. For a directed network, the shortest path between two nodes is direction sensitive.
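The sketch below computes this metric for a directed, unweighted layer by breadth-first search. Pairs of nodes with no connecting path are simply skipped, which is an assumption made here for illustration, since the treatment of disconnected pairs after node removals is not specified.

from collections import deque

def characteristic_path_length(adj):
    """Harmonic mean of directed geodesic distances d_ij over all ordered pairs.
    adj[i] lists the successors of node i; unreachable pairs are skipped."""
    n = len(adj)
    inv_sum, pairs = 0.0, 0
    for src in range(n):
        dist = {src: 0}
        queue = deque([src])
        while queue:                          # BFS from src (unweighted links)
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst, d in dist.items():
            if dst != src:
                inv_sum += 1.0 / d
                pairs += 1
    return pairs / inv_sum if inv_sum else float("inf")

# Toy directed layer with 4 nodes
adj = [[1, 2], [2], [0, 3], [2]]
print(round(characteristic_path_length(adj), 3))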
Targeted attack-based resilience analysis
To investigate the potential response of CI to hypothetical disturbance, a
common procedure is to first determine the criticality of every infrastructure asset
prior to disturbances based on certain criteria (Grubesic et al., 2008). Based on the
criticality ranking, targeted node removal is used to mimic disturbances to CI and
examine the responses of CI when exposed to the disturbances. Prior to resilience
analysis, based on propagation manner, disturbances are first classified into two
types, including functional disturbances and geographical disturbances.
Functional disturbances refer to disturbances whose impacts propagate to another layer via functional interdependency only. Typical functional disturbances include mechanical breakdowns, material shortages, operational mistakes at infrastructure assets, and small-scale man-made damages, such as small fires. When a functional disturbance occurs, the infrastructure asset loses its function, either partially or completely, and the subsequent impact propagates within the CI system and/or to another system via intra- and inter-links. Geographical disturbances, on the contrary, refer to disturbances whose impacts propagate first via geographical proximity and then via functional interdependency.
Typical geographical disturbances include climate changes, natural disasters, and
man-made damages of large scales (e.g. explosions).


In order to study the responses of CI network to functional disturbances,


the following targeted node removal based on criticality ranking is adopted in the
proposed framework. The following steps of targeted node removal are carried out
for each functional layer sequentially: (1) rank all the nodes (except the two
virtual nodes) according to their criticality (i.e. PR values); (2) based on the
rankings, remove the most critical node in a certain layer; (3) without considering
flow rearrangement and redistribution, calculate characteristic path length of the
network; (4) repeat steps 2 and 3 until all nodes are removed. The results allow for
better understanding of the disturbance responses of the network resulting from
the removal of critical nodes. It is also possible to analyze and compare the
responses of the network due to node removal of different functional layers.
In order to study the responses of the CI network to geographical
disturbances, the targeted zone removal-based method is adopted. To implement
this method, a concept of impact zone is introduced. An impact zone is a square or
regular hexagon zone with a predefined size. The size is determined based on the
scale of disturbances and the size of the area under investigation. The whole
geographical layer is divided into a number of impact zones. The criticality of a
zone is defined as the sum of PR values of all nodes (except the virtual nodes)
from all layers that fall within the zone. This criticality measurement considers
both topological attributes (i.e. PR value) of the nodes and their geographical
proximity. The steps of targeted zone removal are as follows: (1) rank all the
zones according to their criticality, (2) based on the rankings, remove the most
critical zone (zone removal means all the nodes falling in the zone are removed);
(3) without considering flow rearrangement and redistribution, calculate the
characteristic path length; (4) repeat steps 2 and 3 until all zones are removed.
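A sketch of the targeted node removal procedure is given below; it assumes the pagerank and characteristic_path_length functions from the earlier sketches are available, removes nodes in descending order of PR value, and records the characteristic path length after each removal without modeling any flow rearrangement, mirroring steps (1)-(4) above.

def targeted_node_removal(A):
    """Remove nodes one by one in descending PR order and track the characteristic
    path length of the surviving directed network (no flow redistribution)."""
    n = len(A)
    pr = pagerank(A)                                     # step 1: criticality ranking
    ranking = sorted(range(n), key=lambda i: pr[i], reverse=True)
    removed, history = set(), []
    for node in ranking:                                 # step 2: remove most critical node
        removed.add(node)
        alive = [i for i in range(n) if i not in removed]
        index = {old: new for new, old in enumerate(alive)}
        adj = [[index[j] for j in range(n) if A[i][j] and j not in removed]
               for i in alive]
        history.append(characteristic_path_length(adj))  # step 3: recompute the metric
    return history                                       # step 4: repeated until all removed

A = [[0, 1, 1, 0],
     [0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 1, 0]]
print(targeted_node_removal(A))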

CONCLUSIONS AND FUTURE RESEARCH

This paper introduces a conceptual framework to examine CI


interdependency and its impact on urban resilience. This framework consists of a
network-based model and targeted attack-based resilience analysis. The model is
featured with a multilayer structure to represent five CI systems. Moreover, a
geographical layer facilitates an integration of functional and geographical
interdependency. The PR algorithm is employed to measure node criticality in
directed networks. As for resilience analysis, this framework first classifies urban
disturbances into two categories, based on how their impacts propagate. Targeted
node removal and targeted zone removal are adopted to analyze the respective
responses of the CI network to both types of disturbances. This conceptual
framework can lead to better understanding of interdependent CI systems, and
provide important decision support tools for the protection of CI systems. The
authors are also carrying out a case study in a middle-size city in China. The city,
with a population of around 300,000, is located at an important intersection of
several economic regions in the Yangtze River Delta area. Its CI systems are being
examined to validate the operability and effectiveness of the proposed framework.

ACKNOWLEDGMENTS

This material is based upon work supported by Tsinghua University Initiative


Scientific Research Program under Grant No. 2014z21050. The authors are
thankful for the support of Tsinghua University. Any opinions, findings, and


conclusions or recommendations expressed in this paper are those of the authors


and do not necessarily reflect the views of Tsinghua University.

REFERENCES

Brin, S., Page, L., 1998. The anatomy of a large-scale hypertextual Web search
engine. Computer networks and ISDN systems 30, 107-117.
Buldyrev, S.V., Parshani, R., Paul, G., Stanley, H.E., Havlin, S., 2010.
Catastrophic cascade of failures in interdependent networks. Nature 464,
1025-1028.
Dueñas-Osorio, L., Craig, J.I., Goodno, B.J., Bostrom, A., 2007. Interdependent
response of networked systems. Journal of Infrastructure Systems 13, 185-
194.
Dueñas-Osorio, L., Vemuru, S.M., 2009. Cascading failures in complex
infrastructure systems. Structural safety 31, 157-167.
Francis, R., Bekera, B., 2014. A metric and frameworks for resilience analysis of
engineered and infrastructure systems. Reliability Engineering & System
Safety 121, 90-103.
Grubesic, T.H., Matisziw, T.C., Murray, A.T., Snediker, D., 2008. Comparative
approaches for assessing network vulnerability. International Regional
Science Review 31, 88-112.
Johansson, J., Hassel, H., 2010. An approach for modelling interdependent
infrastructures in the context of vulnerability analysis. Reliability Engineering
& System Safety 95, 1335-1344.
Marsh, R. T., 1997. Critical foundations: Protecting America’s infrastructure.
President's Commission on Critical Infrastructure Protection.
Newman, M.E., 2003. The structure and function of complex networks. SIAM
review 45, 167-256.
Oh, E.H., Deshmukh, A., Hastak, M., 2012. Criticality Assessment of Lifeline
Infrastructure for Enhancing Disaster Response. Natural Hazards Review 14,
98-107.
Ouyang, M., Hong, L., Mao, Z.-J., Yu, M.-H., Qi, F., 2009. A methodological
approach to analyze vulnerability of interdependent infrastructures.
Simulation Modelling Practice and Theory 17, 817-828.
Page, L., Brin, S., Motwani, R., Winograd, T., 1999. The PageRank citation
ranking: Bringing order to the web.
Rinaldi, S.M., Peerenboom, J.P., Kelly, T.K., 2001. Identifying, understanding,
and analyzing critical infrastructure interdependencies. Control Systems,
IEEE 21, 11-25.
Thomas, W., North, M., Macal, C., Peerenboom, J., 2003. From physics to
finances: Complex adaptive systems representation of infrastructure
interdependencies. Naval Surface Warfare Center Technical Digest, 58-67.
Wang, S., Hong, L., Chen, X., 2012a. Vulnerability analysis of interdependent
infrastructure systems: A methodological framework. Physica A: Statistical
Mechanics and its Applications 391, 3323-3335.
Wang, X., Li, X., Chen, G., 2012b. Network Science: An Introduction. Higher
Education Press Beijing.


An Integrated Framework for Assessment of the Impacts of Uncertainty in Construction Projects using Dynamic Network Simulation

Jin Zhu1 and Ali Mostafavi2

1Ph.D. Candidate, Department of Civil and Environmental Engineering, College of Engineering and Computing, Florida International University, Miami, FL. Email: [email protected]
2Assistant Professor, OHL School of Construction, College of Engineering and Computing, Florida International University, Miami, FL. Email: [email protected]

ABSTRACT: This paper presents a framework for analysis of the impacts of


uncertainty on the performance of construction projects using a multi-node, multi-link
and dynamic network analysis framework. In the proposed framework, the network
representations of construction project organizations are created using a meta-network
consisting of different types of node entities (e.g., agents, information, resources, and
tasks) and different relationships between these entities. Accordingly, the impacts of
uncertain events are captured based on perturbations in the node entities and the
subsequent changes in the topological structure of project network. The application of
the proposed framework is demonstrated using a case study related to a tunneling
project. The results of the analysis are twofold. First, the results identify the node
entities that are critical for the success of a project. Second, the results capture the
impacts of uncertain events on the performance of the project organization based on
simulation of different scenarios. The results highlight the capability of the proposed
framework in capturing the complex interactions in project organizations and enabling
an integrated analysis of organization vulnerability under uncertain scenarios.

INTRODUCTION
Construction project organizations are complex systems-of-systems consisting
of interconnected networks of different node entities: human agents, resources,
information, and tasks (Zhu and Mostafavi, 2014). The complex and dynamic
interactions between these node entities affect the ability of project organizations to
cope with uncertainties. Similar to other complex networks, the ability of construction
projects to cope with uncertainty can be understood based on investigation of the
structural properties of the network topology. Over the past two decades, social
network analysis (SNA) has been used for better understanding of the performance in
construction projects by taking social elements of project organizations into
consideration (Loosemore, 1998; Pryke, 2004; Chinowsky and Victor, 2008). Despite
the efforts made in investigating construction projects using SNA, there are certain
limitations in the existing studies. The major limitations are twofold: (i) the lack of
consideration of different types of entities and relationships in construction project
networks. SNA mainly focuses on the interactions between human agents. Other node
entities in construction project organizations (i.e., resource, information, and task) and
their interdependencies (e.g., interdependencies between human agents and resources,
or information and task) are not fully considered; (ii) the lack of consideration of the
impact of uncertainty. The unpredictability of performance in project organizations is


mainly due to their complexity as well as the existing uncertainties (Jin and Levitt
1996). Uncertainty-induced perturbations would disrupt the original topology of the
network, and subsequently, affect the project performance. SNA, as a descriptive
approach, has failed to capture the impacts of uncertainty on the performance of project
organizations. Hence, an integrated framework for analysis of complex interactions and
uncertainty in construction projects is missing in the existing literature. This paper
proposes a multi-node, multi-link and dynamic network simulation framework to
address this important gap in knowledge.

AN INTEGRATED FRAMEWORK FOR DYNAMIC NETWORK


SIMULATION IN CONSTRUCTION PROJECTS
The proposed framework is based on the concept of meta-networks in dynamic
network analysis (DNA). DNA is an emergent field in network theory. It is different
from traditional SNA in that it can investigate large dynamic multi-node and multi-link
networks with varying levels of uncertainty (Carley, 2003). The components of the
proposed framework are explained below:
Project organizations as meta-networks: In the proposed framework,
construction projects are conceptualized as complex systems consisting of networks of
interconnected tasks, autonomous agents, information, and resources. Based on this
conceptualization, the interconnections between tasks, agents, information, and
resources are developed using the concept of “meta-networks” (Carley 2003). The four
types of node entities are interconnected through ten primitive types of links (Table 1).
Each type of links and their corresponding nodes can form an individual network [e.g.,
social network of agent-agent relationships (A), assignment network of agent-task
relationships (AT), and resource requirement network of resource-task relationships
(RT)]. Different networks in a meta-network are interconnected. Changes in one node
entity or network cascade into changes in the other node entities and networks.
Table 1. Nodes and Links in Construction Project Meta-Network.
A (agent–agent): who works with and reports to whom
AI (agent–information): who knows what
AR (agent–resource): who can use what resource
AT (agent–task): who is assigned to what task
I (information–information): what information is related to other information
IR (information–resource): what information is needed to use what resource
IT (information–task): what information is needed for what task
R (resource–resource): what resource is used for other resources
RT (resource–task): what resource is needed for what task
T (task–task): what task is related to other tasks


Uncertain events as node perturbations: Investigation of a project’s meta-


network facilitates the use of theoretical underpinnings from network science in the
analysis of the impacts of uncertainty on project performance. In the context of the
proposed framework, uncertain events are conceptualized as perturbations in the node
entities. For example, a delay in delivery of material can be represented as a
perturbation on the resource node entity. Table 2 depicts examples of uncertain events
in construction projects and their resulting perturbations in the node entities.
Table 2. Uncertain Events as Node Perturbations.
Node entity perturbations Examples of uncertain events
Agent perturbation Staff turnover, safety accidents and injury
Information perturbation Late design deliverables, unclear scope/design,
limited access to required data
Resource perturbation Counterfeit/defective materials, equipment breakdown, late delivery of material

Network and node metrics: A large number of metrics can be used to assess the
node and network features and properties in meta-networks (Carley and Reminga 2004).
The use of appropriate metrics depends on the objective of the analysis. For example, for
analysis of performance, a significant metric is task completion. The value of the task
completion metric ranges from 0 to 1. It measures the percentage of tasks that can be
completed by the agents assigned to them, based on whether those agents have the requisite
information and resources to do the tasks (Carley and Reminga 2004). Equations (1)-(3)
show the knowledge-based task completion calculation steps when all the individual
networks are represented as binary matrices. Resource-based task completion is
calculated in a similar way, by replacing matrix AI with AR and matrix IT with RT.
The overall task completion is then obtained as the average of knowledge-based task
completion and resource-based task completion.
Need = [(AT × AI) − IT]                                                   (1)
S = {i | 1 ≤ i ≤ |T|, ∃ j: Need(i, j) < 0}                                (2)
Knowledge-based task completion = (|T| − |S|) / |T|                       (3)
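
For illustration, the sketch below shows one way the knowledge-based task completion of Eqs. (1)-(3) could be computed when the AT, AI, and IT networks are stored as binary NumPy matrices. It is a minimal sketch, not the ORA implementation; the matrix orientations (agents x tasks, agents x information, information x tasks), the function name, and the small example network are assumptions made here for demonstration.

```python
import numpy as np

def knowledge_based_task_completion(AT, AI, IT):
    """Fraction of tasks whose assigned agents jointly hold the required
    information, following Eqs. (1)-(3). Assumed orientations: AT is
    agents x tasks, AI is agents x information, IT is information x tasks."""
    # Information available to each task through its assigned agents, minus
    # the information the task requires (negative entries = unmet needs).
    need = AT.T @ AI - IT.T              # tasks x information
    unmet = np.any(need < 0, axis=1)     # tasks with at least one unmet need
    return 1.0 - unmet.sum() / AT.shape[1]

# Illustrative meta-network slice: 2 agents, 3 information items, 2 tasks.
AT = np.array([[1, 0],                   # agent 0 assigned to task 0
               [0, 1]])                  # agent 1 assigned to task 1
AI = np.array([[1, 1, 0],                # agent 0 knows information 0 and 1
               [0, 0, 1]])               # agent 1 knows information 2
IT = np.array([[1, 0],                   # task 0 needs information 0
               [0, 0],
               [0, 1]])                  # task 1 needs information 2
print(knowledge_based_task_completion(AT, AI, IT))   # -> 1.0
```

Resource-based task completion would follow the same pattern with AR and RT, and the overall metric is the average of the two, as described above.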

Dynamic network simulation: The meta-network of a construction project organization is dynamic. Uncertain events could occur in a construction project, disrupt the original topology of the network, and affect the project performance. The vulnerability of a project organization to uncertainty can be evaluated based on the changes in the task completion metric under different scenarios. A robust project organization is one that is less vulnerable to uncertainty-induced perturbations.
This component of the proposed framework enables assessing the impacts of different
levels of uncertainty from various sources on project performance using simulation.
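
As a hedged illustration of this idea, the sketch below applies a single agent perturbation (e.g., staff turnover) by zeroing the agent's links and reads the vulnerability as the drop in task completion. It reuses the knowledge_based_task_completion function from the previous sketch; the zeroing approach and function names are assumptions for demonstration, not the authors' simulation procedure.

```python
def perturb_agent(AT, AI, agent_idx):
    """Model an agent perturbation (e.g., staff turnover) by removing the
    agent's assignment and knowledge links; returns copies of AT and AI."""
    AT_p, AI_p = AT.copy(), AI.copy()
    AT_p[agent_idx, :] = 0
    AI_p[agent_idx, :] = 0
    return AT_p, AI_p

def vulnerability(AT, AI, IT, agent_idx):
    """Drop in knowledge-based task completion caused by the perturbation
    (relative versions would divide by the baseline value)."""
    base = knowledge_based_task_completion(AT, AI, IT)
    AT_p, AI_p = perturb_agent(AT, AI, agent_idx)
    return base - knowledge_based_task_completion(AT_p, AI_p, IT)

# With the matrices from the previous sketch, losing agent 1 leaves task 1
# without the information it needs: vulnerability(AT, AI, IT, 1) -> 0.5
```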

NUMERICAL EXAMPLE
A numerical example of a tunneling project is used here to illustrate the
application of the proposed framework. In this example, the New Austrian Tunneling
Method (NATM) is adopted. The major significance of this approach is its dynamic design
based on rock classification as well as deformation observed during construction. By
adjusting the initial design during construction, a tunnel can be constructed at a

reasonable cost compared with the conventional method, which uses the suspected
worst rock condition for design. To achieve this purpose, in NATM, rock samples are
first collected by geologists during the early stage of the design process. Rock condition
is tested and analyzed in the laboratory, and the classification of rock mass type is
determined by comparing the test result with the rock quality designation index. Using
the rock type data, the designer conducts the initial design. According to the initial
design, the excavation crew excavates the tunnel face, followed by loading
explosives and blasting. The safety supervisor does the safety inspection and issues the
safety approval before the blasting. After the excavation crew finishes the excavation
process, the support installation crew continues the work with the initial lining, which
includes applying shotcrete and installing the initial support (e.g., rockbolts, lattice
girders, or wire mesh). Once the initial support elements are installed, measurement
instruments are installed to observe the behavior of the rock. Geologists undertake the
observation and report the rock deformation to the designer. The designer then decides
whether a design change is needed. If not, a final lining process composed of traditional
reinforced concrete is conducted; otherwise, the designer revises the initial design for
both initial lining and final lining. In this case, the support installation crew will use
the revised design to implement the initial and final lining. The whole tunneling project
is constructed in sections. At the end of each section, the risk manager reviews the
initial design, revised design, as well as the rock deformation reports to assess the risks
and makes a decision on the step length for excavation of the next section. For example,
if a relatively large deformation is observed, the risk manager will decrease the step
length to reduce the risk of collapse. Using the proposed framework, the complex
interactions in this project can be conceptualized as a meta-network. Table 3
summarizes the node  entities  in  the  project  organization’s  meta-network.
Table 3. Basic Entities in the Tunneling Project Case.
Agent: geologists, designer, excavation crew, support installation crew, risk manager, safety supervisor
Information: rock condition, rock quality designation index, safety approval, initial design, rock deformation, revised design, step length
Resource: excavator, explosive, loader, truck, boomer, shotcrete machine, initial support, concrete, reinforcement, measurement instrument, electric power system
Task: laboratory test, initial design, excavation, safety inspection, blasting, mucking, apply shotcrete, install initial support, observe deformation, revise design, final lining, risk assessment

Dynamic network analysis and simulation model


To analyze the meta-network of the project in the case study, ORA-NetScenes
3.0.9.9 was used. The links in the network were abstracted based on the
interdependencies between the node entities. For example, agent-resource links exist
between excavation crew and excavator, explosive, loader, truck, and electric power
system because the crew needs these resources to complete the excavation task. Figure
1 shows the network configuration including all the nodes and links in the project.


This project is exposed to various uncertain events that will cause perturbations
in the node entities, which would eventually affect the performance of the project.
Different scenarios of uncertain events were simulated to evaluate the vulnerability of
the project organization. Events were defined based on two features: perturbation effect
(Table 2) and likelihood of occurrence. In the simulation scenarios for this case study,
three levels of uncertainty were defined: low, medium, and high. Events of the three
levels of uncertainty have 10%, 20%, and 50% likelihood of occurrence, respectively.

Figure 1. Network configuration of the tunneling project case (node types: agent, information, resource, and task).

Results
First, the meta-network (i.e., without perturbation) was analyzed and the key
node entities were identified. Then, the impacts of random perturbation on the
performance of the project were investigated.
Analysis of the meta-network: The  project’s  meta-network is composed of 36
nodes (of four different types) and 112 links (of ten different types). The density of the
meta-network as a whole is 0.178, which implies a relatively low level of
connectedness. The overall task completion is 1, which means under no perturbations,
the project organization is capable of completing all the tasks. The top five critical
agent, information, and resource nodes are identified and shown in Figure 2. The
criticality of the nodes is evaluated by their total degree centrality values, which are
determined based on the connections that each node has in the meta-network. The
designer and the excavation crew are the most critical agent nodes, since they undertake
multiple tasks using different resources and information in the project. The initial and
revised designs are two critical information nodes. This is because the design information
in this adaptive construction project is connected to and used by many other node entities
(e.g., designer, rock deformation, excavator, and initial support). The electric power
system is the most critical resource node. It is used by the geologists, excavation crew,
and support installation crew in different tasks such as excavation, safety inspection, and
rock deformation observation. Critical nodes
in the meta-network are crucial for the project success, and thus, any uncertainty-
induced perturbations in the critical nodes could lead to significant impacts on the
project performance. This information is important for developing mitigation plans to
eliminate or reduce the likelihood of perturbations in the critical nodes.
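
The sketch below indicates, under stated assumptions, how the two measures used in this analysis (network density and total degree centrality) could be computed from a symmetric adjacency matrix of the meta-network; the small matrix is illustrative and does not reproduce the 36-node, 112-link project network reported above.

```python
import numpy as np

def density(adj):
    """Density of an undirected network given a symmetric 0/1 adjacency matrix
    with an empty diagonal: links present over links possible."""
    n = adj.shape[0]
    return adj.sum() / (n * (n - 1))

def total_degree_centrality(adj):
    """Total degree of each node normalised by the maximum possible degree;
    the highest-scoring nodes correspond to the 'critical' nodes in the text."""
    n = adj.shape[0]
    return adj.sum(axis=1) / (n - 1)

# Illustrative 4-node slice of a meta-network (symmetric, no self-links).
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]])
print(density(adj))                     # -> 0.667 (8 directed links of 12 possible)
print(total_degree_centrality(adj))     # node 2 has the highest centrality
```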


Figure 2. Critical nodes of the tunneling project meta-network.

Dynamic network simulation: The vulnerability of the project was measured based on the extent of the changes in the relevant topological measures (i.e., the task
completion measure) before and after a perturbation (Holme et al. 2002, Criado et al.
2005). The greater the changes in the topological measures due to node disruptions, the
greater the vulnerability of the network. Simulations were conducted to assess the
organizational vulnerability under different scenarios to generate useful information
for predictive performance assessment and proactive performance management for the
tunneling project.
First, different uncertain events and their corresponding impacts were simulated
individually. For example, staff turnover may occur if the designer quits the project.
This causes a perturbation in the designer agent node. Similarly, defective concrete
means a perturbation in the concrete resource node. A delay in the delivery of the revised
design documents indicates a perturbation in the design information node. For all the
possible uncertain events in the tunneling project, different scenarios were simulated
to evaluate the impacts on project performance. Figure 3(a) lists the ten most significant
uncertain events in this project. For example, failure of the electric power system is a
significant uncertain event, which could lead to the most severe performance loss in
the project. When it occurs, the task completion decreases from 1 to 0.667, which
indicates that 66.7% of the tasks can be completed. A 33.3% decrease in performance
shows that the project is vulnerable to the event of electric system failure. To reduce
the negative impacts of uncertain events, proactive measures can be taken against the
most significant uncertain events.
The impact of uncertain events could be even more significant if they occur
simultaneously. For example, one of the simulated scenarios (shown in Figure 3(b)) is
related to the occurrence of a hurricane. A hurricane may affect the project in several
aspects. It has a ripple effect on the project by causing perturbations in multiple nodes
(e.g., delaying the delivery of material, failure of the electric system, and lack of access
to the jobsite for the key personnel). The simulation of this specific scenario composed
of multiple events and perturbations shows that, under this scenario, the project
performance decreases by 50%. This kind of scenario analysis and results can be used
for quantitative assessment, prediction and preparation for simultaneous uncertain
events. It enables evaluating the vulnerability of project organizations to different
uncertain events collectively based on their impacts on project performance.


The proposed framework can also be used to conduct scenario analysis to investigate the impacts of the likelihood of the events on project performance. For
example, in the tunneling project, the likelihood of events that cause perturbation in
agent nodes is high (e.g., 50%), while the likelihoods of events causing information
and resource node perturbations are medium (e.g., 20%) and low (10%), respectively.
Based on the likelihood of the uncertain events, the probabilistic impacts of these events
on project performance could be evaluated using Monte Carlo simulation. As shown in
Figure 3(c), the results show that the mean value of performance loss due to uncertain
events with certain likelihoods in this tunneling project is 39%. This indicates the
overall vulnerability of the project to various uncertain events with different likelihoods.
The results could be used in developing contingency plans for the projects consistent
with their vulnerability to uncertain events.
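
A minimal Monte Carlo sketch of this kind of analysis is given below. The event likelihoods follow the example values in the text (50% agent, 20% information, 10% resource); the per-event performance losses are placeholders borrowed from the averages reported later for Figure 3(d), whereas the actual framework recomputes the task completion metric for each sampled scenario, so this sketch does not reproduce the reported 39% mean loss.

```python
import random

# Illustrative event catalogue: (node type, likelihood, performance loss if it occurs).
EVENTS = [
    ("agent",       0.50, 0.19),   # e.g., staff turnover
    ("information", 0.20, 0.09),   # e.g., late design deliverables
    ("resource",    0.10, 0.13),   # e.g., equipment breakdown
]

def simulate_performance_loss(events, runs=10000, seed=42):
    """Monte Carlo estimate of the mean performance loss when each uncertain
    event occurs independently with its stated likelihood."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        loss = sum(impact for _, p, impact in events if rng.random() < p)
        total += min(loss, 1.0)     # performance loss cannot exceed 100%
    return total / runs

print(simulate_performance_loss(EVENTS))
```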
The final set of results (Figure 3(d)) shows a comparison between the impacts of
three different types of uncertainty-induced perturbations (i.e., agent perturbation,
information perturbation, and resource perturbation). For each type of perturbation, 20
runs of Monte Carlo experiments were implemented. As shown in Figure 3(d), the
project is most vulnerable to perturbations in the agent node entities. On average,
uncertain events that lead to perturbations in agent node entities could reduce the
performance of the project by 19%. The average performance loss caused by
perturbations in the resource nodes is 13%, while the average performance loss due to
perturbations in the information nodes is 9%. This information could be used in
prioritizing mitigation actions to optimize the allocation of limited resources for
reducing the impacts of uncertain events on project performance.

Figure 3. Simulation of project performance under uncertain events.


CONCLUSION
In this paper, a dynamic network analysis and simulation framework is
proposed. The proposed framework is based on a multi-node, multi-link meta-network
structure that facilitates considering different types of entities and links in construction
projects. The proposed framework provides a novel approach for quantitative and
probabilistic assessment of project performance under uncertainty based on
consideration of the dynamic interactions between agents, information and resources.
The results of the analysis enable identification of: (i) critical node entities; (ii)
significant uncertain events; and (iii) the vulnerability of project performance to
different uncertain events individually and collectively. This information would be
critical in identification of effective mitigation actions and prioritizing them to optimize
the allocation of resources for reducing the impacts of uncertain events on project
performance under uncertainty. In future work, project organization’s   speed   and  
capability of return to the new steady state following perturbations will be considered.
This would ultimately foster a paradigm shift toward proactive assessment and
management of project performance using an integrated approach.

REFERENCES

Carley, K. M. (2003). "Dynamic network analysis." Dynamic Social Network Modeling and Analysis: Workshop Summary and Papers, R. Breiger, K. M.
Carley, and P. Pattison, eds., National Research Council, Washington, DC.,
133-145.
Carley, K. M., and Reminga, J. (2004). "ORA: Organization risk analyzer." CASOS
Technical Report CMU-ISRI-04-106, School of Computer Science, Carnegie
Mellon University.
Chinowsky, P., Diekmann, J., and Galotti, V. (2008). "Social network model of construction."
Journal of Construction Engineering and Management, 134(10), 804-812.
Criado, R., Flores, J., Hernández-Bermejo, B., Pello, J., and Romance, M. (2005).
“Effective measurement of network vulnerability under random and intentional
attacks.” Journal of Mathematical Modelling and Algorithms, 4(3), 307-316.
Holme, P., Kim, B. J., Yoon, C. N., and Han, S. K. (2002). "Attack vulnerability of
complex networks." Physical Review E, 65(5), 056109.
Jin, Y., and Levitt, R. E. (1996). "The Virtual Design Team: A computational model
of project organizations." Computational & Mathematical Organization Theory,
2(3), 171-196.
Loosemore, M. (1998). “Social network analysis: using a quantitative tool within an
interpretative context to explore the management of construction crises.”
Engineering, Construction and Architectural Management, 5(4), 315-326.
Pryke,   S.   D.   (2004).   “Analyzing construction project coalitions: Exploring the
application   of   social   network   analysis.” Construction Management and
Economics, 22(8), 787-797.
Zhu, J., and Mostafavi,   A.   (2014).   “System-of-systems modeling of performance in
complex construction projects: A multi-method simulation   paradigm.”
International Conference on Computing in Civil and Building Engineering
(ICCCBE), June 23-25, 2014, Orlando, FL: ASCE.


Information Representation Schema for Tower Crane Planning in Building Construction Project

Yuanshen Ji1 and Fernanda Leite2


1 PhD Student, Construction Engineering and Project Management Program,
Department of Civil, Architectural and Environmental Engineering, The University of
Texas at Austin, 1 University Station C1752, Austin, TX, 78712; email:
[email protected]
2 Assistant Professor, Construction Engineering and Project Management Program,
Department of Civil, Architectural and Environmental Engineering, The University of
Texas at Austin, 301 E Dean Keeton St. Stop C1752, Austin, TX 78712-1094; PH
(512) 471-5957; FAX: (512) 471-3191; email: [email protected]

ABSTRACT

Given the uniqueness and diversity of constraints from each construction site,
tower crane planning can be a challenging task, especially in terms of collecting and
presenting multiple sets of information with varying levels of detail. Several research
studies have been carried out to investigate information categories necessary for
tower crane planning. This paper investigates challenges related to information
representation of tower crane planning in a computational environment. Also, a site-
specific information representation schema for tower crane planning is presented.
Requirements and information resources for each category of information are
specified with the objective of visualizing and automating tower crane planning. This
information representation schema was tested on a building construction project and
is subject to continuous refinement based on additional case studies of building
construction projects that have different characteristics and constraints. Uniqueness
from individual projects is expected; however, critical constraints and characteristics
are commonly shared by the entire building construction field.

INTRODUCTION

For construction site layout planning activities, among the many categories and
sources of information involved (Zolfagharian and Irizarry 2014), there are
three critical components: temporary facility layout planning (Elbeltagi et al. 2004,
Razavialavi et al. 2014), material handling (Pheng and Hui 1999), and construction
equipment planning, where optimization and automation are highly desired (Edwards
and Holt 2009). Effective site layout planning has been recognized as an important factor
in improving project safety and performance (Hornaday et al. 1993). Also, a tower crane is
typically one of the most expensive pieces of equipment on a construction site and is central to
the majority of structure-related material handling (e.g., equipment, tools, and material
transportation) as well. Therefore, tower crane planning is key to controlling the pace
of construction activities (Gray and Little, 1985).
The state-of-practice approach of tower crane planning is highly ad hoc and
heavily relies on   planners’ “cognitive capabilities” (Tommelein et al 1991) and
professional experiences. Generally speaking, tower crane planning is about defining
and refining both the configuration and location of tower cranes for different
construction phases and for each work zone. Due to the dynamic nature and
considerable amount of unique constraints for each construction site, tower crane
planning requires a variety of information to derive an acceptable plan. Typically, the
preliminary tower crane planning requires input from the owner organization and
engineering consultants, if applicable (Hornaday et al. 1993), and the major
responsibility rests with the general contractor (specifically, the project manager and
superintendents).
For building construction projects, tower crane planning goes "hand-in-hand"
with other aspects of site layout planning to achieve an optimized outcome with respect
to cost, schedule, and safety performance. In other words, tower crane planning
requires much iteration and interaction with the layout planning of other jobsite facilities
(e.g., size, location). There are many criteria by which a tower crane plan can fail,
either partially due to safety concerns or as a whole against the three criteria
mentioned above.
Tower crane planning lacks a commonly agreed definition of the desired optimum.
One reason is that each construction site has its own challenges that are hard to
generalize. The second reason is that there are many uncertainties during the course
of the whole construction process, for example, material delivery delays, severe
weather conditions, and human resource issues. Any of these factors can undermine
the project in terms of cost, schedule, and safety performance. Hence, evaluating the
success of tower crane planning is difficult to achieve in practice.
Despite the dynamic and uncertain nature of construction projects, lift demand
analysis, which is the fundamental input for tower crane planning, should be
investigated and developed. A well-structured lift demand analysis can relieve planners
of tedious and error-prone quantity takeoff and help them focus
on "means and methods" and many other constraints.
However, current research studies have not explicitly explored a framework for
synthesizing the information necessary for efficient building information modeling (BIM)
implementation (e.g., 3D modeling, 4D simulation, and tower crane related constraint
detection). Information resources, the level of detail for each information category, and
modeling methods have not been thoroughly understood.

BACKGROUND RESEARCH

During the last few decades, there have been many research studies focused
on developing mathematical models to improve the decision-making tools for tower
crane planning. For example, Rodriguez-Ramos and Francis (1983) developed an
algorithm that captures trolley movement and boom swing activities to model material
transportation. The transportation-associated cost is calculated
for single tower crane location optimization. This means that tower crane location can
be optimized according to quantitative metrics. Afterwards, many research studies

were conducted to solve this problem by implementing different quantitative measurement approaches. Zhang et al. (1996) applied artificial neural networks
(ANN) to model tower crane operation activities and genetic algorithms (GA) to
optimize the tower crane location on a commercial building construction project. Tam
and Tong (2003) used a similar approach for tower crane optimization on a public housing
construction project. They concluded that the GA-ANN model is efficient for a
relatively small number of tower crane location candidates (e.g., 12 tower crane
candidate locations and 9 possible locations for material supply and unload locations
were selected). Quantitative measurement for crane type selection has been studied as
well; examples include a fuzzy logic approach (Hanna and Lotfallah 1999) and a
procedural algorithm that queries databases according to user input through a
specifically developed graphical user interface (Al-Hussein et al. 2001). The latter system
was specifically developed for mobile crane type selection.
With the capability of including domain knowledge and experience, and of
processing uncertain events, expert systems drew researchers' attention for
approaching the tower crane planning and optimization problem. Warszawski (1990)
proposed LOCRANE, and Gray and Little (1985) proposed CRANES. However, both
of the proposed expert systems had limitations, such as being incapable of
combining other transportation methods (e.g., hoists, carts) with the tower crane for
optimization; not capturing the interdependency between tower crane usage and the
planning of other construction tasks; and "constraint relaxation" problems (Warszawski 1990).
There are also other expert systems aimed at construction planning, which
can potentially include tower crane planning, such as CONSTRUCTION PLANEX
by Hendrickson et al. (1986). An expert system is a rule-based system, which is hard to
implement practically in construction planning or tower crane planning activities
because: 1) domain knowledge and experts' experience are hard to organize into
rules that can practically solve real-world problems with a certain level of
complexity and uncertainty; and 2) current material handling and facility management
mostly resembles a "first-come-first-served" system (Zolfagharian and Irizarry 2014).
Therefore, an automatic site layout checking system was initiated to take advantage
of the computational capacities of modern computer systems to achieve rule checking,
which still needs to be further investigated (Zolfagharian and Irizarry 2014).
Besides the quantitative metrics and rule-based expert systems mentioned
above, there have been research efforts focused on implementing information
technologies to improve the efficiency of the crane planning process. Hornaday et al.
(1993) investigated heavy lift planning methods for industrial projects and
implemented the results on a 3D crane heavy lift planning system. Results show that
geometry information visualization dramatically improves the productivity of the
iterative process of crane heavy lift planning. Also, new applications of building
information modeling (BIM) and geographical information system (GIS) have been
investigated to assist crane related planning activities. For example, Heikkilä et al.
(2013) used color-coding in a BIM model to visualize building components’ weight.
Mobile crane oriented research studies have implemented BIM and fine-tuned
algorithms for automatic lift path checking and design visualization (Lei et al. 2013).
Irizarry and Karan (2012) implemented GIS to expedite massive spatial data
integration for tower crane location selection with respect to pre-defined rules.

Afterwards, BIM was applied to visualize the results for review. Meanwhile, interoperability issues between GIS and BIM were recognized for future improvements (Irizarry and Karan 2012).

RESEARCH APPROACH

This pilot research study intends to develop an information representation schema for site-specific tower crane planning that enables 4D simulation and tower
crane related constraint detection. Such an information schema should guide planners to
establish a site-specific model with sufficient, and only relevant, details to support
decision making in the tower crane planning process. This can be achieved by looking
into the status quo of the tower crane planning process for building construction projects.
The ultimate goal of such an information representation schema includes facilitating
tower crane planning automation and optimization for any type of project.
This exploratory study contains three sequential steps: 1) modeling the status quo
tower crane planning process; 2) identifying engineering challenges for the
representation of tower crane planning in a computational environment; and 3)
formalizing a comprehensive information representation schema. For demonstration
purposes, a proof-of-concept case study was developed following this series of
research steps.

CASE STUDY

The developed case study is a 430,000 sq. ft. teaching and research facility on
a university campus with a total project cost of $310 million. This project includes
two major phases: existing facilities demolition and new building construction (at the
same location). The site is complex and congested. Access to the jobsite is
limited in all directions for logistics, especially for tractor-trailer trucks. To
accommodate these site-specific constraints and structural configurations, the
superintendent decided to use two stand-alone hammerhead tower cranes from the
general contractor's in-house inventory. Due to availability issues, the two tower
cranes are of different types.

Tower Crane Planning Process Modeling


The tower crane planning process has three phases: 1) initial tower crane planning
development; 2) detailed tower crane planning development; and 3) day-to-day tower
crane meetings. This case study was carried out at the end of the detailed tower crane
planning phase, which is six months ahead of the tower crane installation, according
to the project schedule. Each phase has its own input information, expected output,
and design criteria.
Initial tower crane planning requires information from the initial site layout,
specifications of possible tower cranes from inventory and selected critical lifts. The
expected output for this phase is preliminary tower crane type and location selection,
which has to coordinate with preliminary material laydown/storage, critical lift plan,
preliminary hoist location, other facility locations, and tower crane capacity. Figure 1
is a flowchart of the process. During the initial tower crane planning, iterations are

employed to make sure that the ranges of both tower cranes cover all lift pickup and
drop down locations, and to avoid lifted loads swinging over major work zones to
reduce safety risks.

Figure 1. Initial Tower Crane Planning Process Modeling

Detailed tower crane planning focuses on tower crane location and configuration
refinement with respect to spatial conflict avoidance, tower crane structural and
foundation design, the tower crane service plan (e.g., erection, dismantling, alteration, and
maintenance), and constructability. Figure 2 is a flowchart of the detailed planning
process. When this planning step is finished, tower crane planning is complete in
terms of location selection and configuration refinement. Further detailed sequencing
and scheduling of tower crane activities are determined in day-to-day tower crane
meetings.
Day-to-day tower crane meetings are held one day ahead of planned tower
crane-related activities to address specific tower crane activity sequencing, hook hour
assignment, and safety protocol compliance.

Engineering Challenges to Representing Tower Crane Planning


As shown in the process mentioned above, tower crane planning is highly
iterative, and is typically accomplished manually. There are few ready-to-use tools or
technologies that can assist planners or partially relieve their tedious and error-prone
workload in order to efficiently execute spatial constraint detection over the
tower crane usage life cycle. Constructability review regarding tower crane activities
is also poorly supported by visualization and relies heavily on planners' professional
experience. All of the above could impose more schedule interruptions and
safety issues on the project while the construction plan is being executed.


Figure 2. Detailed Tower Crane Planning Process Modeling

Also, critical lift information, as one of the tower crane planning inputs, is
prepared by reviewing design documents and integrating previous experience and
engineering judgment. The comprehensiveness of such information depends on the
engineers' own judgment. This process varies in approach and has not been formalized for automation.
Lastly, the tower crane schedule cannot be updated automatically according to
project schedule changes. Tower crane operation hours and related costs can hardly
be updated automatically when a project schedule change happens. Construction
engineers and estimators have to work together, on a case-by-case basis, to understand
the time and economic consequences that any project schedule change casts over tower
crane usage, and vice versa.

Information Representation Schema


To address the three major engineering challenges and to represent tower
crane planning in a computational environment, an information representation schema
was devised. Specifically, in order to assist the tower crane spatial constraint detection
process, several information resources, such as the site layout, structural model,
mechanical, electrical and plumbing (MEP) model, tower crane model, surrounding
as-built model, and local facility model, were integrated to reflect the possible spatial
constraints. Figure 3 illustrates the proposed information representation schema.
Spatial constraints are of two major types: hard and soft. Hard constraints refer to
situations where parts of the tower crane physically clash with other objects,
while soft constraints involve tower crane components interfering with facility,
equipment, and personnel clearance space.


Figure 3. Information Representation Schema for Tower Crane Planning
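
To make the hard/soft distinction concrete, the sketch below classifies a spatial relation between a tower crane component and another modelled object as a hard clash, a soft clearance-zone intrusion, or no conflict. The bounding-box representation, the 3 m clearance value, and all names are illustrative assumptions made here; an actual implementation would operate on the integrated BIM geometry rather than simple boxes.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box (in metres) standing in for a modelled object."""
    name: str
    xmin: float
    ymin: float
    zmin: float
    xmax: float
    ymax: float
    zmax: float

    def overlaps(self, other, buffer=0.0):
        # True when the two boxes intersect after growing this box by `buffer`.
        return (self.xmin - buffer < other.xmax and self.xmax + buffer > other.xmin and
                self.ymin - buffer < other.ymax and self.ymax + buffer > other.ymin and
                self.zmin - buffer < other.zmax and self.zmax + buffer > other.zmin)

def classify_constraint(crane_part, obstacle, clearance=3.0):
    """Return 'hard' for a physical clash, 'soft' for a clearance-zone
    intrusion, or None when the crane part is clear of the obstacle."""
    if crane_part.overlaps(obstacle):
        return "hard"
    if crane_part.overlaps(obstacle, buffer=clearance):
        return "soft"
    return None

jib = Box("tower crane jib", 0, 0, 45, 60, 2, 47)
hoist = Box("personnel hoist", 20, 3, 0, 24, 7, 50)
print(classify_constraint(jib, hoist))   # 'soft': within the 3 m clearance zone
```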

A comprehensive 4D lift demand analysis, integrating information from the tower
crane specific work breakdown structure and the project schedule, is also included in the
information schema. The 4D lift demand analysis is critical in terms of connecting the
project schedule with the tower crane schedule. In order to develop a tower crane plan
that is accountable over the entire period of usage, miscellaneous tower crane information
should be considered, including range, capacity, erection, dismantling, height alteration,
and maintenance and inspection. This information can also have physical and schedule
conflicts with other jobsite activities.
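
A possible way to capture such schema entries as structured records is sketched below: each 4D lift demand item links a crane-specific work breakdown structure entry to schedule dates, and a crane record holds the miscellaneous crane information listed above. All field names and values are illustrative assumptions rather than a formal schema definition from the paper.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LiftDemand:
    """One 4D lift-demand entry: what must be lifted, how heavy, where, and when
    (dates come from the project schedule via the crane-specific WBS)."""
    wbs_item: str
    description: str
    weight_t: float
    pick_zone: str
    drop_zone: str
    start: date
    finish: date

@dataclass
class CraneRecord:
    """Miscellaneous tower crane information the schema calls for."""
    model: str
    radius_m: float
    capacity_at_tip_t: float
    erection: date
    dismantle: date
    height_alterations: int

demand = [LiftDemand("03-31-00", "precast column", 6.5, "laydown A", "zone 2",
                     date(2015, 8, 3), date(2015, 8, 14))]
crane = CraneRecord("hammerhead TC-1", 55.0, 3.2, date(2015, 7, 1), date(2016, 9, 1), 2)

# A simple schema-level check: flag lifts that exceed the crane's tip capacity.
overloads = [d for d in demand if d.weight_t > crane.capacity_at_tip_t]
print([d.description for d in overloads])   # ['precast column']
```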

CONCLUSIONS AND FUTURE WORK

This paper investigated engineering challenges related to the information representation for tower crane planning in a computational environment through a
building construction project case study. A proposed information representation
schema was presented. Requirements and information resources for the information
schema were specified with the objective of automating the process of tower crane
planning. Future work will focus on testing and refining this information
representation schema based on additional building construction project cases that
have different characteristics and constraints. Uniqueness from various project
conditions is expected; however, critical constraints and characteristics are commonly
shared by the entire building construction field.

REFERENCES

Al-Hussein, M., Alkass,   S.,   Moselhi,   O.   (2001).   “An   Algorithm   for   Mobile   Crane  
Selection and Location on Construction Sites.”  Construction Innovation, 1(2),
91-105.


Edwards, D. J., and Holt, G. D. (2009). "Construction Plant and Equipment Management
Research: Thematic Review." Journal of Engineering, Design and Technology,
7(2), 186-206.
Elbeltagi, E., Hegazy, T.,   and   Eldosouky,   A.   (2004).   “Dynamic Layout of
Construction Temporary Facilities Considering Safety.”   J. Constr. Eng.
Manage., 130(4), 534–541.
Gray, C., Little, J. (1985).  “A  Systematic Approach to the Selection of an Appropriate
Crane for a Construction Site.”  Constr. Manage. Economics, 3(2), 121-144.
Hanna, A. S., & Lotfallah, W. B. (1999). “A Fuzzy Logic Approach to the Selection
of Cranes.” Automation in Construction, 8(5), 597-608.
Heikkilä, R., Malaska, M., Törmänen, P., and Keyack, C. (2013). "Integration of BIM
and Automation in High-Rise Building Construction.”  Proceedings of the 30th
ISARC, Montréal, Canada, 1171-1176.
Hendrickson, C., and Carnegie Mellon University Engineering Design Research
Center (1986). "An Expert System Architecture for Construction Planning."
Department of Civil and Environmental Engineering, Paper 34.
Hornaday, W., Haas, C., O'Connor, J., and Wen, J. (1993). "Computer-Aided
Planning for Heavy Lifts." J. Constr. Eng. Manage., 119(3), 498-515.
Irizarry, J., and Karan, E. P. (2012). "Optimizing Location of Tower Cranes on
Construction Sites through GIS and BIM Integration." ITcon, 17, 351-366.
Lei, Z., Taghaddos, H., Hermann, U., Al-Hussein, M. (2013). “A methodology for
mobile crane lift path checking in heavy industrial projects.” Automation in
Construction, 31, 41-53.
Pheng, L.   S.,   and   Hui,   M.   S.   (1999).   “The application of JIT Philosophy to
Construction: A Case Study in Site Layout.” Construction Management &
Economics, 17(5), 657-668.
Razavialavi, S., Abourizk, S., and Alanjari, P. (2014). “Estimating the Size of
Temporary Facilities in Construction Site Layout Planning Using Simulation.”
Proceedings of the Construction Research Congress, ASCE, Atlanta, GA, 70-
79.
Rodriguez-Ramos, W., and Francis, R. (1983). "Single Crane Location Optimization."
J. Constr. Eng. Manage., 109(4), 387-397.
Tam, C. M., Tong, T. K. L. (2003). “GA-ANN Model for Optimizing the Locations
of Tower Crane and Supply Points for High-Rise Public Housing
Construction.”  Construction Management and Economics, 21(3), 257-266.
Tommelein, I., Levitt, R., Hayes-Roth, B., and Confrey, T. (1991). "SightPlan
Experiments: Alternate Strategies for Site Layout Design." J. Comput. Civ.
Eng., 5(1), 42-63.
Warszawski, A. (1990).   “Expert   System   for   Crane   Selection.”   Constr. Mgmt. and
Economics, 8(2), 179-190.
Zhang, P., Harris, F. C., and Olomolaiye, P. O. (1996). "A Computer-Based Model for
Optimizing the Location of a Single Tower Crane." Building Research &
Information, 24(2), 113-123.
Zolfagharian, S. and Irizarry, J. (2014) “Current Trends in Construction Site Layout
Planning.” Proceedings of the Construction Research Congress, ASCE,
Atlanta, GA, 1723-1732.


An eeBIM-Based Platform Integrating Carbon Cost Evaluation for Sustainable Building Design

Qiuchen Lu1 and Sanghoon Lee1


1 Department of Civil Engineering, University of Hong Kong, Hong Kong.
E-mail: [email protected]; [email protected]

Abstract
In order to meet the CO2 emission requirements of sustainable construction, both
energy performance and 'carbon' accounting should be taken into account in the
construction information. This study proposes an energy-enhanced BIM framework that
integrates the carbon finance theory. With the integrated cost evaluation according to the
carbon finance theory, the proposed eeBIM-based platform can help achieve lower
carbon emission cost while enhancing the energy efficiency and further estimate the
carbon consumption of the building design during its construction. The project team thus
will be able to evaluate the overall sustainability and develop an optimized building
design solution at the design stage. In particular, this paper discusses the analysis of
existing sustainable building design systems through an intensive literature review, identifies
key challenges and gaps that remain unaddressed, and introduces the model proposed to
address these challenges and gaps.
Keywords: eeBIM; Carbon finance; Cost evaluation; Energy efficiency; Sustainable
construction

INTRODUCTION
Based on the report published by the Intergovernmental Panel on Climate Change
(IPCC), some proposed policies would focus on emissions targets as well as temperature
change: for example, controlling the global average temperature change to 2 °C above
preindustrial levels (IPCC 2014) and reducing greenhouse gas (GHG) emissions to 80%
below 2005 levels by 2050 (White House 2013). Among all the activities related to
carbon emissions and energy concerns, construction has played a significant role. In the
United States, residential and commercial buildings consume about 40% of the country's
primary energy and produce 20% of the national carbon dioxide budget (Stadel et al.
2011). Besides, in Hong Kong, with the development of the 'ten major infrastructure projects'
in the Policy Address 2007/08, the fuel-based carbon emissions during the construction
period led to wide concern about the negative impacts of these projects on the
environment as well as on energy consumption (Wong et al. 2013). These facts show that
both energy performance and 'carbon' accounting should be taken into consideration for
sustainable construction.
With the development of computational and monitoring technology, computers
and many aid tools have been used in the architecture, engineering and construction
(AEC) industry. Building Information Model (BIM) represents the process of
development and use of a computer generated model to simulate the planning, design,

construction and operation of a facility (Azhar et al. 2011). It can also act as a digital
database with comprehensive building information, work simulation and schedule control
properties. Thus, BIM offers an effective and creative route to sustainability. Mah et al.
(2011) aim to establish a baseline for carbon dioxide (CO2) emissions quantification
based on a BIM platform in the current residential construction process. Chen et al. (2013)
combine BIM with energy simulation tools to confirm the lifecycle cost and limit the
carbon emission of a building during construction and operation stages. Since
construction industries in Ireland intend to achieve the requirements of high-energy
saving and ‘nearly zero’ standards, researchers plan to build an integrated BIM platform
to gain a low carbon energy future (McAuley et al. 2013). The McGraw-Hill Green BIM
Report (McGraw-Hill Construction 2010a) also indicated that BIM is a smart tool
that can assist in controlling and reducing carbon emissions through energy performance
analysis. Furthermore, there are several projects planning to propose new strategies based
on the integration of BIM and green building rating systems. For example, Wu et al.
(2010) tried to achieve green building certification (such as LEED) and Motawa et al.
(2013) improved post-occupancy evaluation processes using BIM to meet sustainable
construction.
Nowadays, the focus has mainly been on either improving energy efficiency or
reducing carbon-related cost through the adoption of BIM, but an integrated platform is
needed to meet these two key requirements for sustainable construction. To reach this
objective, this paper focuses on integrating energy simulation and cost management
methods so that the project team will be able to consistently evaluate the energy
performance, check whether the design satisfies the overall environmental requirements,
and thereby optimize lifecycle energy performance estimation and decision-making.

LITERATURE REVIEW
The eeBIM platform
The ‘Intelligent Use of Buildings’ Energy Information (IntUBE) project, funded
by the European Commission through the EU FP7, focused on the application of BIM for
sustainable design. In this project, Crosbie et al. (2009) mentioned that combining BIM
with energy simulation tools to perform energy simulation would cost
almost 50% of the project team's time. Considering this time and the lack of
interoperability for energy simulation, Gökçe et al. (2012) developed an
energy-enhanced BIM (eeBIM) platform. The eeBIM is an extended platform based on
BIM and it is built particularly aiming at enhancing energy efficiency. The eeBIM
integrates different information related to energy performance and creates a bridge model
between the BIM platform and energy simulation tools. It uses the industry foundation
classes (IFC), an international standard, in order to exchange information conveniently.
Further, it provides sensor systems, which could enhance and complete the energy-related
database during the operation period (ibid).
In detail, this eeBIM platform includes three steps: (1) Simplify the BIM model into
an energy-related BIM model. The data contained in BIM are too rich for energy
simulation and the needs of the energy domain are also different, so a dedicated energy
model should be produced for simulation; (2) Extend the original BIM
information library and link it with external data. Energy performance analysis
requires various necessary computations and information; (3) Build a linking
bridge model between a BIM platform and computational application models (energy
simulation model, energy monitoring model and cost model) and successfully transfer
data between them.
In general, the eeBIM platform aims at creating an integrated platform for
extending energy-related information, achieving interoperability with the energy
simulation tools, and providing a highly efficient working environment focused on the
energy field (Gökçe and Gökçe 2013).
The carbon finance theory
Carbon finance means giving a price to carbon. When pollution becomes more
expensive, sustainable construction will be considered as a way to decrease costs.
Carbon finance has been proven to be an efficient tool for controlling greenhouse gases.
Two approaches to carbon finance are commonly used: the market-based approach and
command & control regulation (Feldman et al. 2011). In the market-based approach,
there are two methods to reduce carbon emissions: (1) cap and trade, and (2)
baseline and credit. In the cap and trade scheme, a maximum emission allowance is
set under the market scheme; any unused allowance can be traded freely in the market,
but if a project's emissions exceed the cap, the additional allowance must be purchased
from the global carbon market. The European Emissions Trading Scheme (EU ETS) is
the largest carbon market based on this scheme. This market concentrates on the
development of the low-carbon economy around the whole world and intends to support
and encourage effective policies and programs for reducing emissions (Stern 2007).
The baseline and credit scheme requires project owners to keep their emissions
below the baseline. If extra credits remain, they can also be traded in the carbon market.
The Clean Development Mechanism (CDM) created by the Kyoto Protocol belongs to
this scheme. Furthermore, command & control regulation mainly relies on government
policies and suggestions to reduce carbon emissions connected with human activities
and operations (Bosi et al. 2010).
Possibilities and Advantages of Carbon Cost Evaluation based on the eeBIM Platform
in Sustainable Construction
Since BIM is an integrated digital platform for architects, engineers, contractors,
owners, and facility managers, it contains a wealth of building information. Thus,
BIM can provide an intelligent library of multi-dimensional objects with
comprehensive properties (e.g., quantity and specification details). Similarly, it is also a
promising platform for managing a low carbon energy future (McAuley et al.
2013). Based on related research, emissions can be reduced through increasing energy
efficiency, changing demand, and adopting clean power, heat, and low or zero
carbon technologies. Among all the aspects related to sustainable construction, improving
the energy performance is the most effective one. However, there still exists a variety of
limitations in the analysis of building energy performance using BIM. The primary
limitation is that most BIM applications still do not have clear processes to transfer
information between BIM and energy simulation programs (GSA 2009), which could be
enhanced by adopting the eeBIM platform. In addition, the carbon finance theory (i.e.,
giving a price on carbon) has proven to be the most efficient way of lowering carbon
emissions (Stern 2007).
A common global carbon price means that people should be responsible for and pay
for their actions and activities, so, in economic terms, this method leads people to take
low carbon technologies and improved energy efficiency into consideration (Bosi et al.
2010). In order for the project team to be able to come up with an energy efficient design
solution while satisfying the carbon cost constraint, carbon cost evaluation based on the
eeBIM platform is introduced in this paper.
Research Objectives
This study aims to develop a BIM platform that integrates the eeBIM framework
with the carbon finance requirements: the proposed eeBIM + Carbon Finance Theory.
Based on this integrated platform, an extended sustainable information library, an
extended carbon emission data library and an extended carbon finance information
library will be created and added to enrich design solutions and improve design
plans, yielding energy efficient design solutions that satisfy the carbon cost
constraint for achieving sustainable construction.

RESEARCH FRAMEWORK
Simulation workflow
To achieve the objective of this study, the simulation workflow of this research
framework is designed as shown in Fig. 1 and the integrated BIM platform with extended
libraries is described as shown in Fig. 2. This process includes six steps:
1) Create a BIM model. Create a building information model and check if the BIM model
is valid for simulations.
2) Create analysis models for energy simulation. Based on the BIM model, two
energy-enhanced models will be produced: one uses the traditional method and is set as
a baseline model for analyzing building energy performance; the other is based on
the proposed eeBIM + Carbon Finance Theory and is designed to support sustainable
construction. Since energy simulation models need information including construction
details, operating schedules, equipment loads (lighting, equipment, etc.), heating,
ventilating, and air-conditioning (HVAC) systems, local weather data, and utility rates,
these two models will be checked in this step to confirm that all the details have been completed.
3) Provide alternative design plans aiming at achieving goals of sustainable construction.
According to the extended sustainable building information library, data is provided
including impacts of site location, building shapes, envelope choices, building orientation,
alternative energy sources, low-carbon technologies and so on. When the project team
chooses useful information employing the proposed eeBIM + Carbon Finance Theory,
they should follow three rules: (a) clarify the primary goal of the project and establish
the decision objectives; (b) specify the types and level of detail of information that
are reasonable and affordable in current practice; and (c) based on this library, compare
the energy performance of the alternative plans.
4) Compare different preliminary models and confirm the design plan. There are two key
criteria for making a decision: (a) energy usage and cost; and (b) energy performance of the
design plan.


5) Conduct energy simulation and carbon cost evaluation. According to the extended
carbon emission data library and the extended carbon finance information library, energy
consumption and carbon cost are calculated. They are two significant decision-making
criteria for sustainable construction.
6) Compare and evaluate the proposed model. In the last step, check if the proposed
model performs much better than the baseline model, and confirm the model is
satisfactory.

Data collection and model building


The data collection for this simple case study is based on the studies conducted by
Stern (2007) and Chen and Li (2012). Referring to Table 1, the carbon cost, related carbon
emission, and estimated possible improvement for a baseline model and an improved
model have been listed and calculated. Some existing deployment incentives related to
carbon finance are also listed in Table 1. Energy-related carbon emissions (C) can be
calculated with the following equation (Zhang et al. 2013):
C = Σi (Ei × CEFi)                                                        (1)
where i stands for the different kinds of energy (i = 1, 2, …), Ei is the consumption of
energy type i, and CEFi represents the carbon-emission coefficient for energy type i. For
example, based on research in Taiwan, the average carbon emission can be calculated
from the actual electricity consumption as C = 0.768 × EC, where EC is the electricity
consumption in kWh and 0.768 kg-CO2/kWh is the emission coefficient (Chen and Li 2012).
Considering the HVAC systems in Table 1, the carbon emission of the baseline
model is 136198 (kg-CO2/type) according to Eq. (1). Further, according to the EU ETS,
an illustrative carbon price can be set at £70/tC (£1/tC = £0.273/tCO2) (Stern 2007).
So, depending on the size of the construction project, the total carbon cost of the HVAC
systems can be determined. The additional options provided for sustainable construction
are skylights and thermal insulation, which are zero-carbon technologies. The carbon
cost of the roof and window plans can be calculated in the same way. Following this
approach, decisions and design plans can be produced based on carbon-related information
coming from the three libraries.
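
As a hedged illustration of this evaluation, the sketch below applies Eq. (1) and then prices the resulting emissions. The 0.768 kg-CO2/kWh electricity coefficient follows Chen and Li (2012) and the £70/tC (about £19.1/tCO2) price follows the Stern (2007) example quoted above; the consumption quantities and the diesel coefficient are placeholder assumptions, not project data.

```python
# Energy consumption of a design alternative by energy type (illustrative values).
energy_use = {"electricity_kwh": 250000, "diesel_litre": 12000}

# Carbon-emission coefficients per energy type (kg CO2 per unit); the electricity
# value follows Chen and Li (2012), the diesel value is a placeholder assumption.
emission_factor = {"electricity_kwh": 0.768, "diesel_litre": 2.68}

def carbon_emission_kg(use, factors):
    """Eq. (1): C = sum over energy types of consumption times emission coefficient."""
    return sum(use[k] * factors[k] for k in use)

def carbon_cost_gbp(emission_kg, price_per_tC=70.0):
    """Carbon cost at an illustrative price of GBP 70/tC (= GBP 19.1/tCO2)."""
    price_per_tCO2 = price_per_tC * 0.273
    return emission_kg / 1000.0 * price_per_tCO2

c = carbon_emission_kg(energy_use, emission_factor)
print(round(c), "kg CO2 ->", round(carbon_cost_gbp(c), 2), "GBP")
```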

CONCLUSIONS
With the eeBIM platform integrating the carbon finance theory (eeBIM + Carbon
Finance Theory) developed in this study, both the CO2 emission requirements of sustainable
construction and the improvement of energy performance will be taken into account in
decision making and construction information. Firstly, this study adapted an eeBIM
platform, which can help build energy simulation models in a timely manner. Then, three
extended libraries have been proposed and connected with this energy-enhanced BIM
platform. They are the extended sustainable building information library, the extended
carbon emission data library, and the extended carbon finance information library. These

three libraries could provide information related to sustainable construction and carbon
cost aiming at achieving lower carbon emission while enhancing the energy efficiency.
Furthermore, the developed platform estimates the carbon consumption as well. The
proposed platform is able to evaluate the overall sustainability and help develop an
optimized building design solution at the design stage. In particular, this paper discussed
the analysis of existing sustainable building design systems through intensive literature
review, the research framework, and also provided a case study with the integrated cost
evaluation according to the carbon finance theory. Future work includes a series of
complete case studies, in which the effectiveness and efficiency of the proposed approach
are evaluated.

Table 1. Carbon-related Data about Improving the Sustainability Performance.

Extended building information data
  Building component alternatives
    Roof: R1 Roof (baseline), carbon cost (NTD/unit): 1760/sqm; R2 Green surface roof, carbon cost (NTD/unit): 1850/sqm
    Window: W1 Single-Reflective (baseline), carbon cost (NTD/unit): 4402/unit; W2 Single-Clear, carbon cost (NTD/unit): 3981/unit
  Equipment
    The HVAC systems: H1 The HVAC system (baseline), carbon emission: 136198 kg-CO2/type; H2 Include skylights and thermal insulation, (near) zero-carbon technology
  Material alternatives
    Glass types: G1 Low-E glass (baseline), carbon emission: 1.20 kg-CO2/kg; G2 Reflective glass, carbon emission: 0.96 kg-CO2/kg
Construction material sources
  Construction material options
    Concrete: M1 Concrete (baseline), carbon emission: 273.03 kg-CO2/m3; M2 Change fuels in concrete (using non-fossil fuels), estimated possible improvement: 10%-70%; M3 Improve efficiencies (wet to dry process for PC manufacture), estimated possible improvement: 10%-20%
    Steel: M4 Steel frame (90X90), carbon emission: 6.69 kg-CO2
Extended carbon finance information
  Existing deployment incentives: Fiscal incentives, including reduced taxes on biofuels in the UK; Feed-in tariffs, a fixed price support mechanism usually combined with a regulatory incentive to purchase output
  Deployment policy: The UK Energy Technologies Institute intends to achieve goals including a 60% reduction in emissions by 2050

*All these data are collected and summarized from the studies by Stern (2007) and Chen and Li (2012).
*R1, W1, H1, G1, and M1 are carbon-related data for the baseline model, while the others are options for improving energy performance and reducing carbon emission.

REFERENCES
Azhar, S. (2011). “Building information modeling (BIM): Trends, benefits, risks, and
challenges for the AEC industry.” Leadership and Management in Engineering,
11(3), 241-252.


Bosi, M., Cantor, S., and Spors, F. (2010). “10 years of experience in carbon finance:
insights from working with the Kyoto mechanisms”.
Chen, P. and Li, Y. (2012). “BIM-based integration of carbon dioxide emission and cost
effectiveness for building in Taiwan.” National Taiwan University, Taiwan.
Crosbie, T., Dawood, N. and Dean, J. (2009). “Energy profiling in the life-cycle
assessment of Buildings.” Management of Environmental Quality: An
International Journal, 21(1), 20-31.
Feldman, A., and R. Mellon (2011). “Putting a price on pollution: What it means for
Australia’s property and construction industry.” Green Building Council of
Australia, Australia.
Gökçe, K. U., and Gökçe, H. U. (2012). “eeBIM for energy efficient building operations.”
Proc., 14th Int. Conf. on Computing in Civil and Building Engineering, Moscow
State Univ. of Civil Engineering, Moscow, Russia.
Gökçe, H. U., and Gökçe, K. U. (2013). “Integrated System Platform for Energy Efficient
Building Operations.” Journal of Computing in Civil Engineering.
GSA (2009). “05 – GSA BIM Guide for Energy Performance. Version 1.0.” GSA
Building Information Modeling Guide Series.
Mah, D., Manrique, J., Yu, H., Al-Hussein, M. and Nasseri, R. (2011). “House
construction CO2 footprint quantification: a BIM approach.” Construction
Innovation: Information, Process, Management, 11(2), 161-178.
McAuley, B., Hore, A. V., and West, R. (2012). “Use of Building Information Modelling
in Responding to Low Carbon Construction Innovations: an Irish Perspective.”
International Conference on Management of Construction: Research to Practice,
Montreal.
McGraw-Hill Construction (2010a). Green BIM: How Building Information Modeling is
Contributing to Green Design and Construction. Smart Market Report.
McGraw-Hill Construction (2010b). The Business Value of BIM in Europe – Getting
BIM to the bottom line in the UK, France and Germany. Smart Market Report.
Motawa, I., and Carter, K. (2013). “Sustainable BIM-based evaluation of buildings.”
Procedia-Social and Behavioral Sciences, 74, 419-428.
Stern, N. (Ed.). (2007). “The economics of climate change: the Stern review.” Cambridge
University Press.
Stadel, A., Eboli, J., Ryberg, A., Mitchell, J., and Spatari, S. (2011). “Intelligent
sustainable design: integration of carbon accounting and building information
modeling.” Journal of Professional Issues in Engineering Education and Practice,
137(2), 51-54.
Wong, J. K., Li, H., Wang, H., Huang, T., Luo, E., and Li, V. (2013). “Toward
low-carbon construction processes: the visualization of predicted emission via
virtual prototyping technology.” Automation in Construction, 33, 72-78.
Wu, W. (2010). “Integrating building information modeling and green building
certification: the BIM-LEED application model development.” Doctoral
dissertation, University of Florida.
Zhang, J., Zhang, Y., Yang, Z., Fath, B. D., and Li, S. (2013). “Estimation of
energy-related carbon emissions in Beijing and factor decomposition analysis.”
Ecological Modelling, 252, 258-265.


Optimizing Disaster Recovery Strategies Using Agent-Based Simulation

Mohamed S. Eid1 and Islam H. El-Adaway2


1
Ph.D. Student, Department of Civil and Environmental Engineering, University of
Tennessee – Knoxville, 851 Neyland Drive, 324 John D. Tickle Building, Knoxville,
TN 37996. E-mail: [email protected]
2
Associate Professor of Civil Engineering and Construction Engineering and
Management Program Coordinator, Department of Civil and Environmental
Engineering, University of Tennessee - Knoxville, 851 Neyland Drive, 417 John D.
Tickle Building, Knoxville, TN 37996. E-mail: [email protected]

Abstract

Mitigation, preparedness, response, and recovery represent the four-phase
methodology for emergency management. However, disaster recovery is considered
the least understood aspect of emergency management science and practice.
Achieving a sustainable disaster recovery requires the participation of different
stakeholders within the host community through the post-disaster planning and
implementation phases. Despite the increasing rate and magnitude of natural hazards
in recent decades, the literature lacks holistic sustainable disaster recovery models
that capture the associated stakeholders' decisions and actions. To this end, this paper
presents an agent-based model to study the disaster recovery strategies of the
different associated stakeholders. The model comprises two main types of agents:
(1) the residents of the impacted region, along with their strategies for mitigating
financial impacts and maximizing individual welfare, and (2) the government (state
and regional), along with its strategies to mitigate disaster impacts and to increase the
host communities' resilience to hazardous events. The authors used a comprehensive
social vulnerability index to better guide the investment efforts at the various levels.
Ultimately, the agent-based simulation model helps in better understanding the
interrelationships between the different stakeholders and, consequently, in
determining the optimum combination of disaster recovery strategies. The model is
applied to three coastal counties in Mississippi to support more holistic sustainable
disaster recovery processes after Hurricane Katrina.

INTRODUCTION

Disaster recovery is commonly referred to as the restoration and repair of the
damaged built environment to pre-disaster conditions. Although disaster recovery is
one of the four pillars of emergency management, it is still the least understood from
both the practice and science perspectives (Smith and Wenger 2007). This is due to
the vast uncertainty in the outcomes of the different disaster recovery plans. In
addition, each disaster recovery process is different in terms of location, impacted
population properties, and the current and projected host community resilience.
Recent sustainable disaster recovery studies suggest the participation of the different
stakeholders in both the planning and implementation phases to achieve a successful
disaster recovery for the host community (Smith and Wenger 2007, Olshansky et al.
2006).


The participation of the different stakeholders increases the individual utility derived
from the development process by decreasing the implementation of plans undesired
by the host community (Boz and El-Adaway 2014, Boz et al. 2014). Moreover,
communication between the recovery agencies and system users increases the
recovery rate, the quality of the disaster recovery outcome, and the host community's
resilience (Chang and Rose 2012, Olshansky et al. 2006). To this end, it is
recommended that the different governmental sectors, NGOs, and local residents
participate with the other host community stakeholders in the planning and
decision-making phases to attain a successful disaster recovery process.
With respect to commonly used disaster recovery strategies, government
strategies often include repair, retrofitting, rebuilding, and changing land-use
patterns. From the residents' point of view, some decisions are preferred over others,
and these may even lead them to move out of the affected region and start a new life
elsewhere (Cutter et al. 2006, Olshansky 2006). These different post-disaster
strategies, at both the governmental and residential levels, need to be optimized to
increase the overall community welfare.

GOAL AND OBJECTIVES

This paper develops a holistic agent-based disaster recovery model that takes
into account the different stakeholders' attributes and vulnerability to hazardous
events as well as their different recovery strategies. Ultimately, this research will help
better support the community's welfare by decreasing the community's vulnerability
and increasing the individuals' utility functions.

BACKGROUND INFORMATION

Olshansky et al. (2006), Olshansky (2006), and Cutter et al. (2006) illustrated
several disaster examples and their recovery processes. Through their studies, one
can identify key patterns of successful recovery and the strategies commonly used by
governments and residents. The local governments' interaction with the different
stakeholders in the host communities played a significant role in the recovery stages
associated with both the 1995 Kobe earthquake in Japan and the 2005 Hurricane
Katrina in the United States. The plans that had been negotiated and discussed with
the residents in the impacted regions achieved a high approval rate by the residents
and increased the host communities' welfare. Moreover, the commonly used
government recovery plans included financial compensation; repair; rebuilding; and
upgrading the affected infrastructure. Further, the insurance policies purchased by
residents and subsidized by the government played an important role in the
preparedness phase and highly affected the recovery rate.
The goal of a sustainable disaster recovery is not merely to restore the system
functionality, but also to increase the impacted region's resilience to hazardous
events. As such, several research efforts have been carried out to investigate and
quantify communities' resilience and vulnerability to hazards (Burton 2012, Cutter et
al. 2003, Gilbert 1995). In the social science field, vulnerability is generally
considered as "the potential for loss" (Mitchell 1989). To this end, different models
were developed to quantify communities' vulnerability to hazardous events (Burton
2012, Cutter et al. 2003, and Turner et al. 2003).


Cutter et al. (2003) introduced the Social Vulnerability Index (SoVI) model to assess
a host community's vulnerability to disaster. SoVI is a relative measure of the overall
vulnerability of the studied regions to hazardous events; it assesses the host
community's social factors that affect the vulnerability of the residents in these
regions. These factors include income, age, household values, education, percentage
of mobile homes, etc. SoVI is considered the most widely recognized vulnerability
assessment tool in the social science field.
Realizing the aforementioned, the following section of the paper presents the
associated research methodology to attain the prescribed goals and objectives.

METHODOLOGY

This study adopts a four-step methodology to attain an optimal disaster
recovery plan: (1) modeling the different host community stakeholders along with the
different agents' utility functions, as well as modeling the learning behaviors that
optimize their decision-making processes; (2) collecting the relevant data for the
different agents' attributes as well as the strategies most commonly used by these
agents; (3) implementing the agent-based model (ABM) on the NetBeans computing
platform to optimize the disaster recovery strategies; and (4) interpreting and
analyzing the results generated from the developed model.
Figure 1. Overall Agents Interactions (resident agents, the LDRA, and the SDRA,
linked through the social learning and reactive reinforcement learning modules).

Model Overview

The proposed ABM represents the residents in the impacted community as
well as the associated local, state, and federal governmental agencies. Each of these
agents has its own actions and utility functions. However, the scope of this paper is
limited to the residents' and state's disaster recovery strategies, leaving the local and
federal strategies for future work. Figure 1 presents the overall model structure. After
a hazardous event, each resident checks the damages to the household and assesses
whether repair is required. The resident then determines whether to apply for
assistance from the Local Disaster Recovery Agency (LDRA). Also, the agent
determines the compensation amount received from the insurance policy, if one had
been purchased.


The residents also choose between repairing their damaged houses and leaving the
impacted region. Meanwhile, the State Disaster Recovery Agency (SDRA) offers
different residential disaster recovery action plans. These plans are transmitted to the
LDRAs, which are in direct contact with the local residents. The LDRAs propose the
state's action plans to the residents, who choose the one that will increase their
objective functions.

Residents
Resident agents tend to increase their current wealth by maintaining their
household value, decreasing potential expenses, and increasing income. The residents
tend to maintain the household value by repairing it through their own investments,
insurance coverage, or government aid. In this context, the aforementioned actions
act as the agents' decision variables. The proposed ABM expresses the resident's
utility function as shown in the following equation:
Ui = Hi + Ii – T – P + C – R
where Ui is the utility function of resident i, Hi is the household value for resident i,
Ii is the monthly income for resident i, T is the monthly distributed tax amount
(income and property taxes), P is the insurance premium cost (if any), C is the
insurance compensation value (if any), and R is the self-paid repair cost.
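
As an illustration of how this utility function can be evaluated for a single resident agent, the following Python sketch computes Ui from the attributes listed above; the field names and numeric values are hypothetical placeholders, not taken from the authors' implementation.

# Minimal sketch of the resident utility U_i = H_i + I_i - T - P + C - R.
# All field names and the example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Resident:
    household_value: float         # H_i
    monthly_income: float          # I_i
    monthly_tax: float             # T (income and property taxes)
    insurance_premium: float       # P (0 if no policy is held)
    insurance_compensation: float  # C (0 if no policy or no claim)
    self_paid_repair: float        # R

    def utility(self) -> float:
        return (self.household_value + self.monthly_income
                - self.monthly_tax - self.insurance_premium
                + self.insurance_compensation - self.self_paid_repair)

# Example: an insured resident repairing damage after a hazardous event.
resident = Resident(180_000, 3_200, 450, 120, 15_000, 2_500)
print(resident.utility())
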
Thus, to maximize the objective function, residents tend to communicate with
other residents to learn which decision variables increased those residents' utility
functions. To this end, the authors investigated different learning techniques for the
resident agents, including Roth-Erev reactive learning, Derivative-Follower,
Q-learning, Bayesian learning, Maximum-Likelihood, and Genetic Algorithms.
Genetic Algorithms (GAs) were selected because GAs can represent the social
learning of a group of individuals who learn from each other by mimicking the fittest
among them (Riechmann 2001, Vriend, 1998).
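
A minimal sketch of such GA-style social learning is given below, assuming that each resident's strategy is a vector of binary decision variables (insurance, repair, apply for assistance) and that residents imitate the strategy of a fitter, i.e. higher-utility, resident with a small mutation rate; the selection scheme and rates are assumptions for illustration rather than the authors' implementation.

import random

# Decision variables per resident: [buy insurance, repair household, apply for assistance]
NUM_DECISIONS = 3

def social_learning_step(strategies, utilities, mutation_rate=0.05):
    """One GA-style imitation step: each resident copies the strategy of the
    fitter of two randomly met residents, then flips each bit with a small
    mutation probability (sketch only; parameters are assumptions)."""
    new_strategies = []
    for _ in strategies:
        a, b = random.sample(range(len(strategies)), 2)
        parent = strategies[a] if utilities[a] >= utilities[b] else strategies[b]
        child = [bit if random.random() > mutation_rate else 1 - bit for bit in parent]
        new_strategies.append(child)
    return new_strategies

# Example with five residents and arbitrary utility values.
population = [[random.randint(0, 1) for _ in range(NUM_DECISIONS)] for _ in range(5)]
utilities = [random.uniform(0.0, 1.0) for _ in population]
print(social_learning_step(population, utilities))
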

Residents and Local Disaster Recovery Agency


In this agent-based perspective, the Local Disaster Recovery Agency (LDRA)
acts as a communicator between the State Disaster Recovery Agency (SDRA) and the
local residents. The LDRA also assesses the recovery applications of the impacted
residents. Accordingly, the LDRA offers the SDRA's disaster recovery plans to the
local residents to choose from. To this end, each resident chooses the proposed plan
that maximizes the expected utility, as shown in the following equation:
E[Uj] = [Resident Utility + Gj x Aj] x prj
where E[Uj] is the expected utility of plan j, Gj is the government's maximum award
for plan j, Aj is the government's average acceptance probability for plan j, and prj
is the probability used in the reactive reinforced learning module discussed below.
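
The plan choice can be sketched in Python as follows; the plan names, award amounts, and probabilities are hypothetical placeholders used only to illustrate the ranking by expected utility.

# Sketch of a resident ranking the offered plans by E[U_j] = (U_i + G_j * A_j) * pr_j.
def expected_utility(resident_utility, max_award, acceptance_prob, learned_prob):
    return (resident_utility + max_award * acceptance_prob) * learned_prob

plans = {  # hypothetical values for three plans
    "Housing Assistance":     {"G": 150_000, "A": 0.80, "pr": 0.50},
    "Public Home Assistance": {"G":  90_000, "A": 0.60, "pr": 0.30},
    "Elevation Grant":        {"G":  30_000, "A": 0.70, "pr": 0.20},
}

resident_utility = 120_000.0
best_plan = max(plans, key=lambda j: expected_utility(
    resident_utility, plans[j]["G"], plans[j]["A"], plans[j]["pr"]))
print(best_plan)
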

Resident’s Reactive Reinforced Learning


The resident's choice among the different offered plans depends on the
maximum expected utility obtained across the plans. However, when the LDRA
denies the resident's application because the applicant did not meet the criteria, or
because of insufficient funding from the SDRA, the agent should learn from this
step and thus choose a different plan in the following steps. Roth-Erev reactive
reinforcement learning (Erev and Roth 1998) was found best suited to represent this
learning process.


The algorithm determines which decision variable has been used and the reward
achieved (positive or negative) by applying this plan, expressed as Ej(k), where, for
each available action j, Ej(k) is the reward given the actually used action k. If j = k,
Ej(k) takes the value of +1 or -1 depending on whether the application is approved or
denied, respectively; otherwise, Ej(k) = 0.
Thus, the model can update the propensity of each decision variable and,
eventually, its selection probability, as shown below:
Resident action's propensity: qj(t+1) = qj(t)[1 - φ] + Ej(k) x (1 - ε)
Resident action's probability: prj(t) = qj(t) / Σ qj(t), the sum being over all actions
where qj(t) is the propensity of action j at time t, φ is the forgetting parameter, and ε
is the experimenting parameter. Both φ and ε allow the agent to keep exploring
further options. prj is the resulting probability distribution over the actions j.
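
A compact Python sketch of this update is shown below; the forgetting and experimenting parameter values, and the small floor that keeps propensities positive, are assumptions added for numerical robustness rather than part of the authors' formulation.

# Sketch of the Roth-Erev style update of a resident's plan propensities.
def roth_erev_update(propensities, chosen, approved, phi=0.1, epsilon=0.2):
    # Reward E_j(k): +1 or -1 for the chosen plan, 0 for all other plans.
    rewards = [0.0] * len(propensities)
    rewards[chosen] = 1.0 if approved else -1.0
    # q_j(t+1) = q_j(t)[1 - phi] + E_j(k)(1 - epsilon); the 1e-6 floor is an
    # added assumption so the selection probabilities stay well defined.
    new_q = [max(q * (1.0 - phi) + r * (1.0 - epsilon), 1e-6)
             for q, r in zip(propensities, rewards)]
    total = sum(new_q)
    return new_q, [q / total for q in new_q]

# Example: the resident's application under plan 0 was denied.
q, pr = roth_erev_update([1.0, 1.0, 1.0], chosen=0, approved=False)
print(pr)
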

State Disaster Recovery Agency


The State Disaster Recovery Agency (SDRA) is considered, along with the
residents, a main controlling agent in the proposed disaster recovery ABM.
Depending on the available funds, the SDRA distributes the funds among the
different proposed plans. Moreover, the funding distribution proportions are adjusted
at each time step depending on the increase in utility of the different residents
applying for each plan, as shown in the following equation:
qk(t+1) = qk(t)[1 - φ] + IRk x (1 - ε)
where qk(t) is the propensity of plan k at time t and IRk is the immediate reward for
applying plan k. The calculated immediate reward is the relative fitness of each plan,
taking into account the increase of each resident's utility function after applying plan
k and the residents' corresponding average SoVI. In this sense, the learning module
acts as the SDRA's multi-objective function with the following objectives:
1. Maximize Σ ΔUi
2. Minimize AvgSoVI
where ΔUi is the change in utility for resident i and AvgSoVI is the average social
vulnerability index.
To this end, the model can recalculate the funding distribution proportion pk
for each plan k using the propensities as follows:
pk(t) = qk(t) / Σ qk(t), the sum being over all plans
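
The SDRA update can be sketched in the same way; how the two objectives are combined into the immediate reward IRk is not detailed above, so the equal weighting and normalized objective scores used here are assumptions for illustration only.

# Sketch of the SDRA re-allocating funding proportions across the plans.
def sdra_update(propensities, utility_gains, sovi_reductions,
                phi=0.1, epsilon=0.2, weight=0.5):
    # Immediate reward IR_k: relative fitness of plan k, combining the residents'
    # utility gains and the reduction in average SoVI (equal weights assumed).
    rewards = [weight * u + (1.0 - weight) * s
               for u, s in zip(utility_gains, sovi_reductions)]
    new_q = [max(q * (1.0 - phi) + r * (1.0 - epsilon), 1e-6)
             for q, r in zip(propensities, rewards)]
    total = sum(new_q)
    return [q / total for q in new_q]  # funding proportions p_k(t)

# Example with three plans and normalized (0-1) objective scores.
proportions = sdra_update([1.0, 1.0, 1.0],
                          utility_gains=[0.6, 0.9, 0.4],
                          sovi_reductions=[0.2, 0.7, 0.5])
print(proportions)
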

Model Implementation

The proposed model was implemented using GeoMason on a NetBeans IDE
7.4 platform. GeoMason provides geospatial (GIS) support for MASON, an
open-source Java-based discrete-event multi-agent simulation toolkit developed by
the Department of Computer Science at George Mason University (Sullivan et al.
2010). The use of GIS made it easy to gather the needed attributes for the residents in
the studied regions. Moreover, GIS facilitates the representation of the residents, the
hazardous events, and the spatial relationships between them. The model was tested
using data associated with the following coastal counties in Mississippi: Harrison,
Hancock, and Jackson. These three counties suffered a great share of the Hurricane
Katrina damage in 2005, as they are highly vulnerable to natural disasters (Burton
2012).


To this end, social data were collected for the three counties following the SoVI
methodology in order to calculate their relative social vulnerability, which guides the
recovery process. The data were collected from the US Census Bureau for each
census tract in the three aforementioned counties. Moreover, data were gathered from
the Mississippi Development Authority (MDA) to determine the set of action plans it
followed for the housing sector. The most recognized disaster recovery strategies
were: (1) Housing Assistance, which includes repair, rebuilding, and relocation
funding for the damaged household; (2) Public Home Assistance, which essentially
targeted low-income families in order to rebuild and house them; and (3) the
Elevation Grant, which is an upgrade to elevate the household by up to 6 feet 4
inches, thus making it more flood resilient. Finally, Hurricane Katrina data were
gathered regarding the damage inflicted by the hurricane on the study region's
households. The data were input to the computer model to determine the optimum
funding proportions for each of the action plans introduced by the SDRA.

RESULTS AND ANALYSIS

The outcome of the model was compared to the actual data gathered from the
MDA regarding the expenditure proportions and the changes in the host community's
social vulnerability calculated from the actual social properties of the three coastal
counties. The data comparison covers the years 2007 to 2012, as the MDA started
federal reporting in 2007 and 2012 is the most recent year with data available for the
SoVI calculation. Figure 2 and Figure 3 illustrate the different funding proportions
used by the MDA and the proposed ABM outcome, respectively. The ABM starts
with a uniform funding distribution among the three action plans. Figure 2 shows the
domination of the Housing Assistance plan over the other two plans. This can be
justified by the pressure on disaster management at that time, as the Housing
Assistance plan had the highest rates of approval and recovery in comparison to the
other disaster recovery plans. On the other hand, the proposed ABM presented an
alternative funding pattern that increases the residents' utilities and the community's
resilience. As shown in Figure 3, the Public Home Assistance and Elevation Grant
plans receive a significant share. This is because Public Home Assistance targets
low-income residents and thus highly affects the social vulnerability, while the
Elevation Grant increases the households' value, which in return affects both the
utility function and the resilience of the households. Still, due to the large
compensation award of the Housing Assistance plan in comparison to the other plans,
more residents tend to apply for it; moreover, it is more likely to be accepted by the
LDRA.
Figure 4 and Figure 5 illustrate the SoVI (minimum, average, and maximum)
change through the years and point out the difference between the actual SoVI and
the projected SoVI using the proposed ABM. Through a quick comparison, the
proposed ABM achieved a better SoVI score, with maximum and minimum
vulnerability of 8.42 and -9.43, respectively, while the actual maximum and
minimum SoVI scores post recovery are 13.851 and -6.728, respectively.
To this point, the model can facilitate the decision-making process for the
disaster recovery agencies by examining the complex system and the outcomes of the
different disaster recovery strategies. Moreover, the system enables the disaster


recovery agencies to simulate the different recovery scenarios with different


combinations of disaster recovery plans and predict the host community’s welfare.

Figure 2. Actual Funding Distribution.    Figure 3. Proposed Funding Distribution.
Figure 4. Actual SoVI Scores.             Figure 5. ABM Projected SoVI Scores.

CONCLUSION

This paper presented an agent-based model for optimizing disaster recovery
strategies. The model represents the residents of the impacted region as well as the
local and state government disaster recovery agencies, and it captures the
interrelationships between the different stakeholders in the disaster recovery process.
The model also utilizes a comprehensive social vulnerability assessment tool to better
guide the recovery process toward optimum funding proportions, thus increasing the
community's welfare. The model was then implemented as a Java-based computer
model utilizing a GIS interface and applied to the coastal Mississippi counties.
Nevertheless, the current model's assumptions create certain limitations. The
model takes into account the residents and the state recovery agency as the main
controlling agents, while the local government acts only as an assessor of the
applicants' eligibility. Thus, the model did not fully capture the negotiation process
between the local government and the residents. Moreover, the model did not address
the federal agency's role in the recovery process, which highly affects the available
recovery funding.
To this end, future work for this research is to fully capture the disaster
recovery stakeholders' interrelationships, including the federal government, the
negotiation process between the local government and the residents, as well as the
insurance companies' relationship to the recovery process. Moreover, further research
on the different policies utilized in other states, handling various types of disasters,
will be carried out and incorporated into the model. Last but not least, adjustments to
the current agents will be implemented, including (1) better learning modules
capturing the spatial and economic-standards learning barriers, and (2) the
irrationality of some agents. Also, to provide a holistic disaster recovery decision
tool, future work targets the utilization of different economic and environmental
indicators at the different recovery and agent levels.


These indicators, along with the social indicator, can give a broader understanding of
the complex system and a better prediction of the outcome, thus achieving a
sustainable disaster recovery.

REFERENCES

Boz, M. A., and El-adaway, I. H. (2014). “Managing Sustainability Assessment of


Civil Infrastructure Projects Using Work, Nature, and Flow”. Journal of
Management in Engineering, 30(5), 1-13.
Boz, M., El-adaway, I., and Eid, M. (2014) “A Systems Approach for Sustainability
Assessment of Civil Infrastructure Projects Construction” Research Congress.
May 2014, 444-453.
Burton, C. G. (2012). “Social vulnerability and hurricane impact modeling”. Natural
Hazards Review. 11(2), 58–68.
Chang, S. E., & Rose, A. Z. (2012). “Towards a theory of economic recovery from
disasters”. International Journal of Mass Emergencies and Disasters. Vol. 32,
No. 2, pp. 171–181.
Cutter, S. L., Boruff, B. J., & Shirley, W. L. (2003). “Social vulnerability to
environmental hazards”. Social Science Quarterly, 84(2), 242–261.
Cutter, S. L., Emrich, C. T., Mitchell, J. T., Boruff, B. J., Gall, M., Schmidtlein, M. C.
Melton, G. (2006). “The long road home: Race, class, and recovery from
Hurricane Katrina”. Environment: Science and Policy for Sustainable
Development, 48(2), 8–20.
Erev, I., & Roth, A. E. (1998). “Predicting How People Play Games: Reinforcement
Learning in Experimental Games with Unique, Mixed Strategy Equilibria”.
The American Economic Review, 88(4), 848–881.
Gilbert, C. (1995). “Studying disaster: a review of the main conceptual tools”.
International Journal of Mass Emergencies and Disasters. 13, 231-40.
Mitchell, J.K. (1989). “Hazards Research”. In Gaile, G.L. and Willmott, C.J., editors,
Geography in America, Columbus, OH. Merrill, 410-24.
NetBeans IDE 7.4, Sun Microsystems, Santa Clara, California, USA, accessed:
October 15, 2013.
Olshansky, Robert B., Johnson, Laurie A., and Topping, Kenneth C. (2006).
“Rebuilding Communities Following Disaster: Lessons from Kobe and Los
Angeles”. Built Environment, Vol. 32, No. 4, pp. 354-474.
Olshansky (2006). “Planning After Hurricane Katrina”. Journal of the American
Planning Association. Vol. 72, No. 2, 2006, pp. 147-153.
Riechmann, T. (2001). “Genetic algorithm learning and evolutionary games”. Journal
of Economic Dynamics and Control, 25(6), 1019–1037.
Smith, G. and D. Wenger (2007), “Sustainable Disaster Recovery: Operationalizing
An Existing Agenda”, Chapter 14 of Handbook of Disaster Research,
Rodriguez, H., E. Quarentelli, and R. Dynes (editors), Springer, New York.
Sullivan, K., Coletti, M., & Luke, S. (2010). GeoMason: geospatial support for
MASON. Fairfax, VA: Department of Computer Science, George Mason
University. Technical Report Series.
Vriend, N. J. (2000). “An illustration of the essential difference between individual
and social learning and its consequences for computational analyses”. Journal
of Economic Dynamics and Control.


An Analytical Approach to Evaluating Effectiveness of Lift Configuration

Seokyon Hwang, Ph.D.1


1
Associate Professor, Construction Management Program, Lamar University, P.O.
Box 10565, Beaumont, TX 77710. E-mail: [email protected]

Abstract

Construction lifts for building construction play an essential role in the


vertical transportation of various resources. Practitioners often apply zoning to divide
a building into a number of vertical zones to which lifts are assigned. For each zone,
they configure service floors and lifts in a variety of ways, attempting to maximize
lifting efficiency while minimizing costs for lifting operation. Service floors of zones
are often overlapped on one or more floors for convenient transit between zones.
This paper presents an analytical approach to evaluating the effectiveness of the
results of such overlapping configuration, based on lifting time. For this, a simple,
yet practical mathematical model is proposed to estimate lifting time. An application
example shows its potential for preparing an economical lifting plan.

Keywords: Construction lift; Floor zoning; Overlapping zones; Lifting efficiency

INTRODUCTION

Construction lifts are an essential means of vertical transportation in building


construction for lifting personnel, materials, tools, and equipment up to working
floors. Thereby, lifting plans directly affect the efficiency of resource utilization and
ultimately construction costs. Preparation for an effective lifting plan involves a
process of comprehensive analysis of many variables and strategic decision-making
regarding the procurement, installation, and operation of lifts (Daewoo E&C 2002).
One of the critical decisions made in the process is the configuration of multiple lifts.
In order to meet lifting demand while minimizing costs for lifting operation and
interruptions to the progress of work, practitioners apply the zoning method to
configure service floors and lifting capacities. The configuration results need to be
examined to evaluate their efficiency.
Zoning divides a building into a certain number of zones. For each zone, one
or more lifts are assigned to cover service floors of the zone. Zoning normally
involves three difficult decisions concerning: (1) how to handle progressively
changing lifting demand, (2) where to overlap adjacent zones, and (3) how many
floors should be overlapped. This study focuses on finding solutions to these challenging
issues. In the field of elevator study, Newell (1998) notes that effective strategies for
handling peak traffic can reduce the number of elevators. The same should be true

for the case of temporary lifts. Since the greatest peak traffic, more specifically up-
peak traffic, develops in the morning (Park et al. 2013), it is reasonable to analyze the
morning up-peak traffic to handle the aforementioned three issues. The ultimate goal
of this research endeavor is to create a method for developing cost-effective and
demand-meeting lifting plans. To that end, the study presented in this paper has
focused on creating an analytical approach to evaluating the effectiveness of the
configuration of service floors and lifts based on lifting time, with emphasis on the
aforementioned three issues.

PERMANENT ELEVATORS VS. TEMPORARY CONSTRUCTION LIFTS

While there is a great similarity between permanent elevators and temporary


construction lifts, there are a few notable differences in terms of service and operation.
Permanent elevators operate automatically within a fixed zone, according to a set of
rules—control algorithms—determining the motion of elevators. Service requests are
detected automatically as passengers press a call button. A zone and service floors in
the zone normally remain the same, although they can be adjusted by changing the
setting of operation rules. A permanent elevator in motion continues to move in the
current direction until it completes its service for passengers in the car and as long as
there are remaining requests of services in that same direction; it changes its direction
if there are requests in the opposite direction while there are no more remaining
requests in the current direction (Ioannou and Martinez 1996). This means that
changes to moving direction of an elevator can be triggered at any time before it
reaches the last floor of its serving zone in the current direction.
For construction lifts, an on-board operator normally operates the lift car
manually. Operation of temporary lifts requires other temporary facilities—a safety
fence, a gate, and a landing platform—for each landing area where passengers are
loaded and unloaded. A lift operator detects service requests by locating passengers
waiting at the temporary landing area while driving the lift. To load passengers, the
operator has to operate the door of lift and the fence gate. In addition, service floors
of a lift progressively change as construction progresses. A temporary lift normally
maintains its moving direction until it reaches the last floor of its serving zone in the
current direction. This is particularly true during the period of peak traffic.

LIFTING CYCLE AND DEMAND

Figure 1 depicts the typical round trip cycle of a lift. A lift begins its cycle of
trip from ‘Wait’ on the lowest floor of its service zone, during which passengers are
loaded. While it is traveling upward, it makes a certain number of stops to unload or
load passengers until it reaches the highest floor of its service zone. As it returns to
the lowest floor, it may repeat what it did during the up-trip. While this pattern of
round trip is typical, the length of round trip time (or cycle time) varies depending on
the service requested per trip. The total lifting time is equal to the sum of individual
cycle time. When a lift is in a mode of inter-floor traffic, the lift changes its traveling
direction in the middle of up- or down-trip (Figure 1). Park et al. (2013) reports that
inter-floor traffic is not frequently observed on building construction sites. This can


be explained by two reasons: (1) unlike permanent elevators, passengers cannot cause
direction changes while a lift is in motion and (2) inter-floor traffic normally occurs
during the non-peak periods when lifts serve transportation of materials. Minimizing
lifting time during peak traffic is one of the primary goals of the configuration of
service zones and lifting operation. Up-peak traffic, which normally develops early in
the morning at the beginning of each work day, deserves particular attention. This is
because the daily lifting demand experiences its greatest surge during the morning
peak period, and that surge can delay workers, resulting in the loss of labor hours.

Figure 1. The typical round trip cycle of a lift.

CONDITIONS AND ASSUMPTIONS

Operation of temporary lifts should satisfy a few minimal conditions.


Every single floor in need of lifting service should be covered by at least one lift.
Accordingly, every floor served by a lift must have at least one landing platform,
including a fence and gate for safety. The gate should remain closed at all times, and
its operation must be under the responsibility of lift operators. A landing platform
needs to occupy space in its proximity, including floor and façade. This introduces
significant interruptions to various trades’ work in that proximity; thus, keeping the
number of landings minimal is favored (Hwang 2009). There
should be some overlapping of service floors between vertically adjacent zones. At
least, one floor needs to be overlapped, so passengers can transfer between zones
without moving up and down stairs, with or without small tools. In addition, many
building projects arrange temporary storages inside the buildings when there is not
sufficient storage space on the ground or because indoor storages are more effective
to protect materials from weather conditions and vandalism. In case of tall buildings,
the storages are often located on upper floors. This practice requires the overlapping
of lifts on multiple floors. Meanwhile, it is rational to assume the following
conditions with regard to lift operation during the morning up-peak period. During
this period, the number of workers who want to come down to the lowest floor is
minimal, so its impact on round trip time is negligible. In addition, lifts do not make
inter-floor travel during the peak traffic to concentrate on lifting workers from the
lowest floor to their designated upper floors.

PREVIOUS WORK ON CALCULATION OF ROUND TRIP TIME


Up-peak analysis is critical to determining the capacity of elevators. The


analysis requires clear definition of many variables, including number of floors, floor
heights, population on each floor, and number of probable stops during an up-trip
(Siikonen 2000). Clear definitions of those variables are essential to calculating
round trip time (RTT) where the time comprises travel time, waste time, door time,
and passenger time that are determined by those variables (Hakonen and Siikonen
2008). Various attempts have been devoted to creating models for RTT calculation—
such as Barney’s model (Barney 2003). Newell (1998) formulated time consumption
for round trip of elevators to evaluate the efficiency of different strategies for
handling peak traffic. Recently, Park et al. (2013) pointed to the limitation of
Barney’s equation due to the contextual differences in operation of permanent
elevators and temporary lifts; thereby, they created an alternative model by modifying
Barney’s model. Considering the contextual differences and the conditions
discussed in the previous section, the modified model should better fit in the peak-
analysis for construction lifts.

A SIMPLIFIED MATHEMATICAL MODEL

Model Development

In this study, RTT is classified based on the events occurring in the typical
pattern of a round trip as shown in Figure 1—waiting time, travel time, operator time,
and passenger time. Travel time is further classified into up- and down-trip time.
Up-trip is divided into two types—travelling through non-service-floors (out of zone)
and travelling through service-floors. Operator time accounts for time consumed by
the operator at each stop to perform a few simple tasks, such as opening and closing a
lift door and gate, as well as communicating with passengers. Passenger time is
considered for passengers’ boarding or un-boarding a lift. Waiting time is needed
while a lift briefly stays on the lower departing floor or the upper departing floor.
The detailed classification of the operational sequence of a round trip, along with the
conditions and assumptions for lift operation discussed in the previous section, allows
for identification of essential parameters to create a mathematical model for
calculation of RTT. The model is expressed as Eq. (1).

Eq. (1)

where the parameters of the ith round trip are: RTTi, the ith round trip time; the
distance across service-floors; the distance across non-service-floors; the average
velocity of the lift car when it travels across service-floors; the average velocity of the
lift car when it travels across non-service-floors; the average operator time per stop;
the average passenger time per person; the number of stops; the number of passengers
transported per round trip; the average waiting time at the top floor of a zone; and the
average waiting time at the lowest floor. Accordingly, the total lifting time can be
represented as the sum of the individual round trip times, Tc = Σ RTTi over the n
trips, where Tc is the time for completing lifting and n is the total number of round
trips. As shown above, the model
accounts for parameters relevant to lift capacity in a deterministic manner. Parameter
values for waiting time, operator time, and passenger time can be estimated as a


constant average value. It is assumed that average velocity can also be estimated.
Randomness is considered in terms of the number of stops of a lift, for which it is
difficult to estimate a deterministic value.
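
As a rough illustration of how these parameters can combine into a round trip time, the following Python sketch assumes that the car crosses the non-service-floor and service-floor distances once in each direction and that stop, passenger, and waiting times are added as constants; this functional form and the example values are assumptions, not necessarily the exact form of Eq. (1).

# One plausible instantiation of the round-trip-time model (an assumption, not
# necessarily the exact form of Eq. (1)): travel in both directions plus
# operator, passenger, and waiting times.
def round_trip_time(d_service, d_non_service, v_service, v_non_service,
                    t_operator, t_passenger, n_stops, n_passengers,
                    wait_top, wait_bottom):
    travel = 2.0 * (d_non_service / v_non_service + d_service / v_service)
    return (travel + t_operator * n_stops + t_passenger * n_passengers
            + wait_top + wait_bottom)

# Example values loosely based on the application example below (4 m floors,
# 12 passengers per trip); all numbers are illustrative.
rtt_seconds = round_trip_time(d_service=56, d_non_service=4, v_service=1.0,
                              v_non_service=3.0, t_operator=5, t_passenger=10,
                              n_stops=7, n_passengers=12,
                              wait_top=5, wait_bottom=5)
print(rtt_seconds / 60.0, "min")
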

Randomness of the number of stops

The number of stops per cycle depends on the number of passengers on board
each cycle. For example, let us assume that ten passengers are on board. If all ten
passengers want to go to different floors, then the lift has to make ten stops. On the
other hand, if all of them go to the same floor, then the lift will make only one stop.
Thus, the range of number of stops in this example case is 1 (minimum) to 10
(maximum). It is hard to deterministically estimate the number of stops, because that
is most likely random. To handle the randomness, a technique is created by applying
probability distribution. The following delineates the technique in a sequential order.
Given the range based on the number of passengers on board, find the number
of intervals and the width of each interval. For this, Sturges’ rule, a rule for
determining the number of classes, is effective for handling a small sample size:
n = 1 + 3.322 log10(N) and w = (maximum - minimum)/n, where n = number of
intervals (the smallest integer greater than or equal to the right-hand side of the
equation), N = number of passengers on board, and w = interval width (maximum and
minimum represent the largest and smallest values in the data). Once the intervals are
established, find the median value of each interval to represent the number of stops
for that interval. Select a probability distribution for the median values, then calculate
the corresponding probability mass function and cumulative distribution function.
The frequency of lift stops is assumed to follow the calculated probability distribution.
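
The following Python sketch illustrates this technique for a 12-passenger car, using a discretized normal distribution over the interval medians (the normal distribution is the one applied later in this study; the mean and spread chosen here are assumptions).

import math, random

def stop_count_distribution(n_passengers):
    """Sturges'-rule intervals over 1..n_passengers and a discretized normal
    weighting of the interval medians (mean/spread are illustrative choices)."""
    n_intervals = math.ceil(1 + 3.322 * math.log10(n_passengers))
    width = (n_passengers - 1) / n_intervals
    medians = [int(1 + (k + 0.5) * width + 0.5) for k in range(n_intervals)]
    mu = (1 + n_passengers) / 2.0
    sigma = (n_passengers - 1) / 4.0
    weights = [math.exp(-((m - mu) ** 2) / (2.0 * sigma ** 2)) for m in medians]
    total = sum(weights)
    return medians, [w / total for w in weights]

def sample_stops(n_passengers):
    medians, probs = stop_count_distribution(n_passengers)
    return random.choices(medians, weights=probs, k=1)[0]

# For 12 passengers this yields the medians 2, 4, 7, 9, and 11 seen in Table 1.
print([sample_stops(12) for _ in range(13)])  # one sampled value per cycle
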

AN APPLICATION EXAMPLE

The mathematical approach was applied with a hypothetical case to test its
applicability and effectiveness for measuring the lifting time per zoning configuration,
and the impact of the overlapping of service floors. It was assumed that two lifts are
to be assigned to two zones dividing a 29-story building from 1st floor to rooftop,
progressively. Figure 2 shows the zoning plan and configuration of two lifts. The
example considers two different periods—Period 1 (P1) and Period 2 (P2). For each
period, two zoning plans were considered—for Period 1, P1-1 and P1-2, and for
Period 2, P2-1 and P2-2. P1-1 and P1-2 consider four configurations of two lifts (L1
and L2), respectively. The same was considered for P2-1 and P2-2. It was assumed
that ten percent of daily manpower would come down to the 1st floor and go up again
during the morning peak. In terms of lift capacity, the following parameters are
commonly applied to all configurations: the distances across service-floors and
non-service-floors vary by configuration; average velocity across service-floors = 1
m/sec; average velocity across non-service-floors = 3 m/sec; average operator time
per stop = 5 sec; average passenger time per person = 10 sec; average waiting times at
the top and lowest floors = 5 sec each; the number of stops varies by cycle; passengers
transported per round trip = 12 men; height of a floor = 4 m. The range of the number
of stops is 1 to 12 (the number of passengers on board) when a lift is fully loaded.
Table 1 shows the number of
stops of each lift (L1, L2) that is randomly generated by applying the Sturges’ rule-
based technique.


Figure 2. Zoning and lift configuration of an application example.

Table 1. Number of Stops of Each Lift per Cycle.


Cycle No.
Lift No. 1 2 3 4 5 6 7 8 9 10 11 12 13
L1 7 7 7 11 7 9 7 2 9 11 11 2 7
L2 4 4 9 7 7 11 4 7 11 7 7 9 4

Table 2 presents a summary of the results of the lifting time calculation. The
following discusses a few notable findings based on the results shown in Table 2. It
was found that the configuration of overlapped service floors did not much impact the
gross lifting time of all lifts (the sum of Tc over the l lifts, where l = the number of
lifts). It rather significantly affected the individual total lifting time, Tc, of L1 and L2.
Tc of a lift also depends on the number of cycles, which in turn is determined by the
demand. For example, the configurations 2C-1, 2C-2, and 2C-3 showed a large
difference in Tc between L1 and L2, where L1 served a much larger demand than that
served by L2. That resulted in a long gross lifting time despite the very short lifting
times of L2. That difference was primarily due to the unbalanced allocation of
demand. Although L2 in P1-1 and P1-2 traveled a much longer distance than L1, its
Tc is much smaller. In addition, P1-2 shows that as the demand difference decreased,
the difference in Tc between L1 and L2 also decreased. In this example, all of these
findings indicate that lifting time depends more on lifting demand than on travel
distance. Meanwhile, the
result can differ when the length of a zone is short, and, accordingly, the number of
service floors of a zone is small, but the travel distance to the zone is extremely long.


The results relevant to 2C-3 of P2-1 are particularly notable. In this case, Tc’s of L1
and L2 were well balanced and much shorter than those of L1 and L2 of 2C-0.
Furthermore, the lift configuration required only about half as many landings as 2C-0.
Generally, when demand is balanced, it is favored to reduce the length of a zone.

Table 2. Results of the Example.


Zoning plan   Zoning config.   Lift   Demand (men)   Cycles (net)   Cycles (actual)   Tc (min)   min/man   # of landings
P1-1 2C-0 L1 96 8.00 8 48 0.50 42
L2 96 8.00 8 48 0.50
2C-1 L1 146 12.17 13 88 0.60 22
L2 46 3.83 4 17 0.36
2C-2 L1 151 12.58 13 89 0.59 23
L2 42 3.50 4 16 0.39
2C-3 L1 154 12.83 13 91 0.59 24
L2 39 3.25 4 16 0.41
P1-2 2C-0 L1 96 8.00 8 48 0.50 42
L2 96 8.00 8 48 0.50
2C-1 L1 130 10.83 11 71 0.55 22
L2 62 5.17 6 25 0.41
2C-2 L1 127 10.58 11 72 0.56 23
L2 65 5.42 6 26 0.39
2C-3 L1 123 10.25 11 72 0.59 24
L2 69 5.75 6 26 0.38
P2-1 2C-0 L1 118 9.83 10 71 0.60 60
L2 119 9.83 10 70 0.60
2C-1 L1 104 8.67 9 57 0.55 31
L2 133 11.08 12 80 0.60
2C-2 L1 110 9.17 10 65 0.59 32
L2 127 10.58 11 72 0.56
2C-3 L1 117 9.75 10 66 0.57 33
L2 120 10.00 10 64 0.53
P2-2 2C-0 L1 118 9.83 10 71 0.60 60
L2 119 9.83 10 70 0.60
2C-1 L1 94 7.83 8 50 0.53 33
L2 142 11.83 12 81 0.57
2C-2 L1 100 8.33 9 57 0.57 35
L2 136 11.33 12 80 0.59
2C-3 L1 107 8.92 9 58 0.55 37
L2 129 10.75 11 72 0.56

CONCLUSIONS

The evaluation of lift-serving zone configurations is complex due to the many
parameters involved. Progressively changing lifting demand and options for
overlapping service floors of adjacent zones make the evaluation particularly


complicated. From the application example results, the following conclusions can be
drawn. Although the velocity of a lift is an important parameter that affects lifting
time of a lift, lifting time depends more on the demand served by the lift. Meanwhile,
as the building height grows, lifting time depends on the combination of the number
of floors served and the travel distance where lifting demand and travel time are
primarily determined by the two parameters. Thus, balancing demands between lifts
that serve different zones is critical to shorten the overall lifting time. Given the
selected capacity of a lift, a practical solution to shorten lifting time is to reduce the
number of stops of a lift where passenger time and operator time contribute more to
the growth of lifting time. It is worth noting that the existence of landings interferes
with many trades’ work by occupying building’s external façade and space in the
near proximity. Thus, the number of landings should be taken into consideration
when determining the number of service floors served by a lift. It is also critical to
know when to install additional lifts as work progresses and lifting demand grows
and vice versa. Conclusively, accurately evaluating lifting capacity of such
configuration can greatly assist in coordinating schedules for the installation and
removal of lifts.
There are a few agenda items to be further investigated in future studies. In the
present study, the normal probability distribution was applied to account for the
randomness of the number of lift stops. The influence of other distributions, such as
the Beta or Poisson distribution, needs to be examined. It is also worth considering a
probability distribution for the number of passengers transported per cycle. Finally,
future study will look into permanent elevators jointly with the temporary lifts, since
permanent elevators are normally used along with temporary lifts as building
enclosure and internal finish work approach completion.

REFERENCES

Barney, G.C. (2003). “Elevator Traffic Handbook: Theory and Practice.” Taylor &
Francis Routledge.
Daewoo E&C (2002). “Telekom Malaysia Project.” Technical Report, Daewoo
Engineering & Construction, CD version.
Hakonen, H., and Siikonen, M. L. (2008). “Elevator Traffic Simulation Procedure.”
Elevator World, 57(9), pp. 180-190.
Hwang, S. (2009). “Planning temporary hoists for building construction.” Proc. of the
2009 Construction Research Congress, ASCE, 2009, pp. 1300–1307.
Ioannou, P.G., and Martinez, J.C. (1996). “Scalable simulation models for
construction operations.”1996 Winter Simulation Conference (28th), IEEE
Computer Society, pp. 1329–1336.
Newell, G.F. (1998). “Strategies for serving peak elevator traffic.” Transportation
Research Part B: Methodological, 32 (8), 583–588.
Park, M., Ha, S., Lee, H., Choi, Y., Kim, H., and Han S. (2013). “Lifting demand-
based zoning for minimizing worker vertical transportation time in high-rise
building construction.” Automation in Construction 32, pp. 88–95
Siikonen, M. (2000). “On Traffic Planning Methodology.” Elevator Technology,
27(3), pp. 267-274.


A Case Study of BIM-Based Model Adaptation for Healthcare Facility


Management—Information Needs Analysis

Zhulin Wang1; Tanyel Bulbul2; and Jason Lucas3


1,2
Department of Building Construction, Virginia Tech, Blacksburg, VA 24061.
E-mail: [email protected]; [email protected]
3
Department of Construction Science and Management, Clemson University, 2-136
Lee Hall, Clemson University, Clemson, SC 29634-0507.
E-mail: [email protected]

Abstract

Facility management (FM) is responsible for the proper functioning of
building systems, such as mechanical, electrical, plumbing, and fire protection. The
increased level of complexity in building systems means more work for the owner’s
FM personnel. The FM work has unique challenges for finding the right resources of
information to detect faulty systems and deficient pieces of equipment. This paper is
focused on how Building Information Modeling (BIM) can support FM operations for
buildings with more complex systems and strict operation and maintenance
requirements, such as healthcare facilities. In order to get a better understanding of
the dynamic environment in which the FM operates, a period of work shadowing has
been arranged with a healthcare building owner. Case narratives are formed as an
outcome of the shadowing experience and process models are created to identify the
basic workflows. These workflows include the interactions, decision points,
documents and information exchange among the owner’s personnel. Based on these
process models, information needs have been discussed in order to extend the
functionality of BIM and add value to FM personnel’s operation and maintenance
activities.
Keywords: BIM; Facility management; Case study; Information flow

BACKGROUND

Facility management (FM) is defined as the “integration of processes within


an organization to maintain and develop the agreed services which support and
improve the effectiveness of its primary activities” (EN 15221-1, 2006). FM
guarantees the regular operation of the built environment by constantly fixing
problems, doing inspections, and dealing with incidents. FM in healthcare facilities is
important due to the healthcare environment’s unique features (Lucas and Bulbul et
al. 2013a, 2013b, 2013c). Time is essential to safety in a healthcare environment as it
sometimes determines the level of injury, or even life or death. The FM team’s quick
and accurate response plays an important role in facilitating good healthcare services.
The complex building systems in a healthcare facility require FM personnel to locate
multiple sources of information (dynamic or static) to assist with their detection and
judgment, to communicate promptly, and to cooperate as a team in their operation,
maintenance, and repair activities. To support healthcare FM in effectively doing their


work, healthcare facility information needs to be unified into one model. Sharing the
information with all the stakeholders frees the FM personnel from having to look for
information from various systems, divisions, or people, and from producing repeated
communication, documentation, and reports, thus saving FM personnel time (Lucas
& Bulbul et al., 2011).

BIM use is explored for healthcare facilities in the early programming period
with benefits of visualization, time saved relative to concept updates, and quantity
takeoffs (Manning et al., 2008), but several problems have to be addressed in order to
apply BIM successfully to healthcare FM. For example, some equipment types (such
as fire alarm system, HVAC system, elevator system, etc.) are managed by different
contractors, and the FM personnel have to go to each proprietary system to locate
needed information. There are various types of information stored in different formats
(such as printed documents, engineering drawings, handwritten papers, oral
information, etc.), which have originated from different phases of the project, by
different parties, and at different levels of detail. In addition to this, the FM
procedures need to follow the guidelines or standards by different organizations, such
as the Joint Commission on Accreditation of Healthcare Organizations (JCAHO),
Occupational Safety and Health Administration (OSHA), and Facility Guidelines
Institute (FGI). Former work (Lucas & Bulbul et al., 2013c) has analyzed major
healthcare standards and guidelines and has discussed how they can be incorporated
into a healthcare facility information framework to support facility operation and
performance compliance.

This paper discusses a case study analysis to identify information needs and
communication methods of the FM group in a healthcare setting. It examines how
BIM can be implemented to help close the gaps and seamlessly transfer information.
Similar case studies have been conducted regarding the links between facility
information and the healthcare delivery process (Mohammadpour et al., 2012);
BPMN flow charts and UML use cases have been created to examine information
needs for existing healthcare facility maintenance operations and to identify the
information origin along the facility lifecycle (Lucas & Bulbul et al., 2013a); and a
product model has been developed as a result of various case analyses (Lucas &
Bulbul et al., 2013b). Building on this previous work on identifying information
links, this paper discusses case study research conducted on the analysis of routine
maintenance and emergency reactions in a clinical organization.

Study Design and Method

This case study research is built upon a one-week work shadowing experience
in a healthcare organization. The observation took place at a clinic housed in a four-
floor building that serves around 1500 patients and their companions per day. The
shadowing was conducted with FM personnel to learn the details of their regular
work activities and their interactions with the FM web-based system, clinical groups,
the mechanical systems, and the HVAC system. The work of the FM personnel
includes preventive maintenance tasks, building inspections, emergency responses,


furniture repair, and some cleaning. Two kinds of tasks are selected for the case
study: a preventative (routine) task and an emergent task. The preventative task is
analyzed with respect to the interaction of the systems under normal conditions. The
emergent response analysis involves the interaction of different systems when there is
an inherent danger to the occupants or systems and response time is limited. The aim
of preventative maintenance tasks is to maintain and replace systems before a
problem occurs and reduce the possibility of an emergent situation. The aim of
emergent responses is to contain the damage to people and the facility as quickly as
possible. Among the possible incidents, a fire event has been selected for analysis.
The preventative maintenance task for this event is the monthly inspection of all fire
extinguishers in the building. The emergent task is responding to a small fire that is
caused by events such as a motor burnout, vacuum filter bag bursting (as dust
particles passing into the motor could get ignited or cause an internal explosion), or
equipment overheating.

After the above two scenarios are identified, the work processes for the FM
personnel to finish the tasks and their interactions with other personnel and systems
are documented using the Business Process Modeling Notation (BPMN). BPMN is a
flow-chart based notation for defining business processes at various levels of
complexity (White & Miers, 2008). Specifically, a BPMN diagram separates different
participants of a system in separate lanes. Arrow lines are used to present the flow of
the activity and dotted arrow lines show the communication between different
participants. After completion of scenarios and BPMN diagrams, an interview with
the FM manager was arranged for finalizing the diagrams.

Case Scenario and BPMN Diagrams

Routine case: The monthly inspection of fire extinguishers is completed by


the FM personnel in three steps: (1) check the status of every fire extinguisher in the
healthcare facility (clean the boxes; check that the pressure is full, the pin is in good
position, and the discharge nozzle and orifice are not blocked), (2) take records
(match the number of the extinguisher with that on the form which contains numbers
of all the extinguishers in the building, sign the inspection card on each extinguisher,
and sign the corresponding item on the form), and (3) report any problems for
corrective maintenance. The FM director is responsible for supervising the performance
of the FM personnel.

In the observed healthcare facility, web-based maintenance management
software is used for communication, documentation, and management of work orders
by the maintenance team and administration. It allows the users to send and receive
work orders to and from specified people within the system. The detailed
requirements of a task can be included into a work order request for reference and to
provide instructions on how to complete a task in accordance with regulations, codes
and previous experiences. It can also schedule periodical preventative work, such as
monthly inspections, and automate the creation and sending of these work orders. The
status of the work orders can be tracked by the administration and the FM personnel
so that everyone is on the same page without redundant inquiries and reporting. The
cost (hours spent) is also put into the system for tracking and reports can be created
easily for a person, a team, a place or area, a piece of equipment, or a time period.
The narrative of the preventative maintenance process, following the activity
sequence, is presented in Table 1. The BPMN diagram is shown in Figure 1.

Table 1: Narrative of Preventative Maintenance Process


• The FM team gathers the guidelines and standards from related organizations (such as JCAHO,
OSHA, and NFPA) and the internal documentation, to define the requirements for the work orders.
This information is entered into the FM’s maintenance management software.
• According to the input from the FM team, the work orders are automatically created for routine
inspection of fire extinguishers.
• When the scheduled time is due (in this case, every month), the maintenance management software
creates a preset work order and sends it to the assigned FM personnel.
• The FM personnel conducts the fire extinguisher inspection as required, updates the work order
status, and closes the work order when the preventive maintenance work is completed.
• The maintenance management software archives the work order and generates a report.
• If problems are identified during the inspections, the FM personnel will create a corrective work order
in the maintenance management software.
• When the problem is fixed, the work order will be updated and closed.
• Once closed, the maintenance management software will archive the work order and generate a
report for the corrective maintenance work.
Emergent case: The emergent fire case involves more parties in the process,
including firefighters, police, the alarm monitoring company, and the dispatch center. The
narrative of the emergent response process is presented in Table 2 according to the activity
sequence and information flow, and the BPMN diagram is shown in Figure 2.

Table 2: Narrative of Emergent Response Process


• When the smoke detector senses the fire, or someone notices the fire and uses the pull station, the
signal will be sent to the fire alarm system panel inside the clinic building indicating the location of the
smoke detector or the pull station. The signal will also be sent to the alarm monitoring company,
which is responsible for informing emergency services of the fire incident.
• On hearing the fire alarm and seeing the alarm signal, the dispatch center announces “Code Red”.
• The FM and security personnel head for the panel, which indicates the location of the triggered smoke
detector or pull station.
• The security personnel will notify the dispatch center of the fire location, and the dispatch center will
announce the fire location through the building’s speaker system.
• On hearing the alarms and notification of the fire location, the two designated staff members on
each floor will respond by evacuating the staff members, ambulatory patients, and contractors through the
safety exits. Non-ambulatory patients, accompanied by their nurses, are moved to refuge spaces with
two-hour firewall protection to wait for transportation by the firefighters.
• The FM personnel goes to the fire source, tries to control the fire if possible or leaves it if it is
too large to contain, and then reports the fire status to the facility manager.
• The FM personnel will clear the building starting from the top floor to make sure that the patients
and staff members have been evacuated or transferred.
• Then the FM personnel will exit the building after all occupants have been cleared, warned, or accounted for.
• After receiving the emergency call, the police, fire fighters, and ambulance team will head for the
location. The firefighters, police, and ambulance team are responsible for checking and putting out the
fire, transporting non-ambulatory patients waiting at the refuge spaces, and keeping people from
entering the building.
• The fire fighters, ambulance team, and police will discuss the fire situation with the FM personnel
and FM director to decide whether the hazard has been diminished.
• After confirmation that the hazard has been diminished, the FM director will inform the dispatch center,
and the dispatch center will announce “All clear” and give the “OK to reset” to the alarm monitoring
company to reset the system to normal status.
• After the “All clear” announcement, everybody is allowed to go back into the building.
• The FM director will work with the insurance company to assess the damage in the building, and the
FM personnel will repair the damage and replace any fire extinguishers that have been used.

Figure 1: Routine fire case BPMN diagram

Figure 2: Emergent fire case BPMN diagram (partial)

Discussion of Information Needs

From this shadowing experience, we observed that much of the FM
personnel's time is spent simply walking: from the maintenance shop to the place of
the issue, back for drawing sheets and other information, to the place of the issue again, back
for materials and tools, then completing the work, and back again to update the
work order in the FM management system. Further time is lost in confirming the
right location and the right piece of equipment. Extending BIM to support FM activities
can provide an intuitive virtual representation of the facility and save FM
personnel time by integrating various sources of information and providing quick,
visual access to dynamic and static facility data. The functionality and information
necessary for the routine case and the emergent case are discussed below.
Routine case: The expected functionality from BIM-based FM support
system for this routine job includes: storing and checking maintenance history,
indicating the position and serial numbers of the equipment, checking for omitted
equipment, storing equipment service life information, and automatically reminding
personnel of equipment maintenance needs and service life limit. The information
needed includes: the maintenance history, the equipment locations on the installation
drawings, the equipment information of serial numbers, service life begin date and
service life limits, suppliers’ contacts, and possible problems and corresponding
solutions. Among these, equipment information such as serial numbers and locations in
spaces or zones could be stored using the structure and data format of COBie's
Component records in BIM. Such information is not difficult to identify and is available at the
beginning of the equipment's service life. Although the fire extinguisher case is relatively
simple and does not involve complex problems to detect and fix, it
is representative of various FM activities, such as elevator inspection, HVAC
unit inspection, and generator inspection. Further, in a larger building, or a set of
buildings, the quantity of these activities is substantial and prone to error when
performed manually.
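As a minimal illustration of how such equipment data could be organized, the following sketch stores a fire-extinguisher record using COBie-style Component fields; the field names, values, and the inspection-scheduling helper are illustrative assumptions rather than data or tooling from the observed facility.

```python
# Illustrative sketch only: a fire-extinguisher record organized with COBie-style
# Component fields, as one way to hold the information listed above. Field names and
# values are assumptions for illustration, not data from the observed facility.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ExtinguisherComponent:
    name: str                 # Component.Name (asset tag)
    type_name: str            # Component.TypeName (e.g., extinguisher model)
    space: str                # Component.Space (room or zone where it is mounted)
    serial_number: str        # Component.SerialNumber
    installation_date: date   # Component.InstallationDate (service life begin date)
    service_life_years: int   # expected service life limit
    inspection_history: list = field(default_factory=list)  # dates of monthly checks

    def next_inspection_due(self) -> date:
        """Monthly inspection: due one month after the last recorded check."""
        last = max(self.inspection_history, default=self.installation_date)
        month, year = (last.month % 12) + 1, last.year + (last.month // 12)
        return date(year, month, min(last.day, 28))

# Example usage with hypothetical values
ext = ExtinguisherComponent(
    name="FE-03-017", type_name="ABC dry chemical, 5 lb", space="Level 3 / Zone B",
    serial_number="SN-48211", installation_date=date(2013, 5, 1),
    service_life_years=12, inspection_history=[date(2014, 12, 3)],
)
print(ext.next_inspection_due())  # -> 2015-01-03
```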
Emergent case: The expected functionalities from BIM in the emergency
case include: presenting the fire location, receiving alarm signals and providing
reaction guidance according to the life safety plan, providing patient location
information and room availability for evacuation purposes, and the ability to access
this information using mobile devices. The sources of information needed for the
emergent case functionality include: instant instructions in case of an incident according
to the life safety plan and training modules, the dynamic data from the fire alarm
panel, and the dynamic information of patient location, mobility, and room number.
The life safety plan is easy to obtain and can be integrated into the 3D model in support
of decision-making. However, the dynamic data from the fire alarm system and the
clinical system are not easy to obtain. Moreover, in consideration of other kinds of
incidents, more complex systems such as the mechanical system and the HVAC
system may also be involved. Many of these systems in a facility may be equipped
with automated sensors from different suppliers and managed on separate proprietary
platforms; they are independent of each other and of BIM, so there is no
automatic flow of information among them. With dynamic system data captured
and updated as additional properties of BIM objects, the FM personnel can
access all data instantly and conduct maintenance activities with a comprehensive
understanding of the facility and their tasks.

CONCLUSION
This paper presents a case study based on a shadowing experience in a
healthcare clinic. Case scenarios regarding fire safety issues are generated.
BPMN diagrams and narratives are developed to analyze the sequence of activities
and the interactions among different participants. Points of improvement and
communication gaps are identified for the purpose of using BIM to further facilitate
FM performance in consideration of patient and staff safety, cost
reduction, and efficiency. The information needed to realize the functionality, and the
origins of that information, are analyzed. For future research, the feasibility of linking the
BIM system to other systems in a healthcare facility to obtain and update dynamic
information in the 3D model will be further studied. The previous ontology research
(Lucas et al., 2013a) has laid the foundation for how to organize the collected
data and what functions to perform with the data. Moreover, the feasibility of using
device support (such as a mobile smart phone or tablet that provides immediate
information to the FM personnel without them leaving the jobsite) to run the adapted
BIM tool will also be studied. As for the information to be stored or included in a
BIM-based model, more cases need to be created to reflect the various aspects of the
healthcare environment. More case studies will be created for risk assessment and
damage assessment to support the decision making of the FM personnel.

REFERENCES
EN 15221-1: 2006. (2006). Facility management–Part 1: terms and definitions.
Lucas, J., Bulbul, T., & Anumba, C. (2013c). Gap analysis on the ability of
guidelines and standards to support the performance of healthcare facilities.
Journal of Performance of Constructed Facilities.
Lucas, J., Bulbul, T., & Thabet, W. (2013a). An object-oriented model to support
healthcare facility information management. Automation in Construction, 31,
281-291.
Lucas, J., Bulbul, T., Thabet, W., & Anumba, C. (2013b). Case analysis to identify
information links between facility management and healthcare delivery
information in a hospital setting. Journal of Architectural Engineering, 19(2),
134-145.
Lucas, J., Bulbul, T., & Thabet, W. (2011). “A facility information model to support
operations and maintenance of a healthcare environment.” Proceedings of the
CIB W078, W102 Computer Knowledge Building Conference, October 2011,
Sophia Antipolis, France.
Manning, R., & Messner, J. (2008). Case studies in BIM implementation for
programming of healthcare facilities. ITcon, 13 (special issue), 246-257.
Mohammadpour, A., Anumba, C., Bulbul, T., & Messner, J. (2012). Facilities
management interaction with healthcare delivery process. Construction
Research Congress 2012, 728-736.
White, S. A., & Miers, D. (2008). BPMN modeling and reference guide:
understanding and using BPMN. Lighthouse Point, FL: Future Strategies Inc.


Estimating Fuel Consumption from Highway-Rehabilitation Program Implementation on Transportation Networks

Charinee Limsawasd1; Wallied Orabi2; and Sitthapon Pumpichet3

1Ph.D. Candidate, OHL School of Construction, Florida International University, Miami, FL 33174. E-mail: [email protected]
2Assistant Professor, OHL School of Construction, Florida International University, Miami, FL 33174. E-mail: [email protected]
3Post-Doctoral Research Associate, Department of Electrical and Computer Engineering, Florida International University, Miami, FL 33174. E-mail: [email protected]

Abstract
Reducing fuel consumption on roadway networks can have a huge impact
on the nation’s economy and environment. Existing ad-hoc transportation
planning efforts that allocate limited funding on need-based criteria are
insufficient for providing a significant reduction in fuel consumption. Therefore,
there is an urgent need for new research that analyzes the impact of planning efforts
on fuel consumption to support transportation decision making. This paper
presents the development of a new model for estimating fuel consumption in
transportation networks under budget constraints by taking into consideration the
effect of pavement deterioration on fuel consumption. The model is composed of
three main modules to (1) estimate vehicle fuel consumption of transportation
networks; (2) allocate limited funding to competing highway rehabilitation
projects; and (3) evaluate the impact of pavement roughness and deterioration on
fuel consumption. An application example is analyzed to evaluate the developed
model and illustrate capabilities of the model. The application result demonstrates
the significant impact of highway rehabilitation planning on fuel consumption on
roadway networks. This study should prove useful to planners and decision
makers in evaluating the impact of highway rehabilitation efforts on fuel
consumption.

Keywords: Sustainability; Fuel consumption; Transportation networks; Highway programs


INTRODUCTION
According to the U.S. Energy Information Administration (EIA),
transportation ranked second among all economic sectors with 28% of total
energy consumption in 2014 (EIA 2014). The transportation sector is also the largest
consumer of petroleum (EIA 2014) and the second highest greenhouse
gas (GHG) generating sector (EPA 2013). Consequently, reducing fuel consumption on
transportation networks can have a direct and significant impact on improving the
nation’s economy and environment. Substantial reduction of fuel consumption can
be achieved not only by developing energy-efficient engines, but also by
optimizing consumption related to vehicle-pavement interaction. However,
current highway programming and planning efforts are based on ad-hoc models
that allocate the limited budget following need-based criteria such as worst pavement
conditions or highest traffic volume. These models are therefore insufficient for
providing a significant reduction of fuel consumption and GHG emissions.
Accordingly, there is a pressing need for new models for evaluating and
minimizing fuel consumption as a result of highway programming and planning
decisions.
Existing research on fuel consumption in the transportation sector has focused
on: (1) analyzing the impact of several factors (e.g. vehicle type and pavement
roughness) on fuel consumption (Watanatada et al. 1987; Epps et al. 1999; Amos
2006); (2) estimating vehicle fuel consumption (Bennett and Greenwood 2003;
Chatti and Zaabar 2012; Zaabar and Chatti 2010); (3) modeling life-cycle energy
consumption at the road level (Zhang et al. 2009; Yu and Lu 2012); and (4)
modeling life-cycle energy consumption for network rehabilitation programs
(Zhang et al. 2012; Wang 2013). Despite the significant contributions of these
studies, no reported research enables evaluating and estimating fuel consumption
at the network level using data readily available to planners and decision makers
in developing highway rehabilitation programs.
In order to address this important research gap, this paper presents the
development of a novel model for estimating fuel consumption of highway
rehabilitation programs. The model is developed in three main stages that are
designed to (see Figure 1): (1) allocate limited funding to competing rehabilitation
projects; (2) model the impacts of pavement condition deterioration on fuel
consumption; and (3) evaluate and estimate the impact of highway rehabilitation
decisions on total fuel consumption in transportation networks. The following
sections describe these three stages in detail.

Figure 1. Network fuel consumption estimating model (three stages: (1) allocation of rehabilitation budget, producing all possible highway programs; (2) pavement performance forecasting, producing pavement performance at year y after rehabilitation; and (3) estimation of total fuel consumption for the entire network).


ALLOCATION OF REHABILITATION BUDGET


The main objective of this stage is to explore and identify all the possible
highway rehabilitation programs that satisfy the limited available funding. To this
end, a list of candidate highway rehabilitation projects and their estimated costs
are analyzed using a spreadsheet to enumerate all possible alternative combinations
of project selection. Then, only the rehabilitation programs that have a total cost at
or less than the available funding will be considered for further evaluation (see
equation 1).

C = \sum_{p=1}^{P} c_p \le F                                            (1)

Where C = total cost (in dollars) of the rehabilitation program; P = number of projects;
c_p = cost of rehabilitation project (p); and F = available funding.
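A minimal sketch of this enumeration stage is shown below; it is a brute-force Python illustration rather than the authors' spreadsheet implementation, and the function and variable names are assumed.

```python
# Brute-force sketch (illustrative, not the authors' spreadsheet tool): enumerate all
# project combinations whose total cost stays within the available funding (equation 1).
from itertools import combinations

def feasible_programs(costs, funding, min_spend=0.0):
    """Return (project index tuple, total cost) for every feasible program."""
    programs = []
    for k in range(1, len(costs) + 1):
        for combo in combinations(range(len(costs)), k):
            total = sum(costs[i] for i in combo)
            if min_spend <= total <= funding:
                programs.append((combo, total))
    return programs

# Candidate project costs from Table 1 (million dollars), 23-25 million dollar window
costs = [8.02, 4.44, 5.67, 4.20, 2.85, 3.39, 3.55, 5.58, 1.22, 4.70]
print(len(feasible_programs(costs, funding=25.0, min_spend=23.0)), "feasible programs")
```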

PAVEMENT PERFORMANCE FORECASTING


The main objective of this stage is to forecast the impact of traffic
conditions on the performance of road pavement over time. The deterioration of
pavement surface conditions can significantly affect total fuel consumption. In
this paper, pavement surface condition is measured in terms of its roughness that
increases over time as a result of deterioration due to the traffic load. Therefore,
the increase over time in pavement roughness is estimated using equation (2),
which is adapted from Paterson and Attoh-Okine (1992). Pavement roughness is
estimated on a yearly basis over the span of the rehabilitation program taking into
consideration pavement age, previous year roughness, and cumulative traffic load.
Traffic load is represented by the overall weight of all vehicles traveling on that
road section, which is typically measured in terms of equivalent standard axle load
(ESAL).

Where IRIp,y = pavement roughness index (in meters per kilometer) of road section (p) at
year (y) after rehabilitation; SNC = modified structural number; and ESALp,y =
cumulative equivalent standard axle loads (ESAL) of road section (p) at year (y)
after rehabilitation (in million ESAL per lane).
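For reference, the simplified HDM-III roughness progression model published by Paterson and Attoh-Okine (1992), from which equation (2) is adapted, is commonly written as

IRI_{p,y} = 1.04\, e^{m \cdot AGE_{p,y}} \left[ IRI_{p,0} + 263\,(1 + SNC)^{-5}\, ESAL_{p,y} \right]

where IRI_{p,0} is the roughness immediately after rehabilitation, AGE_{p,y} is the pavement age in years, and m is an environmental coefficient (about 0.023 in the published calibration). The exact adaptation used in this study, which also accounts for the previous year's roughness, may differ from this form, so the expression above is shown only for orientation.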
To this end, a new pavement performance algorithm is developed to
forecast the deterioration in pavement conditions. Figure 2 shows a flowchart of
this new algorithm that consists of eight main steps as follows:
1. Check whether road section (p) is selected for rehabilitation. If the
section is included in the rehabilitation program, proceed to step 2; otherwise,
proceed to step 3.
2. Set the initial pavement roughness index (IRIp,I) of section (p) to the
expected value after rehabilitation. It is assumed that rehabilitation efforts will
decrease IRI to 0.5 m/km for flexible pavement, according to MnDOT (2003) and
Utah LTAP (2004). Go to step 4.
3. Set the initial pavement roughness index to the current IRI (IRIp,N) if the
section (p) is not included in the rehabilitation program. For example, a road
section with a current IRI of 3 m/km that is not selected for rehabilitation
implementation will have its initial pavement roughness index set to 3 m/km.
4. Calculate section pavement roughness index (IRIpy) of the road section
(p) at the end of each year (y) after rehabilitation over an analysis time span
identified by the user using equation (2).
5. Check whether the current year is the last of the analysis span (Y). If
yes, proceed to step 7; otherwise, go to step 6.
6. Verify whether the road section is due for periodic maintenance during the
current year. In this paper, periodic maintenance is implemented in 5-year
cycles. If a processing year falls in a maintenance cycle year (MD), the road
section (p) then moves to the next check in step 7; otherwise,
repeat steps 1-6 for the next year (y+1). Please note that the performance
improvement due to periodic maintenance is not included in this study.

Figure 2. Pavement performance forecasting algorithm (flowchart of steps 1-8 above, looping over road sections p = 1..P and years y = 1..Y, with the modified structural number SNC provided as user input).


7. Check whether the current estimated IRI (IRIpy) is higher than the
maximum acceptable value according to applicable practices and guidelines. If
yes, assume this road section will undergo a new rehabilitation effort and go to step
8; otherwise, proceed to step 6. In this paper, the maximum acceptable IRI value
is assumed to be 4 m/km representing an average value for poor pavement
conditions, according to MnDOT (2003).
8. Adjust the current pavement roughness index (IRIpy) to 0.5 m/km as a
result of implementing new rehabilitation effort if the road section (p) has an IRI
greater than the maximum acceptable value (4 m/km). All steps are repeated for
each road section of the transportation network.
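The eight steps above can be summarized in the following sketch, which follows a literal reading of the procedure; the roughness_model function stands in for equation (2), and the names, constants, and structure are illustrative assumptions rather than the authors' implementation.

```python
# Sketch (illustrative only) of the pavement performance forecasting procedure in
# steps 1-8 above. roughness_model() stands in for equation (2) and must be supplied.
POST_REHAB_IRI = 0.5      # m/km, assumed IRI right after a rehabilitation effort
MAX_ACCEPTABLE_IRI = 4.0  # m/km, poor-condition threshold triggering re-rehabilitation
MAINT_CYCLE = 5           # years, periodic maintenance cycle

def forecast_iri(sections, selected, years, roughness_model):
    """Return {section_id: [IRI at end of year 1..years]} for every road section.

    sections: section_id -> {'iri': current IRI, 'snc': ..., 'esal': ...}
    selected: set of section_ids included in the rehabilitation program
    roughness_model(prev_iri, age, snc, esal) -> IRI one year later (equation 2)
    """
    results = {}
    for sec_id, data in sections.items():
        # Steps 1-3: initial IRI depends on whether the section is rehabilitated
        iri = POST_REHAB_IRI if sec_id in selected else data['iri']
        age, history = 0, []
        for year in range(1, years + 1):
            # Step 4: roughness progression for this year
            age += 1
            iri = roughness_model(iri, age, data['snc'], data['esal'])
            # Steps 5-8: in the last year or a maintenance-cycle year, re-rehabilitate
            # the section if its IRI exceeds the maximum acceptable value
            if (year == years or year % MAINT_CYCLE == 0) and iri > MAX_ACCEPTABLE_IRI:
                iri, age = POST_REHAB_IRI, 0
            history.append(iri)
        results[sec_id] = history
    return results
```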
ESTIMATION OF TOTAL FUEL CONSUMPTION
The objective of this stage is to evaluate and estimate the impact of
decisions made during programming of highway rehabilitation efforts on total fuel
consumption in the transportation network. To this end, the model evaluates the
impact of planning decisions, such as selection of rehabilitation projects from a
group of roads due for repair, on the total network fuel consumption. The fuel
consumption is estimated as a function of traffic volume, highway mileage,
vehicle speed, vehicle type, and pavement condition. An average vehicle fuel
consumption rate is estimated for each road section depending on the free flow
speed, type of vehicles, and pavement surface conditions of that section. In this
study, equation (3) was developed for passenger cars based on the HDM-4 model
described in Chatti and Zaabar (2012).

Where FCp,y = average vehicle fuel consumption rate on road section (p) in year (y) of
the rehabilitation efforts (in milliliters per vehicle-kilometer); Sp = free flow
speed on road section (p) (in miles per hour); and IRIp,y = pavement roughness index
(in meters per kilometer) of road section (p) at year (y) after rehabilitation.
In this model, pavement conditions are measured using the international
roughness index (IRI). As discussed in the previous section, the pavement surface
conditions deteriorate over time, which will in turn significantly affect fuel
consumption. Therefore, the average vehicle fuel consumption rate for each road
section is estimated on a yearly basis. The total fuel consumption of the
transportation network resulting from the implementation of the selected highway
rehabilitation program is therefore estimated using equation (4), as follows:

Where TF = total fuel consumption of the transportation network (in liters); Y = number
of years of the rehabilitation effort; P = number of road sections undergoing
rehabilitation; Vp = traffic volume (in terms of AADT) on road section (p); and
Lp = length of road section (p) (in kilometers).
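Equation (4) combines these quantities by summing, over the analysis years and the rehabilitated road sections, the product of the average consumption rate, the traffic volume, and the section length. One consistent form of this summation, with assumed unit-conversion factors, is

TF = \sum_{y=1}^{Y} \sum_{p=1}^{P} \frac{365}{1000}\, FC_{p,y}\, V_p\, L_p

where the factor 365 converts the daily traffic volume (AADT) to an annual volume and the factor 1000 converts milliliters to liters.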

APPLICATION EXAMPLE
An application example is analyzed to evaluate the performance of the
developed model and illustrate its capabilities in estimating the total fuel
consumption of transportation networks as a result of highway rehabilitation
decisions. The example analyzes a hypothetical rehabilitation program applied to
a transportation network in Miami-Dade County, Florida. Several road sections
throughout the network are assumed to be suffering from varying surface conditions
and to be in need of rehabilitation efforts. In order to improve the overall pavement
conditions of the network, ten candidate rehabilitation projects are being
considered by the decision makers under budget limitations. Table 1 shows the
current pavement roughness index, section length, free flow speed, number of
lanes, rehabilitation cost, and total equivalent standard axle load for each of these
projects.
Table 1. Candidate Rehabilitation Projects.

Project  IRI (m/km)  Traffic Volume (veh/day)  Length (km)  Speed Limit (mph)  No. of Lanes  Construction Cost (million dollars)  Total ESAL (million ESAL/lane)
1        4.50        45,500                    4.61         40                 4             8.02                                 0.3546
2        3.20        55,000                    3.40         40                 3             4.44                                 0.5715
3        2.80        37,500                    6.52         40                 2             5.67                                 0.5845
4        3.00        50,500                    3.22         45                 3             4.20                                 0.5247
5        4.00        35,000                    3.28         35                 2             2.85                                 0.5455
6        4.00        48,500                    2.60         40                 3             3.39                                 0.5039
7        3.80        33,500                    2.72         45                 3             3.55                                 0.3481
8        5.00        63,000                    4.28         45                 3             5.58                                 0.6546
9        4.00        13,000                    2.80         40                 1             1.22                                 0.4052
10       3.80        71,000                    3.60         45                 3             4.70                                 0.7377

The funding available for this rehabilitation program is assumed to be 25
million dollars over a 5-year programming period. In this example, the decision
makers are required to allocate the budget above a minimum threshold of 23 million
dollars. The total fuel consumption of the network is estimated for an analysis
life span of 20 years. The newly developed model was applied to this application
example and was capable of estimating the total network fuel consumption for
about 93 potential rehabilitation programs. The results demonstrate the significant
impact of highway rehabilitation decision making on the total network fuel
consumption, as shown in figure 3.
The estimated total network fuel consumption of the potential
rehabilitation programs ranges between 342.2 and 544.3 million liters. This shows
that the fuel consumption for the rehabilitation program with the highest expected
consumption (alternative #2) is about 1.6 times that of the program with the
lowest fuel consumption (alternative #1). In addition, Figure 3 also shows the
estimated total network fuel consumption using the current ad-hoc planning and
programming decision making models that depend on need-based criteria such
as worst pavement conditions or highest traffic volume.

Figure 3. Fuel consumption of all possible highway programs under the budget limitation. (The plot shows the total network fuel consumption, in millions of liters, for each possible highway program; Alternative #1, Alternative #2, and the programs based on traffic volume and on pavement conditions are marked.)

If the project selection
decision was made solely based on roads with the highest traffic volume, the
expected total fuel consumption is approximately 510 million liters, which is
about 24% higher than the expected fuel consumption of 409 million liters if the
decision was based solely on roads with the worst pavement conditions. In
addition to these four alternative rehabilitation programs, the model was able to
estimate the fuel consumption for a wide range of other candidate rehabilitation
programs that satisfy the funding constraints, as shown in Figure 3. This can prove
useful for planners and decision makers in selecting rehabilitation programs
with lower fuel consumption and greenhouse gas emissions compared to existing
decision-making practices.
CONCLUSION
A new model was developed to support highway rehabilitation decision
making for transportation networks. The model is capable of estimating the total
fuel consumption of a transportation network undergoing rehabilitation under
limited funding. The model takes into consideration the impact of pavement
condition deterioration on vehicle fuel consumption. The model was designed in
three main stages to allocate the limited funding to competing rehabilitation
projects; forecast the deterioration in pavement roughness over the analysis time
span; and estimate the total network fuel consumption of candidate rehabilitation
programs. An application example was analyzed to illustrate the performance and
capabilities of the developed model, and the results showed that decision making
in highway rehabilitation programs can have a significant impact on the total
network fuel consumption. The findings of this research can be used to
expand and improve the current ad-hoc and need-based decision-making practices
in order to provide energy savings and reduce greenhouse gas emissions.
The study presented herein can be expanded to include other factors
to improve the rehabilitation planning process. For example, the fuel consumption
estimation can be extended to include the impact of the construction process. The vehicle
fuel consumption rate can be extended to cover other vehicle types. Also, the
performance improvement due to periodic maintenance can be included in the
analysis to generate more realistic programs. Future work is planned to take
these limitations into account and to integrate the developed model with a multi-
objective optimization technique to enhance the efficiency of the decision-making
process.
REFERENCES
Amos, D. (2006). Pavement smoothness and fuel efficiency: An analysis of the
economic dimensions of the Missouri smooth road initiative. Missouri
Department of Transportation.
Bennett, C. R., and Greenwood, I. D. (2003). “Volume 7: Modeling road user and
environmental effects in HDM-4.”, Version 3.0, World Road Association.
Chatti, K., and Zaabar, I. (2012). “Estimating the effects of pavement condition on
vehicle operating costs.” Transportation Research Board.
EIA. (2014). "Monthly energy review November 2014." Rep. No. DOE/EIA-
0035(2014/11), U.S. Energy Information Administration, Washington, DC.
EPA. (2013). "Sources of greenhouse gas emissions."
http://www.epa.gov/climatechange/ghgemissions/sources.html#ref3 (Mar.
31, 2014).
Epps, J., Leahy, R., Mitchell, T., Ashmore, C., Seeds, S., Alavi, S., and
Monismith, C. (1999). "WesTrack—the road to performance-related
specifications." International Conference on Accelerated Pavement Testing.
MnDOT. (2003). “An overview of Mn/DOT’s pavement
condition rating procedures and indices.”
http://www.mrr.dot.state.mn.us/pavement/pvmtmgmt/ratingoverview.pdf
(Nov. 4, 2014).
Paterson, W.D., and Attoh-Okine, B. (1992). “Simplified models of paved road
deterioration based on HDM-III.” Annual Meeting of the Transportation
Research Board, January 1992.
Utah LTAP. (2004). Transportation asset management system.
Wang, T. (2013). Reducing greenhouse gas emissions and energy consumption
using pavement maintenance and rehabilitation: Refinement and
application of a life cycle assessment approach, University of California,
Davis.
Watanatada, T., Dhareshwar, A. M., and Lima, Paulo Roberto S Rezende. (1987).
Vehicle speeds and operating costs: Models for road planning and
management.
Yu, B., and Lu, Q. (2012). "Life cycle assessment of pavement: Methodology and
case study." Transportation Research Part D: Transport and Environment,
17(5), 380-388.
Zaabar, I., and Chatti, K. (2010). "Calibration of HDM-4 models for estimating
the effect of pavement roughness on fuel consumption for US conditions."
Transportation Research Record: Journal of the Transportation Research
Board, 2155(1), 105-116.
Zhang, H., Keoleian, G. A., and Lepech, M. D. (2012). "Network-level pavement
asset management system integrated with life-cycle analysis and life-cycle
optimization." J Infrastruct Syst, 19(1), 99-107.
Zhang, H., Lepech, M. D., Keoleian, G. A., Qian, S., and Li, V. C. (2009).
"Dynamic life-cycle modeling of pavement overlay systems: Capturing the
impacts of users, construction, and roadway deterioration." J Infrastruct
Syst, 16(4), 299-309.


GPU-Powered High-Performance Computing for the Analysis of Large-Scale Structures Based on OpenSees

Yuan Tian; Linlin Xie; Zhen Xu; and Xinzheng Lu

Key Laboratory of Civil Engineering Safety and Durability of the China Education
Ministry, Department of Civil Engineering, Tsinghua University, Beijing, P.R. China
100084. E-mail: [email protected]

Abstract

Numerical simulations using various finite element (FE) software packages
have been widely adopted to investigate the seismic performance of large-scale
important structures, such as super-tall buildings and large-span bridges. Among
these FE software packages, Open System for Earthquake Engineering Simulation
(OpenSees), as an open-source FE software program, has increasingly become one of
the most influential packages. However, the computational efficiency of the solvers
for linear systems of equations (SOE) in OpenSees, which use the direct method,
cannot satisfy the demand for numerical simulation of large-scale structures.
Consequently, two new parallel-iterative solvers for the sparse SOE are proposed and
implemented in OpenSees, based on two graphics processing unit (GPU)-based
libraries, CuSP and CulaSparse. The time history analyses of a 141.8 m frame-core
tube building and a super-tall building (the Shanghai Tower, with a height of 632 m)
are performed using the proposed solvers. The speed-up ratio of the proposed solvers
is up to 9 to 15, with high accuracy in results, when compared with the existing
central processing unit (CPU)-based SparseSYM solver in OpenSees.
This research outcome can provide an effective computing technology for the
numerical analysis of the seismic behavior of large-scale structures.

Keywords

Large-scale structures; Numerical analysis; OpenSees; GPU; Parallel-iterative solver


INTRODUCTION

In recent years, more and more large-scale structures (e.g., high-rise buildings,
super-tall buildings, large-span bridges and high dams) are being designed and built
worldwide. Concurrently, research on the seismic behavior of large-scale structures
has become increasingly common, owing to the important social function of these
structures and the frequent occurrence of earthquakes. In addition, research to date
indicates that numerical simulation, using finite element (FE) analysis, is an effective
method of investigating the nonlinear seismic behavior of such structures (Lu et al.
2011, 2013a; Xu et al. 2013; Li et al. 2014; Xie et al. 2014a). Among existing FE
software, OpenSees, an open-source FE program, has been widely used because it is
versatile, extensible and shareable (McKenna et al. 2009).
A multi-layer shell element, based on the principles of composite material
mechanics, has been developed in OpenSees for shear walls, core tubes and floor
slabs, which are important components of large-scale structures (Lu et al. 2013b; Xie
et al. 2014a, 2014b). This multi-layer shell element has been applied in investigating
the seismic behavior of super-tall buildings and large-span bridges, providing useful
references and an effective tool for further research on the seismic behavior of
large-scale structures. However, the abovementioned research also indicated that the
computational efficiency of OpenSees based on CPU computing cannot satisfy the
demand for numerical simulation of large-scale structures. This restricts further
investigation on the seismic behavior of such structures using this software package.
Recently, GPUs have been rapidly developed and applied in the general
computing field, due to their powerful parallel computing capability and low cost
(Owens et al. 2007). Further, seismic damage simulations of urban areas have been
conducted by Lu et al. (2014a) using GPU/CPU cooperative computing. Their
benchmark cases indicate that the computing efficiency of GPU could be up to
almost 40 times that of a traditional CPU. Accordingly, a GPU may have the
potential to provide a high-performance alternative for the seismic performance
analysis of large-scale structures based on OpenSees.
In this study, two new parallel-iterative solvers for the sparse SOEs are
proposed and implemented in OpenSees, based on GPU-powered high-performance
computing. The nonlinear time history analyses (THA) of a 141.8 m frame-core tube
building and a super-tall building (the Shanghai Tower with a height of 632 m) are
performed using the proposed solvers. In comparison with the existing CPU-based
SparseSYM solver in OpenSees, the speedup ratio using the proposed solvers is up to
9 to 15, with high accuracy in results. The outcome of this study can provide an
effective computing technology for numerical analysis of the seismic behavior of
large-scale structures.


COMPUTING METHOD

Solvers for the Sparse SOEs in OpenSees


A critical issue for nonlinear THA using OpenSees is how to define the
system of equations (SOE), which involves two basic classes: the Linear System of
Equation (LinearSOE) class and the Linear System of Equation Solver
(LinearSOESolver) class. The LinearSOE class and the LinearSOESolver class
determine the storage and solution method of the SOE, respectively.
In the LinearSOE class, the SOE is usually composed of mass, stiffness and
damping matrices. For large-scale structure models using beam and shell elements,
the stiffness matrix and mass matrix usually exhibit significant sparse characteristics.
The damping matrix will be also sparse, if classical Rayleigh damping is adopted.
Hence, a storage method suitable for sparse SOE is required in the LinearSOE class.
The LinearSOESolver class in OpenSees (Version 2.4.3) integrates three
types of sparse SOEs, namely: SuperLU SOE, UmfPack SOE and SparseSYM SOE.
For these sparse SOEs, direct methods are generally adopted, such as triangular
decomposition and elimination. However, direct methods are difficult to parallelize
to a high degree. Based on CPU computing, only a few solvers can be parallelized, and
only with low efficiency. Direct methods are also not suitable for GPU-based parallel
computing. Therefore, a new and efficient solver for the sparse SOEs is required in
the LinearSOESolver class.

GPU-based Solvers for the Sparse SOEs in OpenSees


GPUs have powerful floating point operations and parallel computing
capabilities, which are very favorable for large-scale computational problems.
However, according to the above analysis, the solvers for the sparse SOEs in
OpenSees are not suitable for parallel processing using a GPU. Therefore, it is
necessary to develop a new solver for the sparse SOEs in OpenSees, which is more
suitable for GPU-based parallel processing. To improve the performance of the
GPU-based solver and retain compatibility with the original computing programs in
OpenSees, the solver is developed based on the following rules:
(1) Iterative methods, including the conjugate gradient (CG), Bi-CG, and GMRES
algorithms, are used to maximize the parallel computing ability
of the GPU.
(2) The SOE class (corresponding to the LinearSOE class) and the Solver class
(corresponding to the LinearSOESolver class) are designed separately. The SOE
class is designed inheriting the LinearSOE class, to improve the compatibility of the
solver.
(3) The solving function in the Solver class is designed in the form of a dynamic-link
library (DLL) to improve extensibility, which is convenient for modification.
Figure 1 illustrates the specific solving process of GPU-accelerated solvers.


The matrix is assembled on the CPU and then copied into the graphics memory to
perform parallel computing. Finally, the results are returned to the CPU for subsequent
computing.

Figure 1. Flow chart of GPU-based solvers for the sparse SOEs (CPU side: build the LinearSOE, integrate matrix A and vector b, and build the corresponding LinearSOESolver; GPU side: copy A, b, and x to graphics memory, solve the equation system with a parallel iterative method, and return vector x for the subsequent steps).

There exist several storage formats for sparse matrices. Among these, the
compressed sparse row (CSR) format is commonly adopted. This storage format can
be quickly converted into the coordinate (COO) format and requires less storage space
than alternatives. Some useful matrix characteristic values, such as the number of
non-zero elements in a row of the matrix, can be quickly obtained. In addition, the
CSR format is convenient for the parallel computing of matrix multiplication and
vector-matrix multiplication. Hence, this format is used to store the sparse matrices
in this study. A SparseGenRowLinSOE class is provided in OpenSees, which can
store sparse matrices in the CSR format. Therefore, this class is used directly as the
designed SOE class in this work.
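For readers unfamiliar with this workflow, the following small sketch illustrates the same CSR-plus-iterative-solver pattern using SciPy on the CPU; it is only an analogy for the GPU pipeline described above and is not the OpenSees, CulaSparse, or CuSP code.

```python
# Illustrative CPU-side analogy (not the OpenSees/CulaSparse/CuSP code): assemble a
# sparse matrix in CSR format and solve it with an iterative conjugate gradient method,
# mirroring the assemble -> CSR storage -> iterative solve workflow described above.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cg

# A small symmetric positive-definite, stiffness-like matrix stored in CSR format
A = csr_matrix(np.array([[ 4.0, -1.0,  0.0],
                         [-1.0,  4.0, -1.0],
                         [ 0.0, -1.0,  4.0]]))
b = np.array([1.0, 2.0, 3.0])

x, info = cg(A, b, maxiter=200)  # conjugate gradient iteration
print("converged" if info == 0 else "info=%d" % info, x)
```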
The designed Solver class should be written in a language suitable for GPU
computing. Therefore, two GPU-accelerated solving libraries for the sparse SOEs,
based on Compute Unified Device Architecture (CUDA) (NVIDIA, 2014a), are
adopted, namely: (1) CulaSparse, which is a linear algebra function library
(Humphrey et al. 2010) and (2) CuSP, which is an open source library for sparse
linear algebra (NVIDIA, 2014b). The corresponding scripts of CuSP are provided in
detail at http://opensees.berkeley.edu/wiki/index.php/Cusp. The corresponding
source codes that illustrate the implementation procedure of these two solvers in
detail are available at the website of OpenSees (PEER, 2014). This will facilitate the
reproduction of this research.


CASE STUDY

A performance benchmark is conducted for the proposed GPU-based solvers.
Two structural models are adopted in this benchmark, named the TBI2N model and the
Shanghai Tower model, respectively. The TBI2N model is a frame-core tube
building designed according to the Chinese building design codes for the TBI
program (H = 141.8 m) (Lu et al. 2014b). It contains 23,945 nodes, 23,024
fiber-beam elements and 16,032 multi-layer shell elements, as shown in Figure 2(a).
The OpenSees model is freely accessible (Lu, 2014c), and can be conveniently
shared and reused in the research community. The Shanghai Tower model (H = 632
m) (Lu et al. 2011) contains 53,006 nodes, 48,774 fiber-beam elements and 39,315
multi-layer shell elements, as shown in Figure 2(b).
The El-Centro EW ground motion is adopted to perform the nonlinear THA.
According to the design conditions, peak ground acceleration (PGA) values of 1000
gal and 400 gal are selected for the TBI2N and Shanghai Tower models,
respectively, to investigate the strong nonlinear behavior they would be subjected to
in extremely rare earthquakes. Detailed information on the GPU and CPU hardware
platforms and their respective solvers is presented in Table 1.

Figure 2. Benchmark models: (a) the TBI2N model; (b) the Shanghai Tower model

Table 1. Hardware platform and solvers for the comparison

Platform        Hardware                                                  Solvers
CPU computing   Intel Core i7-3970X 3.5 GHz                               SparseSYM
GPU computing   Intel Core i7-4770X 3.4 GHz & NVIDIA GeForce GTX Titan    CulaSparse Solver & CuSP Solver


Figure 3. Comparison of top displacement time history curves: (a) TBI2N model; (b) Shanghai Tower model. (Each plot shows top displacement in meters versus time in seconds for the CuSP, CulaSparse, and CPU SparseSYM solvers.)

Table 2. Computing time and speedup ratio of the three solvers

Platform   Solvers            Computing time,   Speedup   Computing time,         Speedup
                              TBI2N model       ratio     Shanghai Tower model    ratio
CPU        SparseSYM          168 h             \         409 h                   \
GPU        CulaSparseSolver   18 h              9.33      38 h                    10.76
GPU        CuSPSolver         11 h              15.27     27.5 h                  14.87

Based on the above platforms, the performances of the three solvers (two
GPU-based solvers and one CPU-based solver) are compared. Figure 3 presents the
comparison of top displacement time-history curves. As shown in Figure 3, the
differences between the results of the CPU solver and the GPU solvers are nearly
negligible, which is acceptable. Table 2 shows the comparison of the computing times
and the speed-up ratios. A speed-up ratio of 9-15 is achieved by using the two
proposed GPU-based solvers. Evidently, the GPU-based solvers in OpenSees exhibit
significant reliability and high efficiency for the nonlinear THA of large-scale
structures.

CONCLUSIONS

In this study, two new parallel-iterative solvers for the sparse SOEs are
proposed and implemented in OpenSees, based on two GPU-based libraries (CuSP
and CulaSparse). The THAs of two high-rise buildings are conducted using the two
proposed GPU-based solvers and an existing CPU-based solver. The results indicate
that the GPU-based solvers agree well with the CPU-based solver in accuracy.
Furthermore, a speedup ratio of 9-15 is achieved using the proposed solvers.
This work provides an important computing technology for the high performance
analysis of large-scale structures based on OpenSees.

ACKNOWLEDGEMENTS

The study is supported by the National Key Technology R&D Program (No.
2013BAJ08B02), the National Natural Science Foundation of China (Nos. 91315301
and 51378299), and the Beijing Natural Science Foundation (No. 8142024). The authors also
acknowledge Bo Han and Mengke Li for their contributions to this work.

REFERENCES

Humphrey, J. R., Price, D. K., Spagnoli, K. E., Paolini, A. L., and Kelmelis, E. J.
(2010). “CULA: hybrid GPU accelerated linear algebra routines.” SPIE
Defense, Security, and Sensing. International Society for Optics and
Photonics, 770502-770502.
Li, M. K., Lu, X., Lu, X. Z., and Ye, L. P. (2014). “Influence of the soil-structure
interaction on the seismic collapse resistance of super-tall buildings.” Journal
of Rock Mechanics and Geotechnical Engineering,
doi:10.1016/j.jrmge.2014.04.006.
Lu, X., Lu, X. Z., Zhang, W. K., and Ye, L. P. (2011). “Collapse simulation of a super
high-rise building subjected to extremely strong earthquakes.” Sci. China. Ser.
A., 54(10), 2549-2560.
Lu, X., Lu, X. Z., Guan, H., and Ye, L. P. (2013a). “Collapse simulation of reinforced
concrete high-rise building induced by extreme earthquakes.” Earthq. Eng.
Struct. D., 42(5), 705-723.
Lu, X. Z., Xie, L. L., Huang, Y. L., Lin, K. Q., Xiong, C., and Han, B. (2013b).
“Nonlinear simulation of super tall building and large span bridges based on
OpenSees.”
http://www.luxinzheng.net/download/OpenSees_Workshop/LuXinzheng.pdf
(Feb. 6, 2015).
Lu, X. Z., Han, B., Hori, M., Xiong, C., and Xu, Z. (2014a). “A coarse-grained
parallel approach for seismic damage simulations of urban areas based on
refined models and GPU/CPU cooperative computing.” Adv. Eng. Softw., 70,
90-103.
Lu, X. Z., Li, M. K., Lu, X., and Ye, L. P. (2014b). “A comparison of the seismic
design of tall RC frame-core tube structures in China and the United States.”
Proceedings of the 10th National Conference in Earthquake Engineering,
Earthquake Engineering Research Institute, Anchorage, AK, doi:
10.4231/D3DZ03252.
Lu, X. Z. (2014c). “OpenSees Tall Buildings.”
http://www.luxinzheng.net/download/OpenSeesTallBuildings.zip (Feb. 6, 2015).
McKenna, F., Scott. M. H., and Fenves, G. L. (2009). “Nonlinear finite-element
analysis software architecture using object composition.” J. Comput. Civil
Eng., 24(1), 95-107.
NVIDIA (2014a). “CUDA C programming guide.”
http://docs.NVIDIA.com/cuda/pdf/CUDA_C_Programming_Guide.pdf (Feb. 6, 2015).
NVIDIA (2014b). “CuSP Home Page.” http://cusplibrary.github.io/ (Feb. 6, 2015).
Owens, J. D., Luebke, D., Govindaraju, N., Harris, M., Krüger, J., Lefohn, A. E., and
Purcell, T. J. (2007). “A survey of general-purpose computation on graphics
hardware.” Comput. Graph. Forum., 26, 80–113.
PEER (2014). “Subversion Repositories.”
http://opensees.berkeley.edu/WebSVN/listing.php?repname=OpenSees&path=%2Ftrunk%2FSRC (Feb. 6, 2015).
Xie, L. L., Huang, Y. L., Lu, X. Z., Lin, K. Q., and Ye, L. P. (2014a). “Elasto-plastic
analysis for super tall RC frame-core tube structures based on OpenSees.”
Engineering Mechanics, 31(1), 64-71. (in Chinese)
Xie, L. L., Lu, X., Lu, X. Z., Huang, Y. L., and Ye, L. P. (2014b). “Multi-layer shell
element for shear walls in OpenSees.” Computing in Civil and Building
Engineering, ASCE, 1190-1197.
Xu, L. J., Lu, X. Z., Guan, H., and Zhang, Y. S. (2013). “Finite element and
simplified models for collision simulation between over-height trucks and
bridge superstructures.” J. Bridge. Eng., ASCE, 18(11), 1140-1151.


Automated Underground Utility Mapping and Compliance Checking Using NLP-Aided Spatial Reasoning

Shuai Li1 and Hubo Cai2

1Ph.D. Student, Lyles School of Civil Engineering, Purdue University, 550 Stadium Mall Dr., West Lafayette, IN 47907. E-mail: [email protected]
2Assistant Professor, Division of Construction Engineering & Management, Lyles School of Civil Engineering, Purdue University, 550 Stadium Mall Dr., West Lafayette, IN 47907. E-mail: [email protected]

Abstract

The recurrent underground utility incidents (e.g., utility strikes and utility
conflicts) highlight two underlying causes: failure to comply with spatial rules
prescribed in utility specifications and unawareness of utility locations. It is critical to
address these two causes to prevent utility incidents. This paper presents a framework
that integrates natural language processing (NLP) and spatial reasoning to infer the
vertical positions of underground utilities from textual utility specifications and plans,
and to automate the utility compliance checking. The natural language processing
(NLP) algorithm extracts the spatial rules specified in textual utility documents, and
converts the extracted spatial rules to a computer-interpretable format. The spatial
reasoning scheme models the spatial components in a spatial rule as topological,
directional, and proximate relations, and executes the extracted spatial rules in a
geospatial information system (GIS) to automate the depth estimation and the
compliance checking. Several examples are presented to prove the concepts.

INTRODUCTION

Underground utility incidents such as utility conflicts and utility strikes are
long-standing and worldwide problems, which cause time and cost overruns in
construction projects, property damages, environmental pollution, and personnel
injuries and fatalities. In the United States from 1994 to 2013, 5,623 significant
pipeline incidents were reported with a total of 362 fatalities, 1,397 injuries, and $6.6
billion in property damage (PHMSA 2014). The actual total cost is substantially
higher than the amounts being reported because numerous incidents that are not
classified as significant or serious have not been reported. The recurrent incidents
highlight two underlying causes. One is the failure to comply with provisions
prescribed in engineering principles, utility industry codes, and government
regulations. The second cause is the unawareness of the location, especially the
vertical position or buried depth of many utility lines. To address the root causes of
underground utility incidents, this paper presents a framework that integrates natural
language processing techniques with spatial reasoning schemes in a geographical


information system (GIS) to infer the vertical positions of underground utilities from
textual utility specifications and plans, and to automate the utility compliance
checking.

LITERATURE REVIEW

Methods of natural language processing (NLP) and spatial reasoning can
extract spatial rules out of utility specifications to assist in automated utility
compliance checking. While little research has been done in the area of NLP in the
utilities domain, NLP has attracted considerable research effort in the construction
domain. Logcher et al. (1989) developed a knowledge-based query system that takes
routine communications through a natural language analyzer, and produces answers
by retrieving data from a central database and processing the data based on available
knowledge. In order to construct a thesaurus in the field of civil engineering, Abuzir
and Abuzir (2002) developed a tool to extract terms and relations between them from
hypertext markup language (HTML) documents that contain two types of data: the
textual contents and the tags denoting the layout and linking structure between the
texts. The authors used document structure and lexical syntactic features to extract the
terms from the HTML-based documents. To automate the expedient retrieval of CAD
documents from databases, Yu and Hsu (2013) adopted a content-based text mining
technique to extract the textual content of a CAD document, and used a vector space
model to match the extracted content with data stored in a corpus database for
document retrieval.
Al Qady and Kandil (2010) proposed to automatically/semi-automatically
extract semantic information from the text of contract documents using a natural
language processing tool. Zhang and El-Gohary (2014) designed a semantic NLP-
based algorithm to extract quantitative requirements from regulatory textual
documents. In that study, hybrid syntactic (syntax/grammar-related) and semantic
(meaning/context-related) text features were synergized in a pattern-matching
approach for information extraction. Their algorithm achieves a very promising
performance with high precision and recall. Despite the achievements of these efforts,
direct expansion and application of the existing methods for underground asset
management are not considered appropriate, mainly because the performances of
NLP algorithms are highly application-dependent (Salama and El-Gohary 2013).
Approximately 80% of information and knowledge in the utility industry possess
spatial components that have not been explicitly captured and modeled by the current
approaches. Given the complicated and various spatial patterns prescribed in utility
specifications and plans, recognizing and modeling these spatial relations is of great
importance.
The spatial relations can be reasoned out through a set of spatial operations
including topological (e.g., touch, within, contain), directional (e.g., above, below),
and proximity (e.g., distance) analysis. Nguyen and Oloufa (2002) identified various
topological information commonly used in building design and construction and
categorized them into more specific groups. They created a topological relation
deduction algorithm to support automatic spatial reasoning in a computer-based
building design system. Borrmann and Rank (2009) developed a spatial query


language, which enables the spatial analysis of building information models and the
extraction of partial models subject to certain spatial constraints. These research
efforts pave the way for our study. Our study complements previous research by
incorporating a NLP module to automatically provide semantic spatial rules for
reasoning.

SYSTEM FRAMEWORK

The framework consists of three critical modules. Module 1 is the
development of a taxonomy for underground utility mapping and compliance
checking, which assists the information extraction in module 2 and the spatial
reasoning in module 3. Module 2 is NLP-assisted information extraction. This is
based on the premise that languages used in utility specifications and plans exhibit
certain patterns that can be modeled and used to extract complicated spatial
information and to formulate spatial rules. Module 3 is spatial reasoning, where the
buried depth of underground utilities can be estimated, and the code compliance can
be checked through reasoning out the semantic spatial rules in a GIS environment.

TAXONOMY FOR UNDERGROUND UTILITIES

Utility documents contain many domain specific terms such as “encased” and
“ground rod”. It is difficult and error-prone to recognize or tag these terms using the
off-the-shelf NLP packages. However, these terms are important for the follow-up
data manipulation in GIS. Our study adopts an underground utility taxonomy as a
corpus database to assist domain terminology recognition. In addition, the spatial
relations documented in this taxonomy further facilitate the recognition of spatial
terms in the specification language through enhancing the part-of-speech tagging in
the NLP module, to be detailed later. In the spatial reasoning module—Module 3, the
vocabulary used for an entity archived in GIS might be slightly different from that in
utility documents (e.g., “gas line” and “natural gas pipeline”). The taxonomy can
match the multiple vocabularies that refer to a single entity, and thus eliminate the
ambiguity in the spatial reasoning in GIS.

NLP ASSISTED SPATIAL RULES EXTRACTION

Specification language modeling. We manually annotated 100 clauses extracted from utility specifications and plans. Through a careful investigation of
these annotated sentences, a specification language model (SLM) is proposed to
capture the linguistically expressed structure, and to decompose the specification
language into components. Each component can be modeled separately and
composed back together. This facilitates the retrieval of information and the
understanding of the nested spatial relations and quantitative requirements. The main
components of SLM include figure (the subject of the clause, e.g., “gas line”),
landmark (a reference object or event, e.g., “roadway”), entity attribute (an attribute
of either figures or landmarks, e.g., “encased”), spatial descriptor (a geometric spatial
relation between the landmark and the figure, e.g., “under”), deontic indicator (proposed by Zhang and El-Gohary (2014), an indicator of obligation, permission, and prohibition, e.g., “must”), verbs (an action to take, e.g., “have”), quantitative value (a value and a unit that define the quantified requirement, e.g., “2.5 ft”), and comparative relation (a restriction on the quantitative requirement, e.g., “minimum”). Using the excerpt “excavation below underground assets should not be conducted within a distance of 300 mm below the asset located at the lowest level”, the essential components of the SLM are presented in Figure 1.

Figure 1. Essential components of SLM
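To make the decomposition concrete, the following minimal sketch stores the SLM components of the excerpt above as a Python record; the field names and the symbolic mapping are illustrative assumptions, not the authors' implementation:

    # A minimal sketch of one extracted SLM tuple (field names are illustrative).
    from collections import namedtuple

    SLMRule = namedtuple(
        "SLMRule",
        ["figure", "landmark", "entity_attribute", "spatial_descriptor",
         "deontic_indicator", "verb", "quantitative_value", "comparative_relation"],
    )

    rule = SLMRule(
        figure="excavation",
        landmark="asset located at the lowest level",
        entity_attribute=None,
        spatial_descriptor="below",
        deontic_indicator="should not",
        verb="be conducted",
        quantitative_value=(300, "mm"),
        comparative_relation="<=",   # "within a distance of" read as an upper bound
    )

    print(rule.spatial_descriptor, rule.quantitative_value)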

Spatial information extraction. Figure 2 illustrates the two-part process of extracting the hierarchical spatial information from the textual documents:
preprocessing of texts and information retrieval. In this study, we use the Natural Language Toolkit (NLTK) (Bird et al. 2009) to build Python programs for
processing and analyzing the specification language data in textual utility documents.
NLTK provides off-the-shelf functions for effectively and efficiently
implementing the pre-processing tasks. Sentence segmentation recognizes typical sentence boundaries. Following sentence segmentation, word tokenization divides a string into identifiable basic units, i.e., tokens. A token could be a single word, a number, a punctuation mark, or a symbol. Thereafter, de-
hyphenation is performed to remove the hyphens that are used to indicate
continuation of a word across two lines. Pre-processing of raw texts is useful as it
prepares the language data at the individual word level for the follow-up analysis
such as part-of-speech tagging and chunking.
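A minimal sketch of these pre-processing steps with NLTK is given below; the de-hyphenation rule is a simple regular expression assumed for illustration (and applied to the raw text for brevity), and the punkt tokenizer models must be downloaded once with nltk.download:

    import re
    from nltk.tokenize import sent_tokenize, word_tokenize

    raw = ("Lines that are under or within 5 feet of road-\n"
           "ways must have a minimal depth of 4 feet.")

    # De-hyphenation: rejoin a word split across two lines (assumed rule).
    text = re.sub(r"-\s*\n\s*", "", raw)

    # Sentence segmentation followed by word tokenization.
    sentences = sent_tokenize(text)
    tokens = [word_tokenize(s) for s in sentences]
    print(tokens[0][:8])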
Part-of-Speech (POS) tagging, or simply tagging, is the process of classifying
words into their parts of speech (i.e., lexical and functional categories) and labeling
them accordingly with tags (Bird et al. 2009). In this study, we enhance the NLTK
built-in tagger by enabling it to explicitly recognize words with spatial implication
(e.g., spatial preposition “under” and spatial verb “bury”). Chunking or partial parsing
is the task of identifying syntactic patterns of short phrases and subsequently extracting these short phrases from a part-of-speech tagged sentence (Bird et al. 2009). Chunking is a required task in information extraction (Morarescu 2007). The
main rationale behind chunking is that by investigating the patterns of part-of-speech
tags, separate pieces of information (e.g., single words) are organized or grouped
together to form a more meaningful phrase (e.g., noun phrase). Hence, entities and
relations between them can be identified and extracted from these meaningful phrases.
In this study, we use regular expressions in Python to specify the chunk patterns and
use NLTK built-in method to chunk the main components of the SLM. These
elements not only carry critical information but also constitute the functional structure
of the specification language. The NLTK built-in methods are limited in correctly
recognizing and labeling the domain-specific terms. One option to deal with this
problem would be to look up the words, especially the un-chunked words, in an
appropriate list of terms. Such a list is also known as a gazetteer. In this study, a number
of gazetteer lists are compiled to group the domain-specific vocabularies under the
classes of the taxonomy. For example, terms including, but not limited to, “encased”,
“non-encased”, “high voltage”, “low voltage” are stored in the utility attribute
gazetteer list. The gazetteer lists serve as a corpus database in which the domain-specific terms can be looked up and identified. Thereafter, the targeted information (i.e., the essential components of the SLM) is retrieved based on a set of well-designed rules. For example, the noun phrase that directly follows a spatial preposition is regarded as the landmark. After retrieving the information, a morphological analysis is conducted to map various nonstandard forms of a word (e.g., the plural form of a noun) to the lexical form (e.g., the singular form). In addition, the comparative relation is mapped to its symbolic form (e.g., “minimum” is mapped to >=). The retrieved information is stored in tuples.
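The sketch below illustrates this style of chunking and gazetteer lookup with NLTK's RegexpParser; the chunk grammar and the gazetteer entries are simplified assumptions rather than the authors' actual rules, and the part-of-speech tagger model must be downloaded once:

    import nltk

    # Simplified gazetteer of domain-specific attribute terms (illustrative only).
    ATTRIBUTE_GAZETTEER = {"encased", "non-encased", "high voltage", "low voltage"}

    sentence = "Encased gas lines under roadways must have a minimum depth of 4 feet."
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))

    # A simplified chunk grammar for quantity phrases and noun phrases.
    grammar = r"""
      QTY: {<CD><NN.*>}            # quantity + unit, e.g., "4 feet"
      NP: {<DT>?<JJ.*>*<NN.*>+}    # noun phrase, e.g., "gas lines", "roadways"
    """
    tree = nltk.RegexpParser(grammar).parse(tagged)

    # Gazetteer lookup for domain terms the grammar does not label.
    for word, tag in tagged:
        if word.lower() in ATTRIBUTE_GAZETTEER:
            print(word, "-> UTILITY_ATTRIBUTE")
    print(tree)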

Figure 2. NLP process


SPATIAL REASONING IN GIS


Modeling semantic spatial relations. To create an environment that supports automatic deduction of the spatial rules, the semantic spatial relations should be modeled in a computer-interpretable manner. In this study, the various spatial relationships are represented using buffer analysis based on a set of directional, proximity, and topological operations. Two-dimensional (2D) and three-dimensional (3D) buffers are generated in the GIS environment using polygons and multi-patches, respectively. Multi-patch, a GIS industry standard, is a geometry used as a boundary representation for 3D objects. A multi-patch can be composed of triangle strips, triangles, or rings, and can be used to construct 3D features in ArcGIS, store existing data, and exchange data with other non-GIS 3D software packages. To model a spatial relation, a buffer is constructed around the landmark. The characteristics
of such a buffer are usually inherent in the spatial relation clause. Specifically, if a
spatial relation has directional implications, then the directional information should
be used to further constrain how the buffer is built. For instance, the spatial
preposition “under” has a directional implication of below, therefore a vertical buffer
that stretches below the landmark is constructed in such a context. Figure 3 gives an
example of vertical buffer to represent the under relation. Having established the
buffers, the semantic spatial relations are transformed to the topological queries on
the buffers and features (representing the figure) in GIS. Figure 3 illustrates that, after
the 3D vertical buffer is built, the relationship “the utilities under the roadway” is
equivalent to the aggregation of the topological relations of “the utility intersects with
the buffer” and “the utility inside the buffer”.

Figure 3. Modeling spatial relations
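A 2D sketch of this buffer-and-query idea with the shapely library is shown below; the geometries, the buffer distance, and the use of shapely instead of a GIS platform are assumptions for illustration only:

    from shapely.geometry import LineString

    # Landmark: a roadway centerline; figure: a utility line (coordinates assumed).
    roadway = LineString([(0, 0), (100, 0)])
    utility = LineString([(10, 1), (90, -1)])

    # Proximity rule, e.g., "within 5 feet of the roadway", modeled as a buffer.
    buffer_5ft = roadway.buffer(5.0)

    # The semantic relation becomes topological queries on the buffer.
    governed = utility.intersects(buffer_5ft) or utility.within(buffer_5ft)
    print("governed by the rule:", governed)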

Estimating utility depth. In the estimation of utility depth, we assume that the horizontal locations of both underground utilities and their surroundings (e.g., roadways) are known. Utility specifications often contain clauses such as “lines that are under or within 5 feet of roadways must have a minimal depth of 4 feet”. The spatial
rules will be extracted using the NLP techniques and fed into the spatial reasoning module. After implementing the spatial rules, the figures and landmarks are supposed to be selected in GIS. In a 2D GIS (e.g., ArcMap), the program will insert a column in the attribute table of the figure (i.e., the utility) and name it buried depth. Then the quantitative requirement associated with the depth in the main spatial rule will be copied to the attribute table. In a 3D GIS (e.g., ArcScene), this quantitative value will be directly used to modify the Z value of the geometry, which will move it to the right place in the 3D scene. Figure 4 presents an example of estimating utility depth.

Figure 4. Estimating buried depth

Compliance checking. For the compliance checking, the selected figures and landmarks will be checked against the quantitative spatial relations specified in the main spatial rules using the aforementioned buffer analysis. If the configuration of figures and landmarks is not compliant with the spatial rules, the system will highlight the section of the non-compliant instance to warn the users. In Figure 5, the spatial rule “warn all other personnel to keep at least 10 m clear from equipment” in a guide for undertaking work near underground assets was used to check the compliance. An unauthorized person was found to be inside the 10 m sphere buffer built around the excavator, and thus was identified as a non-compliant instance.

Figure 5. Compliance checking example

CONCLUSIONS

This paper presents a framework to automatically estimate the vertical depth of underground utilities and to check the code compliance for underground utilities. It
was found that the specification language model coupled with off-the-shelf NLP
techniques can be used to extract spatial rules from utility documents. By leveraging
the spatial reasoning schemes and the extracted spatial rules, the buried depth of
underground utilities can be roughly estimated, and the compliance checking can be
effectively performed, in an automatic manner. Further investigation will be
conducted to evaluate the performance of the NLP algorithms and the accuracy of the
estimated vertical depth of underground utilities.

ACKNOWLEDGEMENT

This research was funded by the National Science Foundation (NSF) via
Grant CMMI-1265895. The authors gratefully acknowledge NSF’s support. Any
opinions, findings, conclusions, and recommendations expressed in this paper are
those of the authors and do not necessarily reflect the views of NSF or Purdue
University.

REFERENCES

Abuzir, Y., and Abuzir, M.O. (2002). “Constructing the civil engineering thesaurus
(CET) using ThesWB.” Proc., Intl. Workshop on Info. Tech. in Civ. Eng.
2002, ASCE, Reston, VA, 400-412.
Al Qady, M. and Kandil, A. (2010). “Concept Relation Extraction from Construction
Documents Using Natural Language Processing” Journal of Construction
Engineering and Management, 136 (3), 294-302.
Borrmann, A. and Rank, E. (2009), Topological analysis of 3D building models using
a spatial query language, Advanced Engineering Informatics, 23, 370-385.
Logcher, R.D., Wang, M.T., and Chen, F.H.S. (1989). “Knowledge Processing for
Construction management data base.” Journal of Construction Engineering
and Management, 115(2), 196-211.
Nguyen, T.H. and Oloufa, A.A. (2002). “Spatial information: classification and
application in building design” Computer-aided Civil and Infrastructure
Engineering, 17, 246-255.
PHMSA (Pipeline & Hazardous Materials Safety Administration). (2014).
“Significant pipeline incidents.”
http://primis.phmsa.dot.gov/comm/reports/safety/sigpsi.html?nocache=1494
Salama, D. M., and El-Gohary, N.M. (2013). "Semantic text classification for
supporting automated compliance checking in construction." Journal of
Computing in Civil Engineering.
Yu, W. and Hsu, J. (2013). “Content-based text mining techniques for retrieval of
CAD documents.” Automation in Construction (31), 65-74.
Zhang, J. and EI-Gohary, N. M. (2014). “Semantic NLP-based information extraction
from construction regulatory documents for automated compliance checking”
Journal of Computing in Civil Engineering.


Information Extraction for Freight-Related Natural Language Queries

Dan P. K. Seedah1 and Fernanda Leite2


1
Research Associate, Center for Transportation Research, University of Texas at
Austin, Department of Civil, Architectural and Environmental Engineering, Austin,
Texas 78701. E-mail: [email protected]
2
Assistant Professor, Construction Engineering and Project Management Program,
Department of Civil, Architectural and Environmental Engineering, University of
Texas at Austin, 301 E Dean Keeton St. Stop C1752, Austin, TX 78712-1094. E-mail:
[email protected]

Abstract
The ability to retrieve accurate information from databases without an
extensive knowledge of the contents and organization of each database is extremely
beneficial to the dissemination and utilization of freight data. Advances in the
artificial intelligence and information sciences provide an opportunity to develop
query capturing algorithms to retrieve relevant keywords from freight-related natural
language queries. The challenge is correctly identifying and classifying these
keywords. On their own, current natural language processing algorithms are
insufficient in performing this task for freight-related queries. High performance
machine learning algorithms also require an annotated corpus of named entities which
currently does not exist in the freight domain. This paper proposes a hybrid named
entity recognition approach which draws on the individual strengths of models to
correctly identify entities. The hybrid approach resulted in a greater precision for
named entity recognition of freight entities – a key requirement for accurate
information retrieval from freight data sources.

INTRODUCTION
Natural Language Processing (NLP) applications provide users with the
opportunity to ask questions in conversational language and receive relevant
answers—rather than formulating a query into possibly unfriendly (or “unnatural”)
formats that machines can understand (Safranm 2013). NLP enables individuals who have no in-depth knowledge of a particular area or domain to ask questions and receive answers, either by using a search engine or, more popularly in recent times, through speech recognition. Numerous advances in this area have been made over the years, but challenges still remain (Liddy 2001), particularly in identifying domain-specific keywords from a multitude of questions.
As search engines and consumer electronic products become more accessible, NLP applications will continue to have an increasing role in both our
social and work activities. Policy makers making decisions about transportation
infrastructure improvements would benefit if they could ask questions such as “How
many accidents occurred on Interstate 35 [at Dallas] in 2013 compared to 2012?”,
“How many trucks crossed the border between the U.S. and Mexico in the first
quarter of 2014?”, “Which are the top commodities exported from the U.S. to Brazil
in the last decade?” – and receive answers instantaneously. Interestingly, the answers


to the questions provided above are stored in some of the available freight databases.
For various NLP advances to offer decision makers this capability, a two-stage process has to function; specifically, the approach must:
1. Correctly identify only the relevant information and keywords when dealing
with multiple sentence structures, and
2. Understand multiple data sources to determine which ones best answer a
user’s query.
This paper addresses the former challenge as off-the-shelf domain-independent NLP
systems can identify entities such as a person, a location, date, time, and a
geographical area, but cannot extract information for specific questions in the freight
planning domain. In freight planning, entities such as unit of measurement, mode of
transport, route names, commodity names, and trip origin and destination are
predominant when performing information extraction tasks as shown in Table 1.

Table 1. Named Entity Types for Freight-Related Natural Language Queries


Named Entity Type Examples
Domain Dependent
ORIGIN & DESTINATION …. from Austin to Houston …,
COMMODITY sugar, milk, gravel, mixed freight
TRANSPORT MODE truck, rail, air, carload, vessel
LINK Interstate 35, Mississippi River
UNIT OF MEASURE number of truckloads, average travel time
Domain Independent
DATE & TIME July 4th, 1776, Three fifteen a m, 12:30 p.m.
LOCATION GDP of Los Angeles-Long Beach-Santa Ana, CA
MONEY 2 trillion US Dollars, GBP 10.40
PERCENT seventeen pct., 12.55%
ORGANIZATION U.S. Department of Transportation

BACKGROUND RESEARCH ON INFORMATION EXTRACTION AND NAMED ENTITY RECOGNITION
Despite their popularity and use in internet search engines, machine
translation, automatic document indexing, and consumer electronic products,
examples of Natural Language Processing (NLP), Information Extraction (IE), and
Named Entity Recognition (NER) usage in civil engineering are limited. Examples in
the literature include work by Pradhan et al. (2011), Liu et al. (2013), Zhang and El-
Gohary (2013), Cali et al. (2011), Pereira et al. (2013), and Gao and Wu (2013). A
review of the literature shows that there are currently no IE or NER applications in the freight transport field, much less specifically for freight data retrieval. Furthermore,
natural language query examples found in the civil engineering literature followed a
structured pattern such that the process of parsing and correctly categorizing named
entities is quite straightforward (Pradhan et al. 2011, Liu et al. 2013). Most user
queries relating to freight transportation were found not to follow a similar pattern or
sentence structure.


A number of domain-specific information extraction techniques have been proposed by practitioners, each with its pros and cons. These are mainly categorized into rule-based and machine learning-based approaches. Rule-based
named entity detection captures keywords using pattern matching (Nadeau and
Sekine 2007). The main setback with this approach is that if the exact phrase is not
contained in the pattern, the application fails to recognize the entity being identified.
Dictionary-based recognition, which is categorized under the rule-based approach,
utilizes reference lists to identify entities by searching the dictionary (Boldyrev et al.
2013). Though effective, there are issues where keywords may be wrongly
categorized.
To improve upon the rule-based approaches, researchers have developed
statistical named entity classifiers using supervised learning. Examples cited in the
literature include Hidden Markov Models (Bickel et al. 1998), Decision Trees (Sekine
et al. 1998), Maximum Entropy Model (Borthwick 1999), Support Vector Machines
(Asahara and Matsumoto 2003), and Conditional Random Fields (Lafferty et al.
2001). Though powerful, these classifiers require an annotated corpus of named
entities to train the models. With larger training sets, the models become “smarter”
and are able to better handle ambiguities in assigning categories to keywords.
Unfortunately, such an annotated corpus for the freight planning domain does not
currently exist.
This paper presents a hybrid approach for recognizing keywords in the freight
planning domain using a combination of the information extraction techniques
discussed. The hybrid approach is known to improve classification results when
compared with output from individual models (Florian et al. 2003; Srivastava et al.
2011; Oudah and Shaalan 2012). Depending on the known scope or range of values
of a category, a particular technique is chosen to handle keywords for that category.
For example, words which identify a location or a place are handled with domain-
independent statistical models and words signifying commodity names are recognized
using dictionary-based techniques. Roadway names and units of measures which tend
to vary tremendously and are domain specific were found to be best handled using a
handcrafted rule-based approach.
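A minimal sketch of how such a hybrid dispatcher might be wired together is shown below; the category-to-technique mapping follows the description above, but the function names and stub logic are illustrative assumptions:

    # Illustrative hybrid NER dispatch (stubs, not the authors' implementation).

    def statistical_tags(tokens):
        # Placeholder for a domain-independent CRF tagger (LOCATION, TIME, ...).
        return {}

    def dictionary_tags(tokens, commodity_gazetteer):
        # Dictionary lookup for closed-class entities such as commodity names.
        return {t: "COMMODITY" for t in tokens if t.lower() in commodity_gazetteer}

    def rule_based_tags(tokens):
        # Handcrafted patterns for roadway names and units of measure.
        return {t: "LINK" for t in tokens if t.upper().startswith(("IH-", "US-", "SH-"))}

    def hybrid_tag(tokens, commodity_gazetteer):
        tags = {}
        tags.update(statistical_tags(tokens))
        tags.update(dictionary_tags(tokens, commodity_gazetteer))
        tags.update(rule_based_tags(tokens))
        return tags

    print(hybrid_tag("How much gravel moved on IH-35 in 2013 ?".split(),
                     {"gravel", "milk", "sugar"}))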

RESEARCH APPROACH
The research task is to represent multiple natural language queries in a format that a computer can understand and process. This requires converting
unstructured data from natural language sentences into structured data, and
identifying specific kinds of information relating to the freight planning domain. In
IDEF0 nomenclature, the input for this task is any naturally formed question relating
to freight planning. The control is the grammar for the query language, which in this
case is the English language. The reasoning mechanism involves i) developing an IE
and NER approach that addresses freight-related queries, ii) ensuring that ambiguity in names is correctly handled, e.g., relevant roadway names are constrained to only
places specified in the query, and iii) resolving conflicting query items, e.g., pipelines
move only liquid and gas commodities. The expected output from this task is a list of
data items with very high categorization accuracy of named entities—ideally greater
than 95%.


Collecting Freight Related Queries
A sample collection of 100 freight-related queries, examples of which are shown in Table 2, was collected by requesting questions from researchers who are active in the field of freight. The order in which the questions were received was first randomized and, using k-fold cross validation, the questions were grouped into training and testing subsets. K = 10 was used in creating the training and test subsets in this research task because of the small sample size.

Table 2. Sample Queries Used in Testing and Comparing IE and NER Models
1. What are the top five commodities/industries utilizing IH-35 as a major freight
corridor?
2. What is the average travel time and level of service on major arterial roads
during peak hours?
3. What is the number of truck related accidents which occurred on IH-35 from
May 2013 to June 2013?
4. What is the total number of oversize/overweight vehicle permit fees collected in
Texas for FY 2013?
5. What is the number of bridges along the IH-45 corridor requiring improvements?
6. Where are Amazon shipping facilities?
7. How has the focus on freight changed in the various highway trust fund bills?
8. With the expansion of the Panama Canal, what mode of freight will see the
greatest change within the US?
9. If $500 million was available for freight infrastructure nationally, where and how
would you suggest the money be spent?
10. Who pays for freight?

Annotating the Questions
Corpus development involves assigning entity categories to keywords. This process was performed manually, as it requires identifying the relevant keywords. The keywords were annotated such that they can be utilized in training a domain-independent classifier. For example, the word “2012” is tagged TIME and “gravel” is tagged COMMODITY. The phrase “San Antonio” is tagged DESTINATION twice, once for “San” and once for “Antonio”. Non-named entities were tagged with “O”.
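For illustration, the sketch below writes one annotated query in the one-token-per-line, tab-separated layout commonly used to train the Stanford CRF classifier; the tag set and file name are assumptions here:

    annotated = [
        ("How", "O"), ("many", "O"), ("tons", "UNIT_OF_MEASURE"),
        ("of", "O"), ("gravel", "COMMODITY"), ("moved", "O"),
        ("from", "O"), ("Austin", "ORIGIN"), ("to", "O"),
        ("San", "DESTINATION"), ("Antonio", "DESTINATION"),
        ("in", "O"), ("2012", "TIME"), ("?", "O"),
    ]

    with open("freight_train.tsv", "w") as f:
        for token, tag in annotated:
            f.write(f"{token}\t{tag}\n")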

The Domain-Independent Machine Learning Model


The Stanford NER 7 class model, which is a Conditional Random Fields model, is trained on the MUC-7 dataset and addresses seven entities: Person, Location, Organization, Time, Date, Percent, and Money (Finkel et al. 2005). Conditional Random Fields are probabilistic, undirected graphical models which compute the probability P(y | x) of a possible label sequence y = (y1, ..., yn) given the input sequence x = (x1, ..., xn). In NER, the input sequence x corresponds to the tokenized text and the label sequence y contains the entity tags. Text segmentation is performed based on the model knowing the beginning and ending of a phrase and the corresponding tags for that phrase (Klinger and Friedrich 2009). The Stanford NER 7


class model is selected to demonstrate the strengths of domain-independent models in correctly classifying freight-related keywords such as location and time.

Training a Domain-Dependent Machine Learning Model


The Stanford CRF model was trained using the annotated corpus described earlier. With k = 10 subsets, one subset is held out for testing and the remaining nine subsets are used for training. This process is repeated k times such that each question is included in the test and training sets at least once.
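A minimal sketch of this splitting procedure with scikit-learn's KFold is shown below; scikit-learn is an assumption, since the paper does not state which tool produced the splits:

    from sklearn.model_selection import KFold

    questions = [f"question {i}" for i in range(100)]   # placeholder queries

    kfold = KFold(n_splits=10, shuffle=True, random_state=42)
    for fold, (train_idx, test_idx) in enumerate(kfold.split(questions)):
        train_set = [questions[i] for i in train_idx]   # 90 training questions
        test_set = [questions[i] for i in test_idx]     # 10 held-out questions
        print(fold, len(train_set), len(test_set))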

Developing the Domain-Dependent “Dictionary-Based” NER models


Regular expressions and dictionaries are used here in developing a rule-based
IE and NER model. Regular expression patterns were developed for each named
entity category using external data sources and sample text. The disadvantage of
using regular expressions and dictionaries, as discussed earlier in the literature review, is that if the model does not recognize a pattern, the word is not tagged even though it
may fall in a particular category.
For “LOCATION” named entities, a combined list of U.S. states and the
Census Bureau’s rank of the largest 293 cities by population as of July 1, 2013 was
used. Whenever a question was submitted, the sentence was parsed through this list of
cities which are compiled as regular expression patterns. When a match is found in a
sentence, the matching city name (or phrase) is extracted and tagged as a
“LOCATION” entity.
To develop the “COMMODITIES” regular expression patterns, 1,600 commodity group names from the Standard Transportation Commodity Codes [STCC] were compiled. Using an approach similar to what was used in finding “LOCATIONS”, a matching list of commodities was sought.
For the “TRANSPORT MODE” category, data values specified in various
freight data dictionaries were compiled. This list contained all modes of transport
including the descriptive text such as

.
An approach similar to what was used in the “TRANSPORT MODE” category
was used in developing the “LINK” category. Regular expression patterns were
developed from a list of roadway suffixes and data dictionary values. Examples of keywords identified by this approach include:

The DATE and TIME regular expression patterns were also developed by modifying an existing temporal expression pattern developed by Bird (2009b) to include, amongst others, terms such as “non-peak period”, “past month”, “last N years”, and the four seasons.
The UNIT OF MEASURE category was also developed using data values from the various freight data dictionaries. Examples of keywords identified include “tons”, “value”, “level of service”, “AADT”, “truck traffic”, “travel time”, “percentage”, and “count”. In addition to the above, descriptive texts such as “average”, “number of”, “top five”, “most”, and “cheapest” are included in this category.
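A condensed sketch of this dictionary- and pattern-based tagging with Python's re module is given below; the patterns are small illustrative fragments, not the full lists compiled from the STCC codes and freight data dictionaries:

    import re

    LOCATION_PATTERN = re.compile(r"\b(Austin|Houston|Dallas|Texas)\b")
    COMMODITY_PATTERN = re.compile(r"\b(gravel|milk|sugar|mixed freight)\b", re.I)
    LINK_PATTERN = re.compile(r"\b(?:IH|US|SH|CR)-?\s?\d{1,4}\b")
    TIME_PATTERN = re.compile(r"\b(?:19|20)\d{2}\b|\blast \d+ years\b|\bpast month\b")

    def dictionary_tag(question):
        tags = []
        for label, pattern in [("LOCATION", LOCATION_PATTERN),
                               ("COMMODITY", COMMODITY_PATTERN),
                               ("LINK", LINK_PATTERN),
                               ("TIME", TIME_PATTERN)]:
            tags += [(m.group(), label) for m in pattern.finditer(question)]
        return tags

    print(dictionary_tag("How much gravel moved on IH-35 near Austin in 2013?"))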


Developing the Domain-Dependent “Feature-based” NER models


To address the limitations of the dictionary-based model, a feature-based
model was proposed. By examining the prefixes and suffixes relating to a named
entity, further refinement of the dictionary-based model can be made. For example,
the route entity name ‘CR 2222’ (i.e., County Road 2222) was found to be captured in the TIME category as ‘2222’ (i.e., the year 2222). However, by examining the
explicit prefixes and suffixes relating to each category, the model can determine the
most likely category.
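The sketch below illustrates this prefix-based disambiguation; the prefix list and the re-labeling rule are assumptions for illustration:

    # Prefixes that indicate a roadway (LINK) rather than a year (illustrative).
    ROUTE_PREFIXES = ("CR", "FM", "IH", "US", "SH")

    def refine_tag(prev_token, token, proposed_tag):
        """Re-label a numeric token tagged as TIME when a route prefix precedes it."""
        if proposed_tag == "TIME" and prev_token.upper() in ROUTE_PREFIXES:
            return "LINK"
        return proposed_tag

    print(refine_tag("CR", "2222", "TIME"))   # -> LINK
    print(refine_tag("in", "2013", "TIME"))   # -> TIME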

COMPARISON OF MODELS
The performance of an NER model is based on the model’s ability to correctly
identify the exact words in a sentence that belong to a specific named entity type or
category. The commonly used metrics for quantitative comparison of NER systems are Precision, Recall, and F-measure. The trained and untrained Stanford CRF models
and the dictionary-based and feature-based rules were tested with 100 questions
collected and used in developing the initial freight data corpus. The output of the
hybrid model is shown in Figure 1.

Figure 1. Hybrid Model Results
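For reference, a short sketch of how these metrics are computed from entity-level counts is given below; the counts are made up for illustration:

    def precision_recall_f1(true_positives, false_positives, false_negatives):
        precision = true_positives / (true_positives + false_positives)
        recall = true_positives / (true_positives + false_negatives)
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, f1

    # Made-up counts for one entity category.
    print(precision_recall_f1(true_positives=40, false_positives=8, false_negatives=15))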

The trained CRF model recorded a high precision for the categories it is
familiar with – in this case the LOCATION and TIME – with f-measures of 77.08
and 69.52, respectively. When combined with the dictionary-based rules, a slight improvement was recorded for the TIME category, i.e., an f-measure of 78.43. The
trained CRF model also performs well with the UNIT OF MEASURE category which
recorded an f-measure of 59.65. The dictionary-based rules perform best with the
COMMODITY and MODE categories recording 61.36 and 68.33 f-measures,
respectively. Concerning ORIGIN and DESTINATION, the feature-based rules
provided the best opportunity to classify these categories though the current setup
showed very low f-values. With more robust rules the classification of these entities
can be improved and additional training of the CRF model may assist with better
classification of this category. The LINK category was equally classified by both the


trained CRF and the dictionary-based rules, which when combined recorded an f-measure of 70.69. Based on the observations from the results, the best combinations of sub-models for
freight transport entity classification as shown in Figure 2, are:
• Dictionary-based rules for the COMMODITY and MODE categories
• A combination of dictionary-based rules and a trained CRF for the LINK
category
• A trained CRF model to handle TIME, UNIT OF MEASURE, and
LOCATION entities, and
• Feature based-rules to handle ORIGIN and DESTINATION entities. It is
probable that should a larger corpus be eventually developed, the trained CRF
model may be able to better handle this category.

Figure 2. Pipeline Architecture for Freight Related NER hybrid system

CONCLUSION
The technological shift towards NLP usage is inevitable considering the
investments being made by large technology companies and consumer adoption of
speech recognition products. Applications in the civil engineering domain are, however, lagging, as an annotated corpus for training machine learning algorithms, for example,
does not currently exist. This paper presents two main contributions to NLP usage in
the civil and transport data domains. The first contribution is the development of an
NER approach to correctly identify and classify keywords from freight-related natural
language expressions and queries. Future research on freight database querying can
utilize this work to develop applications that do not require stakeholders to
necessarily have in-depth knowledge of each database to get answers to their
questions. The second contribution is the beginning of a collection of freight-related
questions to develop a freight specific corpus similar to what has been done in the
bio-medical field. This annotated corpus can be further expanded to the broader
transportation planning domain. Providing decision-makers with the ability to ask
questions in conversational language and receive relevant answers is an exciting
prospect for policy development, planning, management, and funding of
infrastructure projects.


REFERENCES

Bickel, P. J., Ritov, Y., and Ryden, T. (1998). “Asymptotic normality of the
maximum-likelihood estimator for general hidden Markov models.” The
Annals of Statistics, 26(4), 1614–1635.
Bird, S., Klein, E., and Loper, E. (2009). “Natural Language Processing with Python.” O'Reilly Media, Inc.
Boldyrev, A., Weikum, G., and Theobald, M. (2013). “Dictionary-Based Named
Entity Recognition.”
Borthwick, Andrew. “A Maximum Entropy Approach to Named Entity Recognition.”
PhD diss., New York University, 1999.
Calì, Davide, Antonio Condorelli, Santo Papa, Marius Rata, and Luca Zagarella.
"Improving intelligence through use of Natural Language Processing. A
comparison between NLP interfaces and traditional visual GIS interfaces".
Procedia Computer Science 5 (2011): 920-925.
Finkel, J. R., Grenager, T., and Manning, C. (2005). “Incorporating non-local
information into information extraction systems by gibbs sampling.”
Proceedings of the 43rd Annual Meeting on Association for Computational
Linguistics, Association for Computational Linguistics, 363–370.
Florian, R., Ittycheriah, A., Jing, H., and Zhang, T. (2003). “Named entity recognition
through classifier combination.” Proceedings of the seventh conference on
Natural language learning at HLT-NAACL 2003-Volume 4, Association for
Computational Linguistics, 168–171.
Gao, Lu and Hui Wu. (2013) “Verb-Based Text Mining of Road Crash Report.”
Transportation Research Board 92nd Annual Meeting Compendium of Papers.
Lafferty, J., McCallum, A., and Pereira, F. C. (2001). “Conditional random fields:
Probabilistic models for segmenting and labeling sequence data.”
Liu, Xuesong, Burcu Akinci, Mario Bergés, and James H. Garrett Jr. “Domain-
Specific Querying Formalisms for Retrieving Information about HVAC
Systems.” Journal of Computing in Civil Engineering 28, no. 1 (2013): 40-49.
Nadeau, D., and Sekine, S. (2007). “A survey of named entity recognition and
classification.” Lingvisticae Investigationes, 30(1), 3–26.
Oudah, M., and Shaalan, K. F. (2012). “A Pipeline Arabic Named Entity Recognition
using a Hybrid Approach.” COLING, 2159–2176.
Pereira, Francisco C., Filipe Rodrigues, and Moshe Ben-Akiva. "Text analysis in
incident duration prediction." Transportation Research Part C: Emerging
Technologies 37 (2013): 177-192.
Pradhan, Anu, Burcu Akinci, and Carl T Haas (2011). Formalisms for Query Capture
and Data Source Identification to Support Data Fusion for Construction
Productivity Monitoring, Automation in Construction, 20 (4), 389-98
Safranm, N. (2013). “The Future Is Not Google Glass, It’s Natural Language
Processing.” Conductor Blog, Oct. 10, 2014.
Sekine, S., Grishman, R., and Shinnou, H. (1998). “A decision tree method for
finding and classifying names in Japanese texts.” Proceedings of the Sixth
Workshop on Very Large Corpora.


Srivastava, S., Sanglikar, M., and Kothari, D. C. (2011). “Named entity recognition
system for Hindi language: a hybrid approach.” International Journal of
Computational Linguistics (IJCL), 2(1).
Zhang, Jiansong, and Nora M. El-Gohary. 2013. “Semantic NLP-Based Information
Extraction from Construction Regulatory Documents for Automated
Compliance Checking.” Journal of Computing in Civil Engineering.


BIM-Based Planning of Temporary Structures for Construction Safety

Kyungki Kim1 and Yong Cho2


1
Robotics & Intelligent Construction Automation Laboratory, School of Civil and
Environmental Engineering, Georgia Institute of Technology, 790 Atlantic Dr.,
Atlanta, GA 30332-0355. E-mail: [email protected]
2
Robotics & Intelligent Construction Automation Laboratory, School of Civil and
Environmental Engineering, Georgia Institute of Technology, 790 Atlantic Dr.,
Atlanta, GA 30332-0355. E-mail: [email protected]

Abstract

Despite significant advances in Building Information Modeling (BIM) technology for construction planning, existing technology still lacks the capabilities of
planning and analyzing temporary structures. While temporary structures such as
scaffolding and temporary stair towers significantly impact construction safety, they
often do not appear in BIM. Temporary structure objects manually inserted into BIM
cannot automatically generate information of their impact on construction safety.
Understanding this deficiency, this research attempts to generate required temporary
structures and analyze associated safety risk automatically. Specifically, this research
focused on planning of temporary stair towers used during roof construction activities.
For automation, a set of algorithms were created that analyze geometric conditions in
BIM, generate required temporary stair towers, and analyze their impact on safety.
The algorithms were embedded into a commercially available BIM platform and
tested on an ongoing construction project. As a result, optimized locations and shapes
of temporary stair towers and associated potential safety hazards were identified and
visualized in BIM and schedule. The main contribution of this research would be to
automatically perform more practical construction safety planning by incorporating
temporary structures.

INTRODUCTION

Establishing a practical and effective construction plan is essential for successful construction delivery. Planning for safety is one of the critical tasks in
construction planning. Construction planners are required to analyze construction site
conditions using two- or three-dimensional building designs and construction
schedules to identify potential safety hazards and apply preventive measures.
However, the current construction industry relies on manual efforts to identify and prevent safety hazards. Considering the complexity of construction projects and
lack of human resources for construction planning in many projects, thorough


planning for safety can be highly time-consuming and error-prone. Furthermore, practical safety planning becomes even more complex and challenging if temporary
structures are taken into account. Even with their impact on safety, temporary
structures such as scaffolding and temporary stair towers are rarely planned and
delineated in building drawings or models (Ratay 1996). While many recent
construction projects manually insert important temporary structure objects into BIM
for more realistic visualization and site coordination, analyzing potential safety problems related to temporary structures still requires manual effort.
Understanding these deficiencies, this research enables automated generation of required temporary structures and automated analysis of the associated safety risks.
Specifically, this research focuses on planning of temporary stair towers used during
roof construction activities. To automate the processes, a set of algorithms have been
created that (1) analyze geometric conditions in BIM, (2) generate required temporary
stair towers, and (3) analyze their impact on safety.
This paper is organized as follows. Next section provides a review of previous
works in computer-assisted planning of temporary structures. The algorithm section
presents descriptions about the proposed temporary structure planning algorithm.
Then, the case study section presents the developed software prototype embedded
into a commercially available BIM platform and its implementation in safety
planning in an ongoing building construction project. The last section discusses
current limitations and potential future research studies.

RELATED WORKS

Computer-assisted planning and management of temporary structures

This section presents computer-assisted technology used for designing and planning
of temporary structures. As shown in Table 1, the capabilities of existing technology
were summarized according to the topics related to planning and analysis of
temporary structures. Many approaches exist that focus on expanding the benefits of
information technology. Existing industry and academic approaches were reviewed to
identify the problems that information technology can solve. Automated, manual, and
insufficient processes were marked as A, M, and I, respectively.


Table 1. Computer-assisted approaches for temporary structure planning
Approaches compared (columns): Smart Scaffolder (2014); Scia Scaffolding (2009); Jongeling et al. (2008); Sulankivi et al. (2010); Kim and Ahn (2011); Akinci et al. (2002); Zhang et al. (2013); Kim et al. (2014).
Topics of concern (rows): Planning: analysis of building geometry; temporary structure design generation; temporary structure type selection; structural stability of temporary structures. Analysis: impact on safety (incl. spatial conflict); impact on productivity (incl. duration); impact on cost.
(Automated: A, Manual: M, Insufficient: I)
In order to identify required temporary structures from a digital model automatically, both geometric conditions and non-geometric conditions (i.e.,
materials and construction schedule) need to be analyzed by computers. Currently,
few approaches automatically analyze spatial and temporal project conditions and identify required temporary structures for specific construction tasks. Most of the
approaches conduct this process manually based on visual analysis. Only the safety
management tool developed by Kim and Ahn (2011) automatically recognized the
perimeter of a BIM model and incorporated a scaffolding system model around the
building model. This research, however, did not present any detailed methodology to
analyze a building’s geometry. Kim et al. (2014) established a theoretical foundation
in support of automated temporary structure type selection. Even though they focus
on automated type selection, they suggest a useful method to analyze the model’s
geometric condition based on the relationship between the work face and the base
surface. While their approach enables automated selection of temporary structure
types, the work face and base surface of each situation should still be specified
manually by a user.
There are several successful approaches to generate detailed designs of
temporary structures both automatically and manually. The tool developed by Kim


and Ahn (2011) automatically designs scaffolding systems around a building model.
Smart Scaffolder (2014) also generates pre-defined types of scaffolding systems
automatically around walls. While these two approaches assist users to generate
scaffolding systems rapidly, they design scaffolding systems exclusively for walls
and fail to distinguish scaffolding systems for different construction tasks. Scia
scaffolding (2009) provides functions in its user-interface that assist users to design
scaffolding systems manually. It also provides automated code-compliance and
structural stability checking for scaffolding systems. Sulankivi et al. (2010) and Kim
and Ahn (2011) incorporated safety features such as guardrails into the temporary
structure models. Lee et al. (2009) developed a tool that generates the formwork
layouts based on the prioritized design requirements. Zhang et al. (2013) presents a
BIM-based automation that identifies potential falling hazard locations and generates
fall protection systems such as guardrails and covers. Even though this approach
demonstrated its capability to analyze the building geometry to automatically identify
and design temporary structures needed for fall protection, many types of temporary
structures used by construction workers were not addressed in this research.
Several approaches incorporate the temporary structure models into the main
models and 4D construction simulations to analyze their impact on the project.
Jongeling et al. (2008) inserted temporary structure objects into building models to
simulate the distances between work crews. Akinci et al. (2002) specified the space
occupied by a scaffolding system in an attempt to analyze the spatial conflicts
between spaces occupied by construction activities and temporary structures.
However, these efforts rely on manually generated temporary structure plans.

The need for BIM-based automation of temporary structure planning

Model-based technology such as BIM is considered one of the solutions to improve construction safety. Many industry projects incorporated critical temporary
structures into building models to visualize and simulate construction project
realistically. However, the main benefit from applying model-based technology is
often limited to visualizing the temporary structures. The essential tasks for planning
and analyzing temporary structures have not been incorporated into BIM and users
cannot take full advantage of rich information in BIM.
Currently available technology does not suggest a methodology that
overcomes the explained drawbacks. Little research has been conducted to
automatically analyze geometric and non-geometric conditions of temporary
structures on a construction project. Since identification and design of required
temporary structures still requires experienced engineers to understand complex building geometry and schedules, potential exists to assist them in their decision making. Furthermore, an integrated tool that utilizes the rich and precise information available from digital building models, assesses the project conditions, and develops a temporary structure utilization plan automatically does not exist.
Recognizing the needs, this research attempts to develop and test a BIM-based
tool that automatically generates a temporary structure plan that supports pro-active
management of temporary structures utilizing the rich information available in digital


building models. The current scope was limited to temporary stair towers used during
roof construction.

AUTOMATED PLANNING OF STAIR TOWERS

This section presents details of the proposed system for automated planning of temporary stair towers. Temporary stair tower placement rules have been identified based on the authors' interviews with experts participating in on-going construction planning and management.

Temporary stair tower placement rules

Temporary stair towers provide construction workers with temporary access to higher levels of job sites in a construction project. In this research, placement rules
were established focusing on temporary stair towers used by roofing activities. The
rules include:

1. A temporary stair tower has to be within a minimum distance from work locations: Construction workers need to work within the minimum distance from a
temporary stair tower for fast evacuation.
2. Temporary stair tower placement has to avoid any spatial conflict with other
temporary structures (e.g. scaffolding): Spatial conflict between temporary
structures can cause productivity losses and safety hazards.
3. Temporary stair tower placement has to avoid any spatial conflict with other
construction activities (e.g. window and door installation): Installation
activities occurring adjacent to stair towers cause congestion and spatial
conflict that can lead to safety hazards.

The placement rules are project-specific and they provided the basis for
developing planning algorithms in this research.

Algorithm for temporary stair tower installation

The placement rules provide the knowledge to be implemented by BIM-based automation algorithms. The specific steps of the automation are explained in this
section.
According to the placement rules, an optimal location for a temporary stair tower needs to be within a minimum distance of any work location (rule 1). In order to
measure the distances from a temporary stair tower to work locations, two types of
information have to be derived from BIM and schedule: Candidate locations for a
temporary stair tower and work locations in a project.


Candidate locations for a temporary stair tower

In this research, candidate stair tower locations for roofing activity were
assumed to be identical to leading edges. Thus, geometric relationships between roofs
and between roofs and walls have been analyzed automatically. The steps and results
are shown in Figure 1.
All the roof elements were selected manually (1. Roof selection) and falling edges were identified automatically. Each edge was divided into three-foot-long (0.9 m) segments. Then, each of the short edges was analyzed to determine whether there is a sudden change in elevation in front of the edge (2. Detect falling edges). Finally, edges in front of walls were removed from the list of possible locations (3. Remove edges in front of walls). The vertices of all the remaining edges form the group of candidate temporary stair tower locations.
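A simplified 2D sketch of the edge subdivision step is given below; the geometry handling is illustrative only and does not reproduce the authors' BIM-platform implementation:

    import math

    def subdivide_edge(start, end, segment_length=3.0):
        """Split an edge (two XY points, in feet) into roughly equal short segments."""
        (x1, y1), (x2, y2) = start, end
        n = max(1, math.ceil(math.hypot(x2 - x1, y2 - y1) / segment_length))
        pts = [(x1 + (x2 - x1) * i / n, y1 + (y2 - y1) * i / n) for i in range(n + 1)]
        return list(zip(pts[:-1], pts[1:]))

    roof_edge = ((0.0, 0.0), (20.0, 0.0))      # a 20-ft roof edge (assumed)
    segments = subdivide_edge(*roof_edge)
    print(len(segments), segments[0])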

Figure 1. Identification of temporary stair tower location candidates

Work locations

Work locations were identified by creating a grid and identifying grid points within ceiling objects. As shown in Figure 2, a grid was created and each cell in the grid was represented by its center point. By examining whether each center point is within the boundary of the roofs, work locations were identified.
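The sketch below illustrates the grid test in 2D with the shapely library; the roof footprint and grid spacing are assumed values:

    from shapely.geometry import Point, Polygon

    roof = Polygon([(0, 0), (30, 0), (30, 20), (0, 20)])   # assumed footprint (ft)
    spacing = 5.0

    work_locations = []
    minx, miny, maxx, maxy = roof.bounds
    y = miny + spacing / 2
    while y < maxy:
        x = minx + spacing / 2
        while x < maxx:
            if roof.contains(Point(x, y)):      # keep grid centers inside the roof
                work_locations.append((x, y))
            x += spacing
        y += spacing

    print(len(work_locations), work_locations[:3])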

Figure 2. Identification of work locations


If the entire roof is composed of several work packages and each work
package is linked to a schedule activity, work locations for each work package need
to be identified as shown in Figure 3.

Figure 3. Grid points in one work package

Temporary stair tower location selection

In the previous steps (section 3.2.1. and 3.2.2.), candidate locations of temporary stair towers and work locations for scheduled roofing activities were
identified. Finally, a candidate location is selected if it has the minimum sum of distances to all the work locations. During the analysis, locations in front of scaffolding (rule 2) and window and door installation (rule 3) were excluded from the candidate locations. Geometric information of windows and doors was extracted from the building model. Scaffolding objects can be integrated into BIM and the schedule either manually or automatically (Kim and Teizer 2014); basic geometric information of the scaffolding objects, such as end points and height, was used to examine whether a candidate location is in front of a scaffold.
The results are illustrated in Figure 4. For a selected work package, two
temporary stair towers were created as a result of analyzing the original BIM and
schedule that include scaffolding.
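A compact sketch of the selection step (minimum total distance to the work locations, excluding conflicting candidates) is given below; the coordinates and the exclusion set are assumed for illustration:

    import math

    def pick_stair_tower(candidates, work_locations, excluded):
        """Choose the candidate with the minimum summed distance to all work locations."""
        def total_distance(c):
            return sum(math.dist(c, w) for w in work_locations)
        feasible = [c for c in candidates if c not in excluded]
        return min(feasible, key=total_distance)

    candidates = [(0, 0), (15, 0), (30, 0)]          # candidate edge vertices (assumed)
    work_locations = [(10, 10), (20, 10), (25, 15)]  # grid centers on the roof (assumed)
    excluded = {(30, 0)}                             # e.g., in front of scaffolding

    print(pick_stair_tower(candidates, work_locations, excluded))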

Figure 4. Temporary stair towers in BIM


CONCLUSION AND DISCUSSION

Despite their importance, temporary structures are often installed without sufficient planning efforts, and their impact on the project is rarely reviewed before
they are installed. The presented research attempted to overcome existing drawbacks
in construction planning practices and technology by automatically generating
temporary structures focusing on planning of temporary stair towers. A set of
algorithms were created based on the placement rules. The results show that optimal
locations for temporary stair towers were selected for one work package by
implementing the algorithm in a realistic construction project.
By incorporating more types of temporary structures, the research has the
potential to improve safety and productivity of a construction project by assisting construction planners in making optimal decisions related to temporary structures.
Construction site coordination plans can become more realistic by visualizing temporary structures and associated safety hazards in BIM.
However, for more realistic planning of these temporary structures, basic
algorithms presented in this paper need further development. First, placement rules
need to be refined to account for various possible cases. For example, in addition to
the minimum total distance to work locations, more placement rules need to be established
through interviews with construction experts. Also, instead of focusing on one
location and time, the algorithms need to be improved to enable temporary structure
planning over a longer period of time in the construction schedule.

REFERENCES

Jongeling, R., Kim, J., Fischer, M., Mourgues, C., Olofsson, T., (2008) “Quantitative
analysis of workflow, temporary structure usage, and productivity using 4D
models.” J. Aut. in Constr., 17 (6), 780-791.
Kim, J., Fischer, M., Kunz, J., Levitt, R. (2014). “Semiautomated Scaffolding
Planning: Development of the Feature Lexicon for Computer Application.” J.
Computing in Civil Engr.
Kim, H., Ahn, H. (2011). “Temporary facility planning of a construction project using
BIM (Building Information Modeling).” Proc., 2011 ASCE International
Workshop on Computing in Civil Engineering, ASCE, 627-634.
Kim, K., Teizer, J. (2014). “Automated design and planning of scaffolding systems
using building information modeling.” J. Adv. Engr. Informatics, 28, 66-80.
Lee, C., Ham, S., Lee, G. (2009). “The development of automatic module for
formwork layout using the BIM.” Proc., ICCEM/ICCPM, Vol. 3, 1266-1271.
Nemetschek (2009). “Scia Scaffolding: providing an accurate design and time-saving
workflow.” <http://www.scia-online.com/www/websiteUS.nsf/0/
e87baaf3b09f3439c125758a004a411a/$FILE/Scia-Scaffolding.pdf> (Nov.10,
2012).
Ratay, R. (1996). Handbook of Temporary Structures in Construction: Engineering
Standards, Designs, Practices and Procedures, McGraw-Hill, New York.


Smart Scaffolder (2014). “BIM Toolbox”, <http://www.smartscaffolder.com> (Dec.


20, 2014).
Sulankivi, K., Kähkönen, K., Mäkelä, T., Kiviniemi, M. (2010). “4D-BIM for
construction safety planning.” Proc., W099-Special Track 18th CIB World
Building Congress, 117–128.
Zhang, S., Teizer, J., Lee, J. K., Eastman, C. M., & Venugopal, M. (2013). “Building
information modeling (BIM) and safety: Automatic safety checking of
construction models and schedules.” J. Aut. in Constr., 29, 183-195.


Applying a Reference Collection to Develop a Domain Ontology for Supporting Information Retrieval

Nai-Wen Chi1; Yu-Huei Jin1; and Shang-Hsien Hsieh1

1
Dept. of Civil Engineering, National Taiwan Univ.,
No. 1, Roosevelt Rd., Sec. 4, Taipei City, 10617, Taiwan.
E-mail: [email protected]; [email protected]; [email protected]

Abstract
Technical documents are often generated during
Architecture/Engineering/Construction (A/E/C) projects and research.
Information Retrieval (IR) is a common technique to manage growing technical
document collections. However, due to the complexity of technical documents,
applying IR directly to technical documents often leads to unsatisfactory results.
Many semantic approaches, such as ontology, are applied to enhance the
performance of IR. Developing domain ontologies requires human effort and thus is a time-consuming task. Further, developing automated approaches for
supporting ontology development has become an important requirement.
Reference collections developed to evaluate IR performance can be found in
many IR research studies. They are representative subsets from entire
document collections and are labeled by domain experts. In order to enhance IR
performance, the authors propose an automated approach to grow the base
domain ontology from a reference collection. The authors also validate the
proposed approach on an earthquake engineering reference collection, called
the NCREE collection. The results show that the proposed approach is effective
in enhancing IR performance. In addition, the workload of the proposed
approach is also affordable for domain experts and can become a possible
solution for the automation of ontology development.

Keywords: Ontology; Information retrieval; Earthquake engineering


INTRODUCTION

Engineering documents, such as research records, conference papers, or technical reports, are often generated during
Architecture/Engineering/Construction (A/E/C) projects and research. In order to
efficiently manage the growing number of engineering documents, different types
of knowledge management approaches are applied to the digitalized documents.
Information Retrieval (IR) is one such common approach and is implemented on
common search engine systems (e.g., Google, Yahoo, and Bing). It enables people
to locate documents quickly by delivering query terms that can represent their
information requests.
Although IR is a mature technique, a generalized IR system cannot always achieve satisfactory performance on domain-specific documents, for several reasons. For engineering documents, the terminologies of the engineering knowledge domain and the complexity of document structures organized around multiple topics often lead to inaccurate search results. Specialized IR systems are created to cope with domain-specific documents. In many cases, the scales of domain-specific document collections are relatively small, which allows more diverse algorithms with higher computational complexity to be applied. Special-purpose IR systems are often supported by
domain knowledge or semantic resources. Ontology is a common example.
Gruber (1995) defined ontology as “a formal, explicit specification of
conceptualization”. An ontology is a machine-readable knowledge representation that describes domain knowledge by concepts, instances, and relationships. In the
A/E/C domain, ontologies are also applied to support IR tasks. For example, Lin
et al. (2012) applied domain ontology for partitioning earthquake engineering
technical documents into smaller passages. This method reduces the basic unit of
IR from a document level to a passage level and overcomes the unsatisfactory IR
performance caused by multi-topics and complicated document structure.
Despite the benefits ontology introduces to all artificial intelligence fields, its
availability still remains a problem. Ontology development requires efforts from
knowledge engineers. The limited availability of ontology also limits possible
applications. Consequently, many existing research studies were dedicated to the
automation of ontology development. Rezgui (2007) applied IR techniques to
extend domain ontologies with existing semantic resources and proposed an
iterative approach to maintain the existing ontology. Hsieh et al. (2011) utilized
the table of contents, glossary, and definitions of domain handbooks as a
structured skeleton, and then developed a semi-automated approach to refine the
ontology. Most of the existing research dedicated to the automation of


ontology development shares a common direction: reusing existing knowledge or semantic resources and then transforming them into an ontology
format. In this paper, the authors propose an ontology development approach
based on a similar concept. The proposed approach employs a reference collection,
which is an essential and common resource generated during an IR research. The
ontology that is developed from a reference collection is then applied to support
IR tasks.

REFERENCE COLLECTION: THE NCREE COLLECTION

A reference collection is developed for evaluating IR performance. It consists of a set of information requests, a set of collected documents, and all the relevance assessments between each information request and each document. The relevance assessments are labeled manually by domain experts. By delivering a specific information request as the query term to an IR system and then comparing the relevance assessments with the search results, we can determine whether or not the IR system retrieves the correct documents (i.e., whether a retrieved document is indeed related to the information requirement represented by the query term).
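As an illustration only (the field names below are assumptions, not the NCREE collection's actual file format), a reference collection can be viewed as three aligned pieces of data, and the relevance assessments are what allow a retrieved document to be judged:

```python
# Minimal sketch of a reference collection; identifiers and fields are illustrative.
reference_collection = {
    # information requests drafted by domain experts
    "information_requests": {2: "Seismic Evaluation and Retrofit", 10: "Ground Motion"},
    # the collected documents
    "documents": {"DOC-001": "full text ...", "DOC-002": "full text ..."},
    # expert-labeled relevance assessments: (request_id, doc_id) -> relevant?
    "assessments": {(2, "DOC-001"): True, (2, "DOC-002"): False},
}

def is_correct_hit(request_id, doc_id, collection=reference_collection):
    """True if a document returned by an IR system is labeled relevant to the request."""
    return collection["assessments"].get((request_id, doc_id), False)
```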
From 2006 to 2008, Lin et al. (2008) gathered 120 earthquake engineering
technical documents from the National Center for Research on Earthquake
Engineering (NCREE) and invited domain experts from the earthquake
engineering field to establish a reference collection for the document set. The
important tasks included developing eight important information requests, making
descriptions and narratives for defining the information requests, and making
relevance assessments between the eight information requests and the 120
documents. Such a reference collection is therefore able to support IR research in
the A/E/C domain. Between 2008 and 2014, the NCREE collection grew to 360 documents. The authors have maintained the collection by gathering the digital format of the updated technical reports from NCREE. During the summer of 2014, the authors again invited domain experts to provide relevance assessments for the 240 newly added documents. Such an extension makes the NCREE collection more reliable (as it is larger and more comprehensive) and increases its potential to support further IR research.
On the other hand, the reference collection itself also provides a rich knowledge
base and benefits to the researchers in the earthquake engineering domain.
Therefore, the development methodology of a domain ontology for managing the
growing document collection has motivated this research.
The NCREE collection provides a good case that illustrates the history of
growth for a document collection. In this paper, the authors adopt the initial 120


NCREE documents to develop the earthquake engineering domain ontology, and


then apply the ontology to the total 360 documents for validating the ontology.

METHODOLOGY

The basic concept for developing a domain ontology from a reference collection is shown in Figure 1. The process can be further divided into three steps:
(1) define the up-level concepts of ontology; (2) expand the concepts under each
information request; and (3) optimize the important parameters by maximizing IR
performance.
[Figure content: the reference collection (information requests, document collection, relevance assessments) supplies query terms; average precision provides the measure to determine the upper bound of L and to optimize N and K; the base domain ontology (level L = 1) is iteratively expanded using the top-N retrieved documents and the top-K important terms per document until the depth L reaches its upper bound.]
Figure 1. The concept for the development of a domain ontology from a reference collection.

The details of the three important steps are elaborated as follows:


1. Define the up-level concepts of ontology:
In a reference collection, “information request” indicates the most important
concepts within the knowledge domain. For an IR system (i.e., a search
engine), it covers all the possible query terms for a specific topic that users
may deliver. The information requests are often drafted by domain experts
via a brainstorming process. Since information requests reflect the most
important concepts within the knowledge domain, they represent the most
up-level concepts within the domain ontology and can serve as the initial
input for the proposed iterative concept expansion process.


2. Expand the concepts under each information request:
Given the highest level important concepts (i.e., the information requests of
the reference collection), the next step is to expand the relevant concepts
under each information request. The iterative process is shown in Figure 1
and elaborated in Figure 2. Each up-level concept can be regarded as a query
term. After delivering the query term to an IR system, a set of retrieved
documents will be returned. The authors first gather the top-N documents
from the retrieved documents. The top-N documents are the most relevant
documents to a higher level concept. Therefore, the authors then acquire the
top-K important terms within each top-N document. The importance of each
term within a specific document can be ranked by many ordinary term
weighting approaches such as a TF (Term Frequency) model (Salton, Wong,
& Yang, 1975). The top-K terms per each top-N document are then regarded
as the lower level concepts that are expanded from the up-level concepts. For
a one-round concept extension, N×K relevant concepts will be expanded. As Figure 2 illustrates, such a concept expansion process is iterative and will be executed L times (a minimal code sketch of this loop is given after this list). In the next step, the optimization procedure for the three important parameters, namely N, K, and L, is discussed.
3. Optimize the important parameters by maximizing IR performance:
The criteria for selecting appropriate N, K, and L parameters are based on
trial and error within reasonable thresholds. As search engine users know from experience, the number of retrieved documents often exceeds what they are willing to inspect. Thus, users might browse only the top-5, top-10, or top-20 retrieved documents to keep the highly relevant results (i.e., neglecting the less relevant, lower-ranked documents). The same idea can be applied to selecting the N,
K, and L parameters for the proposed methodology. The possible upper
bound of the three parameters varies with different data collections. Within
the upper-bound of the three parameters, all possible permutations will be
performed to maximize the IR performance. As for the measure of the IR
performance, the reference collection can provide an average precision (AP)
(Manning, Raghavan, & Schütze, 2008) for each information request. An AP
considers the two important IR measures, precision and recall, simultaneously, and is therefore a good indicator of IR performance. For each information request, once the best possible AP is
identified, the corresponding N, K, and L values can also be identified to
develop the ontology.
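To make this expansion loop concrete, the following Python sketch shows one possible implementation of the iterative top-N / top-K procedure. It is an illustration rather than the authors' code: the `search` function (returning document ids ranked by an IR system) and the `documents` mapping are assumed inputs, and the term weighting is the simple TF model mentioned above.

```python
from collections import Counter

def top_k_terms(text, k):
    """Rank the terms of one document with a plain TF (term frequency) model."""
    counts = Counter(text.lower().split())
    return [term for term, _ in counts.most_common(k)]

def expand_concepts(up_level_concept, search, documents, n, k, depth):
    """Expand one up-level concept into `depth` (L) levels of lower-level concepts."""
    levels, frontier = [], [up_level_concept]
    for _ in range(depth):                         # iterate the process L times
        next_level = []
        for query in frontier:
            for doc_id in search(query)[:n]:       # top-N retrieved documents
                next_level.extend(top_k_terms(documents[doc_id], k))  # top-K terms each
        levels.append(next_level)                  # one round adds N*K concepts per query
        frontier = next_level
    return levels
```

The parameters N, K, and L would then be chosen, as described in step 3, by enumerating the permutations below their upper bounds and keeping the combination that maximizes average precision on the reference collection.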


[Figure content: an upper-level concept is delivered as a query term to the search engine system; the top-N retrieved documents each yield top-K important terms as lower-level concepts, which become the up-level concepts of the next iteration until the termination requirement is met.]
Figure 2. The iterative process of acquiring extended concepts.

EXPERIMENTS

The proposed methodology is implemented on the NCREE collection with the upper bounds of N, K, and L set to five. The results are shown in Table 1. Based on the eight information requests (as the highest level concepts), the authors completed the earthquake engineering domain ontology.

Table 1. The IR performance of the ontology-based query expansion.


ID   Topic                                      L   N   K   Average Precision
2 Seismic Evaluation and Retrofit 2 1 3 0.763
3 Seismic Experiment 5 5 2 0.733
5 Structural Control 5 4 3 0.836
6 Seismic Hazard Simulation 2 5 5 0.784
7 Computational Mechanics 2 2 2 0.509
9 System Monitoring 5 3 1 0.773
10 Ground Motion 2 2 1 0.805
11 Geology and Geotechnical Engineering 3 2 2 0.931


After the development of the domain ontology is concluded, the next important step is validation. In the section “REFERENCE COLLECTION: THE NCREE COLLECTION”, the authors indicated that the domain ontology is designed for assisting IR tasks and managing the growing document collection. Therefore, the authors select a common IR application, “query expansion” (Qiu & Frei, 1993), to validate the ontology. Query expansion is a technique that achieves higher IR performance by enhancing the query terms (mostly implemented by appending relevant keywords, synonyms, or definitions to the query term).
Each information request in the NCREE collection has a detailed description
as the definition that was provided by a domain expert. Therefore, the authors
designed three query expansion strategies for comparing the IR performance.
They are: (1) baseline: use only the topic name of the information request as the
query term; (2) description: append the human-defined description to the topic as
the query term; and (3) ontology-based: append all the expanded concepts of the
information request to the topic as the query term. The average precision of the
three strategies is shown in Table 2.

Table 2. The IR performance on query expansion applications.

                                                Average Precision
ID   Topic                                      Ontology-Based   Description   Baseline
2 Seismic Evaluation and Retrofit 0.763 0.730 0.708
3 Seismic Experiment 0.733 0.560 0.544
5 Structural Control 0.836 0.671 0.675
6 Seismic Hazard Simulation 0.784 0.675 0.787
7 Computational Mechanics 0.509 0.369 0.698
9 System Monitoring 0.773 0.282 0.585
10 Ground Motion 0.805 0.737 0.517
11 Geology and Geotechnical Engineering 0.931 0.989 0.749

The results show that the strategy “ontology-based” achieves the highest IR
performance for most information requests. This indicates that the ontology
developed in this paper can not only perform successful query expansions, but
also replace human-defined definitions for IR tasks.
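For completeness, a minimal Python sketch of the average precision measure and of the ontology-based expansion strategy compared in Table 2 is given below; the function names are illustrative, and this is not the authors' implementation.

```python
def average_precision(ranked_doc_ids, relevant_ids):
    """Sum of the precision values at each rank where a relevant document appears,
    divided by the total number of relevant documents for the request."""
    hits, precision_sum = 0, 0.0
    for rank, doc_id in enumerate(ranked_doc_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant_ids) if relevant_ids else 0.0

def ontology_based_query(topic, expanded_concepts):
    """Strategy (3): append all expanded concepts of the information request to the topic."""
    return topic + " " + " ".join(expanded_concepts)
```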

CONCLUSIONS

This paper proposed a semi-automated approach to develop a domain ontology from a part of the reference collection (111 documents) and applied it to an entire document set (with 360 documents) for assisting with IR tasks. The results showed that the proposed methodology can reorganize the existing knowledge into a better knowledge representation and also achieve satisfactory IR performance for the growing document collection.

REFERENCES

Gruber, T. R. (1995). "Toward principles for the design of ontologies used for
knowledge sharing." International Journal of Human-Computer Studies,
43(5–6), 907-928.
Hsieh, S. H., Lin, H. T., Chi, N. W., Chou, K. W., & Lin, K. Y. (2011). "Enabling
the development of base domain ontology through extraction of knowledge
from engineering domain handbooks." Advanced Engineering Informatics,
25(2), 288-296.
Lin, H.-T., Chi, N.-W., & Hsieh, S.-H. (2012). "A concept-based information
retrieval approach for engineering domain-specific technical documents."
Advanced Engineering Informatics, 26(2), 349-360.
Lin, K. Y., Hsieh, S. H., Tserng, H. P., Chou, K. W., Lin, H. T., Huang, C. P., &
Tzeng, K. F. (2008). "Enabling the creation of domain-specific reference
collections to support text-based information retrieval experiments in the
architecture, engineering and construction industries." Advanced Engineering
Informatics, 22(3), 350-361.
Manning, C. D., Raghavan, P., & Schütze, H. (2008). "Introduction to Information
Retrieval." Cambridge University Press.
Qiu, Y., & Frei, H.-P. (1993). "Concept based query expansion." in the
Proceedings of the 16th annual international ACM SIGIR conference on
Research and development in information retrieval, Jun. 27-Jul. 01,
Pittsburgh, Pennsylvania, USA.
Rezgui, Y. (2007). "Text-based domain ontology building using Tf-Idf and metric
clusters techniques." Knowledge Engineering Review, 22(4), 379-403.
Salton, G., Wong, A., & Yang, C. S. (1975). "A vector space model for automatic
indexing." Commun. ACM, 18(11), 613-620.


BIM-Driven Islamic Construction: Part 1 – Digital Classification


A. M. Almaimani1 and N. O. Nawari2
1PhD student, University of Florida, College of Design, Construction & Planning, School of
Architecture, P.O. Box 115702, 1480 Inner Road, Gainesville, FL 32611-5702; email:
[email protected].
2Associate Professor, University of Florida, College of Design, Construction & Planning,
School of Architecture, P.O. Box 115702, 1480 Inner Road, Gainesville, FL 32611-5702;
email: [email protected].

ABSTRACT
This research focuses on developing Building Information Modelling (BIM)
guidelines and libraries for Islamic Architecture (IA) and aims to enhance design and reduce the time
and cost of projects that use Islamic Architecture styles. Our main objective is to create BIM-
driven objects and strategies for Islamic Architecture elements that are organized
chronologically according to the history of Islamic Architecture. Islamic architecture contains
a massive amount of information that has yet to be digitally classified according to eras and
styles. Categories in use include styles and characters, construction methods, structural
elements, and architectural components. This part centers on providing the fundamentals of
Islamic Architecture informatics by building the framework for BIM models and guidelines. It
provides schema and critical data such as identifying the different models, shapes, and forms
of construction, structure, and ornamentation of Islamic Architecture for digitalized parametric
building identity.

KEYWORDS: Building information modeling (BIM), Islamic Architecture, BIM libraries,


and digital classification.

INTRODUCTION

Islamic Architecture (IA) throughout history has been characterized by distinctive features that depict regional variations in both Islamic and non-Islamic countries. These
variations are visible in various Mosques, houses and gardens which utilize distinctive arches,
tile designs, towers and interior spaces. Islamic architecture is a vessel of Islamic civilization
and it is important that the reciprocity of these two facets be recognized. As defined by Grube
(1987), IA displays a set of architectural and spatial features, such as introspection, that are
‘inherent in Islam as a cultural phenomenon’.
Establishing digital classification of IA will help in facilitating a better understanding
of IA while also providing an efficient future utilization and application in the IA design realm.
Presently, there are a limited number of research efforts in digital classification of Islamic
Architecture. An example of these efforts includes the work of Djibril et al. (2006), which
developed a region based indexing and classification system for Islamic star pattern images
using rotational symmetry information. Their classification system is based on the number of


folds by which an image is characterized and the image’s fundamental region and class.
Okamura et al. (2007) have similarly established semantic digital resources of Islamic historical
buildings focusing on Islamic architecture in Isfahan, Iran. Okamura et al.’s research
demonstrated that a topic maps-based semantic model applied to collaborative metadata
management paradigms can be easily exploited as a tool to enhance traditional architectural
design and cross-disciplinary studies. Another example is the research effort conducted by
Djibril et al. (2008), which investigated geometrical patterns in IA and developed an indexing
and classification system using discrete symmetry groups. It is a general computational model
for the extraction of symmetry features of Islamic Geometrical Patterns (IGP) images. IGPs are
classified into three pattern based categories. The first pattern-category describes the patterns
generated by translation along one direction. The second-pattern contains translational
symmetry in two independent directions. The third, which is called rosettes, describes patterns
that begin at a central point and grow radially outward.
These cited studies demonstrate recent efforts to classify certain geometric patterns in
Islamic Architecture into specific categories while ignoring the overall categorization of Islamic
structures into a general purpose digital classification system. This study aims to develop a
digital library of IA by classifying, categorizing, and labeling all the elements, forms and data
of Islamic architecture in order to facilitate access to the information and applications. This
work strives to enrich the creativity and skills of designers working with Islamic architecture
and culture.

METHODOLOGY
Classification

The general objective of this work is to enhance the understanding and awareness of Islamic Architecture’s vocabulary, design proportions, canonical
nonfigurative repertoire of ornaments, and spatial order along with its disposition. A
classification of Islamic Architecture arranged chronologically by style and element
type will assist in identifying its main characteristics, symbolic discourse, semiotic
specificity and richness. Organizing IA data bonded with three dimensional forms will
assist architects who seek to use this data in their conceptual and final design phases.

There is a myriad of architectural styles used throughout the Islamic regions. This work seeks to develop a classification schema (Figure 1) of Islamic Architectural elements in order to build a BIM digital library that clarifies IA styles and their respective eras.
The classification chart is arranged so that element types are identified and sorted
based on their historical setting.

The data used to generate this chart is extracted from genuine Islamic Architectural references accredited and recommended primarily by Harvard and MIT universities through the Aga Khan Program for Islamic Architecture (Islamic
architecture - Aga Khan Documentation Center, 2015). Other references include Art of Islam by Titus Burckhardt (2009), Historical Atlas of the Islamic World by Malise Ruthven and Azim Nanji (2004), and Atlas Tarikh Aleslam by Hussain Moanis (1987).

Figure 1 depicts style and building object type classifications that assist in organizing the information according to a state’s timeline. The hierarchical schema
shown in Figure 2 is developed and organized first, to ease the extraction of
information, and second, to help identify the sequential flow of information of an IA
element or style based on its origin, period, style, and building. Furthermore, this will
aid consumers of the library when navigating the complex sets of architectural objects
included in the digital IA data sets. These preliminary charts represent the initial
schema for the Islamic Architecture database digital library.

Figure 1: Classification chart of Islamic Architecture.


Figure 2. Hierarchical schema of the digital classification of IA.


Schema

Table 1 below illustrates an example of details of the main Islamic Architecture classification for the digital library. It displays character and vocabulary
categorized according to historical chronology. All architectural and structural
elements are then sorted by the period in which they were found. The data shown
represents the initial phase of the IA digital library development. Another three phases
are planned which will incorporate the remaining data presented in Figure 1 into Table
1. Following the incorporation of the last phase, the digital classification part will be
considered complete.
Table 1: An example of classification details of IA
Umayyad style (Period: A.D. 661-750; A.H. 40-132)
  Architectural and structural elements: domes; ornamented ceilings; doors; columns; ornaments and inscriptions; walls.
  Main buildings and features:
    i. Umayyad Mosque in Damascus: has four niches, four domes, three minarets (spires), and four doors; mosaic covers all the ceilings, walls, aisles, arches, and exterior walls.
    ii. Dome of the Rock Mosque: the only mosque with an octagonal plan.
    iii. Stone was the main material for construction.

Bno Alnaser style, south of Andalus (Period: A.D. 1232-1492)
  Architectural and structural elements: accentuated curvature of the horseshoe arch; ornamentations; decorations; Arabic pools (birka); quadripartite garden.
  Main buildings and features:
    i. Curitiba Mosque.
    ii. Alhambra Palaces.
    iii. Alhambra City.

Abbasid style (Period: A.D. 750-1250; A.H. 132-656)
  Architectural and structural elements: variety in minaret/spire shapes; more ornamented buildings; separated minbars; greater use of muqarnas; massive courtyards in the main mosques; plaster.
  Main buildings and features:
    i. Most of the buildings were influenced by Persian and Iraqi styles.
    ii. Adobe was the main material for construction.
    iii. Samarra Mosque.
    iv. Spiraling cone of minaret.
    v. Ibn Toulon Mosque.
    vi. Baghdad City.

Any Islamic Architecture project requires a lot of facts and references to educate
designers regarding the vocabulary, style history and various other details related to a
particular architectural style. The digital classification will aid in providing key
information about IA that enhances and enriches design ideas and processes (Figure 3).
Examples of the use of the IA library include answering questions such as: what were the most famous buildings in the same category, what were the story, concept, and philosophy of the projects, and what construction methods were used during those projects?
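As an illustration of how the classified data could be queried to answer such questions once it is digitized, the following Python sketch encodes two of the styles from Table 1; the dictionary layout and example entries are assumptions for illustration, not the authors' database schema.

```python
# Illustrative encoding of part of Table 1 (not the authors' actual schema).
ia_library = {
    "Umayyad style": {
        "period_ad": (661, 750),
        "elements": ["dome", "ornamented ceiling", "door", "column",
                     "ornaments and inscriptions", "wall"],
        "buildings": ["Umayyad Mosque in Damascus", "Dome of the Rock Mosque"],
        "construction": "stone",
    },
    "Abbasid style": {
        "period_ad": (750, 1250),
        "elements": ["minaret", "minbar", "muqarnas", "courtyard", "plaster"],
        "buildings": ["Samarra Mosque", "Ibn Toulon Mosque", "Baghdad City"],
        "construction": "adobe",
    },
}

def famous_buildings(style, library=ia_library):
    """Answer 'what were the most famous buildings in this category?'"""
    return library.get(style, {}).get("buildings", [])

def construction_method(style, library=ia_library):
    """Answer 'what construction methods were used during those projects?'"""
    return library.get(style, {}).get("construction")
```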


(a) Era: Ottoman; character: Hijazi style; form: Maqad doorway. (b) Era: Ottoman; character: Hijazi style; form: typical Maqad for a mosque.
Figure 3: Examples of data provided by the BIM-driven Islamic Construction.

CONCLUSION
Islamic Architecture has a substantial amount of distinctive information about design
concepts, spatial features, forms, façades, and building functions that need to be made available
to designers and engineers. BIM-driven Islamic Architecture is proposed in this paper as a
vehicle to produce a digital library of BIM models that include accurate details and descriptions
of Islamic construction and philosophy. The digital data classification phase of BIM-driven
Islamic construction is an essential step for the creation of a digital library that includes the
model data of the most iconic Islamic eras. The classification is based on historical chronology,
location, vocabulary and style and other related details. Information provided in this
classification aims to enlighten designers with a comprehensive overview of relevant data
regarding Islamic construction.
ACKNOWLEDGEMENT
The authors would like to thank King Abdul-Aziz University, College of Environmental Design, Architecture Department, for sponsoring Mr. Ayad Almaimani's Ph.D. research at the University
of Florida.


REFERENCES

Albert, F., Valiente, J., & Gomis, J. (2005). A computational model for pattern and
tile designs classification using plane symmetry groups. CIARP'05 Proceedings of the
10th Iberoamerican Congress conference on Progress in Pattern Recognition, Image
Analysis and Applications, 11. Retrieved November 11, 2014, from
http://dl.acm.org/citation.cfm?id=2099369.2099458&coll=DL&dl=GUIDE
Burckhardt, T. (2009). Art of Islam, Language and Meaning. Bloomington, Indiana: World
Wisdom, Inc. Retrieved October 22, 2014.
Djibril , M., Hadi , Y., & Haj Thami , R. (2006). Fundamental region based indexing and
classification of islamic star pattern images. ICIAR'06 Proceedings of the Third
international conference on Image Analysis and Recognition , Volume Part II, 11.
Retrieved October 28, 2014, from
http://dl.acm.org/citation.cfm?id=2110938.2111024&coll=DL&dl=GUIDE
Djibril, M., & Haj Thami, R. (2008). Islamic geometrical patterns indexing and classification
using discrete symmetry groups. Journal on Computing and Cultural Heritage
(JOCCH), 1(2), 14. doi:10.1145/1434763.1434767.
GRUBE, E. (1987). The Art of Islamic Pottery (Vol. 1). The Metropolitan Museum of Art &
JSTOR. Retrieved from The Metropolitan Museum of Art Bulletin.
Hussain Moanis. (1987). Atlas Tarikh Aleslam. Cairo: Alzahraa elaam alarabi. Retrieved
August 29, 2014.
Miller, S. G. (2005). Finding Order in the Moroccan City: The Hubus of the Great Mosque of
Tangier as an Agent of Urban Change. Muqarnas : An Annual on the Visual Culture of
the Islamic World, XXII, 265-283. Retrieved october 3, 2014, from archnet:
http://archnet.org/sites/6353/publications/5427
Okamura, T., Fukami, N., Robert, C., & Andres, F. (2007, June). Digital Resource Semantic
Management of Islamic Buildings Case Study on Isfahan Islamic Architecture Digital
Collection. International Journal of Architectural Computing, 5(Volume 5, Number 2 /
June 2007). doi:10.1260/1478-0771.5.2.356.
Peterson, A. (1996). Dictionary of Islamic Architecture (1st ed.). London: Routledge.
Rabbat, N. (2012, Number 6). What is Islamic architecture anyway? Journal of Art
Historiography, (6), 15. Retrieved November 12, 2014, from
https://arthistoriography.files.wordpress.com/2012/05/rabbat1.pdf
Ruthven, M., & Nanji, A. (2004). Historical Atlas of Islam. Harvard University Press. Retrieved
November 05, 2014.
The Aga Khan Documentation Center at MIT Libraries. (n.d.). Islamic architecture - Aga Khan
Documentation Center. Retrieved February 16, 2015, from MIT Libraries:
http://libguides.mit.edu/islam-arch.


BIM-Driven Islamic Construction: Part 2 – Digital Libraries


A. M. Almaimani1 and N. O. Nawari2
1PhD student, University of Florida, College of Design, Construction & Planning, School of
Architecture, P.O. Box 115702, 1480 Inner Road, Gainesville, FL 32611-5702; email:
[email protected].
2Associate Professor, University of Florida, College of Design, Construction & Planning,
School of Architecture, P.O. Box 115702, 1480 Inner Road, Gainesville, FL 32611-5702;
email: [email protected].

ABSTRACT

This research focuses on developing Building Information Modelling (BIM) libraries and guidelines for Islamic Architecture (IA). It aims to enhance design and reduce the time and
cost for projects that utilize Islamic-styles of architecture. The main objective of Part 2 is to
develop BIM-driven (BIM-IA) libraries and guidelines for Islamic Architecture components
that are organized chronologically based on Islamic Architecture history. The resultant BIM-IA
is intended to be an organized Islamic architecture database that includes the myriad of Islamic
building characteristics and styles. The proposed BIM libraries will include most Islamic
Architecture forms, styles, characters, construction methods, structural elements, and the
various architectural components that can be used in projects. Application of this library is
demonstrated in an example that designs a mosque. This example shows how the BIM-IA will
assist designers when making decisions about mosque design and function while also
demonstrating how valuable time and resources can be conserved when searching for optimum
design choices. The research strives to advance building informatics by developing parametric
digital components and guidelines of Islamic Construction.

KEYWORDS

Building information modeling (BIM), Islamic Architecture (IA), BIM libraries, IA database,
and digital classification.


INTRODUCTION

Building Information Modeling (BIM) is a process that produces digital representations of physical objects. Using this process to produce a digital library of BIM-driven Islamic
Construction is an important part of digital architecture. Digital Libraries are important because
they add a large amount of data to the existing Architectural Library. A massive amount of data
about Islamic Architecture is currently scattered throughout many different resources, written in many different languages (Peterson, 1996). Establishing a BIM library for these
various resources in a coherent manner will make the ingenious artistic and structural methods
of IA consumable by architects and engineers in actual projects thereby enhancing creativity
and design skills. This digital library will be a compendium of information about every era of
Islamic Architecture which extends as far back as fourteen hundred years ago. It will do this by
synchronizing data and three dimensional forms that illustrate Islamic architecture. Furthermore,
it will catalogue Islamic architectural forms and elements that can be used to enhance design
creativity. The library will help designers access a wide variety of architectural and structural
forms in addition to styles that represent different historical eras. These include, for example,
Islamic aesthetic features such as brick patterning, the clever use of marble and stone in bands
of contrasting colors when stone is a major building material, laying emphasis on ingenious
symmetry in design as well as in organization of inner spaces and architectural motifs. The
Islamic philosopher Ibn Khaldun (1377) described the subject of aesthetics in Islamic
architecture as techniques, language and materials, in which, for instance, the walls come to
look like colorful flower beds.

The proposed approach will allow the project to have an overarching sense of unity.
This means the library will contain similar shapes, elements, constructions, structures, and more
importantly a single unified style language. The data provided will help influence design
decisions quickly because of testimonies provided in the application and due to its ease of
accessibility. Information about various Islamic architectural eras will be immediately available
to designers. For instance, a state’s history, the most famous buildings or, more importantly, the
architectural forms and elements of windows, domes, and spires will be readily accessible.
Details about windows, domes and spires will be ready to be taken to fabrication machines so
that physical models can be immediately produced. Every form will include data that enriches
a designer's structural and architectural design understanding of Islamic Architecture.


DIGITAL LIBRARY

Models in architecture have always played an important role throughout history. Ancient Egyptians used models in the form of drawings and physical objects. Plans of the Tomb
of Rameses IV and the drawing of the shrine from Ghorâb are good examples (Clark and
Engelbach, 2014). Models used by the ancient Greeks and Romans have also been discovered
(Shattner, 1990). During the middle ages models were used increasingly for the design and
construction of cathedrals (Kostof, 1977). These models were used as an integral part of the
design and decoration of building exteriors and interiors. The history of architecture finds its
origins in a work by the Roman architect Vitruvius (European Architecture Series) who had
traced the origination of architecture to the imitation or modeling of nature. He observed that,
seeking shelter, humans learned lessons from swallows and bees who built their habitations.
Then, humans started using natural materials to create forms that are based on shapes and
proportions found in nature. An example of this is the Vitruvian Man which affirms that the
figure of a man could be inscribed in both the circle and the square; the fundamental geometrical
forms on which the design universe was ordered.

In the modern digital era, modeling has advanced significantly in the last decades.
Particularly, building information modeling (BIM), which is fundamentally changing the role
of computation in building design by creating a database of building objects that can be used
from the design phase to the actual construction of a building (Nawari et al. 2014). This research
aims to develop a BIM library for Islamic Architecture. The research study represents a critical
step towards Islamic Architecture informatics.


Using the digital classification presented in Part 1 (Almaimani and Nawari, 2015), a BIM
library will be developed using Autodesk Revit software. This digital library is sorted by
historical chronology where the architectural elements will then be arranged by the period in
which they belong. For example, the BIM-Islamic Architecture library for the Ottoman period
will maintain all the architecture elements for that period from the time of its founding to its
collapse (Figure 1).

Figure 1. Examples of eras and styles forming the BIM library.

APPLICATIONS
The proposed BIM digital library for Islamic Architecture is intended to provide
enhanced guidance and intricate details regarding Islamic Architecture to its user. An organized
BIM library will make it easier for designers to learn and gain design ideas from an interactive
system that outlines the different Islamic construction eras. Furthermore, focusing on specific
Islamic architectural styles will guide the user to design a better building that will have a clear
architectural identity. For example, designing a mosque requires a great number of three-dimensional forms and elements to support the design vision. The Islamic Architecture Library, which would


be supported by the building information modeling concept, has all the forms and data that can
lead the architects toward their creative vision. The designer can choose the desired template
for their design from a list of templates provided by the BIM-driven Islamic construction. For
example, if the designer chooses the Ottoman era then all the architectural elements, characters,
calligraphy, ornaments, furniture, light fixtures and other elements will be limited to that era
(Figures 2 and 3).
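As a simple illustration of this era filtering (the component records below are hypothetical and do not come from the actual Revit library), selecting a template can be treated as a filter over era-tagged library objects:

```python
# Hypothetical library records; a real implementation would query the BIM library.
components = [
    {"name": "Hijazi doorway",  "era": "Ottoman",  "category": "architectural element"},
    {"name": "Pointed arch",    "era": "Ottoman",  "category": "architectural element"},
    {"name": "Horseshoe arch",  "era": "Andalusi", "category": "architectural element"},
]

def components_for_template(era, category=None, library=components):
    """Return only the objects that belong to the era template chosen by the designer."""
    return [c for c in library
            if c["era"] == era and (category is None or c["category"] == category)]

ottoman_palette = components_for_template("Ottoman")   # everything offered is Ottoman-era
```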

Figure 2. Digital library application process.

This proposed library also has extended benefits during the conceptual design phase.
This includes the ability to enhance the expression and meaning of architectural and structural
concepts by supplying the user with information and resources that are appended to the library.
This can aid the user in identifying and selecting additional resources that are appropriate to
their design vision (Figure 3).


Figure 3: Appended metadata when accessing the proposed digital library.

Islamic Architecture is so full of designs, concepts, ideas, and a variety of multi-function buildings that it might prove very difficult to disseminate this information even to specialists.
Overcoming the disarray can only be done by using an application that supports the architect
with a varied amount of data that can fill gaps of knowledge or missing details about Islamic
construction. To illustrate the applicability of the proposed BIM-driven Islamic construction library, an example of how the library can be used to design a mosque is presented below in Figures 4a-4k.

Figure 4a depicts a plan of a mosque and illustrates some of the key elements that will
be designed with the help of the proposed digital library. Figure 4b is a 3D view showing
additional objects that are necessary for the design of a mosque. Figures 4c and 4d illustrate how designers can choose the type of ornamentation and material from a specific era provided by the proposed digital library. For the design of a Mehrab (Figure 4e), the BIM-driven library provides the designer with assistance in developing the necessary geometric properties and type character. The digital library also offers various options for columns and walls (Figures 4f and 4g). Examples of further options for the design of windows, doors, and entrances are given in Figures 4h and 4k.


[Figure panels (a)-(k) not reproduced.]
Figure 4: Application of BIM-driven Islamic Architecture library in the design of a mosque.


CONCLUSION

The design of Islamic architecture is approached differently from that of other civilizations. It contains a large variety of building styles with various forms, characters, functions, construction
methods, and more. This research aims to develop a BIM-driven library for Islamic Architecture.
It is intended to provide enhanced guidance and a wealth of details about Islamic Architecture
as well as culture and art. The proposed BIM library offers designers an opportunity to expand
their knowledge and gain creative design ideas from an interactive BIM system that is
developed with Islamic construction and civilization in mind.

ACKNOWLEDGEMENTS
Authors would like to thank King Abdul-Aziz University, College of Environmental Design,
Architecture Department, for sponsoring Mr. Ayad Almaimani’s Ph.D. research at the
University of Florida.

REFERENCES

Almaimani, A. and Nawari, N.O. (2015). BIM-Driven Islamic Construction: Part 1-Digital
Classification. Proceeding of the 2015 ASCE International Workshop on Computing in
Civil Engineering, June 21st – 23rd, 2015, Austin, Texas.
Britannica, Encyclopædia. Encyclopædia Britannica. Ed. Encyclopædia Britannica. 22 July 2014.
<http://www.britannica.com/EBchecked/topic/631310/Vitruvius>.
Clarke, Somers and R. Engelbach (2014). Ancient Egyptian Construction and Architecture
(Dover Books on Architecture). Dover Publications, 2014. 11 November 2014.

Ibn Khaldun (1377). The Muqaddimah: An Introduction to History. Translated by Franz


Rosenthal, N. J. Dawood (1967), Princeton University Press, ISBN 0-691-01754-9

Kostof, Spiro (1977) . The Architect: Chapters in the History of the Profession. Ed. Spiro Kostof.
illustrated, reprint. University of California Press, 1977. 20 october 2014.
Nawari, O. N. and Kuenstle, M. (2015). Building Information Modeling: A Framework for
structural design, CRC Press, April 2015.
Peterson, Andrew (1996). Dictionary of Islamic architecture (1st ed.). London: Routledge,
1996.


Integration of BIM and GIS: Highway Cut and Fill Earthwork Balancing

Hyunjoo Kim1; Zhenhua Chen2; Chung-Suk Cho3; Hyounseok Moon4; Kibum Ju4; and
Wonsik Choi4

1Department of Engineering Technology and Construction Management, University
of North Carolina at Charlotte, 9212 University City Blvd., Charlotte, NC 28223.
E-mail: [email protected]
2Department of Engineering Technology and Construction Management, University
of North Carolina at Charlotte, 9212 University City Blvd., Charlotte, NC 28223.
E-mail: [email protected]
3Department of Civil Engineering, Khalifa University, Abu Dhabi, UAE. E-mail:
[email protected]
4ICT Convergence and Integration Research Division, Korea Institute of Construction
Technology.

Abstract

The proposed paper intends to provide a technical review of BIM and GIS and assess the different strengths and weaknesses of each approach. Then, the
proposed project aims to present a newly developed data integration approach in
facilitating the continuous work processes of the two distinct environments. This
research utilizes a BIM-based (IFC) system to store road component data in highway construction and a GIS system to import data such as land boundaries and topographic data. In order to retrieve and integrate the two distinct types of data
formats, this research uses the concept of semantic web in RDF format to provide
semantic interoperability between BIM and GIS operations.

INTRODUCTION

Cut and fill earthwork is a very important part of highway construction projects. Earthwork is defined as the process of moving material from high areas (cut areas) to low areas (fill areas) to accomplish an appropriate road grade. Minimizing the amount
of earthwork is not simple. Different cut and fill quantities correspond to different cut
and fill workloads. An optimal cut and fill balancing design alternative leads to less total earthwork. Traditionally, engineers in highway projects use various types of
data formats for project information exchange purpose to perform cut and fill
calculations. In an earthwork operation, an accurate topographic survey of the work
site area needs to be developed and a layout (alignment) of the future road needs to be
designed, specifying the alignment and the grades. Since the process of calculating
the earthwork consists of numerous data (Goktepe and Lav, 2003), some researchers
turned to CAD/BIM technologies to include the vast amount of data in earthwork


calculation (Kim et al., 2014; Yabuki, and Shitani, 2005). However, not much
research has been conducted in utilizing GIS data for earthwork calculations. This
research proposes an integration system of BIM and GIS to perform a spatial data
analysis needed in earthwork calculations.
A case study is conducted and applied to a highway construction project
located in the southern part of Korea, which consists of earthwork and road pavement
construction. The related project data is retrieved/integrated through semantic web
approach from remote BIM and GIS files in REST web service protocol. A prototype
system is implemented first by processing initial retrieved data and then analyzing to
perform the necessary cut and fill simulations.

LITERATURE REVIEW

Integration of BIM and GIS Systems


There are several on-going efforts in the integration of BIM and GIS systems.
As illustrated by BuildingSMARTAlliance (2010), however, there exist fundamental
differences between BIM and GIS technologies. One of the significant differences is
in data structures. BIM is a highly standardized structure while GIS is a user defined
structure. It is also important to notice that data exchange in BIM is mostly file based
while GIS is server based. Even with these two extreme differences between BIM and
GIS systems, more attention has been paid to the integration of BIM and GIS recently
as shown in Table 1 which summaries each approach with the examples of the
existing software products in the industry. There are possibly four different BIM and
GIS integration approaches: BIM based, GIS based, proprietary and semantic web
approaches.
Table 1 summarizes different approaches to the integration of the two systems based on where the main emphasis was placed when developing the integrated system. For example, ArcGIS is developed by ESRI with the emphasis on GIS features, while BIM-based integration (Kim et al., 2014) focuses on the current IFC system by stressing the existing BIM data schema while integrating the two systems. However, both types of systems (GIS- or BIM-based integrations) still face significant
difficulties in reading data files in IFC/GML or dealing with lines or curves instead of
object entities. Also, there is another industry attempt with CityGML, FME and
Infraworks by creating a new data schema. However, the development of the new
data schema requires a significant amount of time and efforts (IFC development took
more than 15 years to complete the whole data schema). Interestingly, there is another
effort being made in the area of semantic web technology. Karan et al. (2014)
demonstrated an integration of BIM and GIS systems by extracting building and
geographical features in RDF file format.
This research finds that the semantic web satisfies the needs of BIM and GIS
by providing the user with IFC/GML readings, and object representations in building
components. Also, the semantic web does not require new data schema development
which may take a significant time. According to the summary in Table 1, it is
concluded that the semantic web meets most of the needs and requires the least effort
in development. Therefore, this research selected the semantic web to integrate BIM
and GIS for calculating earthwork operations in highway construction.


Table 1. Summary of existing approaches

                          GIS-based      BIM-based      Proprietary     Semantic Web
                          integration    integration    Standard(s)     integration
Advantages
  IFC Reading             Partial        Yes            Yes             Yes
  GML Reading             Partial        Yes            Yes             Yes
Disadvantage
  (difficult to develop)  Yes            Yes            Yes             No

Cut and Fill Calculations


Hare et al. (2011) find that cut and fill cost accounts for almost 25% of the total project cost. Goktepe and Lav (2003) also state that cut and fill balancing and earthwork optimization are a crucial issue in the highway preconstruction phase. In order to calculate
cut and fill quantity and find a balancing point in the earthwork, much research has
been conducted. Especially with the advance of computer technology, research
progressed to complex algorithms and CAD models. Nassar et al. (2011) took the
traditional earthwork method of Mass-haul diagrams (MD) and applied an algorithm
to automatically compute cut and fill balances of those Mass-haul diagrams. Cheng et
al. (2011) also produced a simulation model with the goal of improving productivity,
equipment efficiency, and construction safety in earthwork operations. A 3D surface
modeling system with a mobile 3D imaging system has been used to help in the
earthwork process. Chae et al. (2011) recognized that improving the process of
earthwork operations is difficult because the task generally involves human input and
automating the process is hard to accomplish. By applying computer software into cut
and fill operations, rapid calculations became possible (Banks 1998).
Due to the complexity of the cut and fill calculations, it is important to utilize
GIS data model as a type of input data (Nassar et al. 2011). However, most earthwork
applications have been limited to CAD data model, but not to GIS data. This research
utilizes a new trend of integrating GIS and BIM data model in highway projects to
perform the cut and fill calculations efficiently.

METHODOLOGY
The overall process for cut and fill calculation in this research includes the
necessary steps such as BIM data input process, GIS data input process, Semantic
Web integration, and spatial data analysis in earthwork calculations, as shown in Figure 1. In the first step of the cut and fill quantity simulation process, a Building Information Modeling (BIM) model data file is extracted to describe the
infrastructure geometric information such as road shape, centerline, cross section,
elevation, curb, and embankment. Secondly, raw project data (triangle coordinates
and group, geographical reference, soil type, infrastructure data, terrain data, etc.) are
collected as GIS input information. An Industry Foundation Classes (IFC) file is implemented for the BIM model, and a Geography Markup Language (GML) file stores the GIS data, following ISO (International Organization for Standardization) standards. Then, semantic
web integration will combine GIS and BIM in the same platform and lastly, spatial
data analysis such as earthwork operations will be performed.

[Figure content: BIM data input (road shape, centerline, elevation, geometry, location) and GIS data input (geographical reference, soil type, project data) feed spatial data analysis through the semantic web integration.]
Figure 1. Overall Data Integration/Analysis Process

GML (Geography Markup Language)


GML is used to store the geographical features for the project. GML is chosen
because it serves as both a modeling language and open interchange format on the
internet. The ability to integrate all forms of geographic information is the key to the
utility of GML. GML contains many primitive objects like geometry, coordinate
reference system, topology, time, unit of measure, map presentation styling rules etc.
The key geometry objects are Point, LineString, and Polygon. For each polygon shape, coordinates need to be specified using GML elements such as <gml:coordinates>, <gml:pos>, and <gml:posList>. For example, in order to represent a point, a 3D x,y,z value should be assigned to a point element by placing the series of values between the <gml:coordinates> and </gml:coordinates> marks. When a polygon is needed in a GML file, the coordinates are specified with x,y,z values filled between the <gml:posList> and </gml:posList> marks. The GML posList object forms a GML LinearRing object, which is part of a GML Polygon object. In this project, the polygon/triangle groups are used to represent the whole topographic surface of the worksite. All the objects are organized in a tree-hierarchy XML schema in the GML file for internet
use.
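As an illustration, the Python sketch below serializes the geometry described above for one terrain triangle using the standard library's ElementTree; it is a simplified example (coordinate reference details such as srsName are omitted) rather than the project's actual GML writer.

```python
import xml.etree.ElementTree as ET

GML = "http://www.opengis.net/gml"
ET.register_namespace("gml", GML)

def gml_point(x, y, z):
    """A gml:Point whose gml:coordinates element carries a single x,y,z triple."""
    point = ET.Element(f"{{{GML}}}Point")
    ET.SubElement(point, f"{{{GML}}}coordinates").text = f"{x},{y},{z}"
    return point

def gml_triangle(vertices):
    """A gml:Polygon built from three (x, y, z) vertices: the coordinates are placed
    in a gml:posList inside a gml:LinearRing, repeating the first vertex to close it."""
    polygon = ET.Element(f"{{{GML}}}Polygon")
    ring = ET.SubElement(ET.SubElement(polygon, f"{{{GML}}}exterior"),
                         f"{{{GML}}}LinearRing")
    closed = list(vertices) + [vertices[0]]
    ET.SubElement(ring, f"{{{GML}}}posList").text = " ".join(
        f"{x} {y} {z}" for x, y, z in closed)
    return polygon

# One triangle of the terrain surface; a full surface is a group of such polygons.
print(ET.tostring(gml_triangle([(0, 0, 10.0), (5, 0, 12.0), (0, 5, 11.0)]),
                  encoding="unicode"))
```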

BIM – IFC (Industry Foundation Classes)


IFC data schema is used to store infrastructure/highway data. The current IFC
data schema consists of building components and does not have data entities for the
road construction components. Thus, additional entities were needed to build a road
model to represent the highway construction components. For example,
the IfcRoadComponent entity, which is the super-entity, was created for the case study implementation. The implementation also has many attributes, such as GlobalId,


OwnerHistory, etc. Each attribute has its own data type. For example, the data type
of GlobalId is string, the data type of OwnerHistory is entity called IfcOwnerHistory.
Among the attributes of IfcRoad entity, one of the most important entities is
IfcRoadComponent. It is inherited into different structural members like IfcRoadWay,
IfcRoadBase, IfcRoadBed, IfcRoadCurb and IfcRoadDrainageSystem. The IfcRoad
entity is also inherited into IfcMunicipalRoad, IfcHighwayRoad, and other entities as
subentities. The geometry and location information of the infrastructure construction
components are stored as attributes of the entities.
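For illustration, the extended entities described above can be mirrored as a small class hierarchy. The Python sketch below is only a conceptual stand-in for the EXPRESS schema definitions; the attribute types and defaults are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class IfcRoadComponent:
    """Super-entity for road construction components (conceptual mirror, not EXPRESS)."""
    GlobalId: str
    OwnerHistory: str                              # stands in for an IfcOwnerHistory entity
    geometry: List[Tuple[float, float, float]] = field(default_factory=list)
    location: Tuple[float, float, float] = (0.0, 0.0, 0.0)

@dataclass
class IfcRoadWay(IfcRoadComponent):
    pavement_type: str = "asphalt"

@dataclass
class IfcRoadCurb(IfcRoadComponent):
    height_m: float = 0.15

@dataclass
class IfcRoad:
    """A road aggregating its structural members, e.g., IfcRoadWay and IfcRoadCurb."""
    GlobalId: str
    components: List[IfcRoadComponent] = field(default_factory=list)
```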

SEMANTIC WEB and RDF


Semantic web technology is the next-generation web standard for describing internet resources for information-sharing purposes. Using the semantic web, the semantic meaning of internet resources can be read and understood directly by computer algorithms. In this project, the mainstream semantic web standard Resource Description Framework (RDF) is adopted to describe the internet-based project resources. The RDF file is also organized in XML format, which means it contains many objects and attributes. First, the ontologies of the project data model are created with RDF. Then several instances are generated based on the original ontologies. For example, the general GIS and BIM data models are created as two RDF ontologies as <owl:Class> objects. Based on the actual project information, two <rdf:Description> entities are generated to describe the GIS- and BIM-related remote resources, with a specific attribute <si:link> showing the URI link. The web service module can read the above information and conduct the data transfer. Figure 2 shows an example of an RDF file with project information and the BIM and GIS links.
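The Python sketch below (using the rdflib library) illustrates the kind of RDF content described above; the namespaces, URIs, and file names are placeholders rather than the project's actual resources.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF

SI = Namespace("http://example.org/si#")        # assumed namespace for the si:link attribute
PRJ = Namespace("http://example.org/project#")

g = Graph()
g.bind("si", SI)
g.bind("prj", PRJ)

# Ontology classes for the two data models.
g.add((PRJ.BIMModel, RDF.type, OWL.Class))
g.add((PRJ.GISModel, RDF.type, OWL.Class))

# Instances (rdf:Description entities) pointing at the remote IFC and GML resources,
# each carrying a link that the web service module can follow.
bim = URIRef("http://example.org/project/highway/bim")
gis = URIRef("http://example.org/project/highway/gis")
g.add((bim, RDF.type, PRJ.BIMModel))
g.add((bim, SI.link, Literal("http://example.org/data/road_model.ifc")))
g.add((gis, RDF.type, PRJ.GISModel))
g.add((gis, SI.link, Literal("http://example.org/data/terrain.gml")))

print(g.serialize(format="xml"))                 # RDF/XML, as described in the text
```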

Figure 2. Example of an RDF file with project information and the BIM and GIS links.

CASE STUDY

The case study was conducted and applied to a highway construction project
with earth excavation/embankment, and road pavement. The total length of the
project is 318 meters and the total construction cost was $17,182,637. The quantities
of cut and fill are output from the IFC model based on the information obtained from
the IFC entities. The general method of quantifying cuts and fills is based on the
existing topography elevation in GIS data by locating the IFC model at different
elevations.


The goal of the earthwork simulation is to minimize the earthwork by balancing the cut and fill operation. As the road elevation changes, the earthwork volume increases or decreases. The total range of elevations was from 1,800 feet to 3,600 feet, and the least amount of earthwork (11,824 cy of excavation) was found at an elevation of 3,100 feet. The excavation varied from 1,200,000 cy at 1,800 feet to 165,000 cy at 3,600 feet, and the embankment from 13,062 cy at 1,800 feet to 548,200 cy at 3,600 feet.
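A highly simplified Python sketch of this balancing search is given below. The actual calculation operates on the GIS triangle groups and the IFC road geometry; this illustration instead assumes the terrain has been sampled into cells with known plan areas, so it only conveys the idea of sweeping candidate road elevations.

```python
def cut_fill_at_elevation(ground_elevations, cell_areas, road_elevation):
    """Approximate cut and fill volumes (cubic yards) for one candidate road elevation.
    ground_elevations: existing ground elevation of each terrain cell (ft);
    cell_areas: plan area of each cell (sq ft)."""
    cut = sum(a * (g - road_elevation)
              for g, a in zip(ground_elevations, cell_areas) if g > road_elevation)
    fill = sum(a * (road_elevation - g)
               for g, a in zip(ground_elevations, cell_areas) if g < road_elevation)
    return cut / 27.0, fill / 27.0               # 27 cubic feet per cubic yard

def best_balancing_elevation(ground_elevations, cell_areas, candidate_elevations):
    """Sweep the candidate elevations and keep the one with the least total earthwork."""
    def total_earthwork(elev):
        cut, fill = cut_fill_at_elevation(ground_elevations, cell_areas, elev)
        return cut + fill
    return min(candidate_elevations, key=total_earthwork)
```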

CONCLUSIONS

In this paper, a highway data model based on the IFC BIM data schema was used to store infrastructure information. Then, a Geographic Information System (GIS) file was
used to store geographic information, such as land boundaries, environmentally
sensitive regions, and topographic data. In order to easily retrieve and integrate those
two types of data format, the semantic web data schema was implemented to
exchange data in infrastructure (BIM) and geographic information (GIS) in two ways,
which greatly improved the interoperability of the current approach.
The developed data integration system was applied to earthwork calculations for a highway construction project located in the southern part of Korea, consisting of earthwork and road pavement infrastructure. The related project data retrieved by
semantic web file from remote BIM and GIS files showed a seamless data integration
without creating a laborious data schema process. A prototype system was
implemented to process initial retrieved data which is then analyzed by the genetic
algorithms to perform multiple cut and fill simulations. An optimal construction
equipment plan is generated based on the simulation result.


Towards Understanding End-user Lighting Preferences in Office Spaces by Using Immersive Virtual Environments
Arsalan Heydarian1, Evangelos Pantazis2, Joao P. Carneiro3, David Gerber4, Burcin Becerik-Gerber5
1,2PhD Student, Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, Los Angeles, CA; PH (540) 383-6422; Fax (213) 744-1426; email: {heydaria, epantazi}@usc.edu
3Graduate Student, Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, Los Angeles, CA; PH (213) 300-3514; Fax (213) 744-1426; email: [email protected]
4Assistant Professor, School of Architecture and Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, Los Angeles, CA; PH (213) (617) 794-7367; Fax (213) 744-1426; email: [email protected]
5Assistant Professor, Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, Los Angeles, CA; PH (213) 740-4383; Fax (213) 744-1426; email: [email protected]

ABSTRACT
Buildings and their systems are primarily designed based on several
assumptions about end-users' requirements and needs, which in many cases are
incomplete and result in inefficiencies during the operation phase of buildings. With
advancements in the fields of augmented and virtual reality, designers and engineers
now have the opportunity to collect information about end-users' requirements,
preferences, and behaviors for more informed decision-making during the design
phase. These approaches allow buildings to be designed around the users, with the
goal that the design will result in reduced energy consumption and improved building
operations. The authors examine the effect of design features on occupants'
preferences and performance within immersive virtual environments (IVEs).
Specifically, this paper presents an approach to understand end-users' lighting
preferences and collect end-user performance data through the use of IVEs.
INTRODUCTION
Buildings' energy use accounts for roughly 45 percent of the energy
consumption in the United States (EPA 2013). Buildings and their systems are
generally designed to operate based on code-defined occupant comfort ranges to ensure
satisfactory temperature, luminance, and ventilation, and on standardized
(recommended) set-points intended to accommodate occupants' needs and comfort
levels (Brandemuehl and Braun 1999). Previous research has shown that there is a weak
correlation between these codes and the satisfactory ranges actually reported by
occupants, and these standard set-points often fail to provide comfort and satisfaction
in buildings (Barlow and Fiala 2007; Jazizadeh et al. 2012). Research has also suggested
that by tailoring the design of a building's elements and systems around occupants,
there is a potential to reduce the energy consumption of buildings as well as increase
occupant satisfaction (Janda 2011; Klein et al. 2012). User-centered design (UCD) has
been shown to be an effective approach to improving a final product based on end-users'
needs in many domains, including software design and the automotive industry.
Researchers have proposed that the architecture, engineering, and construction (AEC)
industry adopt the concept of UCD by involving end-users early in the design phase
(Bullinger et al. 2010). Although previous studies have used behavioral models to
simulate occupant behaviors (including preferences, requirements, comfort levels, etc.)
during the design phase to improve the energy efficiency of buildings (Hoes et al. 2009;
Reinhart 2004), these studies do not usually provide a realistic representation of
occupant behavior. As people spend approximately 90% of their time indoors
(Frontczak and Wargocki 2011), it is crucial to study the interaction between occupants
and their indoor environment in order to both increase occupant satisfaction and reduce
energy consumption. To address this issue and to develop designs that are centered
around the occupants, there is a need for accurate measurement of occupant preference
and behavior.
The AEC industry has adopted the use of building information modeling (BIM)
to visually communicate and exchange information among parties involved in a project
and end-users. Although BIM provides 3D models and the necessary geometric and
semantic information about the building and its components, prior research illustrated
that BIM does not provide the necessary spatial feelings that the end-user might need
to provide design feedback (Kozhevnikov 2008; Shiratuddin et al. 2004). In order to
effectively allow end-users to fully understand the design, IVEs have opened the door
for engaging end-users in the design process of projects by combining the strengths of
pre-construction mock-ups that provide a sense of presence to users and BIM models
that provide the opportunity to evaluate alternative design options in a timely and cost
efficient manner. Furthermore, by creating a better sense of realism through its one-to-
one scale, building engineers and designers can incorporate IVEs in their work
processes as a tool to measure end-user behavior, understand the impact of design
features on behavior, and receive constructive user feedback during the design phase.
The authors' previous research has suggested that these environments can
provide the sense of presence found in physical mockups and enable the evaluation of
numerous potential design alternatives in a timely and cost-efficient manner
(Heydarian et al. 2015).
As part of their research, the authors aim to improve the design process through
informed design decision making as a way to improve overall building performance by
using IVEs. The goal is to collect end-user data, incorporate them within the design
decision tools, methods, and technologies and measure the impact of designing with
end-user data on overall design quality. Specifically, the authors aim to examine
different design features' (e.g., spatial geometries, wall and window geometries and
types, etc.) effects on end-users' performance and preferences. Moreover, the
authors aim to examine such effects by manipulating different design features within
IVEs in order to create actual human profiles based on their preference and
performance. These profiles then can be used to better design environments that
increase end-user satisfaction and possibly reduce the total energy consumption.
As a first step towards this goal, in this paper, the authors study lighting settings
(luminance and illuminance) in an office environment in order to examine the effect of
manipulating different design features on user performance. Previous research has
identified lighting as one of the major factors that influence occupants' performance in
indoor environments (Boyce et al. 1989; Romm 1994). For instance, Fisk and
Rosenfeld (1997) have shown that activities such as reading speed and comprehension
(Smith and Rea 1982; Veitch 1990), locating and identifying objects, and writing are
highly affected by luminance, amount of glare, and spectrum of light. This stream of
research suggests that designing environments based on people's lighting preferences
has the potential to affect their productivity and performance. Prior research has also
shown that personal preferences have more of an effect on people's choice of light
source than the available daylight when occupants use their offices for a short period
of time (Correia da Silva et al. 2013). Being able to design environments with
satisfactory lighting levels based on the occupants' preferred settings could not only
improve user satisfaction but also potentially reduce the total energy
consumption in buildings.
This paper presents an approach to collect data about end-users’ different
lighting preferences and performance in order to form profiles. The lighting preferences
(natural and artificial) of participants are measured within an IVE along with their
performance on doing office-related activities (reading and identifying objects) in
participant’s  preferred lighting settings. The paper presents the research methodology,
the IVE system for data acquisition, initial pilot profile data, and discussion on the
proposed approach and planned future work.
METHODOLOGY
The profiles are based on end-users’  lighting preferences and performance in
their preferred light settings. These parameters are measured based on the choices end-
users make to adjust the lighting levels and their performance on a set of assigned visual
tasks.
In order to create user profiles, a number of participants were recruited to
measure their preferred light settings in an office environment in order to perform a set
of office related tasks. Once participants chose their most preferred environment based
on the different lighting levels, the 3D models’ settings (artificial light settings,
geographical location, time of day, etc.) were imported into a simulation software in
order to collect the light maps and lux values of the entire office area. Along with the
lux values, their performance data (reading and comprehension) was collected.
Experiment Design. Although there are many design alternatives that could
affect the amount of lighting levels in an office space (e.g., number, type, and size of
windows, type of light bulbs, geometrical design of the room, reflective surfaces, etc.),
the authors designed their experimental environment similar to one of the office spaces
of an actual office building. As a result, a 150 square meter (10 m x 15 m x 3 m) office
space was modeled for this experiment. The designed office space consisted of three
windows (a set of blinds per window) and 12 light fixtures (three light bulbs per fixture)
with the possibility of having three artificial light settings (one light bulb on, two light
bulbs on, and three light bulbs on).
The modeled environment was used to measure the participants' most preferred
light settings, as well as their performance on a set of assigned tasks (reading speed and
comprehension) in the same lighting environment. The participants were placed in a
dark room (Figure 2a) and were instructed to set up the room's lighting levels based on
their most preferred settings (in terms of use of natural light, artificial light, or a
combination of both, as well as the amount of lighting). The participants had the options
to open/close each set of blinds to increase/decrease the availability of natural light and
turn the light switches on/off to control the artificial light levels in the room.

Figure 1 – Process for data collection and processing


Model and Apparatus. The base structure and geometry of the office space
was designed in Revit© 2015. 3ds Max© was used to optimize the model, add materials,
furniture, artificial and natural lighting, texture, shadows, reflections to the office
space. The model was then rendered to texture (creating light and texture maps) in
order to achieve a photorealistic 3D model. Once all 32 different lighting possibilities
(different combinations of three windows and three artificial light settings) were
rendered and all lighting and texture maps were acquired, the model was imported to
Unity game engine as an FBX file. Figure 2 shows a sample of these models with
different light settings. In order to ensure the lighting levels in the office were realistic
and represent real-life environments, 32 different light maps were generated through
Ladybug and Honeybee plugins (open-source plugins for Grasshopper3D, a Rhino
plugin) (Roudsari et al. 2013). Ladybug allows for a full range of environmental
analyses in a single parametric platform, and Honeybee connects Grasshopper3D to
EnergyPlus, Radiance, Daysim, and OpenStudio for building energy and daylighting
simulation, with the intent of providing a parametric platform based on many
features of these simulation tools.
Unity was also used to (1) connect an Oculus DK2 Head-Mounted Display
(HMD) and an Xbox 360 controller and link them to the imported 3D model;
(2) create the different interactive options a participant could have with the virtual
model (e.g., turning the light switch on/off, opening/closing blinds, picking up objects);
and (3) script animation and changes in scenes (e.g., lighting reflections, changes in
texture) based on the participants' interactions with the virtual office space. To increase
the sense of presence and allow participants to realistically interact with the IVE, the
Oculus Rift DK2 positional tracker was used to track the participants' neck
displacement (3 degrees of freedom, DoF), the HMD was used to track the head
rotation (3 DoF), and the Xbox 360 controller was used to navigate through the room,
providing 6 DoF. Figure 1 illustrates the modeling steps and the apparatus.
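The bookkeeping behind the pre-rendered scenes can be sketched as below. This is a hypothetical illustration only: it assumes the 32 combinations arise from the three blinds (each open or closed) and four artificial-light states (all bulbs off, or one, two, or three bulbs per fixture on), and the key format used to label the light/texture maps is invented for the example.

from itertools import product

BLIND_STATES = list(product(("closed", "open"), repeat=3))   # 2^3 = 8 blind combinations
BULB_STATES = (0, 1, 2, 3)                                    # bulbs on per fixture (0 = all off)

def lightmap_key(blinds, bulbs_on):
    """Build a lookup key such as 'blinds_open-closed-open_bulbs2'."""
    return "blinds_" + "-".join(blinds) + "_bulbs" + str(bulbs_on)

conditions = [lightmap_key(b, n) for b in BLIND_STATES for n in BULB_STATES]
assert len(conditions) == 32   # matches the 32 rendered lighting possibilities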

Figure 2 – Sample of different lighting levels in the IVE: (a) all blinds closed & all lights off; (b) all blinds open & all lights off; (c) all blinds closed & all lights on; (d) all blinds open & all lights on


Experimental Procedure. In order to collect participants' lighting preferences
and performance in their preferred environments, a pilot experiment (as a
proof of concept) was conducted with 15 participants. The participants were
undergraduate and graduate students at the University of Southern California between
the ages of 20 and 32. Prior to the experiment, the participants were provided with an
IRB (Institutional Review Board) approved consent form. They were briefly informed
about the experiment without disclosing any information that would potentially affect
their responses and behavior.
Once the participants reviewed and agreed to the consent form, they were
provided with a brief training on how to use the Xbox controller for navigating within
the space and interacting with the model. They were then asked to put the HMD on and
walk in the room; the initial lighting conditions in the room were set to be dark (Figure
2a), with just a minimum amount of light available (light coming through and around
the blinds) that would allow the participants to see the objects in the room. As part of
their training, in order to get more familiar with the office environment, learn how to
navigate within the space, and visualize the changes in light intensity levels by
opening/closing blinds and/or turning the light bulbs on/off, they were asked to take a
few minutes to interact with the blinds, the light switch, and the objects within the room
(e.g., how bright the room would get if one or two blinds were opened along with
having one of the light bulbs on). Once they felt comfortable with the space, they were
asked to set the lighting levels to their most preferred levels for performing office
related tasks such as writing or reading a paper. They were then provided with a short
passage (approximately 350 words) to read. Once completed, they were asked to
remove the HMD and answer a few questions related to the passage.
Additionally,   people’s   preferences for light sources and intensity levels may
differ depending on individual differences (e.g., mood, gender, personality, etc.). One
major individual difference is  one’s  concern  about  the  environment. The authors also
administered  a  questionnaire  that  assessed  the  participant’s  degree  of  environmentally-
friendly behavior. Upon completing the questionnaire, they were thanked and
dismissed. Figure 3 shows a participant navigating within the IVE and interacting with
different light settings.


Figure 3 – A participant setting-up her most preferred lighting setting in IVE


END-USER LIGHTING PROFILES
Once the participants’ lighting preferences were collected, their preferred
settings were matched to the generated light maps. For instance, if participant A’s
preference was to have three blinds open and two light bulbs on, the already generated
light map that represented that setup was selected. It is important to note that these light
maps are generated based on the lighting intensity values from 3ds Max along with the
exact geographical location and time of day that were inputted into the honeybee plugin
(Figure 4). The light maps provide a set of lux values for the specific light settings of
the  room  based  on  the  participants’  preference.  

Figure 4 – Example of lux values and light maps based on participant profiles.
The light maps represent the room from the top view.
In addition to these light maps, the participants' reading speed and
comprehension were measured (Figure 4). The participants' reading comprehension
was measured based on the number of questions they answered correctly; their reading
speed was measured in words read per second. Since there were only four
comprehension questions, the researchers chose 75 percent accuracy (at least three out
of four) as a reasonably good performance level. Figure 4 shows the profiles for three
different participants in terms of comprehension, reading speed, preferred light setting,
and preferred lighting map.
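A minimal sketch of how one such profile record could be assembled from the quantities just described is given below; the lux-map lookup table, passage length, and field names are hypothetical placeholders rather than the authors' data structures.

PASSAGE_WORDS = 350                      # approximate length of the reading passage
LUX_MAPS = {"blinds_open-open-open_bulbs2": [[310.0, 295.0], [280.0, 270.0]]}   # hypothetical lux grid

def build_profile(participant_id, preferred_key, reading_time_s, correct, total=4):
    """Combine a preferred light setting, its lux map, and task performance into one profile."""
    return {
        "participant": participant_id,
        "preferred_setting": preferred_key,
        "lux_map": LUX_MAPS.get(preferred_key),
        "reading_speed_wps": PASSAGE_WORDS / reading_time_s,   # words read per second
        "comprehension": correct / total,
        "good_performance": correct / total >= 0.75,           # at least three of four correct
    }

profile_a = build_profile("A", "blinds_open-open-open_bulbs2", reading_time_s=120.0, correct=3)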
CONCLUSION, LIMITATIONS AND FUTURE WORK
The ability to design buildings around the needs and comfort levels of
occupants can result in better interactions between buildings and their occupants,
leading to higher end-user satisfaction and, more importantly, a reduction in building
energy consumption. The work presented in this paper focuses on understanding
end-users' preference and performance data using IVEs, with the long-term goal of
integrating these data into the design phase of buildings. In this paper, an approach to
collect end-users' preference and performance data was presented by allowing the
participants to choose their preferred light settings within IVEs.
The presented work in this paper is the first step to create end-user profiles
based on their lighting preferences through the use of IVEs. This work thus additionally
contributes to the already existing work on IVE, by suggesting that IVEs can be used
to study human preference and performance. This paper presents pilot data for a few
profiles. In order to develop geometries and improved office spaces based on end-user
preferences, there is a need for a large amount of data to create end-user lighting
profiles. In future work, the authors will collect this data in order to form more specific,
accurate profiles of a population. In addition, different architectural design options
(e.g., type and size of windows, materials, etc.) could affect the available lighting levels
in an office environment. In future work, the authors will evaluate different effects of
these  design  features  on  users’  preference  and  performance.  
Lastly, these end-user lighting profiles can be used as inputs into multi-agent
design systems to develop new alternatives and improved models that are based on end-
users needs and preferences. By being fed these profiles, the multi-agent system will
be able to design environments that fit the specific preferences of end-users. In addition
to collecting more data on the presented approach in this paper, the authors will also
evaluate  participants’  lighting  preferences  when  provided  with  initial  lighting  settings  
in order to examine the relationship between initial lighting setting and end-users
preferences. In this way, the authors can examine how initial lighting conditions
increase/decrease  people’s  propensity  to  change  the  light  as  well  as  their  performance  
in the chosen lighting conditions. By studying human behavior through IVEs, this
future research hopes to devise a methodology, through which energy consumption in
buildings can be reduced.
ACKNOWLEDGMENT
This project is supported by the National Science Foundation under contract
1231001. Any discussion, procedures, results, and conclusions presented in this paper
are the authors' views and do not reflect the views of the National Science Foundation.
Special thanks to all the participants and people who contributed to this project,
specifically Ye Tian for helping to set up the simulations within Rhino and the
Honeybee plugin, and Saba Khashe for her contribution in helping to prepare and run
the experiments.
REFERENCES
Barlow, S., and Fiala, D. (2007). "Occupant comfort in UK offices—How adaptive comfort
theories might influence future low energy office refurbishment strategies." Energy
and Buildings, 39(7), 837-846.
Boyce, P., Berman, S., Collins, B., Lewis, A., and Rea, M. (1989). "Lighting and human
performance: A review." Washington (DC): National Electrical Manufacturers
Association (NEMA) and Lighting Research Institute.

Brandemuehl, M. J., and Braun, J. E. (1999). "The impact of demand-controlled and
economizer ventilation strategies on energy use in buildings." Univ. of Colorado,
Boulder, CO (US).
Bullinger, H.-J., Bauer, W., Wenzel, G., and Blach, R. (2010). "Towards user centred design
(UCD) in architecture based on immersive virtual environments." Computers in
Industry, 61(4), 372-379.
Correia da Silva, P., Leal, V., and Andersen, M. (2013). "Occupants interaction with electric
lighting and shading systems in real single-occupied offices: Results from a monitoring
campaign." Building and Environment, 64(0), 152-168.
EPA (2013). "National Awareness of Energy Star,"
<http://www.energystar.gov/sites/default/uploads/about/old/files/2013%2020CEE%2020Report_2508%2020compliant.pdf>.
Fisk, W. J., and Rosenfeld, A. H. (1997). "Estimates of improved productivity and health from
better indoor environments." Indoor air, 7(3), 158-172.
Frontczak, M., and Wargocki, P. (2011). "Literature survey on how different factors influence
human comfort in indoor environments." Building and Environment, 46(4), 922-937.
Heydarian, A., Carneiro, J., Gerber, D., Becerik-Gerber, B., Hayes, T., and Wood, W. (2015).
"Immersive Virtual Environments versus Physical Built Environments: A
Benchmarking Study for Building Design and User-Built Environment Explorations."
Automation in Construction.
Hoes, P., Hensen, J. L. M., Loomans, M. G. L. C., de Vries, B., and Bourgeois, D. (2009).
"User behavior in whole building simulation." Energy and Buildings, 41(3), 295-302.
Janda, K. B. (2011). "Buildings don't use energy: people do." Architectural Science Review,
54(1), 15-22.
Jazizadeh, F., Kavulya, G., Kwak, J., Becerik-Gerber, B., Tambe, M., and Wood, W. (2012).
"Human-Building Interaction for Energy Conservation in Office Buildings."
Construction Research Congress, 1830-1839.
Klein, L., Kwak, J.-y., Kavulya, G., Jazizadeh, F., Becerik-Gerber, B., Varakantham, P., and
Tambe, M. (2012). "Coordinating occupant behavior for building energy and comfort
management using multi-agent systems." Automation in Construction, 22(0), 525-536.
Kozhevnikov, M. (2008). "The Role of Immersive 3D Environments in Three-Dimensional
Mental Rotation." Proceedings of the Human Factors and Ergonomics Society Annual
Meeting, 52(27), 2132-2136.
Reinhart, C. F. (2004). "Lightswitch-2002: a model for manual and automated control of
electric lighting and blinds." Solar Energy, 77(1), 15-28.
Romm, J. J. (1994). Lean and clean management: How to boost profits and productivity by
reducing pollution, Kodansha International New York/Tokyo/London.
Shiratuddin, M. F., Thabet, W., and Bowman, D. (2004). "Evaluating the effectiveness of
virtual environment displays for reviewing construction 3D models." CONVR 2004,
87-98.
Smith, S. W., and Rea, M. S. (1982). "Performance of a reading test under different levels of
illumination." Journal of the Illuminating Engineering Society, 12(1), 29-33.
Veitch, J. A. (1990). "Office noise and illumination effects on reading comprehension."
Journal of Environmental Psychology, 10(3), 209-217.


Temporal and Spatial Information Integration for Construction Safety Planning

Sooyoung Choe1 and Fernanda Leite2


1PhD Candidate, Construction Engineering and Project Management Program,
Department of Civil, Architectural and Environmental Engineering, The University of
Texas at Austin, 301 E Dean Keeton St. Stop C1752, Austin, TX 78712-1094; Ph
(510) 529-8674; email: [email protected]
2Assistant Professor, Construction Engineering and Project Management Program,
Department of Civil, Architectural and Environmental Engineering, The University of
Texas at Austin, 301 E Dean Keeton St. Stop C1752, Austin, TX 78712-1094; Ph
(512) 471-5957; Fax: (512) 471-3191; email: [email protected]

ABSTRACT

Since the Occupational Safety and Health Act of 1970 was established, which
places the responsibility of construction safety on the employer, various injury
prevention strategies have been developed and resulted in a significant improvement
of safety management in the construction industry. However, during the last decade,
construction safety improvement has decelerated and, due to the dynamic nature of
construction jobsites, most safety management activities have focused on safety
monitoring during the construction phase. Traditional safety planning approaches rely
mostly on static information, tacit knowledge, regulations, company safety policies,
and 2D drawings. As a result, site-specific dynamic information, temporal (e.g. when
and who will be exposed to potential hazards) and spatial (e.g. location of dangerous
zones) information are currently not specifically addressed. This paper presents a
formalized 4-dimensional (4D) construction safety planning process that addresses
site-specific temporal and spatial safety information. The safety data, which includes
general safety knowledge, site-specific temporal and spatial information, will be
integrated from a project schedule and a 3D model, respectively. The proposed safety
planning approach is expected to provide safety personnel with a site-specific
proactive safety planning tool that can be used to better manage jobsite safety. In
addition, visual safety materials can also aid in training workers on safety and,
consequently, being able to identify site-specific hazards and respond to them more
effectively.

INTRODUCTION

Construction remains the second most hazardous industry, especially due to
the dangerous combination of pedestrian workers and heavy construction vehicles and
machinery, such as dump trucks, dozers, and rollers (Bureau of Labor Statistics,
2012). According to the Bureau of Labor Statistics, in 2012, 806 construction workers
were killed, which represents 17.4% of fatal work injuries in the United States
(Bureau of Labor Statistics, 2012). 806 fatalities indicate that the fatal work injury
rate is 9.9 for every 100,000 full-time equivalent construction workers, and U.S.
construction workers are approximately 2.75 times more likely to be killed compared
to the average fatal work injury rate for all industries, which is 3.6. Of the 806
construction industry fatal injuries, 39.8% are construction vehicle and machinery-
related fatalities (Bureau of Labor Statistics, 2012).
Since the Occupational Safety and Health Act of 1970 was established, which
places the responsibility of construction safety on the employer, fatality and disabling
rate in the construction industry has dramatically decreased. After this federal law
came into effect, various injury prevention strategies have been developed and
resulted in a significant improvement of safety management in the construction
industry (Esmaeili and Hallowell, 2012). However, during the last decade,
construction safety improvement in terms of fatality and disabling rate has
decelerated (Esmaeili and Hallowell, 2012) and fatality rate in the construction
industry is still much higher than other industries (Bureau of Labor Statistics, 2012).
Therefore, innovative injury prevention practices such as integration of project
schedules and information technology can be leveraged to significantly improve
current construction safety management practices.
Construction safety management activities are typically categorized into
safety planning and execution processes. Despite the interdependent relationship
between safety planning and execution processes, current safety planning practices
lack a systematic approach to effectively identify and manage hazards prior to
construction because of limited safety data and the dynamic nature of construction.
Due to ineffective safety planning processes, safety planning and execution processes
are generally segregated and, consequently, most safety execution processes rely on
ad-hoc safety activities during construction. Given that the majority of hazards are
generated from dynamic conditions and activities in construction work zones,
developing dynamic safety plans is fundamental in order to improve the safety
execution process and, consequently, site-specific safety management at the jobsite.
The main objective of this paper is to systematically formalize the
construction safety planning process through a 4-dimensional (4D) environment,
which integrates 3D and time, to address site-specific temporal and spatial safety
information.

BACKGROUND RESEARCH

Construction Safety and Schedule Integration

Construction projects are dynamic (Bobick, 2004), and unique factors include
frequent work team rotation, weather, changes in topography, and different
concurrent activities involving various combinations of workers and equipment
(Rozenfeld et al., 2010). Due to the dynamic characteristics of construction sites, the
construction schedule gained attention when combined with safety planning. Two
main safety sources, safety regulations and risk data, have been integrated with a
project schedule. Kartam (1997) attempted to integrate the Occupational Safety and
Health Administration (OSHA) regulations into schedules. Saurin et al. (2004) and
Cagno et al. (2001) attempted to link construction activities and safety control
measures prepared by safety experts. The main objective of integrating risk data into
project schedules is to identify high risk work periods and minimize possible risks in
the identified periods prior to the start of the activities. Akinci et al. (2002) showed
the possibility of automatically detecting and avoiding hazardous situations by
integrating the project schedule into their time-space conflict analysis tool. Wang et al.
(2006) attempted to identify high risk periods by integrating activities and expected
injury cost data in a simulation-based model (SimSAFE). Yi and Langford (2006)
stated that hazardous situations vary according to different project progress and the
schedule should be considered for the hazard identification process. To address this
issue, Yi and Langford (2006) attempted to predict when and where risky situations
would occur by combining historical accident sources and developed ‘safety resource
scheduling’. Navon and Kolton (2006 and 2007) introduced an automated safety
monitoring and control model to identify fall hazards and possible locations.
Hallowell et al. (2011) considered task interaction risks and suggested an integrated
model of safety risk data into project schedules based on Yi and Langford’s (2009)
model. Esmaeili and Hallowell (2013) developed an integration model by identifying
common highway construction work tasks and quantifying risks of tasks in the
schedule.
Previous studies emphasized the importance of safety and schedule integration
to address when and where hazards are expected, using third-party applications.
However, there are few studies addressing how safety knowledge is dynamically
updated as a project schedule is updated. Since a project schedule is frequently
updated, this paper focuses on a dynamic linkage between safety knowledge and the
schedule that requires minimal effort to maintain.

BIM for Construction Safety

According to Esmaeili and Hallowell (2012), the construction industry is
saturated by the current safety innovations, and new injury prevention practices, such
as integrating information technologies into construction safety, need to be introduced
to improve construction safety. Since traditional 2D drawings and paper-based
sources for safety planning limit the capability to identify and analyze hazards prior
to construction, Building Information Modeling (BIM) has been widely studied. In
the Architecture, Engineering and Construction (AEC) industry, BIM has been widely
used for project planning, designing, scheduling, and estimating. Related to safety,
BIM has been studied in two main aspects, 4D visualization and application of rule
sets. Using 4D visualization enhances the detection of spatial conflict or congestion
prior to construction (Mallasi and Dawood, 2004; Leite et al. 2011). In addition, 4D
simulation can overcome the problem due to the conventional safety planning
practice using 2D drawings and helps safety personnel detect and analyze hazards
effectively (Chantawit et al., 2005). A series of studies (Rozenfeld et al., 2009 and
2010; Sacks et al., 2009) developed a spatial and temporal safety/schedule integration
model in the 4D environment. Their preconstruction safety planning tools considered
task interaction risk factors and used user-provided semi-automatic data to analyze
hazardous activities during the 4D simulation. Another application of BIM for safety
is applying safety rule-checking systems to automatically detect hazards and generate
corresponding safety measures (Benjaoran and Bhokha, 2010; Zhang et al., 2013).
Previous studies showed the effectiveness of visualizing safety information
using BIM. However, there are few studies considering the safety impacts of concurrent
activities in a 4D environment, and this paper addresses that gap.

RESEARCH SCOPE

Even though all safety practices are important and interrelated, this paper will
focus on improving the macro level of construction safety planning practices, given
that subsequent safety practices can be significantly impacted by macro level plans.
This proposed safety tool will automatically address dynamic updates of construction
documents, especially project schedules, rather than dynamic changes of micro level
work situations such as uncertainties of workers, equipment, weather, or activity
delays which are not updated in a project schedule or 3D model. These kinds of micro
level uncertainties should be considered in the field of real-time jobsite monitoring
and may be integrated with the macro level safety planning process. In addition, the
proposed safety framework will identify risky work periods and risky work zones, but
specific risk controls will not be provided because it is believed that properly trained
safety experts should ultimately make risk control decisions.

RESEARCH METHODOLOGY

In order to improve construction vehicle- and machinery-related safety, this
paper focuses on horizontal construction projects, which involve construction vehicle-
and machinery-intensive activities. A case study adapted from the Mopac
Roadway Improvement project in Austin was selected as a proof-of-concept to test
the overall research approach. A typical highway extension 3D model was created
and a project schedule with eight representative activities was generated. In addition,
the 3D model and the project schedule were integrated into a 4D simulation. Figure 1
illustrates the research framework.


Figure 1. Research framework.

As shown in Figure 1, the relative risks of eight activities typically used in
horizontal projects were quantified by safety experts on a 1-to-100 scale. For the
temporal information integration, activity risks were integrated with a project
schedule and risky work periods were analyzed. For the spatial information integration,
concurrent activities were identified and a safety 4D simulation was generated by
linking the safety schedule and the 3D model.

Temporal Information Integration

The objective of integrating activity risk and the project schedule is to identify
high-risk work periods. In the temporal information integration, activity risks were
dynamically integrated with the schedule. By making the initial activity risk a property
of each activity in the schedule, risky work periods were automatically estimated and
updated. As projects become more complex and schedule pressures increase, it is
common for multiple activities to be planned for the same time. It is, thus, important to
predict high-risk periods and prepare safety resources in advance. High-risk periods
were calculated by summing the initial risks of all activities performed on the same day.
Estimated risky work periods were presented in a graph-based format and, thus,
intuitively allow safety personnel to prepare safety resources ahead of time. Figure 2
shows a sample of a safety schedule as an outcome of the temporal information
integration with the case study project. In Figure 2, the bar chart at the bottom right-hand
corner of the figure indicates the safety risk profile of the project. The height of
each bar indicates the total amount of safety risk on a certain work day, and the unit of
safety risk is relative risk (on the 1-to-100 scale) per day. In addition, this work period
graph is updated automatically as the schedule is updated.
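The daily summation behind the risk profile can be sketched as follows; the activities, dates, and risk values are hypothetical, and the actual tool reads them from the project schedule rather than a hard-coded list.

from collections import defaultdict
from datetime import date, timedelta

# (activity, start, finish, relative risk on the 1-to-100 scale) -- hypothetical values
schedule = [
    ("Clearing and grubbing", date(2015, 6, 1), date(2015, 6, 5), 40),
    ("Excavation",            date(2015, 6, 4), date(2015, 6, 12), 85),
    ("Embankment",            date(2015, 6, 8), date(2015, 6, 15), 70),
]

daily_risk = defaultdict(int)
for name, start, finish, risk in schedule:
    day = start
    while day <= finish:
        daily_risk[day] += risk          # concurrent activities add their risks for that day
        day += timedelta(days=1)

high_risk_days = sorted(daily_risk, key=daily_risk.get, reverse=True)[:3]   # worst work days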


Figure 2. Safety schedule with case study.

Spatial Information Integration

Site-specific spatial information integration aims at identifying concurrent
activities and visualizing work zone risks to improve safety communication by
linking the safety schedule and the project 3D model. As Hallowell et al. (2011)
emphasized, simple summation of activities' risks cannot fully address the safety
impacts of concurrent activities because this approach does not consider dangers
caused by other activities. For example, a survey activity is assumed to have a low
potential risk of a struck-by accident in a preliminary safety plan, since this activity
only includes laborers without any construction equipment. However, if an
activity for installing traffic control devices is scheduled at the same time and in the
same zone as the survey activity, laborers in the survey activity will be exposed to the
potential risk of being struck by a mobile crane. Therefore, it is important to predict
and prepare for the safety impacts of concurrent activities.
Figure 3 shows a sample of safety 4D simulation with colored work zone risks
(green: low risk, purple: medium risk, red: high risk).

Figure 3. Safety 4D simulation with case study.



In this paper, a list of concurrent activities is presented in the safety 4D
simulation when multiple activities share the same work zone, rather than quantifying
the safety risk of concurrent activities. With an identified list of concurrent activities,
which is also dynamically linked with the updated schedule, safety personnel can
provide proactive safety training for workers at the jobsite.
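A minimal sketch of this concurrent-activity check, with hypothetical activities and zones, is shown below: two activities are flagged when their date ranges overlap and they occupy the same work zone, which is the list surfaced in the safety 4D simulation.

from datetime import date
from itertools import combinations

activities = [   # hypothetical zone assignments and dates
    {"name": "Survey",                  "zone": "Z1", "start": date(2015, 6, 8), "finish": date(2015, 6, 10)},
    {"name": "Install traffic control", "zone": "Z1", "start": date(2015, 6, 9), "finish": date(2015, 6, 11)},
    {"name": "Paving",                  "zone": "Z2", "start": date(2015, 6, 9), "finish": date(2015, 6, 14)},
]

def dates_overlap(a, b):
    return a["start"] <= b["finish"] and b["start"] <= a["finish"]

concurrent = [(a["name"], b["name"], a["zone"])
              for a, b in combinations(activities, 2)
              if a["zone"] == b["zone"] and dates_overlap(a, b)]
# concurrent -> [('Survey', 'Install traffic control', 'Z1')]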

CONCLUSIONS

The objective of this research was to systematically formalize the construction
safety planning process in a 4-dimensional (4D) environment to address site-specific
temporal and spatial safety information. Traditional safety planning approaches
mostly rely on static information, tacit knowledge as well as regulations, company
safety policies, and 2D drawings. As a result, site-specific dynamic information,
temporal (e.g. when and who will be exposed to potential hazards) and spatial (e.g.
location of dangerous zones) information are currently not specifically addressed. The
proposed framework addressed this gap by assessing work period and work zone
safety risk, integrating activity safety risk data with dynamic project information:
a project schedule and a 3D model. Moreover, the proposed safety planning approach
with temporal and spatial input can provide safety personnel with a site-specific
proactive safety planning tool and the proposed macro-level site-specific safety
planning framework can be integrated with micro-level safety practices such as real-
time safety monitoring to optimize jobsite safety management. In addition, visual
safety materials can also aid in training workers on safety and, consequently, being
able to identify site-specific hazards and respond to them more effectively.

REFERENCES

Akinci, B., Fischer, M., Levitt, R., and Carlson, R. (2002). "Formalization and
Automation of Time-Space Conflict Analysis." Journal of Computing in Civil
Engineering, 16(2), 124-134.
Benjaoran, V., and Bhokha, S. (2010). "An integrated safety management with
construction management using 4D CAD model." Safety Science, 48(3), 395-
403.
Bobick, T. G. (2004). "Falls through Roof and Floor Openings and Surfaces,
Including Skylights: 1992-2000." Journal of Construction Engineering and
Management, 130(6), 895-907.
Bureau of Labor Statistics (2012). "Revisions to the 2012 Census of Fatal
Occupational Injuries (CFOI) counts." United States Department of Labor,
Washington, DC.
Cagno, E., Giulio, A. D., and Trucco, P. (2001). "An algorithm for the
implementation of safety improvement programs." Safety Science, 37(2001),
59-75.
Chantawit, D., Hadikusumo, B. H. W., Charoenngam, C., and Rowlinson, S. (2005).
"4DCAD-Safety: visualizing project scheduling and safety planning."
Construction Innovation, 5(2), 99-114.

Esmaeili, B., and Hallowell, M. (2013). "Integration of safety risk data with highway
construction schedules." Construction Management and Economics, 31(6),
528-541.
Esmaeili, B., and Hallowell, M. (2012). "Diffusion of Safety Innovations in the
Construction Industry" Journal of Construction Engineering and Management,
138(8), 955-963.
Hallowell, M., Esmaeili, B., and Chinowsky, P. (2011). "Safety risk interactions
among highway construction work tasks." Construction Management and
Economics, 29(4), 417-429
Kartam, N. A. (1997). "Integrating Safety and Health Performance into Construction
CPM." Journal of Construction Engineering and Management, 123(2), 121-
126.
Leite, F., Akcamete, A., Akinci, B., Atasoy, G., Kiziltas, S. (2011) "Analysis of
modeling effort and impact of different levels of detail in building information
models”.  Automation in Construction, 20(5), 601–609.
Mallasi, Z., and Dawood, N. (2004). "Workspace competition: assignment, and
quantification utilizing 4D visualization tools." Proc., Construction
Application of Virtual Reality, Lisbon, 13-22.
Navon, R., and Kolton, O. (2006). "Model for Automated Monitoring of Fall Hazards
in Building Construction." Journal of Construction Engineering and
Management, 132(7), 733-740.
Navon, R., and Kolton, O. (2007). "Algorithms for Automated Monitoring and
Control of Fall Hazards." Journal of Computing in Civil Engineering, 21(1),
21-28.
Rozenfeld, O., Sacks, R., and Rosenfeld, Y. (2009). "'CHASTE: construction hazard
assessment with spatial and temporal exposure." Construction Management
and Economics, 27(7), 625-638.
Rozenfeld, O., Sacks, R., Rosenfeld, Y., and Baum, H. (2010). "Construction Job
Safety Analysis." Safety Science, 48(2010), 491-498.
Sacks, R., Rozenfeld, O., and Rosenfeld, Y. (2009). "Spatial and Temporal Exposure
to Safety Hazards in Construction." Journal of Construction Engineering and
Management, 135(8), 726-736.
Saurin, T. A., Formoso, C. T., and Guimaraes, L. B. M. (2004). "Safety and
production: an integrated planning and control model." Construction
Management and Economics, 22(2), 159-169.
Wang, W.-C., Liu, J.-J., and Chou, S.-C. (2006). "Simulation-based safety evaluation
model integrated with network schedule." Automation in Construction,
15(2006), 341-354.
Yi, K.-J.,   and   Langford,   D.   (2006).   “Schedule-based risk estimation and safety
planning  for  construction  projects.”  Journal of Construction Engineering and
Management, 132(6), 626-635.
Zhang, S., Teizer, J., Lee, J.-K., Eastman, C. M., and Venugopal, M. (2013). "Building
Information Modeling (BIM) and Safety: Automatic Safety Checking of
Construction Models and Schedules." Automation in Construction, 29(2013),
183-195.


Modeling Construction Processes: A Structured Graphical Approach Compared to Construction Simulation

I. Flood1
1Rinker School of Construction Management, College of Design, Construction and
Planning, University of Florida, Gainesville, FL 32611-5703. E-mail: [email protected]

Abstract

An essential part of construction project planning and control is the
development of a model of the project's construction processes. Discrete-event
simulation is the most versatile of all modeling methods, but it lacks the simplicity in
use of other modeling techniques and so has not been widely adopted in construction.
The Critical Path Method (CPM) is the most popular project modeling method in
construction since it is simple to use. However, it is limited to time-dependent
constraints and does not handle work repetition very efficiently. Many other modeling
techniques have been developed over the years but have a limited scope of
application, being aimed at specialized types of project. Linear scheduling is one such
example, being limited to work that progresses along a line, but is otherwise very
easy to use and provides powerful visual insight into the interactions between crews
and the performance of a project. Foresight is a new graphical constraint-based
method of modeling construction processes, designed to offer the simplicity in use of
CPM, the visual insight of linear scheduling, and the versatility of simulation. This
paper compares Foresight with discrete-event construction simulation in terms of the
simplicity of the resultant models for manufactured and on-site produced
components. A qualitative comparison of the two approaches is also included,
highlighting the superior visual insight provided by Foresight, a feature that facilitates
verifying a model and determining an optimal project plan.

INTRODUCTION

A genealogy of the alternative tools that have been developed for modeling
construction processes (Flood et al. 2006) suggests that they can be grouped into three
main categories: the Critical Path Methods (CPM); the linear scheduling techniques;
and discrete-event simulation. Most other tools are either hybrids of these approaches
or variants with non-structural enhancements. For example, 4D-CAD and nD-CAD
planning methods (Issa et al. 2003; Koo & Fischer 2000) that include time as a
dimension are strictly CPM based modeling tools hybridized with 3D-CAD for
visualization purposes.


Each category of modelling method is, unfortunately, only relevant to a
restricted range of construction planning problems. CPM methods are suited to
modelling projects at a relatively general level of detail, but are limited by the types
of interactions they can consider between tasks (Harris and Ioannou 1998) and,
moreover, are cumbersome when used to model repetitive processes. Linear
scheduling is targeted at projects where there is repetition at the highest level, such as
high-rise, tunneling, and highway construction work (see, for example, Matilla and
Abraham (1998)). These models are very easy to understand and provide the modeler
with strong visual insight to the dependence between performance and system
constraints. This is particularly helpful in verifying a model and identifying more
optimal ways of achieving the project’s production goals. However, linear
scheduling is limited to modeling work that is both repetitive in nature and comprises
a single sequence of tasks. In contrast, discrete-event simulation (see, for example,
Halpin and Woodhead (1976); Sawhney et al. (1998); Hajjar and AbouRizk (2002)) is
very versatile in that it can in principle model any type of interaction between tasks
and any type of construction process (including repetitive and non-repetitive work).
However, the effort and expertise required to develop and validate a simulation model
has hindered its widespread adoption within construction.
Most construction projects include a variety of processes some of which may
be best modelled using CPM while others may be better represented by linear
scheduling or simulation. However, it is not normally practical for planners to
employ more than one modelling method to manage a project. In any case, using
several tools that are not fully compatible makes it impossible to seek a globally
optimal solution to a planning problem.
Foresight (Flood 2010) is a new method of process modeling that addresses
the above issues. It has been demonstrated to be a realistic alternative to CPM, linear
scheduling and discrete-event simulation (Flood 2010), and has been shown to have
greater simplicity in use than discrete-event simulation but without compromising
modeling versatility (Flood and Nowrouzian 2014). To date, Foresight has been
applied to in-place construction work, where the items under construction are at fixed
locations and productive resources are moved between processes. This paper
considers the alternative approach of manufacturing whereby the item under
construction is moved between processes and productive resources are kept at fixed
locations. Characteristics of these processes are task and job repetition, frequent
reorganization of work to account for design variances, and batch processing of
alternative components. Foresight is compared to the CYCLONE (Halpin and
Woodhead 1976) method of simulation using a prefabricated reinforced concrete
component factory as a case study.

FORESIGHT: A STRUCTURED GRAPHICAL MODELING APPROACH

The goal in developing the new approach to modeling was to attain the
simplicity of CPM, visual insight of linear scheduling, and the modelling versatility
of simulation. In addition, hierarchical structuring of a model (see, for example,
Huber et al. (1990); and Ceric (1994)) and interactive development of a model were
identified as requisite attributes of the new approach since they facilitate model

development and aid understanding of the organization and behavior of a system.


The following is an outline of the three principal modeling concepts of Foresight and
should be read with reference to Figure 1:
• Attribute Space. This is the environment within which the model of the process
exists. Each dimension defining this space represents a different attribute
involved in the execution of the process, such as time, cost, excavators, skilled
labor, number of repetitions of an item of work, permits to perform work, and
materials. The attributes that make-up this space are those that are used to
measure performance and/or may have a significant impact on performance.
• Work Units. These are elements that represent specific items of work that need
to be completed as part of the project. They are represented by a bounded region
within the attribute space. A unit can represent work at a high level (such as
‘Construct a Batch of Components’), a low level (such as ‘Assemble Forms’) or
any intermediate level. Work units can be nested within other work units (such as
work unit ‘D’ in Figure 1), allowing the model to be understood at different levels
of abstraction, increasing its readability, reducing the likelihood of errors in the
design of the model, and reducing the amount of work required to define and
update a model. Work units can be repeated (such as work unit F in Figure 1) and
can be implemented at any level within the nesting hierarchy, thus minimizing the
amount of work required to define a model. Repetition of a work unit will include
a repetition of all relevant constraints and its nested work units and their
constraints.
• Constraints and Objectives. Constraints define the relationships between the
work units and the attribute space, either directly with the attribute space (such as
constraint ‘a’ in Figure 1) or indirectly via relationships with other work units
(such as constraints ‘b’ and ‘c’ in Figure 1). These constraints effectively define
the location of the edges of the work units. A constraint can be any functional
relationship between the borders of the work units and/or the space within which
they exist. Practical examples include: (i) ensuring that crews at different work
units maintain a safe working distance; (ii) ensuring that the demand for resources
never exceeds the number available; (iii) determining the duration for a task based
on the number of times it has already been repeated; and (iv) ensuring that idle
time for a task is kept to a minimum. The objectives are the specific goals of the
planning study, such as to maximize profits or to complete work by a deadline
(such as constraint ‘d’ in Figure 1). Fundamentally, they are the same thing as
constraints, albeit at a higher level of significance.
A specification of Foresight is that model development be implemented
interactively. That is, the visual presentation of a model is updated and all constraints
are resolved as the work units and constraints are either edited or added to the model.
This way, the modeler can see immediately the impact of any changes or additions
that are made. Another point to note is that these models are presented as a plot of
the work units within at least two dimensions of the attribute space. This form of
presentation allows the progress of work to be visualized within the model’s
functional structure. This is an extrapolation of the way in which linear scheduling
models are presented, and has the advantage of allowing the user to visualize directly
how the performance of the model is dependent on its structure.


Figure 1. Schematic illustrating use of the primary Foresight modeling concepts.

PREFABRICATED RC COMPONENT PRODUCTION

This case study compares the performance of Foresight with conventional


simulation for modeling a manufacturing process. In this case, a prefabricated
reinforced concrete component system was considered comprising multi-level
repetition of work, batch production requirements, constraints on storage space for
components, and dependence on an external supply line. Figure 2 shows the
hierarchy of work units involved in the batch production of two types of prefabricated
reinforced concrete component, and the supply of rebar. At the third level in the
hierarchy are work units representing stations in the factory where tasks such as
setting-up forms are executed or temporary storage is provided such as for the curing
of the cast concrete components. At the fourth level are the individual repetitions of
these tasks.
Figure 3 shows this section of the model with all constraints added, and is
plotted for two attributes: Units (counting the number of components produced); and
Time. The constraints would be added as the work units are added. These include:
a) The durations of each third level work unit which are defined as the difference
between the start and end of a work unit measured in the time dimension.
b) Two batches of 10 and 3 units respectively for the Type A components,
interposed with a batch of 6 Type B components. The limits on each batch are
defined in a similar way to the durations, as difference between the limits of the
parent work unit.
c) The time dependences between the finishes and starts of Set-Up Forms, Cut & Fix
Rebar, Place Concrete, Cure Concrete, and Remove Forms.
d) Place Concrete precedes Cure Concrete for each component; and (e) Cure
Concrete precedes Remove Forms for each component.


[Figure: four-level work-unit hierarchy — Prefabricated RC Component Production (level 1); Type A and Type B Component Fabrication and Rebar Delivery (level 2); the stations Set-up Forms, Cut & Fix Rebar, Place Concrete, Cure Concrete, and Remove Forms (level 3); and the individual repetitions of these tasks (level 4).]
Figure 2. Foresight model of the manufacture of RC prefabricated components, showing the work units and hierarchy, but not including any constraints.

[Figure: plot of Components (units) versus Time for the Type A and Type B component batches, annotating the rebar supply delay and the limited-storage-induced delay.]
Figure 3. Foresight model with all constraints added.

f) A limit of 3 components within the curing room at any time (a high humidity
space designed to facilitate concrete hydration). This is implemented by
introducing a new attribute Curing Space Permits, assigning all fourth level work
units within Place Concrete and Cure Concrete a value of 1 in the Curing Space
Permits dimension, and setting the first level work unit for the system to a value
of 3 in this dimension. The impact of this limit can be seen in Figure 3 whereby
every 3rd component experiences a delay to Place Concrete.
g) The final constraint is concerned with the delivery of rebar. This may be constrained in another dimension, measuring, say, weight of steel, although for convenience here it is measured in components. The constraint limits the start of

Cut & Fix Rebar and is shown in green in Figure 3. The impact of the scheduled
delivery is also indicated within this figure.
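The curing-space limit described above is essentially a renewable-resource constraint expressed through an extra attribute dimension. The sketch below is a minimal illustration of that check (the helper name and data layout are hypothetical, not Foresight's implementation): each component must obtain one of the available Curing Space Permits before Place Concrete can start.

def schedule_with_permits(cure_requests, permits=3):
    """Delay each 'Place Concrete' start until a curing-space permit is free.

    cure_requests: list of (ready_time, cure_duration) for successive components.
    Returns the adjusted (start, finish) intervals for occupancy of the curing room.
    """
    occupied = []            # finish times of components currently holding a permit
    intervals = []
    for ready, duration in cure_requests:
        occupied = [f for f in occupied if f > ready]      # release expired permits
        start = ready
        if len(occupied) >= permits:                       # wait for the earliest release
            start = max(ready, min(occupied))
            occupied = [f for f in occupied if f > start]
        finish = start + duration
        occupied.append(finish)
        intervals.append((start, finish))
    return intervals

# Components are delayed whenever all 3 curing-space permits are already in use.
print(schedule_with_permits([(i, 5.0) for i in range(6)]))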

FORESIGHT VERSUS CYCLONE BASED SIMULATION

For comparison, a conventional discrete-event simulation model was


developed of the prefabricated RC component manufacturing system considered
above. Figure 4 shows the CYCLONE diagram representing the process logic and
resource assignments for this system. CYCLONE (Halpin and Woodhead 1976) was
chosen since it was developed specifically for modeling construction processes and is
the most widely used simulation tool in construction. The model as drawn does not
consider the second batch of Type A components nor does it consider the supply of
rebar. Although CYCLONE could have represented these features, it would have required an extension to the model to enforce this logic. Such an extension could be
better handled using STROBOSCOPE (Martinez 1996), a derivative of CYCLONE.
However, although STROBOSCOPE is functionally more sophisticated than
CYCLONE, it requires considerably more expertise to use.

[Figure: CYCLONE network for Type A (batch of 10 components) and Type B (batch of 6 components) fabrication, with the activities Set-Up Forms, Cut & Fix Rebar, Place Concrete, Cure Concrete, and Remove Forms, and resources of 1 crew per activity and 3 cure spaces.]
Figure 4. CYCLONE model of manufacture of RC prefabricated components.

An important advantage of Foresight over CYCLONE is the relative


simplicity of the models. A total of 95 terms are required to define the CYCLONE
model shown in Figure 4 while just 30 terms are required to define the Foresight


equivalent (that is the version without the second batch of Type A components and
the rebar delivery). This is similar to the findings made by Flood and Nowrouzian
(2014) where they made a direct comparison between Foresight and STROBOSCOPE
for construction operations and found that Foresight required around one third of the
number of terms to define a model. It was also shown that while STROBOSCOPE
may employ 25 or more modeling concepts for a relatively simple model, the number
of basic modeling concepts employed in Foresight will never exceed 5 (the work unit,
constraint, attribute, nesting, and repetition). This comparison is for deterministic
versions of both the CYCLONE and Foresight models; if stochastic factors were
considered then both models would require the input of additional information
describing the uncertainty. For CYCLONE these parameters would define uncertainty
in the activity durations, for Foresight they would define uncertainty in the value of a
constraint. This highlights another advantage of Foresight over CYCLONE: uncertainty can be applied to any model parameter, not just activity duration, although simulation in general is also capable of this.
By comparing the model representations of Figures 3 and 4, several additional important differences between CYCLONE and Foresight can be understood. First,
note that CYCLONE requires the complete logic of the model (as represented by the
CYCLONE diagram of Figure 4) to be finalized before the system’s performance can
be predicted in a simulation run. The Foresight model, on the other hand, integrates
the structure and logic of the model and the estimated performance of the system
within a single format as represented by Figure 3. As a consequence, as elements are
added to the Foresight model and its parameters altered, the impact of these edits on the performance of the system is seen immediately, thus aiding verification and
validation of the model. A second advantage is that the way in which a model’s logic
and structure impact performance is directly visible, which in turn assists in the
optimization of the design of the system. For example, by inspecting Figure 3 it can be seen that delays in production due to limited curing room space could be
removed by expanding this facility to enable storage of an additional 4 components.

CONCLUSION

The paper has proposed a new approach, Foresight, for modeling construction
processes built on concepts relevant to contemporary project planning, and
demonstrated its application to manufacturing systems. The principles upon which
Foresight is based provide it with the versatility necessary to model the broad
spectrum of construction systems that until now have required the use of several
different modeling tools. The resultant models are highly visual in form, representing
the progress of work within the model structure. This facilitates model verification
and validation, provides insight into how the design of a process will impact its
performance, and suggests ways of optimizing project performance. Foresight is also
simpler to use than conventional simulation, employing fewer modeling concepts and
allowing models to be defined using a fraction of the number of terms.
Research is ongoing to develop detailed models using this method for a variety of project types. The objective of these studies is to determine the successes


and limitations of the proposed planning method in the real-world, and to determine
refinements that will increase its value as a modeling tool.

REFERENCES

Ceric, V. (1994). “Hierarchical Abilities of Diagrammatic Representations of


Discrete-Event Simulation Models.” Proceedings of the 1994 Winter
Simulation Conference, (Editors: J. D. Tew, S. Manivannan, D. A. Sadowski,
and A. F. Seila), Piscataway, New Jersey, Institute of Electrical and
Electronics Engineers, Inc., 589-594.
Flood, I. (2010). “Foresight: A Structured Graphical Approach to Constraint Based
Project Planning.” Proceedings of the 2nd International Conference on
Advances in System Simulation, Simul 2010, Nice, France, IEEE, 6 pp.
Flood, I., R.R.A. Issa, and W. Liu. (2006). “A New Modeling Paradigm for
Computer-Based Construction Project Planning.” Proceedings of the Joint
International Conference on Computing and Decision-Making in Civil and
Building Engineering, (Editor: Hugues Rivard), 1-11. Montreal, Canada,
ASCE, 3436-3445.
Flood, I., and V. Nowrouzian. (2014). “Discrete-Event Simulation versus Constrained
Graphic Modelling of Construction Processes.” Australasian Journal of
Construction Economics and Building, 2 (1), 13-22.
Hajjar, D., and S.M. AbouRizk. (2002). “Unified Modeling Methodology for
Construction Simulation.” Journal of Construction Engineering and
Management, ASCE, 128(2), 174-185.
Halpin, D.W., and R.W. Woodhead. (1976). Design of Construction and Process
Operations. New York, NY, John Wiley and Sons, Inc.
Harris, R.B., and P.G. Ioannou. (1998). “Scheduling Projects with Repeating
Activities.” Journal of Construction Engineering and Management, ASCE,
124(4), 269-276.
Huber, P., K. Jensen, and R.M. Shapiro. (1990). “Hierarchies of Coloured Petri Nets.”
Proceedings of the 10th Int. Conf. on Application and Theory of Petri Nets,
(Editor: Grzegorz Rozenberg), Bonn, Germany, Springer-Verlag, 313-341.
Issa, R.A., I. Flood, and W. O’Brien (Editors). (2003). 4D CAD and Visualization in
Construction: Developments and Applications, Steenwijk, Netherlands, A. A.
Balkema.
Koo, B., and M. Fischer. (2000). “Feasibility Study of 4D CAD in Commercial
Construction.” Journal of Construction Engineering and Management, ASCE,
126(4), 251-260.
Martinez, J.C. (1996). “STROBOSCOPE, State and Resource Based Simulation of
Construction Processes.” PhD. Dissertation, University of Michigan.
Matilla, K.G., and D.M. Abraham. (1998). “Linear-Scheduling: past research efforts
and future directions.” Journal of Engineering, Construction, and
Architectural Management, Blackwell Science Ltd, 5(3), 294-303.
Sawhney, A., S.M. AbouRizk, and D.W. Halpin. (1998). “Construction Project
Simulation using CYCLONE”, Canadian Journal of Civil Engineering, 25(1),
16-25.


A Hybrid Control Mechanism for Stabilizing a Crane Load under


Environmental Wind on a Construction Site

Bin Ren1,2; A. Y. T. Leung1; Jiayu Chen1; and Xiaowei Luo1


1Department of Architecture and Civil Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong SAR. E-mail: [email protected]
2State Key Laboratory of Fluid Power Transmission and Control, School of Mechanical Engineering, Zhejiang University, Hangzhou, 310027, China.

Abstract
Tower cranes are widely used on construction jobsites for their efficiency. However, tower cranes and construction workers face significant safety hazards from the natural sway of payloads. In addition, the external disturbance of wind leads to additional sway and intensifies the oscillation amplitude of the crane load on the construction site. Therefore, we propose a hybrid control mechanism that combines electronic and mechanical gyroscopes to produce a balancing torque that keeps the crane load stable. We model the crane load as an inverted pendulum and simulate the oscillation movement under continuous wind. A hybrid control mechanism is designed and developed, with the electronic gyroscope tracking the real-time position and orientation of the payload and the mechanical gyroscope acting as an actuator on the collected feedback to control the oscillation amplitude. A wind tunnel test has been conducted to validate the developed hybrid mechanism.
Keywords: Hybrid control mechanism; Tower crane; Oscillation amplitude;
Wind tunnel test

INTRODUCTION
Tall and flexible tower cranes are widely utilized on construction sites, but the natural sway of payloads is nearly unavoidable. Moreover, environmental wind is the major external disturbance acting on crane loads, which leads to additional sway. The supporting structures and hoisting lines of a tower crane are sometimes too flexible to withstand strong wind, excessive torque, and heavy loads. In particular, prefabricated building components and materials require tower cranes to lift heavier loads to more precise locations at specified heights. In addition, strong wind on construction sites puts construction workers in danger. For safety reasons, tower crane operation must be halted when the wind speed exceeds 20 m/s (Winn et al., 2005). Therefore, any undesirable movement of the crane load will prolong the construction schedule and increase construction risk.
According to the passive control of convey-cranes (Collado et al., 2000), the oscillation of the crane load and hoisting system is assumed to behave as an inverted pendulum. The hoisting system is then divided into two sections. One section is the rigging connection between the trolley and the hook, assumed to be flexible, while the other section is the cable link between the hook and the payload, assumed to be rigid. Considering the piece-wise connections among the trolley, hook, and payload has a significant impact on stabilization.
In this manuscript, we try to explore the four major parameters (swing angular
velocity, swing angular acceleration, payload position velocity, and payload position
acceleration) and understand the relationship between crane load and external wind
through simulation. Based on the simulation model, a hybrid control mechanism with
electronic and mechanical gyroscopes is developed, and a wind tunnel test is
conducted to validate the hybrid control mechanism.

ASSUMPTIONS OF PENDULUM-TYPE HOISTING SYSTEM


Tower cranes are the central component of many construction operations and are associated with a large fraction of construction deaths (Neitzel et al., 2001). One major
potential safety hazard is payload oscillation. Crane motion and wind load often lead
to large payload oscillations. These payload oscillations have many detrimental
effects, including degrading payload positioning accuracy, increasing task completion
time, and deteriorating construction safety.
A crane operator could suppress the motion by moving the trolley in small increments, but this degrades efficiency and throughput. Moreover, uncertain disturbances such as environmental wind always degrade the control performance of the crane system. To resolve this problem, much research has focused on improving the control algorithms of tower crane systems, such as a neural-network-based anti-sway control approach (Abe, 2011), a motion planning-based adaptive control method for cranes (Fang et al., 2012), and a Takagi-Sugeno (T-S) fuzzy-model-based state-feedback controller (Zhao and Gao, 2012).
The hoisting system of a tower crane consists of the trolley, hook, and payload. These three parts are connected with each other by riggings. In modeling crane dynamics, lumped-mass and distributed-parameter approaches are used (Hong and Ngo, 2009): in the lumped-mass approach, the hoisting rope is modeled as a mass-less rigid rod and the payload is modeled as a lumped point mass, whereas in the distributed-parameter approach, the hoisting rope is considered as a string. Under the mass-point assumption, a simplified overhead crane model can be drawn (Chang et al., 2012), and the overhead crane behaves like a pendulum.
Under natural sway, there is no obvious difference between the payload mass and the hook mass. When extreme conditions occur, such as when the rigging between the hook and the payload is not very short or two cascaded payloads are carried together, the tower crane exhibits double-pendulum motion. The double-pendulum dynamics include two kinds of pendulum motions with different natural frequencies, so the double-pendulum effects can degrade the effectiveness of controllers if the controllers are only designed to resist the simplified single-pendulum oscillation. Compared with the great diversity of control methods for single-pendulum-type cranes, strong non-linearity and dynamic coupling issues exist in double-pendulum-type cranes, so the dynamic model of a practical double-pendulum-type crane is difficult to establish.
Considering passive control, the hoisting system is modeled as an inverted pendulum. The control objective of an inverted pendulum on a cart is to stabilize it at the dynamic equilibrium point. The inverted pendulum-type hoisting system is therefore assumed to have the following characteristics:
Assumption 1: Compared to the rigging between the trolley and the hook, the rigging between the hook and the payload is short. The payload and the hook are connected by a rigid hollow rod.
Assumption 2: The swing angular velocity and acceleration of rod and the
position velocity and acceleration of payload are measurable.
Assumption 3: The mass and the length of the connecting rod are known.
Assumption 4: The ball joint that connects the payload link to the rod is
frictionless and does not rotate, namely the payload does not rotate around the rod
axis.
Assumption 5: The movement of the payload is restricted to one direction.

SIMULATION OF THE EXTERNAL WIND IMPACT ON THE CRANE LOAD


Based on the above assumptions of the tower crane model, the hoisting system can exhibit significant pendulum dynamics under given crane payloads and rigging configurations. Under environmental wind load, the payload sway behaves as an inverted pendulum-type motion. There is no closed-form solution for the oscillation position and orientation under a continuous wind force.
Therefore, a visual simulation model is constructed to estimate the four major parameters (swing angular velocity, swing angular acceleration, position velocity and position acceleration) under external wind disturbance. The simulation model of the pendulum-type hoisting system is created using MSC-ADAMS, as shown in Figure 1, in which the rotary inertia is set to 0.006 kg·m². The mass of the rod is 0.2 kg, and the length of the rod is 400 mm. The mass of the device is 0.5 kg, and its dimensions are 240 mm × 220 mm × 140 mm.

Figure 1. Simulation model of the pendulum-type hoisting system.

Without wind load, the velocity and acceleration of the swaying angle are periodic; with wind load, the velocity and acceleration of the swaying angle rise gradually. It is therefore difficult to stabilize the payload oscillation when considering the environmental wind. In order to further compare the simulation and the wind tunnel test,


the simulation model mimics the scalar wind speed and wind direction recorded in Baker City in the US (Wilde, 2015). From the dynamic simulation of the pendulum-type hoisting system, the angular velocity and angular acceleration of the oscillation under the continuous wind speed can be calculated, as shown in Table 1. The position and orientation of the payload can then be identified through the state-space representation of the inverted pendulum system (Precup et al., 2012). In this way, the relationship between the oscillation position and the environmental wind force in the pendulum-type hoisting system is established.
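As a rough illustration of this step, the sketch below integrates a simple damped pendulum model of the hoisting system under a piecewise-constant wind force. The parameter values (air density, exposed area, damping) are assumptions for demonstration and are not taken from the authors' MSC-ADAMS model.

import math

def simulate_payload_swing(wind_force, dt=0.001, t_end=20.0,
                           length=0.4, mass=0.5, damping=0.05, g=9.81):
    """Integrate a pendulum-type hoisting model under a time-varying horizontal wind force.

    wind_force: function of time t (s) returning the horizontal wind force (N).
    Returns lists of time (s), swing angle (rad), angular velocity (rad/s), payload x (m).
    """
    theta, omega = 0.0, 0.0
    ts, thetas, omegas, xs = [], [], [], []
    for i in range(int(t_end / dt)):
        t = i * dt
        # Equation of motion of a damped pendulum with a horizontal force on the payload.
        alpha = (-g / length) * math.sin(theta) \
                + wind_force(t) / (mass * length) * math.cos(theta) \
                - damping * omega
        omega += alpha * dt
        theta += omega * dt
        ts.append(t); thetas.append(theta); omegas.append(omega)
        xs.append(length * math.sin(theta))      # horizontal payload position
    return ts, thetas, omegas, xs

# Piecewise-constant wind, loosely following the 5 s wind-speed steps used in the paper.
def wind(t):
    speeds = [3.6, 3.8, 3.0, 2.7, 1.5]                  # m/s at 0, 5, 10, 15, 20 s
    v = speeds[min(int(t // 5), len(speeds) - 1)]
    return 0.5 * 1.2 * 0.03 * v ** 2                    # 0.5 * rho * A * v^2 (assumed area)

ts, thetas, omegas, xs = simulate_payload_swing(wind)
print(max(abs(a) for a in thetas))                      # peak swing angle (rad)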

Table 1. Environmental Wind and Position Orientation of Payload.

Time (s) | Scalar Wind Speed (m/s) | Wind Direction (deg) | Swing Angular Velocity (deg/s) | Swing Angular Acceleration (deg/s²) | Payload Position Velocity x (m/s) | Payload Position Acceleration x (m/s²)
0  | 3.6 | 134 | -5.36 | -10.15 | 0.81 | -6.10
5  | 3.8 | 133 | -1.31 | -5.25  | 0.62 | 0.65
10 | 3   | 131 | -0.96 | 3.66   | 1.09 | 0.32
15 | 2.7 | 138 | -5.15 | 11.88  | 1.98 | 7.55
20 | 1.5 | 85  | 2.68  | 10.91  | 2.24 | 1.57

DEVELOPMENT OF A HYBRID CONTROL MECHANISM


Applying gyroscopic moments has been proven to be an effective approach to controlling payload rotation. A suspender device with a mechanical gyroscope can control load rotation in active and passive modes (Inoue et al., 1997). Although gyroscopes are valid tools for stabilization, a mechanical gyroscopic stabilizer is normally too heavy to carry. Therefore, great interest has emerged in the production of low-cost, small-size, and high-accuracy electronic gyroscopes (Nasir and Roth, 2012).
The gyroscopic stabilizer developed in this research is a lightweight device that includes the mechanical and electronic gyroscopes, the motor of the mechanical gyroscope, a gyroscope bracket, a rod linking to the suspender, and a platform that supports and fixes the electronic and mechanical gyroscopes, as shown in Figure 2. The electronic gyroscope tracks the real-time amplitude of the payload, and the mechanical gyroscope actuates the control scheme based on the feedback data. The details of the hybrid control scheme are shown in Figure 3.
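The division of labor in Figure 3 — electronic gyroscope as sensor, mechanical gyroscope as actuator — can be summarized as a feedback loop. The following is only a schematic PD-style sketch under assumed gains and saturation; the paper does not specify the controller at this level of detail.

def hybrid_control_step(measured_angle, measured_rate, kp=2.0, kd=0.8, torque_limit=1.5):
    """One cycle of the hybrid scheme: sense with the electronic gyroscope,
    command a balancing torque from the mechanical gyroscope (PD-style feedback)."""
    torque = -(kp * measured_angle + kd * measured_rate)      # oppose the swing
    return max(-torque_limit, min(torque_limit, torque))      # actuator saturation

# Example: the electronic gyroscope reports a 0.2 rad swing growing at 0.1 rad/s.
print(hybrid_control_step(0.2, 0.1))    # -> -0.48 (counteracting torque, N*m)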

©  ASCE 4
Computing  in  Civil  Engineering  2015 503

Figure 2. Platform of developed gyroscope stabilizer device.

Figure 3. Hybrid control scheme of the developed device.

There are two mechanical configurations in the experiment: one with a single gyroscope and the other with dual gyroscopes. With one mechanical gyroscope, the device can stabilize the oscillation in one direction, along the fitted gimbal direction, namely the roll direction. The dual-gyroscope configuration is a cross-shaped layout along the roll and pitch directions, respectively. The hybrid mechanism with dual gyroscopes can stabilize the oscillation in both the roll and pitch directions, as shown in Figure 4.


Figure 4. Layout of two mechanical configurations.

WIND TUNNEL TESTS


In practice, the wind flow around a tower crane exhibits unexpected and
complex aerodynamic phenomena. Therefore, we conduct an experiment with a
scaled construction crane model under controllable wind to demonstrate the
effectiveness of gyroscopic stabilizers in construction cranes. The height of the experimental setup between the gyroscope and the jib is 0.4 m, and the payload weight is 0.5 kg. The wind tunnel apparatus has an air outlet cross section of 0.4 m × 0.4 m and a length of 4 m, as shown in Figure 5.

Figure 5. The apparatus of wind tunnel test.

First, possible positions for placing the gyroscope were tested to maximize the stabilization efficiency. The average wind speed was kept at 20.11 m/s, and the test positions were 90°, 75°, 45°, and 15°. The corresponding times to stop the swaying were 30 s, 20 s, 15 s, and 20 s, respectively. The results show that both the angle of the gyroscope position and the environmental wind speed have a certain influence on the time needed to stabilize the system; when the angle position of the mechanical gyroscope is 45 degrees with a rotor spinning speed of 10230 rpm, it has a dramatic effect on payload stabilization. Therefore, the developed stabilizer can reduce the oscillatory amplitude of the payload with the hybrid control mechanism, and the optimal installation angle of the mechanical gyroscope is 45 degrees.
Then, the wind tunnel test verified the performance of the hybrid control mechanism under different wind speed disturbances. The real-time environmental wind data come from the Earth System Research Laboratory of the National Oceanic & Atmospheric Administration. The sample environmental wind is based on the wind recorded on September 21, 2014, in Baker, US (Wilde, 2015). The environmental temperature is 36.2°C, the wind pressure is 972.9 mb, and the relative humidity is 18.6%. The total weight of the gyroscope is 345 g. The wind direction is 134° and lasts 35 s, with wind speeds of 3.6 m/s at the beginning, 3.8 m/s at 5 s, 3 m/s at 10 s, 2.7 m/s at 15 s, and 1.5 m/s at 20 s, respectively. The swing angle of the payload is stabilized about 28 seconds later, as shown in Figure 6.

[Figure: swing angle (deg) of the payload versus time (s) under the varying wind speeds.]
Figure 6. Result of wind tunnel test with different wind speed disturbances.

CONCLUSION
It is extremely difficult to stabilize the hook and payload due to their suspended status under environmental wind disturbance. In this research, the hoisting system of the tower crane is treated as a pendulum, which allows oscillation to occur during crane motion. A computer-based simulation is created in this paper to predict the payload position and orientation under the environmental wind. The simulation shows that the pendulum system can reduce the oscillatory amplitude of the payload with the proposed hybrid control mechanism combining mechanical and electronic gyroscopes. In the hybrid system, the electronic gyroscope tracks the real-time amplitude of the payload, and the mechanical gyroscope is utilized as the actuator to produce a balancing torque and keep the crane payload in a stable state. An experiment has been conducted to validate the integrated device with wind tunnel tests. The experimental results suggest that, compared to a traditional control system, the proposed hybrid control mechanism is a more efficient method that stabilizes the payload faster, with an optimized installation angle of the mechanical gyroscope.

ACKNOWLEDGEMENT
This paper is based in part upon work supported by Construction Industry
Council of Hong Kong, National Natural Science Foundation of China (Grant No.
51205350), Zhejiang Provincial Research Program of Public Welfare Technology
Application of China (Grant No. 2013C31027), and Hong Kong Scholars Program of
China (Grant No. XJ2013015).

REFERENCES
Abe, A. (2011). “Anti-sway control for overhead cranes using neural networks.”
Journal of Innovative Computing, Information and Control, 7(7).
Chang, Y.-C., Hung, W.-H., Kang, S.-C. (2012). “A fast path planning method for
single and dual crane erections.” Automation in Construction, 22: 468-480.
Collado, J., Lozano, R., Fantoni, I. (2000). “Control of convey-crane based on
passivity.” American Control Conference, 2000. Proceedings of the 2000.
IEEE, pp. 1260-1264.
Fang, Y., Ma, B., Wang, P., Zhang, X. (2012). “A motion planning-based adaptive
control method for an underactuated crane system.” Control Systems
Technology, IEEE Transactions on, 20(1): 241-248.
Hong, K.-S., Ngo, Q.H. (2009). “Port Automation: modeling and control of container
cranes.” Inter. Conf. on Instrumentation, Control and Automation, pp. 19-26.
Inoue, F. et al. (1997). “A practical development of the suspender device that controls
load rotation by gyroscopic moments.” Proceedings of the 14th International
Symposium on Automation and Robotics, pp. 8.
Nasir, A.K., Roth, H. (2012). “Pose Estimation by Multisensor Data Fusion of Wheel
Encoders, Gyroscope, Accelerometer and Electronic Compass, Embedded
Systems.” Computational Intelligence and Telematics in Control, pp. 49-54.
Neitzel, R.L., Seixas, N.S., Ren, K.K. (2001). “A review of crane safety in the
construction industry.” Applied Occupational and Environmental Hygiene,
16(12): 1106-1117.
Precup, R. et al. (2012). “Signal processing in iterative improvement of inverted
pendulum crane mode control system performance.” Instrumentation and
Measurement Technology Conference (I2MTC), 2012 IEEE International.
IEEE, pp. 812-815.
Wilde, N. (2015). “Physical Sciences Division Profiler Data.” Earth System Research Laboratory, <http://www.esrl.noaa.gov/psd/data/obs/datadisplay/> (Sep. 21, 2014).
Winn, R.C., Slane, J.H., Morris, S.L. (2005). “Aerodynamic Effects in the Milwaukee
Baseball Stadium Heavy-Lift Crane Collapse.” American Institute of
Aeronautics and Astronautics, 10-13.
Zhao, Y., Gao, H. (2012). “Fuzzy-model-based control of an overhead crane with
input delay and actuator saturation.” Fuzzy Systems, IEEE Transactions on,
20(1): 181-186.


Regional Seismic Damage Simulation of Buildings:


A Case Study of the Tsinghua Campus in China

Xiang Zeng; Zhen Xu; and Xinzheng Lu*


Department of Civil Engineering, Tsinghua University, Beijing 100084, China.
E-mail: [email protected]

Abstract

A seismic damage simulation of regional buildings can provide valuable


information for earthquake disaster prevention and mitigation. A critical issue for
such a simulation is to select an appropriate numerical model that balances accuracy
and efficiency for an individual building. Therefore, a moderate-fidelity model,
named the multi-story concentrated-mass shear (MCS) model, and an associated
parameter determination approach based on the HAZUS performance database are
proposed for the simulation, with which the seismic responses of each story can be
predicted through a nonlinear time-history analysis (THA). The reliability of this
model is validated by comparison with the results of refined finite element (FE)
models. Furthermore, the seismic damage simulation of the Tsinghua campus, which
includes 619 buildings, is performed. A 40 s nonlinear time-history analysis of the
619 buildings can be completed in 15 s on a desktop computer by using the proposed
method. Finally, the seismic responses of the whole area are vividly displayed
through a visualization interface. The different damage states for each story are
clearly identified for every individual building. The research outcome will assist in
providing a useful reference and an effective tool for quick damage assessment and
emergency management.

Keywords:

Regional seismic damage simulation; Multi-story concentrated-mass shear model;


Nonlinear time-history analysis; Visualization of seismic damage simulation

INTRODUCTION

A seismic damage simulation of regional buildings can provide useful


information for earthquake disaster prevention and mitigation. The most influential
software in the world for such simulation is HAZUS, which was developed by the
Federal Emergency Management Agency (FEMA) of the United States (US) (FEMA
1999). In HAZUS, the seismic damage of each building is determined by the
intersection of the building capacity curve and the demand spectrum (FEMA 1999).
The building capacity curve is obtained through nonlinear static analysis (i.e., first-
mode pushover analysis), while the demand spectrum is derived from the 5%
damped response spectrum. However, the effects of high-order vibration modes and
some characteristics of ground motions (e.g., the near-fault impulse) are difficult to take into consideration when using the HAZUS method (Lu et al. 2014). To
overcome the abovementioned problems, the Integrated Earthquake Simulation (IES)


by Tokyo University (Sobhaninejad et al. 2011) provides a more reasonable method of seismic damage simulation for regional buildings, which adopts a multi-degree-of-freedom (MDOF) model and nonlinear THA. However, this method significantly increases the computational workload. Hence, only supercomputers have the ability to perform the simulation with IES. In addition, Sobhaninejad's work focuses on constructing a framework of IES, and the method used to determine the parameters for building models is not provided.

To overcome those problems, a moderate-fidelity model, named the multi-story concentrated-mass shear (MCS) model, and the associated parameter determination approach based on the HAZUS performance database are proposed, with which the seismic responses of each story can be predicted through nonlinear THA. A computer program is developed to implement the simulation. With this program, users can construct numerical models of buildings automatically, select a ground motion of interest, perform nonlinear THA, and observe the simulation results of the whole region. The Tsinghua campus in China has 619 buildings with various structural types, and the information of the buildings can be easily accessed. Therefore, the Tsinghua campus is selected to be the case study area to demonstrate the proposed seismic damage simulation. The case study shows that a 40 s nonlinear THA of 619 buildings can be completed in 15 s on a desktop computer. The outcome of this study provides a seismic damage simulation method for regional buildings that balances accuracy and efficiency, which is very important for seismic damage assessment and emergency management of an urban area.

COMPUTATIONAL MODELS

Multi-Story Concentrated-Mass Shear Model
There are numerous buildings in an urban region, so it is very difficult to obtain detailed structural information for every building, such as the reinforcement design of beams, columns and shear walls. Therefore, it is of great importance to establish numerical building models according to some basic and easily accessible information. Meanwhile, the numerical models used should have a reasonable accuracy to simulate the seismic response of buildings. The MCS model is a proper numerical model that satisfies these requirements. In the MCS model, each story of a building is treated as a dynamic degree of freedom (DOF), so the number of DOFs of a building is equal to the number of stories, as shown in Figure 1. The MCS model is a type of MDOF model that can more accurately simulate the nonlinear seismic behavior of low-rise and middle-rise buildings (Lu et al. 2014), compared to the traditional single DOF (SDOF) model in HAZUS.

Figure 1. Multi-story concentrated-mass shear model

The inter-story mechanical behavior of the MCS model can be defined by the
backbone curve and the hysteretic curve. The tri-linear backbone curve is adopted as
the backbone curve of the MCS model. For the hysteretic curve, three different types
of models are adopted to describe the hysteretic behaviors of different types of
structures. Specifically, the bilinear elasto-plastic model is suitable for describing
steel moment frames (Miranda and Ruiz-Garcia 2002), while the modified-Clough
model (Mahin and Lin 1984) is generally used for reinforced concrete (RC) frames
for which the flexural failure is significant. For other structures that easily fail by
shear, the pinching model (Steelman and Hajjar 2009) is adopted to describe their
hysteretic behavior.
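For illustration, a tri-linear backbone of the kind used for each story can be evaluated as in the sketch below. This is a generic sketch with assumed parameter names and values; the actual story parameters come from the HAZUS-based determination approach described next.

def trilinear_backbone(drift, k0, yield_drift, peak_drift,
                       hardening_ratio=0.1, softening_ratio=-0.3):
    """Inter-story shear force for a given story drift on a tri-linear backbone:
    an elastic branch, a hardening branch up to the peak, then a softening branch."""
    d = abs(drift)
    f_yield = k0 * yield_drift
    if d <= yield_drift:                       # elastic branch
        f = k0 * d
    elif d <= peak_drift:                      # hardening branch
        f = f_yield + hardening_ratio * k0 * (d - yield_drift)
    else:                                      # softening branch (resistance not below zero)
        f_peak = f_yield + hardening_ratio * k0 * (peak_drift - yield_drift)
        f = max(0.0, f_peak + softening_ratio * k0 * (d - peak_drift))
    return f if drift >= 0 else -f

# Example story: initial stiffness 50 kN/mm, yield drift 5 mm, peak drift 20 mm.
print(trilinear_backbone(12.0, 50.0, 5.0, 20.0))   # force on the hardening branch (kN)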

Parameter Determination Approach


In the MCS model, the tri-linear backbone curve can be determined by five
parameters (Lu et al. 2014). A parameter determination approach is suggested in Lu
et al.’s work. In this approach, these five parameters can be determined according to
the seismic performance data of typical buildings provided in HAZUS if the
structural type, story height, number of stories, construction time and floor area of a building are known. In other words, the numerical model of each building can be
easily constructed if the above five basic building properties are available. This
method can quickly establish the numerical model for a region with hundreds or
thousands of buildings. Detailed parameter determination steps can be accessed in Lu
et al. (2014).

Model Validation
A cyclic pushover analysis and a nonlinear THA for a six-story RC frame
building are performed to validate the MCS model (Xu et al. 2014). The analysis
results (e.g., the time histories of the lateral hysteretic behavior and the top
displacement) of the MCS model are compared with those of a refined FE model for
the same building, as shown in Figure 2 (Xu et al. 2014). The comparison shows a
good agreement between the MCS model and the refined FE model.

(a) Comparison of the lateral hysteretic properties between the refined FE model and
the MCS model on the bottom story


(b) Comparison of the top displacement versus time histories between MCS model
and the refined FE model
Figure 2. Validation of the MCS model (Xu et al. 2014)

CASE STUDY: SEISMIC DAMAGE SIMULATION OF BUILDINGS IN TSINGHUA CAMPUS

The Framework of the Seismic Damage Simulation


A computer program is developed to implement the pre-processing, analysis,
and post-processing of the regional seismic damage simulation. The program consists
of four function modules: (1) determination and editing of the building information,
(2) visual selection of ground motion, (3) nonlinear THA of buildings, and (4)
visualization of the result of seismic damage simulation, as shown in Figure 3.

Figure 3. Framework of the seismic damage simulation program

Numerical Model Establishment of Buildings


The simulation area is the campus of Tsinghua University, Beijing, China, as
shown in Figure 4. According to 3D and 2D maps (Tsinghua University 2014), the
geometry properties of the buildings, such as the building shape, floor area and
number of stories, can be obtained easily. In addition, to calculate the mechanical
properties of the MCS model, a survey of the entire campus is conducted to obtain
the structural type, story height and construction time of each building.
Once the above information is obtained, an MCS model for each building can
be established through the proposed parameter determination approach. When the


textboxes of the basic building information are filled in, as shown in Figure 5, the inter-story hysteretic parameters as well as the criteria of story drift ratio representing different damage states will be automatically calculated by the program. These automatically generated parameters may be modified manually. When the parameters are changed, the corresponding hysteretic curve shown in the program dialogue (Figure 5) will be updated accordingly, which clearly illustrates how the hysteretic curve is influenced by these parameters. Furthermore, the dynamic properties of the building (e.g., the free vibration frequencies and modes) are also calculated automatically by the program to assist in determining whether the parameters of the building are correct or not. For example, if the fundamental period of a three-story masonry structure is greater than 1 s, there may be something wrong with the parameters of the building because this type of building generally cannot be so flexible.
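The kind of automatic check described above (e.g., flagging an implausibly long fundamental period) can be reproduced with a standard eigenvalue analysis of the concentrated-mass shear model. The sketch below uses assumed story masses and stiffnesses rather than the program's actual values.

import numpy as np

def fundamental_period(story_masses, story_stiffnesses):
    """Fundamental period (s) of a multi-story concentrated-mass shear model.
    story_masses in kg and story_stiffnesses in N/m, ordered from the bottom story up."""
    n = len(story_masses)
    m = np.diag(story_masses)
    k = np.zeros((n, n))
    for i, ki in enumerate(story_stiffnesses):      # assemble the shear-building stiffness
        k[i, i] += ki
        if i > 0:
            k[i - 1, i - 1] += ki
            k[i - 1, i] -= ki
            k[i, i - 1] -= ki
    eigvals = np.linalg.eigvals(np.linalg.solve(m, k))
    omega = np.sqrt(np.min(eigvals.real))           # lowest circular frequency (rad/s)
    return 2.0 * np.pi / omega

# A stiff three-story masonry-like building: the period should be well below 1 s.
print(fundamental_period([2.0e5] * 3, [4.0e8] * 3))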

Figure 4. 3D map of the Tsinghua University campus (source: http://map.tsinghua.edu.cn/3d/)

Figure 5. The parameters of the MCS model for an individual building

Selection of Ground Motions

In the module for the selection of ground motions (Figure 6), several widely used ground motion records (e.g., the El Centro record of Imperial Valley, California, 1940) are readily provided in the program. In addition, user-defined ground motions can be added into the program by providing the file paths of the corresponding ground motion records. The ground motions and their acceleration response spectra can be drawn in different colors so that users can clearly select and compare different ground motions.

Figure 6. Visual selector for seismic waves

Regional Seismic Damage Simulation of Buildings
According to the above MCS models and the selected ground motion, the nonlinear THA of the entire campus is performed. A 40 s ground motion measured in the 1999 Taiwan (Chi-Chi) earthquake is selected for demonstration in this section, and the time step is set to a constant, 0.005 s (thus, the total number of computational steps is 8000). There are a total of 619 buildings on the Tsinghua campus. It takes only 15 s to complete the analysis on a desktop computer with a 2.60 GHz Intel G620 CPU, which shows the high computing efficiency of the proposed seismic damage simulation method. Therefore, this program can serve as a useful tool for the quick evaluation of regional seismic damage.

In the visualization module, users can observe the simulation results, including the displacement time-history of each story and the detailed damage result of each building on the entire campus, as shown in Figure 7. The damage of each story can be predicted by using the MCS model, which is important in some cases. For example, for some of the classrooms and student apartments, the ground story is used for bicycle parking; therefore, the population of the other stories is much larger than that of the ground story, which implies that a collapse of the other stories will cause more casualties than the ground story. Therefore, compared to the SDOF model, which is only able to calculate the damage state of the entire building, the MCS model provides more valuable detailed information for seismic emergency management and loss assessment.

(a) Displacement time-history of the buildings in the Tsinghua campus; (b) Detailed damage information of a building; (c) Damage and collapse state of the entire campus
Figure 7. Visualization of the seismic damage simulation results for the Tsinghua campus

CONCLUSIONS

To balance the efficiency and accuracy of the regional seismic damage simulation for buildings, a moderate-fidelity numerical model for building structures and the associated parameter determination approach based on the HAZUS performance database are proposed. A numerical model for each building can be
easily constructed if five basic building properties are available. The seismic
response of buildings is simulated through nonlinear THA, and the displacement
time-history and damage states of each story can be observed in a visualization
module, which provides more detailed information than the results using SDOF
models. The case study shows that a 40 s seismic response simulation for the 619
buildings of the Tsinghua campus can be completed in 15 s on a desktop computer,
so this method can serve as a useful tool for the quick evaluation of regional seismic
damage.

ACKNOWLEDGEMENTS

The authors are grateful for the financial support received from the National
Key Technology R&D Program (No. 2013BAJ08B02) and the National Natural
Science Foundation of China (No. 51178249).

REFERENCES

FEMA. (1999). “Earthquake loss estimation methodology - HAZUS99. technical


manual”. Washington, D.C., Federal Emergency Management Agency
National Institute of Building Sciences.
Lu, X. Z., Han, B., Hori, M., Xiong, C. and Xu, Z. (2014). “A coarse-grained parallel
approach for seismic damage simulations of urban areas based on refined
models and GPU/CPU cooperative computing”. Advances in Engineering
Software, 70, 90-103.
Mahin, S. A. and Lin, J. (1984). “Construction of inelastic response spectra for
single-degree-of-freedom systems, computer program and applications”.
Report No. UCB/EERC-83/17. Berkeley, University of California, Earthquake
Engineering Research Center.
Miranda, E. and Ruiz-Garcia, J. (2002). “Influence of stiffness degradation on
strength demands of structures built on soft soil sites”. Engineering Structures,
24, 1271-1281.
Sobhaninejad, G., Hori, M. and Kabeyasawa, T. (2011). “Enhancing integrated
earthquake simulation with high performance computing”. Advances in
Engineering Software, 42, 286-292.
Steelman, J. S. and Hajjar, J. F. (2009). “Influence of inelastic seismic response
modeling on regional loss estimation”. Engineering Structures, 31, 2976-2987.
Tsinghua University. (2014). “Maps of Tsinghua University”, <http://map.tsinghua.edu.cn/2d/>, Beijing, China.
Xu, Z., Lu, X. Z., Guan, H., Han, B. and Ren, AZ. (2014). “Seismic damage
simulation in urban areas based on a high-fidelity structural model and a
physics engine”. Natural Hazards, 71, 1679-1693.


Construction Operation Simulation Reflecting Workers’ Muscle Fatigue


J. Seo1; M. Moon2; and S. Lee3
1Ph.D. Student, Tishman Construction Management Program, Dept. of Civil and Environmental Engineering, University of Michigan, 2350 Hayward St., Ann Arbor, MI 48109. E-mail: [email protected]
2Ph.D. Student, Tishman Construction Management Program, Dept. of Civil and Environmental Engineering, University of Michigan, 2350 Hayward St., Ann Arbor, MI 48109. E-mail: [email protected]
3Associate Professor, Tishman Construction Management Program, Dept. of Civil and Environmental Engineering, University of Michigan, 2350 Hayward St., Ann Arbor, MI 48109. E-mail: [email protected]

Abstract
Discrete event simulation (DES) is widely regarded as an effective tool for
modeling, analyzing and establishing the correct design of construction operations,
including the proper quantity and sequencing of resources within the context of a
selected field construction method. However, a gap exists in current construction
operation studies with respect to human factors, specifically, workers’ physical
aspects. Construction workers, one of the most important resources in construction
operation, have limited physical capabilities due to fatigue depending on individual-
and task-related factors. However, less attention to fatigue has been paid in DES
studies in construction, despite its significant impacts on construction performance.
To understand dynamic impacts of workers’ physical constraints on construction
performance, we propose worker-oriented construction operation simulation
integrating a fatigue model into discrete event simulation. Specifically, workers’
activities during construction operations are modeled using DES in an elemental task
level. Then, physical demands from the operation are estimated through
biomechanical analysis on elemental tasks. Fatigue, which refers to diminished
physical capabilities under given physical demands, is predicted using a fatigue
model, and its impact is subsequently reflected in DES. As a preliminary study, we
tested the feasibility of the proposed approach on masonry work by varying crew
size. The results indicate that excessive physical demands beyond workers’
capabilities result in productivity losses. Ultimately, the proposed approach has the
potential to decide alternatives for construction operation planning to secure high
productivity without compromising workers’ health.

INTRODUCTION
Discrete Event Simulation (DES) has been a useful technique for modeling
and analyzing construction operations, which helps to develop better project plans,
optimize resource usage, reduce costs and duration, or improve overall project
performance (Martinez and Ioannou 1999; AbouRizk 2010). For successful
applications of DES, it is essential to build accurate models which represent
construction operations (Shi and AbouRizk 1997). Especially, modeling of resources


and their state is one of the important elements for construction operation modeling
because resources are the predominant requirement for activities and activities also
affect the state of resources (Martinez and Ioannou 1999).
Resources in construction generally refer to materials, equipment and labor, all of which have a set of constant attributes in DES, for example, the amount of materials required for one cycle of an activity, or the working capacity of equipment or labor. However, unlike other resources, there is significant variability in human
physical capabilities which are affected by workloads (Chaffin et al. 2006). For
example, the increased workload may result in reduced performance (e.g.,
productivity) due to fatigue (Keller 2002; Alvanchi et al. 2011). However, human
factors such as fatigue and their impact have been rarely addressed in DES (Perez et
al. 2014).
To address this issue, this paper proposes construction operation simulation
that reflects dynamic interactions between construction operations and human factors.
To represent these interactions, we combine a DES model with a biomechanical
model for estimating workloads from construction operations and a fatigue model for
estimating changes in workers’ physical capabilities, which in turn affect workers’ performance. This method is demonstrated through a pilot study on masonry work.
Based on the pilot study, we discuss the benefits of the proposed approach for
understanding how workers’ physical capabilities under given workloads affect
construction operations, and suggest the future direction of research.

IMPACT OF PHYSICAL FATIGUE ON CONSTRUCTION OPERATION


Human physical capability is generally used to describe a person’s ability to
do the physical tasks of everyday living (Cooper et al. 2010). Among diverse aspects
of physical capability, muscular strength and endurance dominate physical work
performance (Chan et al. 2000). Muscular strength and endurance refer to the
maximum and sustained force-producing capabilities, respectively (Chaffin et al.
2006).
However, it is hard to maintain muscular strength while performing physical
tasks because sustained force exertions without sufficient recovery generate muscle
fatigue that causes the decline in muscle power output (Chaffin et al. 2006). Figure 1
illustrates the relationship between physical demands, muscular capabilities (i.e.,
strength and endurance) and fatigue. To perform a physical task (e.g., lifting heavy
objects), a worker need to exert forces on muscles. The required forces should be less
than a worker’s muscular strength. As a worker performs the task repeatedly over
time, muscles become fatigued, resulting in the reduction of muscle strength. If
appropriate recovery time (e.g., rest time) is not provided, the forces required to
perform the task become same with or even higher than the decreased muscle
strength at some point. This is called ‘fatigue failure’ (McGill 1997), and the time to
fatigue failure refers to the endurance time (Chaffin et al. 2006).

©  ASCE
Computing  in  Civil  Engineering  2015 517

Figure 1. Relationship between physical demand and fatigue


The reduction of muscular strength due to sustained force exertion, which is
generally referred to as muscle fatigue, has been associated with not only
performance loss, but also workers’ health issues (Perez et al. 2014). Excessive
physical demands generally cause discomfort or pain on muscles, indicating the need
for a break. In the long term, if a muscle becomes fatigued repeatedly without
sufficient recovery, the possibility of work-related musculoskeletal disorders
increases (Armstrong et al. 1993). A worker who is experiencing muscle fatigue generally tries to cope with its manifestation by applying diverse
strategies such as reducing workloads (e.g., lifting less material) or taking a rest
during tasks, which affects work performance (Durand et al. 2009).

FRAMEWORK OF WORKER-ORIENTED SIMULATION


We propose a construction operation simulation approach reflecting dynamic
interactions between physical demands and capabilities and their impact on
operations which are described in an earlier section (Figure 2).

Figure 2. Research framework


The DES model represents construction operations involving workers’ manual tasks. The level of detail of the representation relies on the purpose of modeling
(Halpin and Woodhead 1976). In this framework, tasks for construction operations
should be divided into work elements at a basic task level such as lifting drywall,
screwing, sawing wood or laying brick (Everett and Slocum 1994) because workloads
vary depending on types of basic tasks. Each basic task then becomes the elemental
model-building component. Types, sequence and duration of basic tasks for DES


modeling can be derived from a time and motion study (e.g. direct and continuous
observation of construction operations).
Once basic tasks are determined, a biomechanical model estimates physical
workloads by simulating the tasks. A biomechanical model estimates musculoskeletal
stresses required to perform a task (e.g., joint moments, muscle forces) as a function
of postures, external loads and anthropometric data (Chaffin et al. 2006).
Biomechanical models provide an effective means to understand physical workloads
during construction tasks (Seo et al. 2013; Seo et al. 2014). One example of
computerized biomechanical models is the 3D Static Strength Prediction Program (3DSSPP™) (Center for Ergonomics, University of Michigan 2011). The program is
applicable to worker motions in three dimensional space. For example, tasks can be
evaluated by breaking the activity down into a sequence of static postures and
analyzing each individual posture. Therefore, representative postures for each basic
task can be simulated in this program, estimating joint moments required to perform
the task. The results of biomechanical models for basic tasks are combined with the
sequence and duration data from the DES model, creating workloads required for
whole construction operations (e.g., work-rest time, repetitions) as shown in Figure 3.

Figure 3. Workloads during construction operations
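A workload profile of the kind sketched in Figure 3 can be assembled by pairing each basic task's biomechanically estimated demand with its duration and sequence from the DES model. The snippet below is a minimal illustration with made-up task names and demand values, not the pilot study's actual data.

def workload_profile(task_sequence, demand_pct_mvc, dt=1.0):
    """Expand a sequence of (task_name, duration_s) into a per-time-step %MVC series.
    Idle or rest periods can be represented with a task whose demand is 0 %MVC."""
    profile = []
    for task, duration in task_sequence:
        steps = int(round(duration / dt))
        profile.extend([demand_pct_mvc.get(task, 0.0)] * steps)
    return profile

# Hypothetical demands (%MVC) and a short cycle of basic tasks from the DES model.
demands = {"lift block": 30.0, "spread mortar": 12.0, "idle": 0.0}
cycle = [("spread mortar", 20.0), ("lift block", 15.0), ("idle", 10.0)]
series = workload_profile(cycle * 3, demands)   # three repetitions of the cycle
print(len(series), max(series))                 # 135 time steps, 30.0 %MVC peak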


The fatigue model aims to estimate muscle fatigue (i.e., changes in muscle
strength), given the workloads from the DES and biomechanical model. The fatigue
model consists of a fatigue generation model to estimate the reduction of muscle strength while performing the tasks and a fatigue recovery model to predict how much muscle fatigue can be recovered during non-working time (e.g., rest or idle time). The mathematical model developed by Ma et al. (2009) is used for the fatigue generation model in this study (see Eq. (1)). The model is based on metabolic mechanisms of force generation and was validated by comparison with existing static and dynamic muscle fatigue models (Ma et al. 2009). Eq. (1) can be explained as follows:

F_cem(t) = MVC · exp( −(1/MVC) ∫₀ᵗ F_load(u) du )     (1)
• MVC: Maximum voluntary contraction (maximum capacity of muscle)
• F_cem(t): Current exertable maximum force (current muscle strength)
• F_load(t): Forces required for the task (e.g., workloads)
• t: current time (seconds)
The equation indicates that the current capacity of muscle strength can be
determined by the negative exponential function of cumulative workloads.
The maximum capacity of muscle varies depending on individuals, and thus we assumed the


value of muscle strength for 50th percentile male population. However, it can be
adjusted based on workers’ strength tests.
However, one of the limitations of Ma et al. (2009)’s model is that the model
does not consider the recovery process during rest time. Thus, we suggest a recovery
model that estimates the amount of current exertable maximum force recovered
during non-working time based on the physiological recovery rate (see Eq. (2)). Previous studies on the recovery rate of muscle strength identified that 5% to 20% of muscle strength was recovered per minute (Kuorinka 1988; Shin and Kim 2007). We assumed a recovery rate of 5% per minute during non-working time.
F_cem(b) = min( MVC, F_cem(a) + R · MVC · (b − a) )     (2)
• F_cem(a): Current exertable maximum force at the start time a of non-working time
• F_cem(b): Current exertable maximum force at the finish time b of non-working time
• R: recovery rate (fraction of MVC recovered per unit of non-working time)
Depending on the workloads, the current exertable maximum forces are
calculated using either the fatigue generation model or the recovery model.
Workloads beyond the current exertable maximum force indicate that potential
performance issues may occur due to fatigue.
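The recovery side and the demand-versus-capacity check can be sketched in the same way; the linear recovery of 5% of MVC per minute of rest is one possible reading of Eq. (2) and is used here only as an assumption for illustration.

```python
def recovered_force(f_cem_start, mvc, rest_min, recovery_rate=0.05):
    """Sketch of the recovery model of Eq. (2).

    Assumes recovery_rate * MVC is regained per minute of non-working time
    (one possible reading of Eq. (2)); capacity is capped at MVC.
    """
    return min(mvc, f_cem_start + recovery_rate * mvc * rest_min)

def fatigue_failure(required_force, f_cem):
    """Potential performance issue: the task demand exceeds current capacity."""
    return required_force > f_cem

# Example: capacity of 40 (MVC = 100) after a 5-minute rest, then a 50-unit task
f_cem = recovered_force(40.0, 100.0, rest_min=5.0)
print(f_cem, fatigue_failure(50.0, f_cem))   # 65.0 False
```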

PILOT STUDY
A pilot study was performed for masonry work to demonstrate the feasibility
and potential of the proposed approach. The DES model was built based on a time
and motion study conducted through field observations (Figure 4). Figure 4(b) shows
the basic tasks and their sequence for block laying. While three masons lay blocks on
the wall repeatedly, one helper moves mortar and blocks near the wall. It took about
54 minutes to build the concrete block wall with 7 courses. Durations for basic tasks
were determined from the average observed durations across repetitions.

Figure 4. Overview of pilot study: (a) field observation; (b) procedures
For DES modeling, EZStrobe was used (Martinez 2001). The tasks performed by the
masons and the helper are modeled separately, as shown in Figure 5. To validate the
model, the simulation was run under the same conditions as the observation (e.g.,
3 masons + 1 helper, the quantity of blocks required for one course, and the number
of courses in the wall). The total simulation time to build the wall was 48
minutes, while the actual time from the observation was 54 minutes. However, about 4


minutes of idle time were found in the observation, which was not considered in
the model. Excluding this idle time, the model represented the masonry work well,
showing a 4% difference in working time.

Figure 5. DES model for masonry work
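The DES model itself was built in EZStrobe; purely as an illustration of the underlying activity-cycle logic, the following SimPy sketch models three masons drawing staged blocks while a helper resupplies them. All durations and quantities are hypothetical placeholders rather than the observed field values.

```python
import simpy

# Hypothetical parameters (placeholders, not the observed field values)
BLOCKS_PER_COURSE = 30
COURSES = 7
LAY_CYCLE_MIN = 1.5        # spread mortar + lay block + scrape/level, per block
RESUPPLY_MIN = 4.0         # helper trip to stage blocks and mortar
BATCH_SIZE = 10            # blocks staged per helper trip

def helper(env, stock):
    """The helper keeps staging material near the wall."""
    while True:
        yield env.timeout(RESUPPLY_MIN)
        yield stock.put(BATCH_SIZE)

def mason(env, stock, laid):
    """Each mason repeatedly draws one staged block and lays it."""
    while True:
        yield stock.get(1)
        yield env.timeout(LAY_CYCLE_MIN)
        laid.append(env.now)

env = simpy.Environment()
stock = simpy.Container(env, init=BATCH_SIZE)
laid = []
env.process(helper(env, stock))
for _ in range(3):                      # three masons work in parallel
    env.process(mason(env, stock, laid))

total_blocks = BLOCKS_PER_COURSE * COURSES
while len(laid) < total_blocks:         # advance until the wall is complete
    env.step()
print(f"Simulated wall duration: {env.now:.1f} min for {total_blocks} blocks")
```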


After building the DES model, biomechanical analyses were performed to
estimate the workloads for each basic task. The posture for each task was manually
simulated in 3DSSPPTM, and the workloads from the analyses were then converted
into percentages of MVC (%MVC) by dividing the joint loads (Nm) required for the
task by the MVC (Nm) to normalize the workloads. Table 1 shows the %MVC for
each task from the biomechanical analyses.
Table 1. %MVC required to perform basic tasks
Task    Preparing   Spreading mortar   Laying a block   Scraping & leveling   Putting mortar   Finishing
%MVC    3%          12%                30%              3%                    6%               12%

Using the workloads in Table 1, physical capabilities are estimated using the
fatigue model, as shown in Figure 6. The blue line represents the forces required for
the task (%MVC), while the red line indicates the current exertable maximum forces
(%MVC) (i.e., current muscle strength). As workers perform the tasks, the current
exertable maximum forces decrease due to muscle fatigue. During non-working time,
the current exertable forces are recovered according to the recovery rate. The results
indicate that the workloads of the masons for this work are appropriate, without any
fatigue issues. However, the helper may experience fatigue failure before finishing
the work (at about 25 minutes). In the short term, fatigue failure can directly result in
performance loss when the helper needs to take a rest to recover from fatigue. When
fatigue failure is repeated without enough recovery, it may lead to work-related
musculoskeletal disorders such as strain, myalgia, or tendonitis in the long term.


Figure 6. Comparison of physical demands and capabilities: (a) a mason; (b) a helper

DISCUSSION AND CONCLUSION


This paper demonstrates the feasibility of construction operation simulation
that reflects interactions between workloads and physical capabilities to understand
the impact of workers’ fatigue on construction operations. Specifically, construction
operations are modeled using DES at a basic task level, and the workloads for each
basic task are estimated from biomechanical analysis. Given the workloads, the
fatigue generation and recovery model estimates changes in physical capabilities (e.g.,
muscle strength), allowing us to compare physical demands and capabilities to
identify fatigue failure. There is, however, remarkable variability in workers’
capabilities (i.e., maximum muscle strength). To reflect the variability in the model,
physical demands (i.e., required forces) are normalized by dividing required forces by
individual muscle strength.
The pilot study indicates that the proposed approach helps to understand the
relationship between workloads and physical capabilities, which existing DES
modeling approaches have not addressed due to the assumption that workers'
physical capabilities remain unchanged during tasks. Even though the pilot study tested
the feasibility of the proposed approach, there remain several methodological
challenges. For example, the workload for each basic task was derived from one
representative posture. However, tasks involve a series of motions with changing
postures. In addition, because the fatigue model by Ma et al. (2009) is based on
a negative exponential function, the reduction of physical capabilities tends to be more
severe at the early stage. Further research is needed to address these challenges.
The proposed modeling approach has great potential as an effective tool for
managing construction workers' workloads before work begins, during the planning
and design phase. If muscle fatigue can be evaluated from planned operations,
workplaces and processes can be designed systematically, enabling an effective,
safe, and healthy workforce.

ACKNOWLEDGEMENT
The work presented in this paper was supported financially by a National
Science Foundation Award (No. CMMI-1161123). Any opinions, findings, and


conclusions or recommendations expressed in this paper are those of the authors and
do not necessarily reflect the views of the National Science Foundation.

REFERENCES
AbouRizk, S. (2010). Role of simulation in construction engineering and management.
Journal of Construction Engineering and Management, 136(10), 1140-1153.
Alvanchi, A., Lee, S., & AbouRizk, S. (2011). Dynamics of working hours in construction.
Journal of Construction Engineering and Management, 138(1), 66-77.
Armstrong, T. J., Buckle, P., Fine, L. J., Hagberg, M., Jonsson, B., Kilbom, A., ... & Viikari-
Juntura, E. R. (1993). A conceptual model for work-related neck and upper-limb
musculoskeletal disorders. Scandinavian journal of work, environment & health, 73-
84.
Center for Ergonomics, University of Michigan (2011). 3D Static Strength Prediction
Program: User’s Manual. University of Michigan, MI.
Chaffin, D. B., Andersson, G., & Martin, B. J. (2006). Occupational biomechanics. New York:
Wiley.
Cooper, R., Kuh, D., & Hardy, R. (2010). Objectively measured physical capability levels
and mortality: systematic review and meta-analysis. Bmj, 341.
Durand, M. J., Vézina, N., Baril, R., Loisel, P., Richard, M. C., & Ngomo, S. (2009). Margin
of manoeuvre indicators in the workplace during the rehabilitation process: a
qualitative analysis. Journal of occupational rehabilitation, 19(2), 194-202.
Everett, J. G., & Slocum, A. H. (1994). Automation and robotics opportunities: construction
versus manufacturing. Journal of construction engineering and management, 120(2),
443-452.
Halpin, D. W., & Woodhead, R. W. (1976). Design of construction and process operations.
John Wiley & Sons, Inc..
Keller, J. (2002). Human performance modeling for discrete-event simulation: workload. In
Simulation Conference, 2002. Proceedings of the Winter (Vol. 1, pp. 157-162). IEEE.
Kuorinka, I. (1988). Restitution of EMG spectrum after muscular fatigue. European journal
of applied physiology and occupational physiology, 57(3), 311-315.
Ma, L., Chablat, D., Bennis, F., & Zhang, W. (2009). A new simple dynamic muscle fatigue
model and its validation. International Journal of Industrial Ergonomics, 39(1), 211-
220.
Martinez, J. C. (2001, December). EZStrobe: general-purpose simulation system based on
activity cycle diagrams. In Proceedings of the 33nd conference on Winter simulation
(pp. 1556-1564). IEEE Computer Society.
Martinez, J. C., & Ioannou, P. G. (1999). General-purpose systems for effective construction
simulation. Journal of construction engineering and management, 125(4), 265-276.
McGill, S. M. (1997). The biomechanics of low back injury: implications on current practice
in industry and the clinic. Journal of biomechanics, 30(5), 465-475.
Perez, J., de Looze, M. P., Bosch, T., & Neumann, W. P. (2014). Discrete event simulation as
an ergonomic tool to predict workload exposures during systems design.
International Journal of Industrial Ergonomics, 44(2), 298-306.
Seo, J., Han, S., Lee, S., and Armstrong, T. J. (2013). "Motion-Data-Driven Unsafe Pose
Identification through Biomechanical Analysis." Proceedings of the 2013 ASCE
International Workshop on Computing in Civil Engineering, Los Angeles, CA.
Seo, J., Starbuck, R., Han, S., Lee, S., & Armstrong, T. J. (2014). Motion Data-Driven
Biomechanical Analysis during Construction Tasks on Sites. Journal of Computing in
Civil Engineering.
Shi, J., & AbouRizk, S. M. (1997). Resource-based modeling for construction simulation.
Journal of construction engineering and management, 123(1), 26-33.
Shin, H. J., & Kim, J. Y. (2007). Measurement of trunk muscle fatigue during dynamic lifting
and lowering as recovery time changes. International journal of industrial ergonomics,
37(6), 545-551.


Construction Productivity Impacts of Forecasted Global Warming Trends


Utilizing an Integrated Information Modeling Approach

Yongwei Shan1; Paul M. Goodrum2; and M. Phil Lewis3


1School of Civil and Environmental Engineering, Oklahoma State University, 207
Engineering South, Stillwater, OK 74078. E-mail: [email protected]
2Department of Civil, Environmental, and Architectural Engineering, University of
Colorado Boulder, 428 UCB, 1111 Engineering Drive, Boulder, CO 80309-0428.
E-mail: [email protected]
3School of Civil and Environmental Engineering, Oklahoma State University, 207
Engineering South, Stillwater, OK 74078. E-mail: [email protected]

Abstract

The objective of this study is to develop a proof-of-concept framework to


quantify the impact of the anticipated rise in temperature and change in humidity
resulting from global warming on construction productivity at the project level. The
developed framework applies building information modeling (BIM) integrated with
labor productivity information, critical path method (CPM) schedule, and a
temperature-and-humidity-versus-labor-productivity model to simulate the global
warming effect on a structural steel erection model project. To demonstrate the
application of the developed framework, one domestic and six international cities'
2090-2099 climate projection data, derived from existing global climate models with
different greenhouse gas (GHG) emissions scenarios, were used as input data for the
simulation to quantify the workhours required to build the model project. The results
for the selected locations show that the impact of global warming on construction
productivity varies with geographical location and that different climate models
produce varied results.

INTRODUCTION

The temperature increases of the last three decades account for two thirds of
the change observed over the last 100 years (Committee on America's Climate
Choices 2011). Global warming has been observed as a trend. Despite the dispute
over the theory of global warming, 70% of Americans have accepted it as a fact
(Leiserowitz et al. 2011). Global warming could cause climate changes, including
temperature increases, climate pattern shifts, and extreme weather events such as
droughts, heavy rainfall, and excessive heat waves (Lu et al. 2007).
If global warming continues to develop at the currently observed magnitude, more


noticeable climate change would appear sooner. This will eventually affect human
activity, including industry productivity.
The construction industry is susceptible to the environment, since almost 50%
of the construction activities are subject to weather influences (Benjamin and
Greenwald 1973). Hot working environments have always been a concern for the
construction industry. Psychologically, unpleasant working conditions arising from
hot environments can induce workers' apathy toward work, and physiologically,
construction workers can suffer from heat stress or stroke (Koehn and Brown 1985).
The immediate effect of global warming is a broad-scale increase in temperatures.
It is very likely that prolonged warmer seasons would appear. This implies that
construction workers will have more working days under hot environments, hence
decreased productivity. For contractors, productivity correlates with a project's
profitability, while for owners, productivity determines the final cost of a project.
Therefore, understanding the impact of global warming on construction labor
productivity is instrumental for the industry in preparing for the challenges it will
face. To accomplish this objective, this study aims to develop a
framework that is able to assess the impact of global warming at a project level.
SCOPE OF RESEARCH

The objective of this paper is to formulate a framework that can be used to


quantify the forecasted global warming impact on construction labor productivity at a
project level in different regions around the world. Forecasted temperature and
humidity are the two factors of interest that affect craft workers’ productivity. The
authors readily acknowledge that global warming could possibly shift other weather
patterns, such as wind and precipitation. However, this research limited its scope to
the temperature and humidity effects, and other climate impacts on schedule were not
considered, which also acted as a control for inter-regional comparison of the
temperature and humidity’s effect subject to the influence of global warming. This
investigation is a proof of concept study and limits its scope to structural steel
construction. The study is exclusively focused on steel trades for the following
reasons: 1) structural steel installation activities are outdoor activities subject to
temperature and humidity effects; 2) structural steel installation activities are usually
on the critical path of a project and affect the overall project performance; and 3)
structural steel installation activities are up-front activities that precede other trades'
work; therefore, they are less likely to be affected by other activities, which otherwise
would complicate the research by introducing "noise" into the study.

RESEARCH METHODS

Framework
The framework integrates BIM with unit rate productivity data, temperature
and humidity projection data, CPM schedules, and labor productivity models. Figure
1 describes the process and information flow of the developed framework.
Throughout this process, BIM shows its advantages in 1) rich information, 2)
versatility in interacting with a back-end database, 3) enabled automated-data
processing, and 4) 4D schedule analyses (not presented in paper due to page limits).


Figure 1. Process and Information Flow of the Framework

The first step in the framework is to assign the required workhour information
to individual steel members, including beam, column, bracing, and miscellaneous
steel in three weight classes (light, medium, and extra heavy). In this study, this
process was carried out in ConstructSimTM, one of Bentley's BIM software suites,
which cross-referenced the steel members' basic attributes (such as size, function,
length, and weight class) in the design model with a unit rate productivity table
through structured query language (SQL) relational database tables to compute the workhour
information for individual steel members. The unit rate labor productivity was
obtained from Richardson's Construction Estimating Standards (Richardson 2013).
The labor productivity herein is defined as work hours per unit of installed quantity;
therefore a lower number is better. According to Richardson’s description of
environments where the unit rate productivity was collected, the productivity rate
should be deemed as the optimal baseline productivity without negative effects of
temperature and humidity.
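As an illustration of this cross-referencing step, the following sketch computes the required workhours for individual members from a unit-rate lookup; the member attributes and unit rates shown are invented placeholders, not values from Richardson's standards or the case model.

```python
# Hypothetical unit rates (workhours per ton erected); not Richardson values
UNIT_RATE = {
    ("beam", "light"): 8.0,
    ("beam", "medium"): 6.5,
    ("column", "medium"): 7.0,
    ("bracing", "light"): 9.0,
}

def member_workhours(member):
    """Workhours for one steel member = installed quantity x unit rate."""
    rate = UNIT_RATE[(member["function"], member["weight_class"])]
    return member["weight_tons"] * rate

members = [
    {"id": "B-101", "function": "beam", "weight_class": "light", "weight_tons": 0.42},
    {"id": "C-201", "function": "column", "weight_class": "medium", "weight_tons": 0.95},
]
total = sum(member_workhours(m) for m in members)
print(f"Total required workhours: {total:.1f}")
```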
Second, a critical path method baseline schedule is developed. A CPM
schedule can be developed based on the logic sequence of activities and available
labor resources. The task units of the schedule for this study are work packages.
Since each work package has the information of total workhours and allocated labor
resources, the duration for each work package can be calculated. By referring to the
sequence of the activities, a baseline schedule can be created. The scheduling process
is an integrated BIM-based process.
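The duration and sequencing logic can be illustrated with a simple forward-pass calculation; the work packages, crew sizes, and precedence links below are hypothetical and only sketch how a baseline CPM schedule could be derived from workhours and labor resources.

```python
# Hypothetical work packages: name -> (workhours, crew size, predecessors)
PACKAGES = {
    "erect_columns_L1": (320.0, 4, []),
    "erect_beams_L1":   (480.0, 4, ["erect_columns_L1"]),
    "erect_columns_L2": (320.0, 4, ["erect_beams_L1"]),
}
HOURS_PER_DAY = 8

def baseline_schedule(packages):
    """Forward pass: duration = workhours / (crew x hours per day)."""
    early = {}                       # name -> (early start day, early finish day)
    remaining = dict(packages)
    while remaining:
        for name, (workhours, crew, preds) in list(remaining.items()):
            if all(p in early for p in preds):
                start = max((early[p][1] for p in preds), default=0.0)
                duration = workhours / (crew * HOURS_PER_DAY)
                early[name] = (start, start + duration)
                del remaining[name]
    return early

for name, (start, finish) in baseline_schedule(PACKAGES).items():
    print(f"{name}: day {start:.1f} -> day {finish:.1f}")
```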
Third, the temperature and humidity’s impact on productivity throughout the
course of construction is assessed through a simulation process. The detailed
description of the simulation process is documented in the author’s previous work
(Shan and Goodrum 2014). The simulation utilizes a model developed by Koehn and
Brown (1985) that describes the relationship between labor productivity and
temperature and humidity. The relationship between the productivity factor (see table
footnote for definition) and temperature and humidity is tabulated in Table 1. The
simulation interface developed in this study takes the computed productivity factor
and labor resources and adjusts the baseline schedule to reflect the productivity
impact due to temperature and humidity. Each simulation generates durations and
workhours required to build the project.


Table 1. Construction Productivity Factors for Temperature and Relative Humidity
Relative Humidity (%)
Temp.(ºF) 5 15 25 35 45 55 65 75 85 95
-20 0.28 0.27 0.25 0.22 0.18 0.13 0.05 - - -
-10 0.44 0.43 0.42 0.40 0.38 0.34 0.29 0.21 0.10 -
0 0.59 0.58 0.57 0.56 0.54 0.52 0.49 0.44 0.36 0.23
10 0.71 0.71 0.70 0.70 0.69 0.67 0.65 0.62 0.58 0.50
20 0.81 0.81 0.81 0.81 0.81 0.80 0.79 0.77 0.75 0.71
30 0.90 0.90 0.90 0.90 0.90 0.89 0.89 0.89 0.88 0.87
40 0.96 0.96 0.96 0.96 0.96 0.96 0.96 0.96 0.96 0.96
50 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
60 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
70 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
80 1.00 1.00 1.00 1.00 1.00 0.99 0.98 0.96 0.95 0.93
90 0.95 0.95 0.94 0.93 0.92 0.90 0.88 0.85 0.82 0.78
100 0.81 0.81 0.80 0.79 0.77 0.74 0.71 0.67 0.61 0.54
110 0.58 0.58 0.58 0.57 0.55 0.51 0.47 0.41 0.32 0.21
120 - 0.28 0.28 0.28 0.25 0.21 0.15 0.07 - -
Note: The productivity factor is defined as the ratio of baseline productivity to productivity subject to
temperature and humidity effects.
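As an illustration of how the simulation interface could look up a productivity factor for a given temperature and relative humidity, the following sketch bilinearly interpolates a subset of Table 1; only part of the table is reproduced, and the clipping of out-of-range conditions is an assumption.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Subset of Table 1 (Koehn and Brown 1985); rows = temperature (F),
# columns = relative humidity (%). Only part of the table is reproduced here.
TEMPS = np.array([40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0])
HUMID = np.array([5.0, 25.0, 45.0, 65.0, 85.0])
FACTOR = np.array([
    [0.96, 0.96, 0.96, 0.96, 0.96],
    [1.00, 1.00, 1.00, 1.00, 1.00],
    [1.00, 1.00, 1.00, 1.00, 1.00],
    [1.00, 1.00, 1.00, 1.00, 1.00],
    [1.00, 1.00, 1.00, 0.98, 0.95],
    [0.95, 0.94, 0.92, 0.88, 0.82],
    [0.81, 0.80, 0.77, 0.71, 0.61],
])
_lookup = RegularGridInterpolator((TEMPS, HUMID), FACTOR)

def productivity_factor(temp_f, rh_percent):
    """Bilinearly interpolated productivity factor for a (temperature, RH) pair."""
    t = np.clip(temp_f, TEMPS[0], TEMPS[-1])        # clip to the tabulated range
    h = np.clip(rh_percent, HUMID[0], HUMID[-1])
    return float(_lookup([[t, h]])[0])

# Example: adjusted workhours for a task in a hot, humid month
baseline_hours = 1200.0
factor = productivity_factor(95.0, 75.0)
print(f"factor = {factor:.2f}, adjusted hours = {baseline_hours / factor:.0f}")
```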

APPLICATION OF THE FRAMEWORK


For the purpose of demonstrating the application of the developed framework,
a model project was adopted and situated in seven locations around the world
assuming various global warming scenarios.

Model Project
This study used a structural steel model of the first phase of the University of
Kentucky’s Albert B. Chandler Hospital Pavilion project as the prototypical model to
develop the described framework. The project's construction started in 2009 and was
completed in spring 2011, with a total area of 1.2 million square feet and a total cost
of $532 million. The building has a five-story podium plus one story of basement
constructed with reinforced concrete and two eight-story towers built with structural
steel. The baseline workhours for the structural steel erection are 54,338 workhours.

Collection of the Climate Data


In order to quantify the global warming impact on the model project, current
and projected climate data on temperature and humidity needed to be acquired.
Emissions Scenarios. Greenhouse gas (GHG) emissions have been
recognized as a major cause of global warming. Nevertheless, the pattern of the future
world’s development is unknown. To address the variability in GHG emissions, the
Intergovernmental Panel on Climate Change (IPCC) established a series of standard
emissions scenarios described in the Special Report on Emissions Scenarios (SRES)
that illustrates the possible development of the future world. In this paper, three
prevalent SRES scenarios (SRES A1B, A2, and B1) were included. A1B describes a
future world that operates in a more integrated and homogenous fashion, with rapid
economic growth and technology adoption and low population growth. A2 describes
the scenario of a more fragmented world, with rapid population growth, slower
technological change, and slower economic growth. The B1 scenario describes a world


that operates in a sustainable way but without additional climate initiatives, with lower
population growth and rapid change in economic structure toward services and information technology. The
detailed descriptions and corresponding parameters can be found in IPCC's Fourth
Assessment Report (AR4) (Pachauri and Reisinger 2007).
Climate Model Selection. To control the scope of this research, two out of the
26 climate models archived by the Coupled Model Intercomparison Project Phase 3
(CMIP3) from various climate research centers around the world were picked to
obtain the climate projection data. This study used National Oceanic and Atmospheric
Administration's (NOAA's) Climate Model CM2.1 and the Hadley Centre for Climate
Prediction and Research's Coupled Model Version 3 (HADCM3). Both models can be considered
as robust models and are widely cited by a great number of research studies in
different areas (Abolhasani et al. 2008; Kjellstrom et al. 2009; Lewis et al. 2009). In
addition, the use of two models instead of relying on one model allows the
researchers to perform comparisons and cross-validation of the results.
Selected Locations for Study. Since the developed framework is suitable for
project-level simulation, project-location-specific climate data are needed. The
research effort selected a list of major cities around the world. This selection gave
special consideration to the countries that are most vulnerable to global warming,
including countries from Africa and Asia (EPA 2012), and BRIC (Brazil, Russia,
India, and China) countries that will experience the largest economic growth in the
21st century (Global Sherpa 2013). Finally, we selected Khartoum, Sudan; Delhi,
India; Melbourne, Australia; Brasilia, Brazil; Chongqing, China; Moscow, Russia;
and we also included a domestic city, Houston, Texas.
Climate Data Collection. All the projection data for scenarios A1B, A2, and
B1 between the years 2090 and 2099 used for this study were obtained from the World
Climate Research Program (WCRP 2014) CMIP3's data portal. Historical
temperature and humidity data for the selected cities from 2001 to 2010 were
collected from Weather Underground (WU 2014) and were used as current climate
scenarios. Monthly average temperature and humidity data were collected for both
current and future scenarios.
Simulation of Temperature and Humidity Impact on Schedule
The simulation interface described earlier was used to simulate the
temperature and humidity effect on labor productivity and automatically adjust the
CPM schedule. By inputting the temperature and humidity data and manpower
information and setting project start dates for simulation, the interface can generate
start and finish dates for each task as well as the duration, and the man-hours required
for each task and the entire model project. For this study, we set the simulation
starting dates on the first day of each quarter of a year, i.e. January 1st, April 1st, July
1st, and October 1st, for each project location. This allows the model project to have
an equal probability of exposure to different temperature and humidity scenarios due
to seasonal changes.
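A simplified driver for this simulation logic is sketched below; the monthly productivity factors and crew capacity are hypothetical placeholders, and the month-by-month bookkeeping is only one possible reading of how the interface adjusts the schedule.

```python
# Hypothetical monthly productivity factors for one city (January..December),
# e.g., obtained by interpolating Table 1 for that city's monthly climate.
MONTHLY_FACTOR = [1.00, 1.00, 1.00, 0.99, 0.96, 0.90,
                  0.84, 0.82, 0.92, 0.98, 1.00, 1.00]
BASELINE_WORKHOURS = 54338.0      # model project baseline workhours (from the paper)
CREW_HOURS_PER_MONTH = 4500.0     # assumed monthly crew capacity (placeholder)

def required_workhours(start_month):
    """Walk the schedule month by month from a given start month (0 = January).

    Each month the crew installs its capacity scaled by that month's
    productivity factor; the run ends when the baseline workhour content has
    been installed. Returns the workhours actually expended (>= baseline).
    """
    remaining = BASELINE_WORKHOURS
    expended = 0.0
    month = start_month
    while remaining > 0:
        factor = MONTHLY_FACTOR[month % 12]
        installed = min(remaining, CREW_HOURS_PER_MONTH * factor)
        expended += installed / factor        # hours charged to install that content
        remaining -= installed
        month += 1
    return expended

# Quarterly start dates, as in the study: January, April, July, October
runs = [required_workhours(m) for m in (0, 3, 6, 9)]
print(f"Mean required workhours: {sum(runs) / len(runs):,.0f}")
```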
RESULTS

Table 2 illustrates the average workhours required to build the model project
under future and current climate scenarios across the seven cities using two climate
models and shows the mean difference in workhours and statistical significance.


Table 2. The Comparison of Mean Required Workhours between 2090s' and Current Climate Conditions

                          Work Hours (Mean)                        A1B - Current       A2 - Current        B1 - Current
Model        Location     A1B      A2       B1       Current      Diff.    P-value    Diff.    P-value    Diff.    P-value
HADCM3       Khartoum     59,638   61,246   58,007   57,852       1,786    0.003      3,394    0.001      155      0.695a
             Delhi        61,167   61,460   60,385   57,600       3,567    0.001      3,861    0.001      2,786    0.005
             Houston      60,349   60,456   59,052   56,196       4,153    0.001      4,259    0.001      2,855    0.001
             Melbourne    54,338   54,338   54,338   54,504       (166)    0.001      (166)    0.001      (166)    0.001
             Brasilia     57,043   56,942   56,359   54,634       2,409    0.001      2,308    0.001      1,725    0.001
             Chongqing    55,275   55,221   55,231   55,615       (340)    0.025      (394)    0.014      (384)    0.001
             Moscow       60,885   62,595   63,692   60,972       (86)     0.940a     1,623    0.272a     2,720    0.064a
NOAA CM2.1   Khartoum     60,638   62,324   58,709   57,852       2,786    0.001      4,473    0.001      858      0.063
             Delhi        60,960   61,908   58,968   57,600       3,361    0.001      4,309    0.001      1,369    0.030
             Houston      56,703   57,077   56,122   56,196       507      0.103a     881      0.010      (75)     0.779a
             Melbourne    54,338   54,341   54,338   54,504       (166)    0.001      (163)    0.001      (166)    0.001
             Brasilia     55,697   56,129   55,189   54,634       1,063    0.001      1,495    0.001      555      0.001
             Chongqing    55,342   55,480   55,456   55,615       (273)    0.065a     (134)    0.309a     (158)    0.297a
             Moscow       58,437   58,296   60,807   60,972       (2,535)  0.012      (2,676)  0.014      (165)    0.889a
a. Result is not statistically significant at the 95% confidence level

Figure 2 better visualizes the mean difference in required workhours between


future (2090s) and current scenarios considering both climate models, with the results
shown as percentages. By and large, both climate models produced results with a
similar trend. Khartoum, Delhi, Houston, and Brasilia might experience growth in
workhours required to build the model project.

Figure 2. Mean Workhour Difference in Percentage between Future and Current
Conditions: (a) Model HADCM3; (b) Model NOAA CM2.1

Melbourne and Chongqing
might not experience much difference. Climate model differences also accounted for
the discrepancy in simulation results. The largest discrepancy was observed for
Houston. With the projection data of Model HADCM3, Houston would require
5.1 to 7.4% more man-hours to build the model project by the 2090s compared to
current conditions. In contrast, using projections from Model NOAA
CM2.1, Houston could experience a change of -0.1 to 1.57%, depending on the
emissions scenario. The opposite trend was observed for Moscow as a result of using


two different climate models. With the HADCM3 model, on average the workhours
required to build the model project decreased by 0.1% under the A1B scenario, and
increased by 2.7% under A2 and 4.5% under B1. However, those
differences are not statistically significant. Regarding the Model NOAA CM2.1,
Moscow could experience productivity gain, ranging from 0.3 to 4.4%; however, the
result with emission scenario B1 is not statistically significant at the 95% confidence
level.
CONCLUSIONS AND RECOMMENDATIONS

This paper contributes to the overall body of knowledge by providing a


framework that integrates BIM with a CPM schedule to simulate the impact of
forecasted global warming trends on construction productivity. Through the
demonstration of the use of the framework on a model project situated in seven cities
around the world in the 2090s, this paper generated knowledge about the forecasted
global warming impact on construction productivity at various global locations.
These findings can be summarized as follows:
1) Different climate models produce different estimates of the global warming
impact on construction productivity;
2) Khartoum and Delhi are anticipated to experience productivity losses ranging
from 3.1 to 6.2% under emissions scenario A1B, 5.9 to 7.7% under scenario A2, and
0.3 to 4.8% under scenario B1;
3) Melbourne and Chongqing will be the least impacted, with productivity gains of
less than 0.7%; and
4) Moscow may experience productivity gains, depending on the climate model.
The findings obtained from this research have several implications for the
scientific research community, the construction industry, and society as a whole. First,
the authors recommend that when climate projection data are used, data derived from
multiple climate models should be cross-referenced to avoid one-sided conclusions.
Second, this research revealed the negative impact of global warming on
construction productivity in most parts of the world. For regions where
construction productivity is expected to decrease, construction costs will go up if
there are no major innovations in construction processes. Third, the whole world
should jointly take initiatives to reduce GHG emissions through innovations and
technologies, and regulators should institute proper policies to guide society as a
whole to conduct itself in a sustainable way.
This study has limitations, as it is a proof-of-concept effort.
The results of the analyses are only as good as the existing knowledge since this study
is built upon a prior work’s productivity factor model. Moreover, this research effort
excluded precipitation; potential impacts of climate change may shift the current
weather pattern, which could result in more rainfall in certain areas of the world, thus
affecting project schedules. This research also did not consider technological change
or human adaptation to climate change. Future research should include
precipitation and wind effects as well as other trades in the study. In addition, future
research should examine the economic impact of climate change at the regional
construction industry level by encompassing representative types of construction
from other sectors, such as industrial, infrastructure, and residential.


REFERENCES
Abolhasani, S., Frey, H. C., Kim, K., Rasdorf, W., Lewis, P., and Pang, S.-h. (2008).
"Real-world in-use activity, fuel use, and emissions for nonroad construction
vehicles: a case study for excavators." Journal of the Air & Waste
Management Association, 58(8), 1033-1046.
Benjamin, N. B. H., and Greenwald, T. W. (1973). "Simulating Effects of Weather on
Construction." Journal of the Construction Division, 99(1), 175-190.
Committee on America's Climate Choices (2011). America's Climate Choices,
National Academy Press.
Environmental Protection Agency (2012). " International Impacts & Adaptation:
Climate Change." <http://www.epa.gov/climatechange/impacts-
adaptation/international.html>. (Mar. 22, 2013).
Global Sherpa (2013). "BRIC Countries – Background, Latest News, Statistics and
Original Articles." <http://www.globalsherpa.org/bric-countries-brics>. (April
8th, 2013).
Kjellstrom, T., Kovats, R. S., Lloyd, S. J., Holt, T., and Tol, R. S. (2009). "The direct
impact of climate change on regional labor productivity." Archives of
Environmental & Occupational Health, 64(4), 217-227.
Koehn, E., and Brown, G. (1985). "Climatic Effects on Construction." Journal of
Construction Engineering and Management, 111(2), 129-137.
Leiserowitz, A., Maibach, E., Roser-Renouf, C., and Smith, N. (2011). "Climate
change in the American mind: Americans’ global warming beliefs and
attitudes in May 2011." Yale University and George Mason University. New
Haven, CT: Yale Project on Climate Change Communication. Accessed
December, 9, 2012.
Lewis, P., Rasdorf, W., Frey, H. C., Pang, S.-H., and Kim, K. (2009). "Requirements
and incentives for reducing construction vehicle emissions and comparison of
nonroad diesel engine emissions data sources." Journal of Construction
Engineering and management, 135(5), 341-351.
Lu, J., Vecchi, G. A., and Reichler, T. (2007). "Expansion of the Hadley cell under
global warming." Geophysical Research Letters, 34(6), L06805.
National Electrical Contractors Association (1974). "The Effect of Temperature on
Productivity." National Electrical Contractors Association, Inc., Washington,
D.C., 1974.
Pachauri, R., and Reisinger, A. (2007). "IPCC fourth assessment report." IPCC
Fourth Assessment Report.
Richardson (2013). "Richardson Construction Estimating Standards."
<http://www.costdataonline.com/Richardson.htm>. (Mar. 2nd, 2013).
Shan, Y., and Goodrum, P. (2014). "Integration of Building Information Modeling
and Critical Path Method Schedules to Simulate the Impact of Temperature
and Humidity at the Project Level." Buildings, 4(3), 295-319.
World Climate Research Program (2014). "WCRP CMIP Multi-Model Data",
<https://esgcet.llnl.gov:8443/index.jsp>. (Dec. 8, 2014).
Weather Underground (2014). "About Our Data."
<http://www.wunderground.com/about/data.asp>. (Dec. 6, 2014).


Towards Automated Constructability Checking: A Case Study of Aligning


Design Information with Formwork Decisions

Li Jiang1; Robert M. Leicht, Ph.D.2; and John I. Messner, Ph.D.3


1, 2, 3
Department of Architectural Engineering, Pennsylvania State University,
104 Engineering Unit A, University Park, PA 16802.

Abstract

This research aims to define an approach to identify construction-related


design information at early design stages. This information can be extracted from a
building information model and used to support an automated, rule-based
constructability checking in a proactive manner. Focusing on formwork construction
of cast-in-place concrete building structures, this paper uses a case study and applies
the object-oriented paradigm inherent in a BIM model as the opportunity and
mechanism for leveraging constructability reasoning that is captured from the
structural design process and construction expert’s knowledge. The hierarchical
model of product information for the case study building is presented and then
aligned with the formwork decision-making for product realization. The accessibility
of the construction-related design information, in terms of level of definition as
well as representation within the design model, is discussed, indicating the need
for system information progression from the early design stages to support the
BIM-enabled, automated review.

INTRODUCTION

To meet client objectives and requirements, designers often aggregate


information from other parties, such as contractors, to provide feasible product
solutions between what is desired and what is realized (Lawson, 2006). However,
problems such as change orders and rework occur and often impair project schedule
and cost performance. One of the main causes was found to be the lack of
construction knowledge in early design analysis (Fischer and Tatum, 1997).
Introducing construction information after the associated design decisions have been
made was considered to be of little value and can even obstruct constructability
implementation (O'Connor and Miller, 1994). Therefore, incorporating construction
information into early design thinking is important for product realization.
In addition to consistent efforts from project stakeholders, technological
development greatly helps the integration of design and construction. Building
Information Modeling (BIM) has been introduced to facilitate information sharing
among project stakeholders. Regardless, there are opportunities to better leverage the


information embedded in building models at early design stages to facilitate


constructability implementation (Jiang et al., 2013). A framework of an automated,
rule-based constructability review was established to automatically check building
information models for potential constructability issues throughout the design process,
with the corresponding level of detail of BIM content, enabling more consistent and
timely constructability feedback (Jiang and Leicht, 2014).
To achieve the rule-based approach, the goal of this paper is to define the
construction-related design information and to examine its accessibility as represented
during the design process within a building information model. Focusing on a cast-in-
place concrete building structure, a hierarchical model of product design information
of a case project is presented and then aligned with the formwork constraints,
indicating the integrated design constraints for an automated review. The accessibility
of the construction-related design information within the design model will then be
discussed, along with the benefits and challenges of the automated review to
integrated design and construction.

BACKGROUND

Structural engineers require the knowledge of construction means and


methods to help identify the optimum design, construction concepts, details, and to
define the form of the engineering system that maximizes the quality and value to the
owner (Tatum and Luth, 2012). Driven by available information, design is considered
an iterative process of analysis, synthesis, appraisal, and decision-making activities to
generate product solutions at increasingly detailed levels (Markus, 1969; Maver,
1970). In the structural design process, during schematic design (SD) for example,
designers often explore structural alternatives and obtain information from other disciplines
to better understand the design problem, refine the system configurations, and
generate more constructible solutions (Solnosky, 2013). In return, the progression of
product design information allows the contractor to narrow the construction methods
and means as well as develop the construction plan that can best meet schedule and
budget requirements (Figure 1). In practice, however, the fragmentation of the
Architecture, Engineering, and Construction (AEC) industry impairs the links
between design and construction, causing challenges for incorporating required
construction information in early design thinking (Evbuomwan and Anumba, 1998).
Design errors, rework, and change orders occur correspondingly, and can impose
disruptive impacts on project performance (Friedrich et al., 1987).
To better integrate design and construction, the concept of constructability
was raised to emphasize the interdependencies between design and construction (CII
1986). Previous research applied the approach of process modeling to map design
and/or construction processes with high level information exchanges for precast
concrete (Messner and Sanvido, 1992), structural steel (Crowley and Watson, 2003),
technical systems (Liu et al., 2012), and building structures in general (Solnosky,
2013), indicating the requirement of construction information for specific design
tasks, such as the determination of system optimization requirements. Sanvido et al.
(1995) created a process-based information architecture that identified and classified
information required to manage, plan, design, construct, and operate a facility.


Regardless, little detail regarding the interdependencies between design and


construction information was revealed. Focusing on cast-in-place concrete structures,
Fischer (1991) studied and represented the design-related constructability knowledge
with product information, and attempted to provide automated constructability
reasoning at the SD phase. However, the interdependencies between design and
construction information as the design progresses were not clearly presented, and the
technological capabilities at that time limited the implementation of automated
constructability reasoning in a real project design.

Figure 1: Interdependencies between design and construction

The technological challenge has been overcome as BIM has been adopted in
the AEC industry. This technology, as one of the integrated design methods,
relies on a data-rich, object-oriented modeling technique and allows building
product information to be easily shared among project stakeholders, supporting more
reliable decisions and enhancing the project value delivered (Smith, 2007). It also opened
the opportunity of an automated, rule-based constructability checking of a design
model to push construction information towards early design thinking (Jiang and
Leicht, 2014). A preliminary experimental review of an educational facility design
showed that the structural design model can be automatically evaluated against significant
construction constraints, such as those governing construction method and formwork system selection.
Building on previous efforts, the current work examines the interdependencies
between product design information and construction information by aligning
construction information with associated design attributes to support the automated
constructability checking.

METHODOLOGY

Focusing on cast-in-place (CIP) concrete building structures, a case study was


used to identify the hidden links between design and construction information. Unlike


other structural materials, CIP concrete is often delivered to the construction site in an
unfinished state and subsequently formed into the desired shape. Thus, concrete
designers and contractors have ultimate control in determining how a structure is
formed and built. Considering that formwork cost accounts for 40% to 60% of the
structural frame cost (Hanna, 1998), the constructability of a CIP concrete building
design is particularly important.
In the case study, an on-going CIP concrete building project was selected as
the case project, enabling easier access to project resources. Both the structural
engineer of record and the project manager for concrete construction were
interviewed to capture the structural design process with its information progression and
the process of narrowing down the construction options based on design
configurations and construction constraints. The hierarchical model of product
information for the case project is then presented and aligned with the formwork
options for product realization, indicating opportunities of using associated modeling
information to perform an automated constructability review.

CASE STUDY

Case Study Project. The case project is a new educational facility at Coppin
State University in Baltimore, MD. It is a four-story building plus a basement and a
penthouse, with an approximate total area of 135,000 ft2. Typical spread footings and
foundation walls are structurally designed to support the four-story CIP concrete
superstructure. The penthouse applies braced structural steel frame with metal
decking instead. The contractor was awarded a GMP (a.k.a. guaranteed maximum
price) contract to construct the building, indicating the importance of constructability
and construction planning in this project. BIM was leveraged for both structural
engineering and construction management.

Production-related Product Information Progression Model. Through


interviewing both the design and construction personnel of the case project, the impacts
of design information progression on construction options were captured. Taking the CIP
floor system design as an example, Figure 2 illustrates the narrowing of horizontal
formwork (or slab formwork) options depending on the available information from
schematic design (SD) to construction documents (CD).
Using the classification of structural systems and elements (Solnosky, 2013),
the upper part of the diagram shows the development of design across different
phases. At SD phase, the structural engineer primarily focused on the selection of
structural systems, the floor system in particular. Information from other project
participants, such as the architect's column grid and the contractor's preferences
regarding schedule and budget, was incorporated into early
design analysis and synthesis activities. Starting with a couple of floors and
preliminary component sizing, design alternatives in building material or floor system
type were investigated, reviewed, and appraised by project stakeholders. Figure 2
uses dashed boxes and dotted lines to illustrate the iterative process of preliminary
system design, and also indicates that "Two-way Slab with Drop Panels" was selected as
the more acceptable solution for this project at the end of SD. Then, the design


information for structural components and elements was developed in more detail
as the design moved to later stages (Figure 2).


Figure 2: Narrowing down construction options by aligning design information with formwork decision-making


The lower part of the diagram shows the formwork options to build the project
(Figure 2). Compared with the expanding pattern of design information, the
construction options are reduced based on available design information. For example,
as the type of floor system is determined, the initial ten available horizontal formwork
options would be narrowed to four feasible options; formwork systems that are
used for other system types, such as one-way joist slabs and waffle slabs, are ruled out
at this point (Figure 2). As the system is designed with more detailed component
information, such as slab depth and column spacing at DD phase, contractors can
calculate and plan the formwork layout, labor and equipment use, and construction
cost accordingly (Hurd, 2005), in order to determine the more cost-efficient approach.
In this case, a conventional aluminum formwork system, which consists of plywood
panels, aluminum beams, and a scaffolding-type shoring system, was selected for
the case project (Figure 2). In this way, the construction-related design information is
identified and aligned with the decision-making of construction means and methods.
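As an illustration of how such design-driven narrowing could be encoded for an automated, rule-based review, the following sketch filters formwork options against design attributes; the option names, attributes, and thresholds are illustrative assumptions, not the rule base used in this research.

```python
# Hypothetical rule set: each formwork option lists the design conditions
# under which it remains feasible (illustrative thresholds, not firm data).
FORMWORK_RULES = {
    "conventional aluminum":   lambda d: d["system"] == "two-way slab with drop panels",
    "conventional steel":      lambda d: d["system"] == "two-way slab with drop panels",
    "wood forming":            lambda d: d["system"] == "two-way slab with drop panels",
    "advanced aluminum panel": lambda d: (d["system"] == "two-way slab with drop panels"
                                          and d.get("slab_depth_in", 0) <= 10),
    "pan/dome forming":        lambda d: d["system"] in ("one-way joist slab", "waffle slab"),
    "tunnel forming":          lambda d: d["system"] == "tunnel",
}

def feasible_formwork(design):
    """Return the formwork options whose rules are satisfied by the design attributes."""
    return [name for name, rule in FORMWORK_RULES.items() if rule(design)]

design_dd = {"system": "two-way slab with drop panels",
             "slab_depth_in": 9, "typical_bay_ft": 28}
print(feasible_formwork(design_dd))
```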

DISCUSSION

Model Information Access. With the construction-related design information


identified and aligned with the narrowing of construction options, Figure 2 indicates
the opportunity of using the design information embedded in a building information
model to support an automated constructability review throughout the design process.
Applied in this case, as Figure 2 illustrates, feedback regarding feasible formwork
options would be generated as the design progresses and could be reviewed for
construction planning as early as the SD phase.
However, in practice, the associated model information may not be accessible
at the expected timing, resulting in “lagging” decisions. First, the limited value of
design modeling at early design stages can impact the information accessibility. Due
to technological capabilities and interoperability issues, designers often apply different
software packages to perform structural analysis, develop the design model, and produce
documents. In this case, the structural model for the construction team was not developed
in Autodesk Revit until the design development stage, when the system type was
determined; other programs, such as RAM Structural System and RAM Concrete, were
used instead for early structural analysis. Second, design information such as the
system type may not be directly accessible until component information is developed.
In this case, the decision to use a "Two-Way Slab with Drop Panels" was made after early
design analysis; however, reasoning over model attributes, including the slab and drop
panel layers and the slab dimensions, will be needed to perform an automated
review of a design model. Lastly, instead of developing a building design, some firms
merely use BIM as a tool to produce design documents, which can also cause the late
generation of required design information for an automated review.
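A minimal sketch of such attribute-level reasoning is shown below; the element categories and inference logic are illustrative assumptions rather than the actual checking rules or a specific BIM tool's API.

```python
def infer_floor_system(elements):
    """Infer a floor system type from element categories in a design model.

    `elements` is a list of dicts with a 'category' key (e.g., exported from a
    BIM authoring tool); the categories and logic here are illustrative only.
    """
    categories = {e["category"] for e in elements}
    if "drop panel" in categories and "beam" not in categories:
        return "two-way slab with drop panels"
    if "joist" in categories:
        return "one-way joist slab"
    if "beam" in categories:
        return "beam-and-slab"
    return "flat plate"

model_elements = [{"category": "slab"}, {"category": "drop panel"},
                  {"category": "column"}]
print(infer_floor_system(model_elements))   # -> two-way slab with drop panels
```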

Towards Integrated Design and Delivery. One of the benefits of the


automated review is to push the construction considerations into early design thinking
and to help designers to generate more constructible solutions (Jiang and Leicht,
2014). Similarly to the impacts of design information on construction options, the
decisions on construction options can generate constraints and feedback to help


designers refine the system configurations and improve the constructability of the
design. The mapping of the interdependencies between design and construction
information (Figure 2) helps in understanding the integrated design constraints, leading
to thinking about not only "what" to design but also "how" to realize it throughout the
design and construction processes, and facilitating decision-making on a "best for
project" basis ("Integrated Project Delivery", 2007).

CONCLUSION

With a case study, this paper presents a hierarchical model of product


information and aligns the design information progression with the narrowing of
horizontal formwork options. The mapping of the interdependencies between design
and construction information leads to an understanding of the integrated design
constraints and provides support to an automated constructability review. The
challenges in the accessibility of the construction-related design information within
the design model are discussed, indicating the need for the progression of system
information to support the automated review. In particular, the earlier creation of rough
geometry would lead to discussion of design assumptions that could facilitate
proactive, rather than reactive, decision-making with regard to the production systems.

REFERENCES

CII, University of Texas at Austin. Bureau of Engineering. 1986. Constructability: A


Primer. Construction Industry Institute.
Crowley, A. J., and A. S. Watson. 2000. CIMsteel Integration Standards, Release 2:
Overview. Steel Construction Institute.
Evbuomwan, N.F.O, and C.J Anumba. 1998. “An Integrated Framework for
Concurrent Life-Cycle Design and Construction.” Advances in Engineering
Software 29 (7–9): 587–97.
Fischer, Martin. 1991. “Constructibility Input to Preliminary Design of Reinforced
Concrete Structures.”
Fischer, Martin, and C. B Tatum. 1997. “Characteristics of Design-Relevant
Constructability Knowledge.” Journal of Construction Engineering and
Management 123 (3): 253–60.
Friedrich, D. R., J. P. Daly, and W. G. Dick. 1987. “Revisions, Repairs, and Rework
on Large Projects.” Journal of Construction Engineering and Management
113 (3): 488.
Hanna, A. S. (1998). Concrete formwork systems. CRC Press.
Hurd, M. K. (2005). Formwork for concrete. American Concrete Institute.
Jiang, L., and R. Leicht. (2014). “Automated Rule-Based Constructability Checking:
Case Study of Formwork.” Journal of Management in Engineering
Jiang, Li, Ryan L. Solnosky, and Robert. M. Leicht. 2013. “Virtual Prototyping for
Constructability Review.” In Proceedings of the 4th Construction Specialty
Conference of the CSCE - 2013. Montreal, Canada.
Lawson, Bryan. 2006. How Designers Think: The Design Process Demystified.
Routledge.


Liu, Y., R. Leicht, and J. Messner. 2014. “Identify Information Exchanges by


Mapping and Analyzing the Integrated Heating, Ventilating, and Air
Conditioning (HVAC) Design Process.” In Computing in Civil Engineering
(2012), 618–25. American Society of Civil Engineers. Accessed December 7.
Markus, Thomas A. 1969. “The Role of Building Performance Measurement and
Appraisal in Design Method.” Design Methods in Architecture, 109–17.
Maver, Thomas W. 1970. “Appraisal in the Building Design Process.” Emerging
Methods in Environmental Design and Planning, 195–202.
Messner, J.I. and Sanvido, V.E. (1992). "A survey of Precast Concrete Systems used
By Fujita" Technical Report No. 27, February, Computer Integrated
Construction Research Program, Penn State University, University Park, PA
O’Connor, James T, and Steven J Miller. 1994. “Barriers to Constructability
Implementation.” Journal of Performance of Constructed Facilities 8 (2):
110–28.
Sanvido, V. E., G. Anzola, S. Bennett, D. Cummings, E. Hanlon, K. Kuntz, T. Lynch
et al. (1995) “A process based information architecture.” Technical Report No.
36, December, Computer Integrated Construction Research Program, Penn
State University, University Park, PA.
Smith, Deck. 2007. “An Introduction to Building Information Modeling.” Journal of
Building Information Modeling, October, 12–14.
Solnosky, Ryan L. 2013. “Integrated Structural Process Model: An Inclusive Non-
Material Specific Approach to Determining the Required Tasks and
Information Exchanges for Structural Building Information Modeling”. The
Pennsylvania State University.
Tatum, Clyde B., and Gregory P. Luth. 2012. “Integrating Structural and
Construction Engineering.” In Construction Research Congress 2012, 1301–
10.


A Hierarchical Computer Vision Approach to Infrastructure


Inspection
Ali Khaloo1 and David Lattanzi2

1Graduate Research Assistant, Department of Civil, Environmental, and Infrastructure
Engineering, George Mason University, Fairfax, VA 22030. E-mail:
[email protected]
2Assistant Professor, Department of Civil, Environmental, and Infrastructure
Engineering, George Mason University, Fairfax, VA 22030. E-mail:
[email protected]

Abstract

Currently, most infrastructure inspection standards require inspectors to


visually assess structural integrity and log findings for comparison during future
inspections. This process is qualitative and often inconsistent. Furthermore, changes
in inspection protocols over time can create discontinuities in understanding the time-
history of a structure. This paper presents a systematic technique for capturing and
representing inspection data, leveraging a newly developed hierarchical computer
vision methodology. This technique can be used to improve the level of accuracy in
condition assessment procedures by allowing inspectors to recreate the structural
inspection scenario on a computer through a high-fidelity virtual environment. The
methodology presented herein utilizes adaptations of the Dense Structure from
Motion (DSfM) algorithm, which reconstructs three-dimensional (3D) scenes from
two-dimensional (2D) digital images. In order to produce highly-accurate and
photorealistic 3D reconstructions there are four core stages: (i) an image capture plan
that covers all views of the structure and emphasizes critical details, (ii)
reconstruction of an initial dense point cloud to generate a geometrically accurate 3D
model, (iii) reconstruction of separate, higher density point clouds of critical details or
suspected deficiencies, (iv) application of robust computer vision algorithms to
hierarchically register and merge the point clouds. The result of this approach is a
virtual 3D model of the structure with accurate geometry and high-fidelity
representation of fine details. The accuracy and adaptability of the developed
technique were compared to both conventional DSfM reconstruction methods and
terrestrial 3D laser scanning (TLS). The experimental validation indicates that the
hierarchical technique produces denser and more comprehensive models with an
accuracy of one tenth of a millimeter, an order of magnitude improvement over either
conventional DSfM or TLS.
Keywords: Infrastructure inspection; Computer vision; Structure from motion;
3D reconstruction; Photogrammetry; Terrestrial laser scanning

INTRODUCTION

Civil infrastructure represents a significant portion of any nation’s assets, and deficiencies in inspection and maintenance can negatively impact gross domestic
product (GDP) (Chong et al., 2003). In the United States, the Federal Highway
Administration estimates the value of U.S. transportation infrastructure to be on the
order of trillions of dollars (FHWA 2013). Visual inspection is the predominant
method for evaluating the structural integrity of almost all infrastructure systems, and
often results in subjective and highly variable inspection reports (Phares et al. 2004).
Making the inspection process more effective with regard to the type and quality of
data collected is a major research need.
Modern remote sensing technologies have enabled the generation of accurate
3D geometric models that can be used to augment and improve the inspection process
by reconstructing the inspection environment as a photorealistic virtual model. While
there are many methods for creating such models, terrestrial laser scanning (TLS) is the most widely used. Photogrammetric reconstruction via the Dense Structure-from-
Motion (DSfM) algorithm, a process that converts 2D images into 3D point clouds, is
often considered a low cost alternative to TLS. In conventional DSfM, a naïve
reconstruction is performed in which all 2D images are processed simultaneously.
Many variables affect DSfM model accuracy, including camera imaging parameters,
camera placement, scene complexity, surface texture, environmental conditions and
scene size. Another limitation of DSfM is the relatively high amount of noise along
poorly textured and flat surfaces (Musialski et al. 2013).
The field of research on the use of these technologies for infrastructure
inspection is growing rapidly. Jáuregui et al. (2006) evaluated the feasibility of
photogrammetric techniques and determined that these methods provided more
accurate measurements of bridge geometry in comparison with traditional hand
measurements. In Riveiro et al. (2012), a photogrammetric methodology for
calculating the minimum bridge vertical underclearance was developed. Researchers
have also compared the capabilities of photogrammetric and TLS methods for 3D
reconstruction of civil infrastructure (Golparvar-Fard et al. 2011; Dai et al. 2012;
Riveiro et al. 2013; Klein et al. 2012). In these studies, TLS methods provided denser
and more accurate point clouds in comparison with conventional DSfM methods.
To date, neither TLS nor conventional DSfM has been shown to be capable of
producing point clouds accurate enough to resolve structural flaws on the order of
0.1mm, the minimum dimension of a hairline crack, while simultaneously capturing
the overall geometry and scale of a structure. The main objective of this study was to
develop a hierarchical, multi-stage DSfM reconstruction approach that improved
upon conventional DSfM methodologies to produce point clouds with the necessary
resolution for fine-scale infrastructure inspection. The accuracy, adaptability and
feasibility of the developed method were compared to both TLS and conventional
DSfM methods.
The following comparative criteria were used: (i) localized point cloud density,
(ii) model discrepancies via Nearest Neighbor point cloud comparison, and (iii) the
rendering of pre-defined and fine-scale inspection targets. Experimental findings
indicate that the developed hierarchical technique produces point clouds capable of
resolving 0.1mm details, an order of magnitude improvement over existing methods,
while also generating denser point clouds than either TLS or conventional DSfM.


METHODOLOGY

Imaging Network

In this work, clusters of captured images are referred to as “imaging networks”. The configuration of these networks in terms of camera positions, the number of captured images, adjacent image overlap, and image quality are the dominant factors in the resulting model accuracy. The total number of required images depends on the camera lens focal length and the size of the features of interest. In this study, a three-level network system is presented; a rough image-count estimate for a given overlap is sketched after the list:
• The Geometry network is designed to obtain the overall geometry of the
targeted structure. The minimum overlap between adjacent images should
be about 80% and the axes of the captured images should converge at the
center of the structure.
• Intermediate networks document features such as planar surfaces and
corners of a structure. They are also used to link the Geometry network to
small, high-resolution networks by providing recognizable feature points
for matching algorithms. Each network should maintain a consistent
camera-structure distance and an image overlap higher than 80%.
• High-fidelity networks are designed to document critical details of a
structure with a high degree of accuracy. Each network should consist of a
minimum of 5 images that frame the critical detail tightly. The
overlapping area of each image should be about 90%.
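For planning purposes, the overlap targets above can be turned into a rough estimate of how many photographs a network needs. The sketch below is a simple planning aid (not part of the authors' methodology) that assumes a pinhole camera aimed squarely at a roughly planar surface; the sensor width, focal length, stand-off distance, overlap ratio, and surface length are all user-supplied assumptions.

import math

def images_for_strip(surface_length_m, distance_m, focal_length_mm,
                     sensor_width_mm, overlap=0.8):
    """Rough count of photos needed to cover a planar strip with a given
    forward overlap, assuming a pinhole camera held parallel to the surface."""
    # Width of surface captured in one frame at the given stand-off distance.
    footprint_m = distance_m * sensor_width_mm / focal_length_mm
    # Camera stations are spaced so consecutive frames share `overlap` of their width.
    spacing_m = footprint_m * (1.0 - overlap)
    if surface_length_m <= footprint_m:
        return 1
    return 1 + math.ceil((surface_length_m - footprint_m) / spacing_m)

# Example: a 30 m facade shot from 10 m with a 50 mm lens on a 36 mm wide
# (full-frame) sensor at 80% overlap, the minimum suggested for the Geometry network.
print(images_for_strip(30.0, 10.0, 50.0, 36.0, overlap=0.8))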

Hierarchical Point Cloud Generation

Initial Dense Point Cloud: Initially, images from both the Geometry and
Intermediate networks are used to generate an initial dense point cloud. This point
cloud is not designed for representing high-resolution information. Therefore, image
resolution can be reduced to minimize the computational cost of dense point cloud
generation. A 25% reduction is the suggested maximum reduction. The resulting
cloud captures the overall geometry of the structure.
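The paper does not prescribe a tool for this preprocessing step; as one possible illustration, the Geometry and Intermediate network images could be batch-resized before dense reconstruction with a short Pillow script such as the one below. The folder names are placeholders, and the 25% figure is interpreted here as a reduction of the linear image dimensions, which is an assumption.

from pathlib import Path
from PIL import Image  # Pillow

SRC = Path("images/geometry_and_intermediate")  # placeholder input folder
DST = Path("images/reduced")                    # placeholder output folder
DST.mkdir(parents=True, exist_ok=True)

for path in sorted(SRC.glob("*.jpg")):
    with Image.open(path) as img:
        # At most a 25% reduction is suggested, i.e. keep at least 75% of the size.
        new_size = (int(img.width * 0.75), int(img.height * 0.75))
        img.resize(new_size).save(DST / path.name, quality=95)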
Individual Denser Point Clouds: High-resolution point clouds are then
generated for each individual High-fidelity network using the maximum resolution of
the images. In this study, for example, the localized point cloud density of each individually generated point cloud was at least four times that of the initial dense point cloud.
Global Registration and Merging: After all point clouds are generated, the
individual clouds are merged using a combination of the Iterative Closest Point (ICP)
algorithm (Besl and McKay 1992) and Generalized Procrustes Analysis (GPA).
Embedding GPA in an ICP framework efficiently minimizes the alignment error and
provides a robust approach (Toldo et al. 2010). The result is a unique final point
cloud that represents the overall scale of the structure while maintaining high-fidelity
representations of fine details.
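The registration step above follows Toldo et al. (2010); purely for illustration of the underlying ICP loop that such a framework builds on, a minimal point-to-point ICP can be sketched with NumPy and SciPy as follows. This is not the authors' implementation, and it assumes two (N, 3) point arrays that are already coarsely aligned.

import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(source, target, iterations=30):
    """Minimal point-to-point ICP: returns a 4x4 transform aligning `source` to `target`.
    Both inputs are (N, 3) NumPy arrays; a coarse pre-alignment is assumed."""
    src = source.copy()
    transform = np.eye(4)
    tree = cKDTree(target)
    for _ in range(iterations):
        # 1. Correspondences: nearest target point for every source point.
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. Best-fit rigid transform (Besl and McKay 1992) via SVD.
        src_mean, tgt_mean = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_mean).T @ (matched - tgt_mean)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = tgt_mean - R @ src_mean
        # 3. Apply the increment and accumulate it into the overall transform.
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        transform = step @ transform
    return transform

In practice, such a pairwise alignment would be wrapped in the GPA-based multi-cloud framework cited above rather than used on its own.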
By utilizing this hierarchical reconstruction approach, the proposed method is
able to orient and match large datasets of images in the dense reconstruction procedure. This is distinguished from conventional DSfM algorithms, which often discard images with low correlation to the overall dataset.

EXPERIMENTAL RESULTS AND DISCUSSION

In order to evaluate the performance and potential of the proposed methodology under various conditions, three test structures were reconstructed and
the results were validated by comparisons with both TLS and conventional DSfM
methods. A Nikon D-800E (36.3 megapixel resolution) was used to obtain images,
along with two different lenses. For the Geometry and Intermediate networks, a
Nikon AF-S 50mm lens was used. For High-fidelity networks the camera was
equipped with a Nikon AF-S 105mm Micro-Nikkor lens. The images were taken with
a sensitivity (ISO) of 200 and the aperture set to f/8. A Faro Focus3D laser scanner
was the selected TLS system. Generated TLS point clouds were registered using Faro
Scene software. The accuracy level of the TLS system is ±2 mm within a 25 m
distance, as reported by the manufacturer (Faro Technologies 2010). In order to
compare the performance of the proposed hierarchical approach with conventional
DSfM algorithms, the PMVS (patch-based multi-view stereo) algorithm was used
(Furukawa and Ponce 2010). The same image datasets were used for both the
hierarchical point cloud generation (HPCG) and PMVS methods.
In this study, three representative test structures were selected: the glass and
brick façade of the Nguyen Engineering Building, a pedestrian timber bridge, and a
planar steel sculpture. These structures were chosen due to the 3D reconstruction
challenges posed by low texture and planar surfaces, as well as repetitive visual
patterns. All structures are located on the George Mason University campus. All TLS
scan data was taken within a range of 15 m from targeted objects. Table 1 presents
the chosen structures, as well as the resulting generated 3D point clouds.
Table 1 Sample images and generated 3D point clouds for the three test sites: the Engineering Building (west façade), the timber bridge, and the steel sculpture. For each site, a sample image and the PMVS, TLS, and HPCG point clouds are shown (images not reproduced here).

Table 2 summarizes the results of the experiments conducted with the PMVS, TLS, and HPCG methods. Localized point cloud density was measured by counting the number of points inside a 30 mm sphere centered on each point.
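For reference, this density metric can be reproduced with a fixed-radius neighbor search; the sketch below uses SciPy's k-d tree and is only an illustration of the metric, not the software used in the study. The 0.03 radius assumes the cloud coordinates are expressed in meters.

import numpy as np
from scipy.spatial import cKDTree

def local_density(points, radius=0.03):
    """Neighbor count inside a 30 mm sphere centered on every point of an (N, 3) cloud."""
    tree = cKDTree(points)
    neighbors = tree.query_ball_point(points, r=radius)
    return np.array([len(ix) for ix in neighbors])

# Example with a synthetic stand-in for a reconstructed cloud:
cloud = np.random.rand(10000, 3)
density = local_density(cloud)
print(density.mean(), density.max())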
The comparative results illustrate the improvement in point cloud generation achieved by the HPCG method, even on datasets with challenging features. In the
Engineering Building experiment, possibly due to the non-conventional image
capture strategy, the PMVS method failed to reconstruct the corners of the façade. In
addition, the 3D point clouds based on TLS and PMVS contained excessive noise
near surfaces such as windows and steel framed doors due to object reflectivity or a
lack of surface texture. However, HPCG successfully captured the entire west façade
including building elements such as windows, doors, and façade corners.
Table 2 Details of resulting point clouds

Site                          Metric                     PMVS        TLS          HPCG
Engineering School Building   No. of images/scans        232         5            232
(west façade)                 No. of points              964,131     32,304,139   74,097,645
                              Avg. local point density   6x10^4      1x10^6       1.7x10^6
Timber Bridge                 No. of images/scans        646         7            646
                              No. of points              13,124,457  52,527,209   111,610,341
                              Avg. local point density   2.1x10^6    5.4x10^6     7.2x10^6
Steel Sculpture               No. of images/scans        656         6            656
                              No. of points              16,939,806  1,755,472    123,072,551
                              Avg. local point density   2.7x10^7    1.7x10^6     4.5x10^8

With regard to the timber bridge, a lack of adequate point density near
critical details such as fungus decay, crushing, delamination, and loose connections
was observed in the TLS and PMVS based point clouds. These details were resolved
in HPCG point clouds. Similar results were observed for the steel sculpture.
Localized point cloud densities for the Engineering Building and the timber bridge are compared in Figure 1 and Figure 2. Figure 1 illustrates the significantly higher point cloud density available through the HPCG approach. For the timber bridge, TLS provided a more homogeneous point cloud density, but an overall less dense cloud. The HPCG method created a model with high-fidelity details and a local point density of up to 10^7 points near critical locations.


Figure 1 Local point density for Engineering Building. Low to high density ranges from
blue to red (from left to right: PMVS, TLS and HPCG).

Figure 2 Local point density for timber bridge. Low to high density ranges from blue to
red (from left to right: PMVS, TLS and HPCG).
In order to compare the HPCG method to conventional DSfM algorithms, the
deviations between image-based point clouds and TLS-based models were calculated.
While TLS data is not an absolute reference, it provided a standardized reference for
relative comparisons. The highly dense point clouds allowed deviations from the TLS
models to be measured using Nearest Neighbor Distance (NND). Figures 3 and 4
show the cloud to cloud distances for the Engineering Building and timber bridge
tests. The results for the steel sculpture were similar. Overall, the HPCG method
indicated a higher level of agreement with the TLS models when compared against
the PMVS algorithm.
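As an illustration of this cloud-to-cloud metric (again with generic tooling rather than the study's software), the nearest-neighbor distance from each point of a photogrammetric cloud to a registered TLS reference cloud can be computed as follows; the placeholder arrays stand in for the real clouds.

import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_distances(compared, reference):
    """Distance from every point of `compared` to its nearest point in `reference`
    (both (N, 3) arrays expressed in the same coordinate frame)."""
    distances, _ = cKDTree(reference).query(compared)
    return distances

hpcg_cloud = np.random.rand(5000, 3)   # placeholder for the image-based cloud
tls_cloud = np.random.rand(8000, 3)    # placeholder for the TLS reference
nnd = nearest_neighbor_distances(hpcg_cloud, tls_cloud)
print("mean deviation:", nnd.mean(), "95th percentile:", np.percentile(nnd, 95))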

Figure 3 Comparison between the TLS and photogrammetric point clouds for the Engineering Building. Low to high deviations range from blue to red (left: PMVS, right: HPCG)

Figure 4 Comparison between the TLS and photogrammetric point clouds for the timber bridge. Low to high deviations range from blue to red (left: PMVS, right: HPCG)


Figure 5 illustrates the rendering of the pre-defined inspection targets that were placed on the steel sculpture at the data collection stage. While it was not
possible to read information from the installed crack gages in the PMVS and TLS
based point clouds, the HPCG based cloud was able to resolve 0.1mm measurements.
This is an order of magnitude improvement over the reported accuracy of the TLS
system (±2mm).

Figure 5 Snapshot of the rendered point cloud at the location of the crack gage placed
on the steel sculpture (left: PMVS, right: HPCG)

CONCLUSIONS

This paper presents a newly developed hierarchical computer vision approach designed to generate precise, detailed, and photorealistic reconstructed 3D models
suitable for infrastructure inspection applications. The accuracy, adaptability and
feasibility of the developed method were compared to both TLS and conventional
DSfM methods. The results demonstrate that the hierarchical technique produces
denser and more comprehensive 3D models, and achieves the resolution necessary to
be utilized in structural inspection. Future studies are planned that will assess the
HPCG method in other field scenarios such as highway bridges. There is also ongoing
algorithm development into the minimization of noise in dense point clouds, along
with automated segmentation and extraction of structural components.

ACKNOWLEDGEMENTS

This material is based upon the work supported by the National Science
Foundation under Grant No. CMMI-1433765. The authors would also like to
acknowledge the support made by NVIDIA Corporation with the donation of a Tesla
K40 GPU used in this research. Any opinions, findings, and conclusions or
recommendations expressed in this publication are those of the authors and do not
necessarily reflect the views of the NSF or NVIDIA.

REFERENCES

Besl, P. J., & McKay, N. D. (1992). “A method for registration of 3-D shapes.” IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2), 239-256.
Chong, K. P., Carino, N. J., & Washer, G. (2003). “Health monitoring of civil
infrastructures.” Smart Materials and structures, 12(3), 483.
Dai, F., Rashidi, A., Brilakis, I., & Vela, P. (2012). “Comparison of image-based and time-of-flight-based technologies for three-dimensional reconstruction of infrastructure.” Journal of Construction Engineering and Management, 139(1), 69-79.
Faro Technologies (2010). “Faro Focus3D specification sheet”
http://www2.faro.com/site/resources/share/944 (accessed 12.02.2014).
Federal Highway Administration (2013). “The status of the nation’s highways,
bridges, and transit: conditions and performance”. Federal Highway
Administration Report to Congress (C&P report).
Furukawa, Y., & Ponce, J. (2010). “Accurate, dense, and robust multiview stereopsis.”
Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(8),
1362-1376.
Golparvar-Fard, M., Bohn, J., Teizer, J., Savarese, S., & Peña-Mora, F. (2011).
“Evaluation of image-based modeling and laser scanning accuracy for
emerging automated performance monitoring techniques.” Automation in
Construction, 20(8), 1143-1155.
Jáuregui, D. V., Tian, Y., & Jiang, R. (2006). “Photogrammetry applications in
routine bridge inspection and historic bridge documentation.” Transportation
Research Record: Journal of the Transportation Research Board, 1958(1), 24-
32.
Klein, L., Li, N., & Becerik-Gerber, B. (2012). “Imaged-based verification of as-built
documentation of operational buildings.” Automation in Construction, 21,
161-171.
Musialski, P., Wonka, P., Aliaga, D. G., Wimmer, M., Gool, L., & Purgathofer, W.
(2013). “A survey of urban reconstruction.” In Computer Graphics Forum
(Vol. 32, No. 6, pp. 146-177).
Phares, B. M., Washer, G. A., Rolander, D. D., Graybeal, B. A., & Moore, M. (2004).
“Routine highway bridge inspection condition documentation accuracy and
reliability.” Journal of Bridge Engineering, 9(4), 403-413.
Riveiro, B., Jauregui, D. V., Arias, P., Armesto, J., & Jiang, R. (2012). “An
	innovative method for remote measurement of minimum vertical
	underclearance in routine bridge inspection.” Automation in Construction, 25,
	34-40.
Riveiro, B., González-Jorge, H., Varela, M., & Jauregui, D. V. (2013). “Validation of
	terrestrial laser scanning and photogrammetry techniques for the measurement
	of vertical underclearance and beam geometry in structural inspection of
	bridges.” Measurement, 46(1), 784-794.
Toldo, R., Beinat, A., & Crosilla, F. (2010). “Global registration of multiple point
	clouds embedding the Generalized Procrustes Analysis into an ICP
	framework.” In 3DPVT 2010 Conference.


Advancing in Object-Based Landscape Information Modeling: Challenges and Future Needs

Hamid Abdirad1 and Ken-Yu Lin, Ph.D.2


1Ph.D. Student, College of Built Environments, University of Washington, Box 355740, Seattle, WA 98195-5740. E-mail: [email protected]
2Assistant Professor, Department of Construction Management, University of Washington, 120 Architecture Hall, Box 351610, Seattle, WA 98195-1610. E-mail: [email protected]

Abstract

The AEC/FM industry has benefited from the innovative integration of information
technologies and industry-wide processes in different lifecycle stages of facilities. Building
Information Modeling (BIM), as one of these innovations, is fast becoming a key approach to
virtually integrate the required information for facility design, construction, and management. So
far, applications and benefits of using BIM tools and processes in building design and
construction have been documented in research. However, landscape design and construction
practice is underrated in current BIM developments and in integrated design-construction
practices, and it has not benefited from the advantages BIM provides to the industry at different
scales. This could result in a critical challenge, as BIM implementation and information
modeling are becoming mandatory in many projects in public and private sectors, and the gap
still exists in the processes of collaboration and information exchange between the landscape
design and construction practice and other disciplines. As an early step to mitigate this challenge,
this study shows that recent advances in BIM, COBie, information-exchange schemas (e.g. IFC),
and taxonomies such as OmniClass have shortcomings in addressing landscape and hardscape
elements and attributes. This challenge limits asset-management capabilities, and leads the
practice to inefficient operations, more manual processes, and costly knowledge development
and exchange. These findings have important implications for revising and updating existing
taxonomies to support more automated information development and exchange processes.

Keywords: Landscape information modeling; LIM; BIM; IFC; COBie; Exchange

INTRODUCTION

The AEC industry has benefited from innovations in the integration of information
technologies and design/construction processes to mitigate the industry-wide challenges,
including low productivity, huge waste, and inefficiencies in facilities operation (Smith and
Tardif, 2009). Building Information Modeling (BIM), as one of these innovations, is fast
becoming a key approach to virtually integrate the required information for design, construction,
operation, and management of facilities (Aouad et al., 2011; Eastman et al., 2011). So far, many
researchers have documented applications and benefits of BIM tools and processes in building
design and construction (Eastman et al., 2011; Giel and Issa, 2013). However, landscape design


practice is underrated in current BIM implementations, and it does not benefit from the
advantages BIM has provided to the industry (Ahmad and Aliyu, 2012; Flohr, 2011; Nessel,
2013; Zajíčková and Achten, 2013). Nessel (2013) indicated that the landscape architecture profession has acted passively towards information modeling and has been slow to develop, improve, and adopt information modeling tools. Lu and Wang (2013) confirmed that
the industry has rarely used BIM for Landscape Information Modeling (LIM). Furthermore,
Ahmad and Aliyu (2012) highlighted that these challenges may “remove landscape architects
from the supply chain.” This could be an extensive gap in the industry-wide practice and
information exchange, as BIM implementation is becoming mandatory in many countries.
Moreover, Zajíčková and Achten (2013) reported that LIM would play an indispensable role in
the practice as it facilitates development of comprehensive information models, and integration
of BIM models, urban models, and landscape models for an inclusive built environment
simulation. Hence, the extension of BIM concepts to the landscape and hardscape design practice
is critical. As in building design and construction practices, this could facilitate smooth
information exchange through information models, and enhance value-adding collaboration
among project participants from different disciplines and trades (Lu and Wang, 2013; Zajíčková
and Achten, 2013). However, Flohr (2011) stated that existing BIM platforms are incompatible
with the workflow of landscape architects, and they have drawbacks in parametric landscape
modeling. Consequently, there is a strong need in the industry for new developments in BIM-
related standards and platforms, as well as laws and regulations, to resolve the current challenges.

Although some research has brought up the need for LIM platforms in the AEC/FM
industry, there has been little theoretical or empirical research on developing comprehensive
schemas for defining required objects, information, and attributes for LIM. To fill this gap, this
paper intends to take a fundamental step to highlight the areas where existing industry taxonomy
systems and BIM interoperability standards (e.g. Industry Foundation Classes-IFC) have
shortcomings to support landscape and hardscape elements. In the second step, through a case
study research, this paper reveals how best practices in capital facilities management deal with
these challenges in exchanging landscaping information. The findings will clarify the current
status of advances/shortcomings in landscape information modeling. The research presented in
this paper is expected to set the stage for improvements in LIM in both conceptual and
technological aspects.

OBJECT-BASED INFORMATION MODELING

The evolution in CAD tools and information technologies has resulted in two important
advances in the industry, including (1) parametric modeling and (2) object-based information
modeling (Hubers, 2010). The parametric aspect of BIM addresses quantifiable characteristics of
graphical entities in a model to control physical behaviors and relationships between
components. Model generation using algorithms and parameters, real-time auto-adjust features of
model elements, intelligent generation of views and documents, and defining and maintaining
physical (e.g. hosting) or virtual relationships are several parametric modeling capabilities
(Autodesk, 2005; Eastman et al., 2011). The object-based aspect of BIM goes beyond intelligent
objects, as it deals with attributes and information required for design, construction, and
management of facilities. Hence, the major concern in BIM implementation is to create and
exchange this information between different trades at different stages (Hubers, 2010).


Industry Foundation Classes (IFC) is a schema developed to facilitate data exchange on
building models with a smooth interoperability among BIM tools. By using a neutral exchange
format, IFC supports the concept of “Platform-to-Tool” exchange as the most promising type of
interoperability (Eastman et al., 2011). IFC consists of predefined concepts of object classes,
types, and information entities (properties). This information is not limited to geometric
properties; it addresses data required for design, analysis, construction, operation, and
management of the building components (Eastman et al., 2011; Jianping et al., 2014). Accuracy,
sufficiency, and reliability of information exchange are essential because poor-quality models
would negatively impact all project processes and facility operation stages (Crotty, 2012;
Eastman et al., 2011). Therefore, it is essential to study whether IFC, as a standard for
information exchange in the industry, supports landscape and hardscape elements or not.

REQUIRED INFORMATION IN A LIM

Zajíčková and Achten (2013) indicated that a landscape information model should consist
of (1) information about the site, and (2) information about landscape objects. In landscape
architecture, information on climate, land and soil, water, plant species, animal species,
topography, landscape character, and species assemblages builds up the required data for
landscape planning (Özyavuz, 2012; Waterman, 2009). In addition to natural characteristics of
the context, landscape information modeling has to address the “hardscape,” which includes land
uses, exterior spaces, and built landscape function areas such as street and roads, parking spaces,
pedestrian paths, infrastructure/irrigation systems, and green/water/roof areas (Gómez et al.,
2013). Landscape objects and exterior furnishings include any equipment that facilitates exterior
activities of people and transportation systems. These categories of objects are essential for
information modeling but they should be systematically developed for information exchange.

As prior research suggests, the first step in developing knowledge sharing platforms is to
create an ontology consisting of standardized and formalized taxonomy of concepts, their
attributes, and the definition of the relationships or associations between them (Corcho et al.,
2007). National BIM Standard (NBIMS) has addressed this requirement in attempts to identify
required information and exchanges for IFC implementation (Hietanen and Final, 2006). Thus,
OmniClass as one of the most commonly used taxonomy systems is encouraged for BIM-based
information exchange, as it provides highly detailed classifications of the elements and their
properties in the AEC industry (National Institute of Building Sciences, 2007).

RESEARCH METHODOLOGY

This paper intends to investigate the existing taxonomy and information exchange schemas
in regard to their level of support for landscape information modeling. First, the authors review
the landscape architecture literature for identification of technical taxonomies for landscaping
elements/information. Then, OmniClass classifications and IFC schema will be analyzed to show
how they address landscape-related taxonomies and information, and to demonstrate the gaps in
their developments. In the second step, through a case study research, the authors investigate
how current practitioners deal with landscape and hardscape elements in information exchanges.


STEP ONE: LITERATURE AND DOCUMENT ANALYSIS

OmniClass-Table 23 lists landscaping related products and elements under the “Site
Products” classification (OCCS Development Committee Secretariat, 2012). This table shows
that OmniClass classifications in the landscape domain are not as detailed as classifications of
building components. In this paper, due to page limits, only few categories are presented for
clarification. For instance, in the case of different window types, Omniclass Table 23 provides a
long list of different window types, including fixed windows, single hung, double hung, triple
hung, awning, casement, etc. However, for landscaping elements like seating and table furniture,
OmniClass only provides headlines of ‘Exterior Seating and Exterior Tables’ without any
detailed classification. This is in contrast to the classifications landscape design literature
suggests. Different landscape spaces and zones require different types of exterior seating and
furniture as they could impact the landscape functions. The ideal classification in this specific
example should be more detailed (e.g. functionally), and it may include benches, seat wall,
movable/attached chairs, eating tables, low tables, etc., as Main and Hannah (2010) suggest.

As another standard referenced by NBIMS, IFC documentation also lacks the means to address landscape information in its specifications. On the one hand, IFC supports demonstration
needs of project structure, physical components, and spatial components in architecture, facility
management, and other disciplines. On the other hand, IFC documentation states that project
structure and component breakdown structures “outside of building engineering” are not within
its scope (buildingSMART International, 2013b). This raises the question of whether the IFC developers consider landscape and hardscape design, construction, and management to be part of facility construction and management in this industry, given the goal of a fully integrated practice and integrated information models. To discuss how IFC deals with landscape elements,
it is necessary to highlight the order of spatial structure in an IFC model, which consists of
“IfcProject, IfcSite, IfcBuilding, IfcBuildingStorey, IfcSpace” from high to low level in the order
(buildingSMART International, 2013b). IfcSite is an abstract 2D footprint concept to report an
area of a land, and it does not address any geometry or terrain of a site. This was an issue in IFC
until a recent development in IFC v.4 included site terrain and physical information of a site in
IfcGeographicElement (buildingSMART International, 2013a). In IFC v.4, all geographical
elements in the landscape would go under IfcGeographicElement. These elements include not
only the land and the geometry of constructed landscape, but also vegetation and furnishing such
as trees, light posts, seats, signs, bus shelters, and fences. However, the only predefined
enumeration type for this concept is for “terrain,” and other categories should be defined as IFC
extensions by developers at the project level. Consequently, no pre-defined taxonomy for
landscape and hardscape objects can be found in IFC (buildingSMART International, 2013b;
City Futures Research Centre, 2010). This increases the risk of compatibility and transaction
issues due to custom information developments at the project level. Taken together, IFC support
for landscaping elements and site functions is very limited, as the taxonomies and required
information/properties are not defined in the schema. Table 1 shows a few areas in which
OmniClass and IFC taxonomies have gaps or inconsistencies with landscape design literature. It
is essential to investigate these gaps in real-world projects, where exchanging information on
landscape and hardscape is a requirement. For instance, the University of Georgia (UGA) Design
and Construction Standards explicitly indicate that landscaping elements in BIM models “shall
include planted areas, beds and berms, hardscape, site paving and storm water management structures or systems” (UGA, 2013). Additionally, it lists some landscape and hardscape
elements for tracking/assigning in Construction Operations Building information exchange
(COBie), including “Area Wells/Grating, Equipment Curbs, Building Pads, Planting, Sidewalks,
Parking Stripes, Roads, Property lines, and Topography.”
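To make the IFC limitation concrete, the sketch below shows one way a project team could author a landscape object as an IfcGeographicElement with a user-defined type, using the open-source IfcOpenShell library (which is not referenced by this paper). The element names, the ObjectType string, and the file name are illustrative assumptions, and the snippet omits the owner history, placement, geometry, and spatial containment relationships a complete exchange file would require.

import ifcopenshell
import ifcopenshell.guid

# Minimal IFC4 file with a single user-defined geographic element (illustrative only).
model = ifcopenshell.file(schema="IFC4")

project = model.create_entity("IfcProject",
                              GlobalId=ifcopenshell.guid.new(),
                              Name="Campus Landscape Pilot")      # placeholder name
site = model.create_entity("IfcSite",
                           GlobalId=ifcopenshell.guid.new(),
                           Name="Site")

# IFC4 only predefines TERRAIN for IfcGeographicElement, so any landscape
# furnishing has to fall back to USERDEFINED plus a project-level ObjectType.
bench = model.create_entity("IfcGeographicElement",
                            GlobalId=ifcopenshell.guid.new(),
                            Name="Bench-001",
                            ObjectType="ExteriorSeating.Bench",   # custom taxonomy term
                            PredefinedType="USERDEFINED")

model.write("landscape_extension_example.ifc")

The point of the sketch is that the classification carried by ObjectType is defined at the project level, which is exactly the compatibility and transaction risk discussed above.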

Table 1. Taxonomies in Landscape Design Literature vs. OmniClass and IFC

Landscape Spaces
• Landscape design literature: Pedestrian Mall; Urban Streetscape; Plazas; Urban Parks; Recreational Park; Trails and linear parks; Theme parks; Children’s Park; Retail Centers Outdoors; Healthcare Outdoors; Corporate Campuses; Transit and Transportation Centers Outdoors; Transit stops; Cafes and outdoor food courts; College campuses; K-12 Schools Outdoors; Office Entries; Roof tops (Main and Hannah, 2010)
• OmniClass support: 13-25 11 00: Primary Circulation Spaces; 13-69 25 00: Pedestrian Travel Spaces; 13-69 25 13: Pedestrian Way; 13-69 25 19: Trail; 13-25 11 15: Mall; 13-69 13 00: Roof Terrace; 13-37 13 15: Sculpture Garden; 13-61 15 11: Park Shelter; 13-33 15 00: Non-Athletic Recreation; 13-33 15 13: Pleasure Garden; 13-33 15 11: Park
• IFC support: IfcSpace (for both internal and external spaces); available enumerations related to the landscape domain: SPACE, PARKING, EXTERNAL; no support for landscape spaces

Furnishing (Gómez et al., 2013; Main and Hannah, 2010)
• Landscape design literature -> OmniClass support:
  Benches; Seat Wall; Movable Chairs; Eating Tables; Low Tables -> 23.40.10.11.14: Exterior Seating; 23.40.10.11.17: Exterior Tables
  Tables with Attached Seats -> 23.40.10.17.11: Garden/Patio Seating
  Trash cans, Litter Receptacles; Smoking Receptacles; Recycling Receptacles -> 23.40.10.11.21: Trash Receptors
  Leaning Rail -> N/A
  Umbrellas or Shade Structure -> 23.40.10.17.14: Garden Umbrellas
  Bike Racks; Bike Garages/Lockers -> 23.40.10.11.11: Bicycle Racks
  Planters -> 23.40.05.21.17: Planters; 23.40.95.14.11: Decorative Planters
  Transit Shelters -> N/A
  Bollards -> 23.40.10.14.27: Bollards
  Newspaper Boxes -> N/A
  Grills -> 23.40.40.14.17.21: Grills
  Signage and Way-finding Elements -> 23.40.10.14.24: Exterior Signs; 23.15.10.14.11: Roadway Signage
  Tree Grates; Tree Guards -> 23.40.05.21.21: Tree Grates
• IFC support: IfcGeographicElement; no definition for landscape element types. NOTE: IfcFurniture, IfcFurnitureType, and IfcFurnitureTypeEnum “are concerned only with internal furniture” (buildingSMART International, 2013b)

The ideal and productive workflow for developing COBie information is to create data in a
physical file format supported by IFC, and to extract (export) the required information in a
spreadsheet format to use at the operation level (National Institute of Building Sciences).
However, in many cases, the spreadsheet must be filled in manually due to the aforementioned shortcomings in the taxonomies and related technology issues. This raises the question of how project participants meet such requirements when the existing standards and developments have
shortcomings in addressing landscape and hardscape elements.
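For illustration only, a hand-built export of landscape assets into a COBie-style spreadsheet might look like the sketch below. The column set is a simplified subset loosely based on the COBie Component worksheet rather than the full official template, and the asset records are placeholders; as noted above, no standard taxonomy currently drives such a mapping automatically.

import csv

# Simplified, illustrative subset of COBie.Component-style columns
# (not the complete official template).
COLUMNS = ["Name", "TypeName", "Space", "Description", "AssetIdentifier"]

# Placeholder landscape assets that would otherwise be collected manually.
assets = [
    {"Name": "Bench-001", "TypeName": "Exterior Seating", "Space": "Plaza-West",
     "Description": "Memorial wood bench", "AssetIdentifier": "LSC-0001"},
    {"Name": "BikeRack-014", "TypeName": "Bicycle Rack", "Space": "Engineering-North",
     "Description": "Hanger-type rack, capacity 8", "AssetIdentifier": "LSC-0002"},
]

with open("landscape_components.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(assets)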

STEP 2: CASE STUDY ANALYSIS

We conducted case study research at the University of Washington (UW) using document analysis and interview data collection methods, and we present a summary of findings in this
section. UW manages its assets through the facilities maintenance and campus engineering and
operations offices. Since 2011, it has attempted to implement and improve COBie in the
information exchange and asset-management processes. A Computerized Maintenance
Management System (CMMS) is used to support facilities management processes and operating building systems. However, as of late 2014, the information for managing campus landscape and
hardscape elements is still manually collected, formatted, and added to a GIS platform. The GIS
platform represents a 2D top-view layout of the campus in which landscape and hardscape
elements, their locations, and attributes are traceable. As a result, no 3D model is available for
site elements to inform users about geometric characteristics of these elements, although a photo
snapshot is linked to each element in the GIS system. The taxonomy UW uses for asset
classification in the CMMS does not support landscape and hardscape elements, although a few
MEP elements including light poles, manholes, fuel tanks, and trash compactors on the site are
supported. For COBie implementation, OmniClass has been used as the basis for classifying and
coding elements, and linking information to the CMMS. However, information on landscape and
hardscape elements is not requested in the COBie handover process, and the campus engineering and
operations offices still manually collect data on these elements. Although the baseline taxonomy
for these elements is OmniClass, as we discovered through Step 1 and interviews, the taxonomy
has shortcomings in addressing site elements. The attributes for each element are mostly defined
by the in-house team, and they do not follow any pre-defined taxonomy. Table 2 presents the
landscape elements UW manages through its GIS platform. This table shows that some elements
and space types suggested by Table 1 are not supported by this system.

Table 2: Landscape and Hardscape Elements and Attributes in the GIS platform

• Benches: Memorial (yes, no); Art (yes, no); Material (undefined, concrete, metal, stone, wood, other); Condition (undefined, excellent, good, fair, poor)
• Bike Racks: Building Served (name of building); Floor (First, Second, …); Capacity; Surface (undefined, asphalt, brick, concrete, dirt, gravel); Type (undefined, bike room, custom, DeroSS, DS-BB, Hanger)
• Bollards: Ownership; Maintained By; Removable (yes, no); Fixed in ground (yes, no); Maintenance area; Sleeve condition (undefined, excellent, good, fair, poor); Required Maintenance (undefined, paint, replace, straighten); Condition (undefined, excellent, good, fair, poor); Type (aluminum, concrete, log, PVC, rock, steel, tree, wood); Location (general description of location)
• Trees: Diameter at breast height (inch); Number of stems; Height of the tree (ft); Tree Type; Deciduous or Coniferous; Species Type; Memorial Tree (Yes, No); Monetary value of tree; Condition (Score between 0 and 100); Diameter at breast height of multi-branch tree (inch); Location; Grow Space (building, path, street, tree grate, unrestricted); Land Use (container, lawn, parking lot, patio/deck, planting, roof garden, street tree, unmanaged); Inoculated (Yes, No)
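One incremental step toward the more automated exchange discussed below would be to formalize such in-house attribute sets in a machine-readable schema. The sketch below is our illustration, not part of the UW system; it encodes the bench record from Table 2, with the Memorial, Art, Material, and Condition fields and their allowed values taken from the table, and an asset identifier and classification code added as placeholders.

from dataclasses import dataclass
from typing import Optional

CONDITION_VALUES = {"undefined", "excellent", "good", "fair", "poor"}
BENCH_MATERIALS = {"undefined", "concrete", "metal", "stone", "wood", "other"}

@dataclass
class Bench:
    """Bench attributes as tracked in the UW GIS platform (Table 2)."""
    asset_id: str
    memorial: bool = False
    art: bool = False
    material: str = "undefined"
    condition: str = "undefined"
    classification_code: Optional[str] = None  # placeholder for a future taxonomy code

    def __post_init__(self):
        # Enforce the allowed value lists so exported records stay consistent.
        if self.material not in BENCH_MATERIALS:
            raise ValueError("unknown material: " + self.material)
        if self.condition not in CONDITION_VALUES:
            raise ValueError("unknown condition: " + self.condition)

bench = Bench(asset_id="BN-042", memorial=True, material="wood", condition="good")
print(bench)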

DISCUSSION AND CONCLUSION

The present study was designed to clarify the challenges of addressing landscape and hardscape elements in information exchange, facility operations, and asset-management. This study found
that systematic collection of information on landscape, hardscape, and site elements for object-
based asset-management is relatively new and growing in the AEC/FM industry (UGA and UW
cases). Capital facilities owners have attempted to improve their asset-management practice
through these knowledge development processes. However, a few challenges are currently
dominant. First, the existing taxonomies, such as OmniClass, have shortcomings in addressing
landscape elements, their types, and attributes. As a result, owner organizations have to custom-
define classifications of elements and attributes through time-consuming internal processes. Second, automated information exchange is still dependent on these taxonomies (e.g. IFC,
OmniClass), and for elements/attributes that are not supported by them, data formatting and
exchange requires cumbersome manual processes. Third, a comparison between UGA
requirements (cited in Step1) and UW BIM and COBie implementation (Step 2) shows that there
is no consensus among the owners on the information type and exchange format (models,
spreadsheet, etc.) required for landscape and hardscape design, construction, and operation. UGA
requires some landscape and hardscape elements modeled in BIM; in contrast, UW does not
have such a requirement and non-modeled information in GIS satisfies UW requirements.

These findings corroborate the fact that landscape and hardscape information modeling is underrated in technology developments (Ahmad and Aliyu, 2012; Flohr, 2011; Nessel, 2013; Zajíčková and Achten, 2013). We have used a different approach from previous research to show
that recent advances in BIM, COBie, information exchange schemas (e.g. IFC), and taxonomies
have shortcomings in addressing landscape and hardscape design, construction, operation and
management. This limits asset-management capabilities, and leads the practice to inefficient
operations, more manual processes, and costly knowledge development and exchange. Further
research should therefore be conducted to develop inclusive taxonomies for classifying
landscape and hardscape elements and their attributes. The existing taxonomies, which are used
as the basis for information exchange, should be revised and updated to support more automated
information exchange processes (e.g. to insert, extract, update, modify, and observe data).

ACKNOWLEDGEMENTS

We gratefully acknowledge the valuable information provided to us by the Campus Engineering Office at the University of Washington, Seattle. We would like to give special thanks to
James Dahlstrom, Campus Engineering Specialist, who compiled the data we requested.

REFERENCES

Ahmad, A. M., and Aliyu, A. A. (2012). The Need for Landscape Information Modelling (LIM)
in Landscape Architecture. Paper presented at the Digital Landscape Architecture Conference,
Bernburg, Germany.
Aouad, G., Wu, S., and Lee, A. (2011). Architecture Engineering and Construction. Florence,
KY, USA: Routledge.
Autodesk. (2005). Parametric Building Modeling: BIM's Foundation. (Accessed Sep 18, 2014)
buildingSMART International. (2013a). IfcGeographicElement type for physical feature of 3D terrain. Retrieved Sep 27, 2014, from http://jira.buildingsmart.org/browse/IFR-891
buildingSMART International. (2013b). Industry Foundation Classes Release 4 (IFC4). Retrieved Sep 21, 2014, from http://www.buildingsmart-tech.org/ifc/IFC4/final/html/index.htm
City Futures Research Centre. (2010). A Note on Cadastre; UrbanIT Research Project. AU: FBE,
UNSW.
Corcho, O., Fernández-López, M., and Gómez-Pérez, A. (2007). Ontological Engineering: What
are Ontologies and How Can We Build Them? In C. Jorge (Ed.), Semantic Web Services:
Theory, Tools and Applications (pp. 44-70). Hershey, PA, USA: IGI Global.


Crotty, R. (2012). The Impact of Building Information Modelling: Transforming Construction. SPON Press.
Eastman, C., Teicholz, P., Sacks, R., and Liston, K. (2011). BIM Handbook: A Guide to Building
Information Modeling for Owners, Managers, Designers, Engineers and Contractors: Wiley.
Flohr, T. (2011). A Landscape Architect’s Review of Building Information Modeling
Technology. Landscape Journal, 30(1), 169-170.
Giel, B., and Issa, R. (2013). Return on Investment Analysis of Using Building Information
Modeling in Construction. Journal of Computing in Civil Engineering, 27(5), 511-521. doi:
10.1061/(ASCE)CP.1943-5487.0000164
Gómez, P., Shaw, J., Swarts, M., MacDaniel, J., Soza, P., and Moore, D. (2013). Campus
Landscape Information Modeling: Intermediate Scale Model that Embeds Information and
Multidisciplinary Knowledge for Landscape Planning. Paper presented at the XVII Congreso
de la Sociedad Iberoamericana de Grafica Digital, Chile.
Hietanen, J., and Final, S. (2006). IFC model view definition format. International Alliance for
Interoperability.
Hubers, J. (2010). IFC based BIM or parametric design? Paper presented at the Proc., Int. Conf.
on Computing in Civil and Building Engineering (ICCCBE), Nottingham University Press,
Nottingham, UK.
Jianping, Z., Fangqiang, Y., Ding, L., and Zhenzhong, H. (2014). Development and
Implementation of an Industry Foundation Classes Based Graphic Information Model for
Virtual Construction. Computer Aided Civil and Infrastructure Engineering, 29(1), 60-74.
doi: 10.1111/j.1467-8667.2012.00800.x
Lu, S., and Wang, F. (2013). Some Issues of BIM Application in Landscape Architecture.
Applied Mechanics and Materials, 368-370, 92-98. doi:
10.4028/www.scientific.net/AMM.368-370.92
Main, B., and Hannah, G. G. (2010). Site Furnishings: A Complete Guide to the Planning,
Selection and Use of Landscape Furniture and Amenities: Wiley.
National Institute of Building Sciences. COBieLite: A lightweight XML format for COBie data.
Retrieved Dec 01, 2014, from http://www.nibs.org/?page=bsa_cobielite
National Institute of Building Sciences. (2007). National Building Information Modeling
Standard. U.S: National Institute of Building Sciences.
Nessel, A. (2013). The Place for Information Models in Landscape Architecture, or a Place for
Landscape Architects in Information Models. Paper presented at the Digital Landscape
Architecture 2013 – Connectivity and Collaboration in Planning and Design, Germany.
OCCS Development Committee Secretariat. (2012). OmniClass: A Strategy for Classifying the
Built Environment. http://www.omniclass.org (Accessed Sep 18, 2014)
Özyavuz, M. (2012). Landscape Planning. Croatia: InTech.
Smith, D. K., and Tardif, M. (2009). Building Information Modeling: A Strategic Implementation
Guide for Architects, Engineers, Constructors, and Real Estate Asset Managers: John Wiley
& Sons.
UGA. (2013). The University of Georgia Design And Construction Standards U.S: University of
Georgia.
Waterman, T. (2009). The Fundamentals of Landscape Architecture. U.S: AVA Publishing.
Zajíčková, V., and Achten, H. (2013). Landscape Information Modeling. Paper presented at the
eCAADe 2013 computation and performance, Delft University of Technology, Netherlands.


BIM and QR Code for Operation and Maintenance

Pavan Meadati1 and Javier Irizarry2

1Associate Professor, Construction Management, Southern Polytechnic State University, 1100 South Marietta Parkway, Marietta, GA 30060. E-mail: [email protected]
2Associate Professor, Building Construction Program, Georgia Institute of Technology, 280 Ferst Drive, 1st Floor, Atlanta, GA 30332. E-mail: [email protected]

Abstract
Challenges encountered by facility managers include developing visual models in their minds to relate two dimensional (2D) as-built drawings to the real world three dimensional (3D) elements, and searching for and validating information. These challenges can be addressed by using Building
Information Model (BIM) in the O&M phase. Accessing the information using BIM
is a two-step process. The first step includes the identification and selection of the
appropriate 3D element from the digital model, and the second step includes the retrieval of the information. Currently, the first step is accomplished by navigating the model manually. This step becomes tedious depending upon the size and complexity of the model. This identification and selection process can be automated by integrating Quick
Response (QR) code technology with BIM. This paper discusses the feasibility of
developing an integrated BIM and QR code environment. It also discusses the pilot
study conducted to deploy this environment.
Keywords: BIM; 3D; QR code; O&M

INTRODUCTION
Traditionally information such as 2D as-built drawings, submittals, operation
manuals, and warranties that are transferred to owners for the operation and
maintenance (O&M) phase are unlinked and they exist as independent entities. Some
of the challenges encountered by facility managers during the O&M process include
interpretation of two dimensional (2D) drawings and information retrieval. Typically
a facility manger interprets the 2D as-built drawings and develops visual models in
their mind to corroborate the relationship between 2D elements with the real world
three dimensional (3D) elements. Once the elements are identified and validated more
2D as-built drawings and multiple documents are to be searched to retrieve the
required location and other information about the elements. In this process most of
the facility manager’s time is spent on non-value added tasks such as developing
visual models and searching and validating information. This unlinked information
reduces the facility manager’s ability to address the needs of the facility which in turn
increases O&M cost and time. Additionally, the possibility of misinterpretation of 2D as-built drawings can also cause delays in retrieving the information. The
interpretation and model development processing time during the O&M phase can be
reduced by using 3D models instead of 2D as-built drawings. However information
retrieval time cannot be reduced by just using 3D models since the retrieval time is
dependent on information management practices. The interpretation and information
retrieval challenges faced by facility managers can be addressed by providing access
to linked information through 3D models. This can be accomplished by using
Building Information Model (BIM) in the O&M phase. Accessing the information
using BIM is a two-step process. The first step includes the identification and
selection of the appropriate 3D element from the digital model, and the second step
includes the retrieval of the information. Currently, the first step is accomplished by
navigating the model manually. This step becomes tedious depending upon the size
and complexity of the model. Additionally manual errors in element’s selection
negatively affect the information retrieval process. Due to this manual selection
process, the effectiveness of BIM is compromised and is not being used to its full
potential. This identification and selection process can be automated by integrating
Quick Response (QR) code technology with BIM. Previous research work
demonstrated the applications of one dimensional barcode and radio frequency
identification (RFID) in various areas of the construction industry (Navon & Berkovich,
2005; Bell & McCullouch, 1988; Li & Becerik-Gerber, 2011). With the increased
usage of smart devices, the potential of QR code technology in the construction industry is
being explored. For example, QR code was used for developing automated facility
management system (Lin et al. 2014); for developing BIM based shop drawing
automated system (Su et al. 2013); and for tracking and control of engineering
drawings, reports and specifications (Shehab and Moselhi, 2005). Saeed et al. (2010)
illustrated the use of QR code technology to access information about buildings and
other artifacts for pedestrians. This paper augments potential applications of QR code
technology in the operation and maintenance phase by integrating it with BIM. The goal of
the study is to develop a BIM+QR environment that provides a seamless flow of information from real world objects to BIM. The following sections discuss the BIM+QR environment components and the automated information flow among them.

BIM+QR ENVIRONMENT
In this environment, when the user reads the QR code through a smart device, the respective 3D element is highlighted in BIM automatically, providing access to the associated information. The environment provides a dynamic linkage between the real world object
and BIM by synchronizing with the user input. The integrated QR code and BIM
environment provides a seamless flow of information reducing manual errors and
improving information retrieval efficiency. The different components of the BIM+QR environment include: BIM repository, object hyperlinking, and interactive display unit.

BIM Repository

This section discusses the methodology adopted for the development of a repository
through BIM. The two steps in the development process include: (a) three
dimensional (3D) model development and (b) integration of information to the 3D
model elements. The ease of integration depends on the availability and type of
parameters in the BIM software (Goedert & Meadati, 2011). The information
associated with the 3D model elements can be retrieved through parameters of the
elements. These parameters establish the links between respective files and elements
in digital format. The information needed can be collected through paper format and
digital format from various sources. Since BIM needs the information in digital
format, the paper-based information has to be converted into digital format by
scanning.

Object Hyperlinking
The process of extending the internet to real world objects is called object
hyperlinking (Wikimedia, 2014). This can be achieved by attaching tag with URL to
the real world object. The QR code with URL can be used as a tag. QR code stands
for Quick Response Code. It was invented by Denso Wave in 1994 (Wikimedia,
2014). It is a two dimensional bar code. It can be read by using smartphone or touch
pad or computer camera. QR code is used to store text, URL, and contact
information. When the user reads the QR code depending upon the type of
information stored, it may display text, opens URL, and saves the contact information
to address book. QR codes are now used for wide range of applications such as
commercial tracking, product marketing, product labeling, and storing organizational
and personal information. QR code can be static or dynamic. In static QR code the
initially created information cannot be changed, whereas in dynamic QR code the
information can be edited after creating the code. With the advent of smart phones
QR codes became popular as each smartphone can read the QR code with appropriate
QR reader app. An object hyperlinking system involves four components shown in
the Figure 1. QR code tagged Object: QR code with URL is tagged to the object;
Smart device: A device which has means to read the QR code and display the
information; Open Wireless network: An open wireless network such as 3G or 4G
network for communication between the smart device and server containing the
information linked to tagged object; Server: A server to store the information related
to the real world object.
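As a simple illustration of producing such a tag (the paper does not specify which QR generation tool was used), the open-source Python qrcode package can encode an object's URL into a printable image; the URL below is a placeholder.

import qrcode  # pip install qrcode[pil]

# Placeholder URL of the web page that identifies one real world object.
object_url = "http://example.edu/bimqr/objects/demo-unit.html"

img = qrcode.make(object_url)   # returns a PIL image of the static QR code
img.save("demo-unit-qr.png")    # print and attach this tag to the object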


Figure 1: Components of hyperlinking system

Interactive Display System

The display system can be a desktop display unit, a laptop, or a smart device display unit. A desktop display unit includes a monitor and can be made interactive by using a mouse and keyboard. A laptop unit includes either a non-interactive or an interactive screen; a non-interactive screen can be made interactive by using a mouse and keyboard, and a smart screen can also serve as an interactive screen for the laptop. Smart device display units include touchpad, tablet, or smartphone screens.

Information Exchange in BIM + QR Environment

The different components of an integrated BIM+QR environment include the real world object tagged with a QR code, the BIM repository, and the display unit. An automated flow of
information among different components of the BIM+QR environment is shown in
Figure 2. The smartphone scans the QR code of the real world object. When the scanning is completed, a web page is displayed and prompts the user to click the image. Once the image is clicked, an identification code of the corresponding element of the 3D digital model is stored in a text file on the server, and the user is prompted to go to the interactive display unit. Based on the text file data, the BIM repository is searched and the results are reflected on the display unit through the 3D model, by highlighting the corresponding 3D digital element of the real world object. This element in turn facilitates querying the required information from the BIM repository.
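The pilot study described below implemented this hand-off with an ASP page and a C# application; purely to illustrate the pattern, an equivalent server-side handler that records the scanned element's identification code in a text file could be written with the Flask micro-framework as in the sketch below. The endpoint path, query parameter name, and file name are assumptions.

from flask import Flask, request

app = Flask(__name__)

@app.route("/bimqr/select", methods=["GET"])
def record_selection():
    # The web page opened by the QR scan passes the 3D element's ID, e.g.
    # /bimqr/select?element_id=316492 (parameter name is an assumption).
    element_id = request.args.get("element_id", "")
    if not element_id:
        return "missing element_id", 400
    # Overwrite the hand-off file polled by the BIM-side application.
    with open("selected_element.txt", "w") as handle:
        handle.write(element_id)
    return "Selection recorded. Please continue on the interactive display unit."

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)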

PILOT STUDY
A pilot study was conducted by deploying the above proposed BIM+QR environment for O&M of the Construction Management Department’s classrooms at Southern Polytechnic State University. The integrated BIM+QR environment framework was developed using QR codes and Autodesk’s Revit
2014. The study included three steps: (1) hyperlinking real world objects; (2) BIM repository development; and (3) automation of information exchange. The objective of the study is to synchronize the user's QR code input and provide access to the O&M information through highlighting the component in BIM. An overview of the steps involved in the process is presented below.

Figure 2: Automated information flow in BIM+QR environment (components: real world object tagged with a QR code, URL, web page (asp) and text file on the server, smart device, C# application, 3D model, information database, and BIM repository)

Hyperlinking Real World Objects

In this step, web pages for the different real world objects, as shown in Figure 3a, were developed using HTML. These pages are hosted on a server. Static QR codes containing the URL of each object's page were created, and these QR codes were tagged to the real world objects as shown in Figure 3b.

BIM Repository Development

The BIM repository was developed using Revit 2014. The repository development process included 3D model development and association of the information with the 3D model elements. A 3D model of the Construction Management Department's classrooms was developed using existing families and by creating new families, as shown in Figure 4.


(a) Demo unit web page; (b) Demo unit tagged with QR code
Figure 3: Hyperlinking real world object

Figure 4: Classroom Floor layout


Two different approaches for associating O&M information are: (a) selecting each 3D element of the model and linking the documents by specifying the document storage path, and (b) automating the linkage by placing the files in a pre-assigned path with a standardized storage location. The automation can be achieved by adopting the National Building Information Modeling Standard (NBIMS) specification called Construction Operations Building information exchange (COBie). COBie enables manufacturers to submit electronic product information and then makes it available for O&M purposes (East, 2007). In this study, O&M information was integrated by selecting each component and linking the documents by specifying the document storage path. The O&M information integrated into the 3D model included Microsoft Word files, PDFs, photographs, and audio and video files. Since the existing parameters of the 3D elements were not sufficient, new parameters were added to the 3D elements. Some of the newly added shared parameters include O&M Manuals, Maintenance Schedule, Performance Test Videos, Specifications, Typical Section, Construction Photos, Code Requirements, and Installation Videos. These parameters were made to appear under the group name 'Other' in the type parameters list. The URL data format was used for each parameter.


This format is useful for establishing the link between the respective files and components. The association of information with the model components is accomplished by assigning the file paths of the information to the parameters. The link to the documents through the path stored in each parameter allows easy access to the required information.
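As a minimal sketch of this linking step (not part of the original study; the parameter name, file path, and the use of Element.LookupParameter are assumptions that depend on the Revit API version in use), a document path could be written into a URL-type shared parameter through the Revit API roughly as follows:

// Hedged sketch: assigns a document path to a URL-type shared parameter of a
// Revit element. Assumes the shared parameter (e.g., "O&M Manuals") already
// exists on the element; names and paths here are illustrative only.
using Autodesk.Revit.DB;

public static class OmParameterWriter
{
    public static void LinkDocument(Document doc, ElementId elementId,
                                    string parameterName, string filePath)
    {
        Element element = doc.GetElement(elementId);
        Parameter param = element.LookupParameter(parameterName);
        if (param == null || param.IsReadOnly)
            return; // parameter missing or locked; nothing to link

        using (Transaction tx = new Transaction(doc, "Link O&M document"))
        {
            tx.Start();
            param.Set(filePath); // URL-type parameters accept a plain string path
            tx.Commit();
        }
    }
}

Because the stored value is simply a string, the same routine could point a parameter at a local file, a network share, or a web URL.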

Automation of Information flow

In this step, the communication among the real world object, the server, and the BIM repository was established. The tagged real world object communicates with the server through the smartphone over the wireless network. When the user scans the QR code, an HTML web page opens and prompts the user to click the image. Once the user clicks, an event is triggered to store the identification code of the element in a text file on the server; this data storage is handled by an asp file. The communication between Revit Architecture and the server was established through the Revit API using the C# programming language. Once the communication is established, the data in the text file are retrieved and the corresponding 3D element is selected and highlighted on the interactive smart display unit. This element in turn facilitates querying the required information from the BIM repository. Figure 5 shows the steps followed by the user to access the information using the BIM+QR environment. It also shows screen shots of the retrieved O&M manual, specifications, and performance test videos of the door accessed from the BIM.

Figure 5: User interacting in an integrated BIM+QR environment ((a) object tagged with QR code, Step 1: user scans the QR code using a smart device; (b) user screen, Step 2: the 3D element is highlighted and the user accesses the information)
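A hedged sketch of the retrieval side is shown below: an external command that reads the element identification code from the server's text file and highlights the matching element in the active view. The file path is hypothetical, and Selection.SetElementIds/ShowElements reflect more recent Revit API versions; the Revit 2014 selection API used in the study differs, so this illustrates the idea rather than the study's implementation.

// Reads the scanned element's identification code from a text file and
// selects/zooms to that element in the active Revit document.
using System.Collections.Generic;
using System.IO;
using Autodesk.Revit.Attributes;
using Autodesk.Revit.DB;
using Autodesk.Revit.UI;

[Transaction(TransactionMode.Manual)]
public class HighlightTaggedElement : IExternalCommand
{
    public Result Execute(ExternalCommandData data, ref string message,
                          ElementSet elements)
    {
        UIDocument uidoc = data.Application.ActiveUIDocument;
        Document doc = uidoc.Document;

        // Hypothetical local copy of the text file written by the asp page.
        string idText = File.ReadAllText(@"C:\BIMQR\scanned_element.txt").Trim();
        if (!int.TryParse(idText, out int idValue))
            return Result.Failed;

        ElementId id = new ElementId(idValue);
        if (doc.GetElement(id) == null)
            return Result.Failed;

        uidoc.Selection.SetElementIds(new List<ElementId> { id }); // highlight
        uidoc.ShowElements(id);                                    // zoom to it
        return Result.Succeeded;
    }
}

Keeping the exchange in a plain text file (or a small web service) is one way to decouple the smartphone side from the Revit add-in, as described above.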


CONCLUSION
The BIM+QR environment automates the element identification and selection process in BIM. The integration of BIM and QR codes provides a seamless flow of information between real world objects and BIM elements. This automation reduces identification and selection time and reduces manual errors. The BIM+QR environment synchronizes dynamic user input and increases information retrieval efficiency. As the number of smart device users continues to rise, the BIM+QR environment has the potential to bring about a paradigm shift in O&M. The methodology discussed in this paper serves as an initial step toward developing an integrated BIM+QR environment for effective O&M applications.

REFERENCES
Bell, L.C. and McCullouch, B.G. (1998). “Bar code applications in construction,” J.
of Constr. Engrg and Mgmt, 114(2), 263-278.
East, W, E., (2007). “Construction Operations Building Information Exchange
(COBIE).” < http://www.wbdg.org/pdfs/erdc_cerl_tr0730.pdf> (March 5,
2011).
Goedert, J.D., and Meadati, P. (2008). “Integration of construction process
documentation into Building Information Modeling.” J. of Constr. Engrg and
Mgmt, 137(7), 409-516.
Li, N. and Becerik-Gerber, B. (2011). “Performance-based evaluation of RFID-based
indoor location sensing solutions for the built environment.” Advanced
Engineering Informatics, 25(3), 535–546.
Lin, Y. C., Su, Y. C. and Chen Y. P. (2014). “Developing Mobile BIM/2D Barcode-
Based Automated Facility Management System.” The Scientific World
Journal, 2014.
Navon, R. and Berkovich, O. (2005). “Development and on-site evaluation of an
automated materials management and control model,” J. of Constr. Engrg and
Mgmt, 131(12), 1328-1336.
Saeed, G., Brown, A., Knight, M., and Winchester, M. (2010). “Delivery of
pedestrian real time location and routing information to mobile architectural
guide.” Autom. Constr, 19 (4), 502 - 517.
Shehab, T. and Moselhi, O. (2005). “An Automated Barcode System for Tracking
and Control of Engineering Deliverables.” Proceeding of Construction
Research Congress, April 5 - 7, San Diego, CA
Su, Y. C., Hsieh, Y. C., Lee, M.C., Li, C.Y. Lin, Y. (2013). “Developing BIM-Based
shop drawing automated system integrated with 2D barcode in construction.”
Proceedings of the Thirteenth East Asia-Pacific Conference on Structural
Engineering and Construction (EASEC-13), September 11-13, 2013, Sapporo,
Japan.
Wikimedia (2014). “Object hyperlinking.”
<http://en.wikipedia.org/wiki/Object_hyperlinking > (April 15, 2014).


Measuring End-User Satisfaction in the Design of Building Projects Using Eye-Tracking Technology

Atefeh Mohammadpour1; Ebrahim Karan2; Somayeh Asadi3; and Ling Rothrock4


1Postdoctoral Researcher, Department of Architectural Engineering, Pennsylvania State University, 104 Engineering Unit A, University Park, PA 16802. E-mail: [email protected]
2Assistant Professor, Department of Applied Engineering, Millersville University, Millersville, PA 17551. E-mail: [email protected]
3Assistant Professor, Department of Architectural Engineering, Pennsylvania State University, 104 Engineering Unit A, University Park, PA 16802. E-mail: [email protected]
4Associate Professor, Department of Industrial and Manufacturing Engineering, Pennsylvania State University, 310 Leonhard Building, University Park, PA 16802. E-mail: [email protected]

Abstract

The importance of end-user participation in the design process of building and


construction projects has been recognized and addressed by a number of researchers
and practitioners. The main goal is to ensure that the project outcome meets the
facility users’ needs. In order to understand their needs, a variety of approaches (e.g.
focus groups, workshops, and questionnaires) for the building end-users participation
in the design process have been presented in the literature. Despite the contributions
and practical features of these methods, they require a significant amount of time and
effort to conduct and interpret the participants’ responses. To overcome this limitation,
this paper investigates the use of eye-tracking technology to measure and analyze
end-user satisfaction. This study is carried out to test the hypothesis that users' satisfaction with design variations is related to their visual attention. In other words, design alternatives with a high level of user satisfaction attract more attention. An
experiment using four alternatives for the design of a façade is performed to test the
effectiveness of eye-tracking technology. The design alternatives are developed and
displayed in a virtual 3D environment. Participants are asked to rate their level of
satisfaction with each alternative, while their interaction with the virtual models is
recorded using eye-tracking. The results of the experiment are also demonstrated to
domain experts to get a better understanding of the technology’s potential and
challenges.

Keywords: Eye-tracking; Building design; Visual perception


INTRODUCTION

The importance of end-user participation in the design process of building and


construction projects has been recognized and addressed by a number of researchers
and practitioners. End-users are people who live or work in a building and although
they do not necessarily have the knowledge for managing the building, they may have
opinions about its performance (Lai and Yik 2007). Huovila and Seren (1998)
developed a framework based on Quality Function Deployment and Design Structure
Matrix methods to capture and to document customer needs, and to interpret them
into requirements in different phases of the design and construction processes. Kaya
(2004) highlighted the importance of end-users involvement in the process of shaping
their new physical environment. The collection and use of end-users' requirements is
seen as the key to successful management of building projects (Pemsel and Widén
2010). End-user satisfaction is critical not only to the outcome; the way it is addressed also affects the design of the system (Campbell and Finch 2004). The
main goal is to ensure that the project outcome meets the end-users’ needs. Managing
end-users in the design process is not an easy task and it requires a lot of time and
effort involving planning, workshops, interviews, presentations, and feedback
(Pemsel et al. 2010).
A variety of approaches (e.g. focus groups, workshops, and questionnaires)
for the end-users participation in the design process have been presented in the
literature in order to understand their needs and expectations of the building. For
example, Hebert and Chaney (2012) used a survey questionnaire to gain insight into
end-users’ preferences for facilities and services and suggested a similar methodology
for other facilities’ design applications. Christiansson et al. (2011) developed an
Information and Communication Technology (ICT) to support end-user participation
in the design process together with building designers, and to capture and formulate
end-user requirements and preferences on buildings and their functionality. Despite
the contributions and practical features of these methods, they require a significant
amount of time and effort to conduct and interpret the participants’ responses. A
method capable of acquiring measurable data about end-user satisfaction would be
highly beneficial for the assessment of design options. To overcome this limitation,
this paper investigates the use of eye-tracking technology to measure and analyze
end-user satisfaction in the design of building projects.
The study of people’s satisfaction focuses on how people perceive information
around them and how they interpret the information registered through their physical
senses (i.e. vision, hearing, taste, touch, and smell). The visual presentation or the
obtaining of knowledge through the sense of seeing has always been a factor in
decision making and customer satisfaction (Lurie and Mason 2007). This study is
carried out to test the hypothesis that the users’ satisfaction of design variations is
related to their visual attention. In other words, design alternatives with a high level
of user satisfaction would attract more attention. A description of the eye tracking
technology and its main components are provided in the next section and followed by
an experiment conducted to test the hypothesis.


EYE-MOVEMENT DATA AND APPLICATIONS OF EYE-TRACKING

Eye tracking is the process of measuring an individual’s eye movements to


determine where a person is looking at any given time and the sequence in which
his/her eyes are shifting from one location to another (Poole and Ball 2006).
“Fixations” and “Saccades” are probably the main measurements used in eye-tracking
studies. A fixation is when the user’s gaze is relatively motionless on a specific area
and a saccade is a quick movement between fixations to another element (Ehmke and
Wilson 2007). Figure 1 shows the concepts of fixation and the movement between
fixations, or saccades. Once the relationship between eye-positions and scene
coordinates is calibrated, we can measure how the eyes typically move over a scene
(e.g. a digital image as shown in Figure 1).
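To make the distinction concrete, the following sketch (illustrative only; the dispersion and duration thresholds are hypothetical, and this is not the algorithm used by the Mirametrix software) groups consecutive gaze samples into fixations with a simple dispersion-threshold rule, treating the remaining samples as saccadic movement:

// Minimal dispersion-threshold (I-DT style) fixation detector, for illustration.
// Gaze samples are screen coordinates with timestamps; thresholds would need
// tuning to the tracker's accuracy and 60 Hz sampling rate.
using System;
using System.Collections.Generic;
using System.Linq;

public record GazeSample(double TimeMs, double X, double Y);
public record Fixation(double StartMs, double DurationMs, double X, double Y);

public static class FixationDetector
{
    public static List<Fixation> Detect(IReadOnlyList<GazeSample> samples,
                                        double maxDispersionPx = 35,
                                        double minDurationMs = 100)
    {
        var fixations = new List<Fixation>();
        int start = 0;
        while (start < samples.Count)
        {
            int end = start;
            // Grow the window while the samples stay within the dispersion limit.
            while (end + 1 < samples.Count &&
                   Dispersion(samples, start, end + 1) <= maxDispersionPx)
                end++;

            double duration = samples[end].TimeMs - samples[start].TimeMs;
            if (duration >= minDurationMs)
            {
                var window = samples.Skip(start).Take(end - start + 1).ToList();
                fixations.Add(new Fixation(samples[start].TimeMs, duration,
                                           window.Average(s => s.X),
                                           window.Average(s => s.Y)));
                start = end + 1;   // samples consumed by this fixation
            }
            else
            {
                start++;           // treat the sample as part of a saccade
            }
        }
        return fixations;
    }

    private static double Dispersion(IReadOnlyList<GazeSample> s, int a, int b)
    {
        double minX = double.MaxValue, maxX = double.MinValue;
        double minY = double.MaxValue, maxY = double.MinValue;
        for (int i = a; i <= b; i++)
        {
            minX = Math.Min(minX, s[i].X); maxX = Math.Max(maxX, s[i].X);
            minY = Math.Min(minY, s[i].Y); maxY = Math.Max(maxY, s[i].Y);
        }
        return (maxX - minX) + (maxY - minY);
    }
}

Summing fixation durations per design alternative then yields the attention measures discussed later in the paper.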
In order to assess the eye movement data in relation to the presented stimuli, eye-tracking software is used to analyze the recorded gaze positions either based on the amount of interest and the aggregated eye fixations, as in the heat map visualization technique, or based on their sequence, as in the gaze plot technique (Holmqvist et al. 2011). In addition to the analysis method, the eye-tracker (hardware components)
should be selected carefully to provide accurate and precise outcome data. The
accuracy of an eye tracker refers to the difference between the actual gaze position
and the captured position. The saccade resolution gives an impression of how fast an
eye tracker can detect saccade movement. The sampling frequency (measured in
hertz) shows the ability of the system to capture the number of samples (e.g. gaze
direction) in one second (Yousefi et al. 2015).

Figure 1. Different types of eye movements (fixation and saccade)

RESEARCH DESIGN

In order to extend from the theory point of view to practice, the experiment in
this study was conducted in four steps: experiment design, data collection, data
analysis, and domain experts’ evaluation elaborated as follows:

Experiment Design: An experiment using four alternatives for the design of a


façade was performed to test the effectiveness of eye-tracking technology. These


alternatives were similar in terms of geometry and window position, but the texture
and color of materials were varied. The participants were provided with graphic
representation of the facade, from different views of the building (North, South, East,
and West). Four image slideshows were considered in this experiment, each image
contained four different designs (Figure 2). The images were counterbalanced to
control for order effect (although cannot be seen here).

Figure 2. Four alternatives for the design of a façade

Data Collection: In order to understand the user’s satisfaction of design


variations, two different approaches were used for data collection: (1) eye-tracking
and (2) questionnaire. First, the participant looked at the image slideshow, while the
eye-tracker recorded his/her eye movements (where, when, and how long). Through a
questionnaire, participants were then asked to rank the given designs and determine
the part of the design (without surfaces and color) they found most interesting.

Data Analysis: The collected data through eye tracking and questionnaires
were analyzed statistically to compare the user satisfaction of a design during the
design phase as well as test the hypothesis that design alternatives with high level of
users’ satisfaction attract more visual attention.

Domain Expert Evaluation: The domain experts (architects) evaluated and


rated each alternative according to its openings, texture, and color of its components.
They believed that the building features with warm colors attract more attention. This
evaluation was confidential until the end of eye-tracking experiments. The results of
the experiments were demonstrated to domain experts to better understand the
technology’s potential and challenges.


PILOT STUDY

In this study, the Mirametrix S2 eye-tracker, which is an ideal system for on-screen research, was used to record eye movement data. The technical specifications of the eye-tracker include: accuracy in the 0.5 to 1 degree range, drift < 0.3 degrees, data rate of 60 Hz, freedom of head movement of 25 (width) x 11 (height) x 30 (depth) cm, 9-point calibration (~15 seconds to complete), binocular tracking, and bright pupil tracking.
The process of the pilot study is presented in Figure 3. This is a within-subjects
experiment in which every single participant was subjected to every single design
alternative. The results were analyzed using 128 data samples collected from 8
participants.
Because eyeball radius and shape differ among participants, and glasses change the apparent size of the eye, the eye-tracker must be calibrated for each participant. The calibration procedure started with the participant sitting relaxed in front of the eye-tracker camera and being asked not to move. The participant then looked at 9 calibration spots on the screen. In the next step, the participants were asked to look at designs A to D displayed on the monitor while the eye tracker recorded their eye movements. After each round, the participants were asked to (1) rate the design alternatives and (2) circle the part of the design that they found most interesting. It should be noted that the design alternatives used for the first question had similar geometry but different texture and color, while wireframe alternatives were used for the second question. The data recorded by the eye-tracker and the data provided by the questionnaire were reviewed, organized, and analyzed. The details of the data analysis are explained in the following section.

Figure 3. The process of the conducted pilot study (calibration; looking at designs A to D on the image; grading each design on the questionnaire; analyzing the recorded data)


RESULTS AND DISCUSSION

The number of fixations and the fixation duration spent by each participant on each design alternative were compared with the data provided by the questionnaire. The results showed that the average percentages of time spent by the participants on designs A to D were 28%, 30%, 19%, and 23%, respectively. Based on the questionnaire results, design A was selected as the best design alternative with a relative average score of 39% (average score/sum of the average scores), followed by design D (24%), C (22%), and B (15%). This comparison, which is discussed later in this section, did not support the hypothesis that participants spend more time looking at designs with higher scores. There are two possible reasons for these results. First, not only does the alternative with a high level of participant satisfaction (e.g., design A) attract attention, but the alternative with a low level of participant satisfaction (e.g., design B) also attracts attention. Second, attention does not necessarily equate to satisfaction. Although design alternatives with warm colors (e.g., designs A and B) attract more attention, the combination of the building's geometry and texture can be an important matter for the participants. In order to address this issue, wireframe design alternatives (without surfaces and color) were used for the second question about the fixation durations spent on the parts that the participants found interesting.
To analyze end-user satisfaction during the design phase, a randomized complete block design was employed in which the data collected from the first step were sorted into homogeneous design alternatives, or blocks, and the eye movement and questionnaire data were then assigned within the blocks. Visual attention is measured based on the fixation duration spent on each design alternative. The summary of the analysis of variance is presented in Table 1.
The research model was established on the idea that there is a relation between user satisfaction and visual attention. There were no statistically significant differences between participants' responses and visual attention (fixation durations spent on each design recorded by the eye tracker), as determined by the analysis of variance (p = 0.119). The residual plots, including the normal probability plot, histogram, residuals versus fits, and residuals versus order, are shown in Figure 4. The normal probability plot shows a fairly linear pattern, which is an indication of population normality (i.e., the distribution of residuals is normal).

Table 1. Summary of analysis of variance

Source DF Adj SS Adj MS F-Value p-value


Image 3 0.00060 0.000200 0.01 0.999
Design (Image) 12 0.24915 0.020763 1.72 0.155
Factors (Image, Design) 16 0.19338 0.012086 1.45 0.119
Error 224 1.86170 0.008311
Total 255 2.30483
Note: SS = Sum of Squares, MS = Mean Square


Figure 4. Residual Plots for Response

An additional interesting result was the high fixation values for the parts that the participants found interesting. While these parts covered on average 8.9% of the total area, the participants spent 19.8% of their time on these areas of interest. As shown in Table 2, the results of a paired t-test between the means of the "most interesting" areas and the fixation durations spent on those areas show that the participants paid considerably higher visual attention to the parts they considered attractive. Given the p-value (t-value = 4.804; p < 0.001), we can reject the null hypothesis and accept the alternative hypothesis that there is a statistically significant difference between the means. Thus, it is concluded that users' satisfaction with design variations (e.g., building components) is related to their visual attention.

Table 2. Paired-sample t-test for mean difference

Source N Mean Std. Deviation t-Value p-value
Area of attractive part (%) 32 8.86 12.7
Fixation durations (%) 32 19.77 3.34
Pair 4.804 <0.001
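For readers who wish to reproduce this comparison, the paired t-statistic reported in Table 2 is t = d-bar / (s_d / sqrt(n)), where d-bar and s_d are the mean and standard deviation of the per-observation differences. A minimal sketch (operating on hypothetical paired arrays, not the study's raw data) is:

// Paired t-test on two equal-length samples (e.g., percentage of area vs.
// percentage of fixation duration for each observation). Returns the t
// statistic; the p-value is then looked up from a t distribution with n-1
// degrees of freedom. Input arrays are placeholders, not the study's data.
using System;
using System.Linq;

public static class PairedTTest
{
    public static double TStatistic(double[] x, double[] y)
    {
        if (x.Length != y.Length || x.Length < 2)
            throw new ArgumentException("Samples must be paired and have n >= 2.");

        double[] d = x.Zip(y, (a, b) => a - b).ToArray();
        double meanD = d.Average();
        double sd = Math.Sqrt(d.Sum(v => (v - meanD) * (v - meanD)) / (d.Length - 1));
        return meanD / (sd / Math.Sqrt(d.Length));
    }
}

With 32 paired observations (31 degrees of freedom), a t-value of about 4.8 corresponds to p < 0.001, consistent with Table 2.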

CONCLUSIONS AND RECOMMENDATIONS

In this study, eye-tracking technology was utilized during the design phase of a building to compare the eye movement data recorded by the eye-tracker device with the data collected through the questionnaire. The null hypothesis states that there is no relationship between participants' satisfaction and the time spent on each design alternative. We failed to reject this null hypothesis at the 5% level. Also, the
participants were asked to determine the part of the design that is most appealing to
them. The analysis of fixation durations shows that the participants spent about 20%
of their time on these areas of interest; however, these parts cover less than 9% of the
total area. Therefore, we can conclude that higher visual attention is paid to the parts
the participants consider attractive.


The eye tracking technology is used in several disciplines such as marketing,


software design, web design, psychology, human behavior research, and game design.
This study provided an opportunity to get a new insight into the eye tracking usage
and its implementation in the construction industry by looking at end-user behavior
and their satisfaction during the design process. Although eye-trackers are expensive and require specific skills and knowledge to use, they are becoming more user-friendly and cost-effective over time. Instead of yielding only general data on end-user satisfaction with the design of a building, implementing eye-tracking technology provides detailed quantitative data on user attention to the proposed design during the design phase.

REFERENCES

Campbell, L., and Finch, E. (2004). "Customer satisfaction and organisational


justice." Facilities, 22(7/8), 178-189.
Christiansson, P., Svidt, K., Sørensen, K. B., and Dybro, U. (2011). "User
participation in the building process." Journal of Information Technology in
Construction, 16, 309-334.
Ehmke, C., and Wilson, S. (2007). "Identifying web usability problems from eye-
tracking data." Proceedings of the 21st British HCI Group Annual Conference
on People and Computers: HCI... but not as we know it-Volume 1, British
Computer Society, Swinton, UK, 119-128.
Hebert, P. R., and Chaney, S. (2012). "Using end-user surveys to enhance facilities
design and management." Facilities, 30(11/12), 458-471.
Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., and Van de
Weijer, J. (2011). Eye tracking: A comprehensive guide to methods and
measures, Oxford University Press.
Huovila, P., and Seren, K.-J. (1998). "Customer-oriented design methods for
construction projects." Journal of Engineering Design, 9(3), 225-238.
Kaya, S. (2004). "Relating building attributes to end user's needs:“the owners-
designers-end users” equation." Facilities, 22(9/10), 247-252.
Lai, J. H., and Yik, F. W. (2007). "Perceived importance of the quality of the indoor
environment in commercial buildings." Indoor and built environment, 16(4),
311-321.
Lurie, N. H., and Mason, C. H. (2007). "Visual representation: Implications for
decision making." Journal of Marketing, 71(1), 160-177.
Pemsel, S., and Widén, K. (2010). "Creating knowledge of end users' requirements:
The interface between firm and project." Project Management Journal, 41(4),
122-130.
Pemsel, S., Widén, K., and Hansson, B. (2010). "Managing the needs of end-users in
the design and delivery of construction projects." Facilities, 28(1/2), 17-30.
Poole, A., and Ball, L. J. (2006). "Eye tracking in HCI and usability research."
Encyclopedia of human computer interaction, 1, 211-219.
Yousefi, M. V., Mohammadpour, A., Karan, E. P., and Asadi, S. (2015).
"Implementing Eye Tracking Technology in the Construction Process." 51st
ASC Annual International Conference Proceedings, College Station, TX.


Automated Rule-Based Checking for the Validation of Accessibility and Visibility of a Building Information Model

Y. C. Lee1; C. M. Eastman2; and J. K. Lee3


1Ph.D. Candidate, College of Architecture, Georgia Institute of Technology, 723 Cherry St., Atlanta, GA 30332. E-mail: [email protected]
2Director, Digital Building Laboratory, Georgia Institute of Technology, 723 Cherry St., Atlanta, GA 30332. E-mail: [email protected]
3Assistant Professor, Department of Interior Design, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul 133-791, Korea. E-mail: [email protected]

Abstract

Client-developed building programs are increasingly burdened with a number of


requirements such as building codes, safety, and programming requirements.
Managing these requirements ensures the successful performance of subsequent
procedures and the high quality of a project. However, because assessing a building
design relies primarily on the competence and dexterity of an architect, domain
experts often do not discover that these requirements have not been met until late in
the construction phase. For example, a hospital design, which has a complex structure
or complicated requirements, should be evaluated with multi-functional features that
help maintain the consistent quality of the design. Therefore, to support the
evaluation of a building design, this paper proposes automated rule-based checking
that assures satisfaction of the requirements of a spatial program and visibility. This
rule-checking process uses the extended rule language of the building environment
rule and analysis implemented on the Solibri Model Checker.

INTRODUCTION

A number of regulatory and subordinate requirements of a building design should be


planned and ensured throughout the design and construction phases (Eastman et al.
2009). Thus, during the early design phase, project participants devote a great deal of
time to checking a proposed design in accordance with required criteria and standards.
To improve the validation process, this paper suggests an automated rule-based
checking process for assuring satisfaction of the requirements of a spatial program
and visibility. To devise the most efficient way to address the numerous requirements
pertaining to a spatial program and visibility, we set up checking algorithms using the
libraries of the Solibri Model Checker (SMC) and extend the validation language of
the building environment rule and analysis (BERA). Based on the findings of a case
study, this research examines how requirements and regularized constraints


associated with a spatial program and visibility are efficiently managed throughout
the design and construction phases.

CURRENT PROBLEMS IN CHECKING DESIGN REQUIREMENTS

Manually collecting diverse requirements and applying them to a design process is generally burdensome for an architect. To satisfy a large number of requirements, the predesign phase has traditionally relied heavily on the competence and dexterity of an architect. However, because of the massive number of criteria, manually applying these requirements to a building design is tedious. Unfortunately, as a result, the architect may neither properly address nor adhere to the requirements until late in the construction process, which leads to additional time and expense. Another major problem is that iterative exchanges of a building design among project participants throughout the design and construction phases can unintentionally alter the design, which may compromise conformity to requirements (Solihin et al. 2014). Thus, when sending and receiving a building design, domain professionals need to validate it iteratively against the predefined requirements (Lee et al. 2014). These repetitive evaluations, however, are laborious, particularly in complex building projects. For example, a hospital design may specify thirty or more requirements for only a single patient room. Thus, satisfying all of the requirements of a proposed building design is significantly demanding for project participants.

AUTOMATED RULE-BASED VALIDATION PROCESS

The objective of rule-based checking is to validate a building design based on how


objects, relationships, or attributes are configured (Eastman et al. 2009). With the
advent of Building Information Modeling (BIM), a computerized checking process
has been increasingly recognized as imperative to the evaluation of a building design
because it can help reduce human error and improve the quality of the design. In addition, a rule-based assessment system allows domain experts to streamline fragmented processes throughout the design and construction phases so that they can improve the productivity of a building project. Among the
diverse types of requirements, this research addresses the evaluation of spatial
requirements and visibility that current checking applications do not deal with. The
possible requirements of two checking areas required in a conceptual hospital design
were provided by HOK, an architectural and engineering firm.

Requirements of a Spatial Program and Visibility

In general, a hospital or clinic design provides various types of spaces closely


connected and restrictively allocated so that they carry out the required functions of
treatment. Thus, such design entails a great deal of complex spatial requirements that
should be thoroughly managed throughout a building project. Figure 1 represents the


descriptions of the possible requirements for accessibility and visibility that should be
managed in the design phase. Accessibility checking can consist of three scenarios:
direct access, indirect access, and passing through between spaces. These
accessibility conditions should be satisfied if a hospital or clinic design is to maintain
circulation and division between secure space such as restricted areas. In addition,
visibility checking helps an architect design a well-organized layout; for example, a
hospital layout should ensure that a nurses’ station affords a clear view of patient
rooms because nurses must maintain constant vigilance of patient conditions.
Guaranteeing visibility between a nurses’ station and patient rooms improves the
efficiency of patient monitoring and reduces workplace stress (Seo et al. 2010).

Direct access Indirect access Passing through

Visibility checking

Figure 1. Requirements of accessibility and visibility

Checking Algorithms and BERA Extensions for Spatial Requirements

Users can define diverse types of rule sets in the SMC application using rule templates and checking parameters that increase the flexibility of rule definitions. The scope of values that the parameters support, however, is currently too limited for defining rules such as those for a spatial program and visibility (Ding et al. 2004). For instance, even though the SMC provides a variety of checking templates for users, it lacks templates and the required parameters for the validation of accessibility and visibility. Within the programmed checking framework, which offers only vendor-defined parameters and prohibits users from generating checking rules using the existing SMC libraries, users are limited when implementing various evaluations of a BIM model for their own use. Thus, to analyze domain-specific knowledge of a BIM model that the SMC does not support, we extended the BERA language to address spatial program and visibility rule checking using the SMC libraries and references. In particular, we added necessary parameters, such as selecting a door or a window as the start and end points, and designed algorithms to develop the suggested checking


approach. Figure 2 illustrates the extended checking algorithm for the validation of
accessibility and visibility. This checking algorithm analyzes the number of turns
required to reach a targeted place from a base space using predefined start and end
points. If the number of turns required between two spaces is zero, the checking result shows that the two spaces are directly accessible and visible to each other.

Figure 2. Extended checking algorithm for the validation of accessibility and visibility
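As a simplified illustration of the idea behind this algorithm (not the SMC or BERA implementation itself), the number of turns along a walkable path can be counted as the number of direction changes between consecutive segments of the path polyline:

// Counts the number of turns along a path given as an ordered list of 2D
// points (e.g., centers of traversed cells). A "turn" is any change of
// direction between consecutive segments. Illustrative only; in the study the
// path itself comes from the SMC path-finding libraries.
using System;
using System.Collections.Generic;

public record Point2D(double X, double Y);

public static class PathAnalysis
{
    public static int CountTurns(IReadOnlyList<Point2D> path, double tolerance = 1e-6)
    {
        int turns = 0;
        for (int i = 2; i < path.Count; i++)
        {
            // Direction vectors of the previous and current segments.
            double dx1 = path[i - 1].X - path[i - 2].X;
            double dy1 = path[i - 1].Y - path[i - 2].Y;
            double dx2 = path[i].X - path[i - 1].X;
            double dy2 = path[i].Y - path[i - 1].Y;

            // The cross product is non-zero when the direction changes.
            double cross = dx1 * dy2 - dy1 * dx2;
            if (Math.Abs(cross) > tolerance)
                turns++;
        }
        return turns;
    }

    // A start and end space are treated as directly accessible/visible
    // when the connecting path has no turns.
    public static bool IsDirect(IReadOnlyList<Point2D> path) => CountTurns(path) == 0;
}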

Suggested Plan and Implementation of Rule Checking


The SMC imports and evaluates a building model defined in the IFC format using the
predefined rule set libraries with regard to object existence, relations, and path
distance checking (Solibri Inc.). The BERA Language developed by JinKook Lee and
Chuck Eastman facilitates the defining of queries for rule sets required for the
validation of a building model. The BERA plug-in embedded in the SMC translates
an IFC file developed by various BIM authoring tools and executes the checking
process with defined rule sets.
Figure 3 presents the architecture of the rule-checking process using the SMC and the
BERA language. First, the SMC and the BERA application translate an imported IFC
file and generate a BERA object model (BOM) according to the specific goal of use
(Lee et al. 2014). The BOM enables a user to maintain information about the IFC model data, use it efficiently, and ultimately facilitate the checking process by improving the data translation from the IFC model data to the SMC libraries (Lee et al. 2011). In addition, using the extended BERA language, users can define rule sets and
call the SMC objects from the SMC library. Through diverse defined rule sets, users
with access to the SMC libraries linked by the BOM can assess the objects, attributes,
and properties of a building design. That is, the user-defined rules can retrieve the
checking libraries and algorithms embedded in the SMC to accomplish programmed
rule checking defined in the extended BERA language. The outcomes of the
validation process can be reported to the 3D visualization interface and the BERA
console view.


Figure 3. Rule-checking process of the SMC and the BERA language (elements shown: requirements in a project, requirement sets, BERA rule sets for the spatial program and visibility requirements, IFC geometric model, BERA object model (BOM), mapping using the Solibri libraries, BERA executor executing the rules, Solibri Model Checker, and the BERA language plug-in)

A CASE STUDY

Figure 4 shows the validation results of a hospital design in terms of accessibility requirements. Green lines refer to directly accessed relationships, and the blue line represents a relationship that requires one more turn between two spaces. Figure 4 (a) illustrates direct accessibility between the [105] storage area and the [107] special clinic room, and Figure 4 (b) shows indirect accessibility between the [106] X-ray room and the [107] special clinic room. The number of turns among spaces in a hospital BIM model is evaluated to validate accessibility. Accessible paths calculated in the SMC are visualized with green and blue lines to represent accessibility between two spaces.
(a) Direct accessibility: a green line; (b) Indirect accessibility: a blue line
Figure 4. Accessibility validation between spaces

Using the extended BERA language for the accessibility checking algorithm, users can define commands in the BERA editor to execute the validation. Table 1 shows two commands that implement direct and indirect accessibility checking as shown in Figure 4 (a) and (b). These commands retrieve the BOM objects translated from an


imported IFC file and implement the associated checking libraries of the SMC. The BOM objects are validated according to the user-defined rule sets. The validation process generates results on the 3D visualization in the SMC interface to highlight the areas of non-compliance with regulatory requirements and on the BERA console viewer to create textual reports. Specifically, the outputs of the rule-checking execution provide validation conditions and judging results (true, false, or unknown) indicating whether a submitted building model contains correct components, attributes, and properties of the targeted objects.
Table 1. Commands for accessibility validation
Command for direct accessibility:
Rule officeRule(Space start, Space end)
{Path p = getPath(start, end);}
officeRule("Storage", "Special Clinic Room");
Command for indirect accessibility:
Rule officeRule(Space start, Space end)
{Path p = getPath(start, end);}
officeRule("X-ray Room", "Special Clinic Room");

Figure 5 illustrates the results of visibility checking implemented in the SMC and the BERA environment. Figure 6 shows textual reports for visibility checking generated in the BERA console viewer. Similarly to the accessibility validation, visibility is assessed by computing the number of turns required to reach the end spot from the start point. Table 2 includes the command for visibility, which users can define in the BERA editor. Figure 6 (a) represents that the nurse station in the [117] corridor is directly visible to the [101] patient area. Figure 6 (b) shows the textual report that the [103] patient area is not visible from the nurse station in the [117] corridor because the number of turns required to reach the [103] patient area from the nurse station is one. The walkable path, which requires one more turn, prevents physical observation of patients from the nurse station. Thus, this checking process helps an architect ensure satisfactory visibility among spaces in the building design during the predesign phase.

Figure 5. Visualization of visibility relationship between a nurse station and patient rooms


(b) Report for visibility passed (c) Report for visibility failed

Figure 6. Textual reports for visibility

Table 2. Command for visibility validation


Command for visibility

Rule officeRule(Space start, Space end)


{Path p = getPath(start, end);}
officeRule(“Nurse station”, “Patient area”);

Throughout the design and construction phases, these checking algorithms for
accessibility and visibility can be iteratively used to evaluate the conformity to the
predefined requirements of a building design and to maintain the quality of the design.
In addition, the BERA language can be extended to help architects and
nonprogrammers easily develop their own rules for specific assessments such as safety and fire code exit path checking, sunlight and glare percentage analysis, and space allocation. Such rules can be defined and executed through the extension of the BOM in the BERA language and the checking libraries of the SMC application.
This opportunity will offer the value of greater regulatory predictability and
consistency, which reduce human errors throughout the design phase.

CONCLUSION

Project participants constantly face the challenge of having to validate a building model iteratively against regulations and complex requirements. One of the primary benefits of BIM is automated rule-based checking (Eastman et al. 2008). This research addresses the rule sets and the checking algorithms of accessibility and visibility and demonstrates the effectiveness of automated rule-based
checking. In particular, through the extension of the BERA language, this research
primarily deals with the program requirements of a building design organized by
rooms or spaces where the rules apply and also on relationships between spaces. The
extended BERA language can be applied at the level of back-end extensibility: the
BERA language can be targeted and reused in other modeling platforms such as a
BIM authoring tool or a conceptual modeling application to evaluate its native model
according to the user-defined rule sets. In addition, the possible extensions of
dynamic BOM in the BERA language can help validate programming requirements,


space layout and circulation, the fire and safety code, equipment placement, and diverse evaluations during the design phase. In the long term, these efforts in rule-based
checking will reduce the errors of a building design, minimize the risk of professional
liability, and facilitate the design process and its data exchange. In addition, using rule-checking features, building professionals can ensure document submission for final approval by iteratively confirming potential conflicts with predefined regulatory requirements. The testing rule sets and algorithms can also be re-used as a checking library throughout a building project because the fundamental test-bed setup is kept. This automated validation process also aims to encourage domain experts who still rely on 2D designs to expedite the enhancement and customization of BIM applications that comply with the IFC schema. The ultimate goal of the rule-checking
process, however, is not only to put in place an automated checking system, but also
to offer the impetus to gear up the architecture, engineering, and construction
industries towards greater interoperability through the secured deployment of IFC-
based BIM applications. To accomplish the concrete validation process for addressing
various design requirements, the development of a rule translator and a stand-alone
checking application that employ the BERA language will be performed.

REFERENCES

Ding, L., Drogemuller, R., Jupp, J., Rosenman, M. A., and Gero, J. S. (2004)
“Automated Code Checking.” CRC for Construction Innovation, Clients
Driving Innovation International Conference, Surfers Paradise, Qld.
Eastman, C. M., Teicholz, P., Sacks, R., Liston, K. (2008). “BIM Handbook: A Guide
to Building Information Modeling for Owners, Managers, Designers,
Engineers and Contractors.” John Wiley & Sons, Inc., New Jersey.
Eastman, C. M., Lee, J. M., Jeong, Y. S., Lee, J. K. (2009). “Automatic rule-based
checking of building designs.” Automation in Construction 18, 1011-1033.
Lee, J. K., Eastman, C. M., Lee Y. C. (2014). “Implementation of a BIM Domain-
specific Language for the Building Environment Rule and Analysis.” Journal
of Intelligent & Robotic Systems, 1-16.
Lee, Y. C., Eastman, C. M., Lee, J. K. (2014). “Validations for Ensuring the
Interoperability of Data Exchange of a Building Information Model.”
Automated In Construction, Under Review.
Solibri Inc., Solibri Model Checker. http://www.solibri.com/solibri-model-
checker.html (accessed 7.11.14).
Seo, H. B., Choi, Y. S., and Zimring, C. (2010). “Impact of Hospital Unit Design for
Patient-Centered Care on Nurses’ Behavior.” Environment and Behavior 2011
43: 443
Solihin, W., Eastman, C. M., Lee, Y. C. (2014). “Toward Robust and Quantifiable
Automated IFC Quality Validation.” Advanced Engineering Informatics.
Under Review.


Using Building Energy Simulation and Geospatial Analysis to Determine Building and Transportation Related Energy Use

Ebrahim Karan1; Somayeh Asadi2; Atefeh Mohammadpour3; Mehrzad V. Yousefi4; and David Riley5

1Assistant Professor, Department of Applied Engineering, Millersville University, P.O. Box 1002, 40 East Frederick Street, Millersville, PA 17551. E-mail: [email protected]
2Assistant Professor, Department of Architectural Engineering, Pennsylvania State University, 104 Engineering Unit A, University Park, PA 16802. E-mail: [email protected]
3Postdoctoral Researcher, Department of Architectural Engineering, Pennsylvania State University, 104 Engineering Unit A, University Park, PA 16802. E-mail: [email protected]
4Director of Architecture, Rampart Architects Group, 19 Daad Street, Apt#19, Tehran, Iran. E-mail: [email protected]
5Associate Professor, Department of Architectural Engineering, Pennsylvania State University, 104 Engineering Unit A, University Park, PA 16802. E-mail: [email protected]

Abstract

According to the U.S. Energy Information Administration (EIA), building and


transportation sectors account for approximately 75% of CO2 emissions. Given the
magnitude of this statistic, many studies have been directed towards the issue of
energy use and carbon emissions of the built environment. Most of these studies
however, have focused only on either buildings or transportation systems. To analyze
the dynamics of energy use associated with buildings and transportation systems, it is
essential to explore the interactions between these two sectors in a single
comprehensive model. This paper develops a network infrastructure model to
determine the transportation energy intensity of a building as well as building energy
consumption based on the residents’ lifestyle. The proposed model is developed using
Geographic Information Systems (GIS) and Building Information Modeling (BIM) to
identify the current trends in energy use associated with people behavior and
infrastructure (buildings and transportation networks). BIM is used as a life cycle
inventory to model and collect building-related information and material quantities,
and GIS is used to define geo-referenced locations, storing attribute data, and
displaying data on maps. The main input to the model would be characteristics of
buildings and transportation networks, and socioeconomic data (population
dynamics) collected from a survey. The model then generates the energy and carbon
implications of the network in the form of a map.


INTRODUCTION

Building and transportation are essential parts of all the reports on energy
consumption trends and projections for energy demand, and other energy related
topics. This is not surprising, because building and transportation sectors account for
approximately 75% of greenhouse gas (GHG) emissions. Energy use in the building
sector is defined as the energy consumed in residential and commercial buildings,
where people reside, work, or buy services. The transportation sector represents
energy use for moving people, materials and goods by different transportation modes
(e.g. highway, rail, and pipeline) (EIA 2013). Given the magnitude of this statistic,
many studies have been directed towards the issues of energy use and carbon
emissions of the built environment. Most of these studies, focused only on either
buildings or transportation systems. To analyze the dynamics of energy use associated
with buildings and transportation systems, it is essential to explore the interaction
between these two sectors in a single comprehensive model.
Previous studies have shown that understanding the importance of occupant
behavior is a crucial factor in the long-term success of energy efficiency measures in
different households (Lindén et al. 2006). However, building occupants are the group
that is sometimes neglected in energy efficiency design. This is probably the main
reason why the actual energy consumption in buildings is higher than those calculated
or projected. According to the results of an experimental study conducted over 3
years in multi-family buildings in Switzerland, Branco et al. (2004) found that the
real energy use was 50% higher than the estimated energy use due mainly to the real
conditions of utilization and actual weather conditions.
Transportation systems are usually modeled as a network of nodes (e.g.
buildings) which are connected by edges (e.g. highways). These nodes are locations
and facilities that have the capacity to generate traffic flow, and edges are trails, roads
and other types of transportation modes along which energy is consumed. The
transportation network is developed by knowing the number or frequency of trips in each origin-destination (O-D) pair and is used to understand the relationships between
the various socio-economic and demographic factors, as well as other characteristics
of the built environment, and GHG emissions (Jia et al. 2009).
There are limited studies on the transportation energy expense of buildings.
Recently, Wilson and Navaro (2007) have written an article entitled “Driving to
Green Buildings: The Transportation Energy Intensity of Buildings” which refers to
“the amount of energy associated with getting people to and from that building.” The
building’s transportation energy intensity has been used in this paper as a metric to
measure building performance. The authors claimed that for an average new office
building, transportation accounts for more than twice as much energy use as building
operation. According to a report prepared by U.S. Department of Transportation,
transportation energy use for average office building is 381 kWh/m3 year, while
operating energy use for average office building is 293 kWh/m3 year (Davis et al.
2007). These reasons justify extensive literature on the relationships between the built
environment and transportation-related energy use and GHG emissions, along with
implications for factors such as economic growth and quality of life (Porter et al.
2013).


OBJECTIVE / SIGNIFICANCE

The uniqueness of this study rests on the notion of the built environment
model that integrates energy use in buildings and transportation infrastructures,
occupant behavior, and travel behavior into a dynamic network. In this network,
buildings (nodes) are connected by highways and roads. Geographic Information
Systems (GIS) is used as the platform for data acquisition and implementation, and
dynamic linkage between spatial (e.g. location of buildings) and attribute (e.g.
building’s carbon footprint) data. Moreover, the study is of particular significance
since it accurately reflects household and individual interactions with daily activity-
travel patterns. Due to lack of information regarding the dynamics of activities and
travel behavior of households, it is not possible to evaluate non-capital improvement
strategies associated with transportation energy. The objective of this study is to
bridge this gap through the application of activity-based modeling of travel demand,
which will enable decision-makers to understand the interactions between travel
choices and schedule of activities in terms of time and space. Based on the unit of
analysis, travel models can be categorized into trip-based and activity-based models.
In trip-based travel models, the travel demands are derived from the individual travel
behavior (e.g. individual person trip), but activity-based models use the need and
desire to participate in activities as the fundamental units of analysis.
The proposed model is developed using GIS and Building Information
Modeling (BIM) to identify the current trends in energy use associated with people
behavior and infrastructure. BIM is used as a life cycle inventory to model and collect
building-related information and material quantities, and GIS is used to define geo-
referenced locations, storing attribute data, and displaying data on maps. The
integration of BIM and GIS has been used successfully to solve spatial-related
problems in construction management. Examples of these applications include digital
modeling of building and landscape-level components (Karan et al. 2014),
construction site layout planning (Sebt et al. 2008), and facility management supply
chain (Karan and Irizarry 2014). This integration makes it possible to integrate
physical (e.g. buildings and roads) and social (e.g. people activities) features of the
built environment to determine building and transportation-related energy use. By
linking individual activity patterns and transportation infrastructure, this study will
promote sustainable energy policies to reduce GHG emissions from buildings and
transportation.

RESEARCH DESIGN AND METHODS

The research methodology is divided into three main steps: data collection,
development and calibration of the built environment model, and evaluation. A time-
use survey is conducted in the first step to gather travel information from individuals
and their activities. In the second step, an integrated BIM-GIS model is developed to
integrate physical (e.g. buildings and roads) and social (e.g. activities) features of the
built environment. Finally, the interaction between building and transportation
systems is explored in the third step.


Step 1-Data Collection: The Osburn Hall building on the Millersville University campus was selected as the case study to simulate the energy consumption in a building. The building is a three-story facility with an area of approximately 70,000 square feet. As-built drawings of the facility were used to develop a BIM model. Therefore, detailed and accurate data on the existing conditions of the building components were utilized in the energy simulation model. Travel behavior data were collected from individuals who work in the case study building, which also served as the main destination point (node) of the individuals' trips.
The research team collected time-use survey data regarding individuals' activities (e.g., work, school, shopping, personal business) over the course of a week. The survey was administered to 22 full-time faculty members and staff in the Department of Applied Engineering, of whom 16 participated. The time-use survey required the participants to represent their travel information as a series of tours that usually start and end at home. The purpose (e.g., going to/from work), duration (departure and arrival times), and travel mode (personal car, public bus) of the activities in each tour were recorded every day over the course of a week. Table 1 summarizes the data collected to understand the number and pattern of trips by purpose and how people make travel decisions. In total, 312 trips were made by the participants, with an average of 2.8 trips per person per day. Note that the percentage of trips by purpose or travel mode is calculated based on the distance traveled. Only 3% of trips were made by public transport.

Table 1. Travel Diary Summary.

Parameter                                        Weekday     Weekend     Total
Total number of trips                            244 (78%)   68 (22%)    312
Average distance traveled per individual/day     27.6 mi     22.3 mi     26.1 mi
Percentage of trips based on purpose
  going to/from work                             54%         12%         44%
  personal                                       14%         20%         15%
  shopping                                       11%         27%         15%
  school-related                                 12%         5%          10%
  other (e.g., restaurant, vacation)             9%          37%         16%
Average presence time in Osburn Hall building
  per individual/day                             6.86 hr     1.14 hr     5.22 hr

Step 2-Built Environment Model: The built environment model is developed as a
network of nodes, representing residential and public buildings, which are connected
by highways and roads. This study takes advantage of geospatial databases provided
by Pennsylvania Spatial Data Access to acquire information on the transportation
network and land use characteristics. Before adding the attribute data collected in the
previous step, all origin and destination nodes as well as transportation modes were
geocoded. Therefore, it was


possible to measure the distance between any pair of origin-destination nodes for
different travel modes.
The building module of the energy simulation considered both the building's
occupant behavior and the physical components of the building (e.g., HVAC systems,
doors and openings, insulation) to evaluate the energy use patterns.
Knowing the energy use category of each participant (i.e. austerity, average standard,
or high energy consumer) and the occupant schedules from the data collection step,
the dynamic model evaluates energy use patterns associated with the occupant
behavior.
BIM is used as a life cycle inventory to model and collect building-related
information and material quantities. eQuest was selected as the energy simulation tool
because of its robust and highly respected simulation engine and its parametric and
graphical reports. Figure 1 shows an example of the energy consumption map for
both building and transportation systems. The vehicle information (e.g., miles per
gallon), origin and destination locations, and the trip duration are the basic inputs of
the GIS model. The average speed is then calculated using the shortest distance
between the origin and the destination and the travel time for the trip. The energy (or
fuel) consumption is dependent on the vehicle information and the average speed (e.g.
the fuel consumption increases below or above the optimal speed). By applying the
fuel-efficiency rates provided by the U.S. Department of Energy, various fuel types
and transportation modes are taken into consideration and consequently fuel
consumption and energy use associated with the transportation system are calculated
within the built environment model. The GHG emissions of the building and
transportation sectors are combined into one integrated GIS map. The results can be
further refined to reflect the individual's activities.

Figure 1. Spatial visual presentation of the energy use data associated with the
case study building and transportation systems.
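For illustration, the per-trip calculation described above can be sketched as follows. This is a hypothetical Python snippet, not the model's actual implementation; the speed-penalty factor and the CO2-per-gallon value are placeholder assumptions rather than the DOE/EIA figures used in the study.

# Minimal sketch of the per-trip energy/GHG calculation described above.
# All numeric factors below are illustrative placeholders, not the study's values.

def trip_fuel_gallons(distance_mi, travel_time_hr, rated_mpg, optimal_speed_mph=55.0):
    """Estimate fuel use (gallons) for one trip from the GIS-derived inputs."""
    avg_speed = distance_mi / travel_time_hr            # shortest distance / trip duration
    # Fuel economy degrades below or above the optimal speed (placeholder penalty).
    penalty = 1.0 + 0.005 * abs(avg_speed - optimal_speed_mph)
    effective_mpg = rated_mpg / penalty
    return distance_mi / effective_mpg

KG_CO2_PER_GALLON_GASOLINE = 8.89    # approximate emission factor (kg CO2/gallon)

def trip_co2_kg(distance_mi, travel_time_hr, rated_mpg):
    """CO2 (kg) for one trip, to be attached to the origin-destination pair in the GIS layer."""
    return trip_fuel_gallons(distance_mi, travel_time_hr, rated_mpg) * KG_CO2_PER_GALLON_GASOLINE

# Example: a 12.4-mile commute taking 18 minutes in a 28-mpg car.
print(round(trip_co2_kg(12.4, 18 / 60.0, 28.0), 2), "kg CO2")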

Step 3-Evaluation and Scenario Analysis: Monthly energy consumption data
collected from the case study were used to calibrate and evaluate the energy
simulation results obtained from the eQuest software. Building characteristics (e.g.,
building envelope, type of HVAC system) are constant parameters and do not change
during calibration, whereas occupant behaviors are variables that can be adjusted to
calibrate the model against the whole-building monthly electricity use and peak
demands.
The building operation schedules (e.g., classes, open labs, events, and building
tours) in November were used to measure the potential impact of the occupants on
energy consumption. The total occupant presence in the building was about 4,340
occupant-hours per week, with an average of 830 occupant-hours on weekdays and
95 occupant-hours on weekends. Actual electricity usage in November was 48,025
kWh, and the calibrated energy model shows 7,775 kWh of energy use when the
building is unoccupied. Thus, the average energy consumption is 2.16 kWh per
occupant-hour. The resulting energy consumption values for the building were added
to the built environment model. The GHG emissions produced per kWh are calculated
based on the GHG emission factors provided by the U.S. Energy Information
Administration (EIA). Figure 2 shows the equivalent CO2 emissions of individuals
(travelers or building occupants) associated with energy use in buildings and
transportation.
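The 2.16 kWh figure can be reproduced with simple arithmetic. The sketch below assumes the weekly occupant-hours are scaled to a 30-day November, an interpretation that is not stated explicitly in the text.

# Back-of-the-envelope check of the 2.16 kWh per occupant-hour figure.
# Assumes November occupant-hours = weekly occupant-hours * (30 days / 7 days).
metered_kwh_nov = 48025.0                     # actual November electricity use
unoccupied_kwh_nov = 7775.0                   # calibrated model, building unoccupied
weekly_occupant_hours = 830 * 5 + 95 * 2      # = 4,340 occupant-hours per week
november_occupant_hours = weekly_occupant_hours * 30 / 7

occupancy_driven_kwh = metered_kwh_nov - unoccupied_kwh_nov
kwh_per_occupant_hour = occupancy_driven_kwh / november_occupant_hours
print(round(kwh_per_occupant_hour, 2))        # ~2.16 kWh per occupant-hour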
The CO2 emission results are used as a benchmark, or basis, for scenario
analysis. To this end, an integrated model with various building- and transportation-
related scenarios was developed and simulated.
The potential energy savings in the case study building are simulated for three
categories of building materials and components: installation of skylights, generating
electricity with a solar photovoltaic (PV) system, and equipment power reduction.
The potential CO2 emission reductions in transportation are simulated for two
scenarios: using alternative-fuel type (electricity instead of gasoline) and work-at-
home scenario using telecommuting or teleworking.

Figure 2. Equivalent of CO2 emitted in participants' transportation and the case
study building.


Figure 3 shows the percentage of CO2 emission reduction for each scenario
relative to the benchmark values. To make comparisons among the different CO2
emission results, units are specified as percentages of the roof area (for skylights and
the PV system), total power density (for equipment power), total number of cars (for
electric cars), and total days in a week (for telecommuting). For instance, installing
PV panels on 10% of the roof produces approximately 11,500 kWh of electricity
annually, about 1.8% of the energy consumption of the benchmark. According to the
energy model, equipment such as computers and tools in classrooms and labs (e.g.,
wood production, metal, and computer labs) accounts for about 46% of the energy
used in the case study, which puts this component at the top of the list of building
components with high potential to save energy.
Since electric cars use electricity as their primary fuel, replacing a fraction of
total vehicles with electric cars is identified as one of the scenarios with the greatest
potential for CO2 emission reduction. However, the source of electricity and scope of
electricity supply should be taken into consideration. For instance, if we replace 10%
of all vehicles in the study area (electricity source and supply: Pennsylvania) with
electric cars, we can reduce CO2 emissions from transportation by about 5.7%. It
should be noted that the assessment of the time and cost associated with each scenario
is not included in the analysis.

Figure 3. CO2 emission reduction for scenarios

CONCLUSION

The proposed model is one of the first attempts to combine the energy
consumption in buildings and transportation together to identify the current and future
trends in energy use and CO2 emissions. The model incorporates the linkage among
activities and travel for each individual. The real benefit of this research will be
realized in large-scale applications, for example, energy planning of a university campus.
The results of the case study show the current trend in energy use. In order to explore
future trends in energy use and carbon emissions associated with buildings and
transportation systems, population dynamics should be incorporated in the model. In
addition, the system does not consider the time or cost required to implement energy


saving scenarios. Future work should include an analysis with accurate costs and
schedules specific to the study area.

REFERENCES

Boarnet, M., and Crane, R. (2001). "The influence of land use on travel behavior:
specification and estimation strategies." Transportation Research Part A:
Policy and Practice, 35(9), 823-845.
Branco, G., Lachal, B., Gallinelli, P., and Weber, W. (2004). "Predicted versus
observed heat consumption of a low energy multifamily complex in
Switzerland based on long-term experimental data." Energy and Buildings,
36(6), 543-555.
Davis, S., Diegel, S., and Boundy, R. (2007). "Transportation Energy Data Book:
Edition 26. Oak Ridge National Laboratory." ORNL, 6978.
EIA (2013). "International Energy Outlook 2013 With Projections to 2040." U.S.
Energy Information Administration.
Ewing, R. H., and Anderson, G. (2008). Growing cooler: the evidence on urban
development and climate change, ULI Washington, DC.
Frank, L. D. (2000). "Land use and transportation interaction implications on public
health and quality of life." Journal of Planning Education and Research,
20(1), 6-22.
Jia, S., Peng, H., Liu, S., and Zhang, X. (2009). "Review of transportation and
energy consumption related research." Journal of Transportation Systems
Engineering and Information Technology, 9(3), 6-16.
Karan, E., Sivakumar, R., Irizarry, J., and Guhathakurta, S. (2014). "Digital Modeling
of Construction Site Terrain using Remotely Sensed Data and Geographic
Information Systems Analyses." Journal of Construction Engineering and
Management, 140(3), 04013067.
Karan, E. P., and Irizarry, J. (2014). "Developing a Spatial Data Framework for
Facility Management Supply Chains." Construction Research Congress 2014,
D. Castro-Lacouture, J. Irizarry, and B. Ashuri, eds., American Society of
Civil Engineers, Atlanta, GA, 2355-2364.
Lindén, A.-L., Carlsson-Kanyama, A., and Eriksson, B. (2006). "Efficient and
inefficient aspects of residential energy behaviour: What are the policy
instruments for change?" Energy policy, 34(14), 1918-1927.
Polzin, S., and Chu, X. (2007). "Exploring long-range US travel demand: A model for
forecasting state level person miles and vehicle miles of travel for 2035 and
2055." Center for Urban Transportation Research, University of South
Florida.
Porter, C. D., Brown, A., Vimmerstedt, L., and Dunphy, R. T. (2013). "Effects of the
Built Environment on Transportation: Energy Use, Greenhouse Gas
Emissions, and Other Factors."
Sebt, M. H., Karan, E. P., and Delavar, M. R. (2008). "Potential Application of GIS to
Layout of Construction Temporary Facilities." International Journal of Civil
Engineering, 6(4), 235-245.
Wilson, A., and Navaro, R. (2007). "Driving to green buildings: the transportation
energy intensity of buildings." Environmental Building News, 16(9).


A Simulation Framework for Network Level Cost Analysis in Infrastructure Systems

M. Batouli1; O. A. Swei2; J. Zhu1; J. Gregory3; R. Kirchain4; and A. Mostafavi5


1Ph.D. Candidate, Florida International University.
2Ph.D. Candidate, Massachusetts Institute of Technology.
3Research Scientist, Massachusetts Institute of Technology.
4Principal Research Scientist, Massachusetts Institute of Technology.
5Assistant Professor, Florida International University.

Abstract

The rapid deterioration of infrastructure systems along with the shrinkage of
funding resources necessitates cost-effective management of infrastructure networks.
The existing methods for cost analysis of infrastructure networks are based on
optimizing network costs for a limited period, and hence, are prone to: (i) shifting the
cost burdens, (ii) not considering the service life of assets in a network beyond the
planning horizon and (iii) not considering uncertainty in factors such as future
preservation funding as well as timing and cost of preservation activities. In this
paper, we propose a simulation framework to address these limitations in network-
level cost analysis. The proposed framework is based on the premise that a
sustainable practice is the one that provides the longest service life at the lowest cost
in infrastructure networks. The proposed framework determines the network-level
costs considering the dynamic network-agency-user interactions and uncertainties.
The application of the proposed framework is demonstrated using a case study
pertaining to a pavement network. The results show the capability of the proposed
framework in evaluating and identifying sustainable strategies leading to the longest
service life for the assets at the minimum network-level costs.

INTRODUCTION

The ever-growing gap between available funds and necessary expenditures to
keep pace with the accelerating deterioration of U.S. infrastructure calls for
sustainable strategies for cost effective management of the nation’s civil systems. In
particular, decision-makers are increasingly interested in identifying efficacious
strategies that can provide long-term benefits for infrastructure networks (Rangaraju
et al., 2008). To achieve this goal, life-cycle cost analysis (LCCA) has become a key
component of asset management in some governmental agencies. LCCA enables the
direct economic comparison between competing alternative investments in order to
identify the best value investment, which is the lowest long-term cost that satisfies
performance objectives (Keoleian and Spitzley, 2006; Santos and Ferreira, 2013;


Walls and Smith, 1998). However, the existing LCCA approaches (e.g., Zhang et al.,
2013) have certain limitations for network-level cost analysis. First, they assume that
the timing, type and amount of future costs are deterministic, fixed values. However,
infrastructure networks include dynamic and uncertain interactions between the
environmental conditions, availability of funding, deterioration of assets, user
behaviors, and agency’s decision processes and priorities, all of which affect the
likelihood, timing, and amount of future costs (Batouli and Mostafavi, 2014). Second,
in the existing optimization-based cost analysis methodologies, costs are only taken
into consideration if they occur within the planning horizon. In reality, however, this
assumption is inconsistent with the continuous nature of service in infrastructure
networks; hence, using the existing optimization-based methods will lead to shifting
cost burdens beyond the planning horizon, defying the principles of sustainability
(Batouli and Zhu, 2014). For instance, in the example shown in Figure 1, preservation
activity AM4 is scheduled to be implemented during the final years of the planning
horizon. If the preservation activity is deferred, it will lead to cost reduction over the
planning horizon. However, this practice is not consistent with the principles of
sustainability. In order to resolve the cost-deferring tendency of the existing
optimization-based approaches, all life cycle costs of individual assets, even those
that fall beyond the planning horizon, should be taken into consideration. To this end,
an appropriate methodology for cost analysis in networks of infrastructure should be
capable of modeling the long-term costs beyond the planning horizon by considering
the dynamic interactions and uncertainties. This study proposes a simulation
framework for this purpose.

Figure 1. Discrepancy between service lives of individual assets and continuous
service life of an infrastructure network (legend: cost of reconstruction of an asset;
cost of maintenance/rehabilitation).
METHODOLOGY

We propose a simulation framework for network-level cost analysis in infrastructure
systems. We first present an overview of the steps in the proposed framework, and
then demonstrate the implementation of the framework in a case study. The proposed
framework includes four steps as shown in Figure 2: First, the interactions between
user, agency and assets are modeled. Users’ behaviors affect the level of demand on
assets, while at the same time, the assets’ quality and level of service influences the
users’ behaviors. The condition of an asset depends on the level of demand as well as
the preservation/expansion actions taken by the administrative agency. The
administrative agencies determine their management strategies based on the
conditions of assets, expectations of users, and availability of resources. These
dynamic interactions can be abstracted and simulated using appropriate methods such


as agent-based modeling and dynamic mathematical modeling (Mostafavi et al.
2013). Second, using the simulation model created, the amount and timing of cost
cash flows are modeled. Then, the costs for all asset life cycles that fully or partially
overlap with the planning horizon are determined. A “life-cycle” for an asset is
defined as the time between two consecutive reconstruction activities for the asset.
For example, if the planning horizon is 40 years and the next reconstruction for asset
A will occur in year 50, the analysis will consider all costs up to year 50 (i.e., end of
the current service life for asset A).

Figure 2. The proposed framework for calculating network-level costs (Step 1:
simulating user/asset/agency interactions; Step 2: modeling asset-level life cycle
costs beyond the planning horizon; Step 3: annuitizing costs of every life cycle of
each asset; Step 4: aggregating cost annuities via Monte Carlo simulation and
summing asset-level annuities to acquire the network-level cost).

Third, the cash flows related to individual asset costs are converted into their
equivalent annual worth (i.e., annuity). This is because individual assets have
different life cycles from each other and from the planning horizon. Hence, using
annual worth conversion, the annual equivalent costs of each asset are determined
and aggregated to determine the network level annual equivalent costs over the
planning horizon (Newman 2004). Fourth, the variables and parameters affecting the
agent-network-user interactions are inherently uncertain. For example, the uncertainty
related to the level of funding, deterioration of assets, and the future preservation
costs affect the uncertainty in the cost cash flows, and hence, annual network-level
costs. In step 4, Monte-Carlo simulation is used to determine the mean and variance
of network-level costs. This will enable selecting strategies that lead to the lowest
network costs with the greatest likelihood.

NUMERICAL EXAMPLE

Twelve sections of a road network provided in The ICMPA7 Investment
Analysis and Communication Challenge for Road Assets (Hass, 2008) were used to
demonstrate the application of the proposed framework. The roads in the network are
of different types, ages and conditions, as shown in Table 1. The scope of this


numerical case study is limited to the costs incurred to the agency; hence, the user
costs and the influencing user behaviors are excluded from the analysis in this case
study.
Table 1. Characteristics of the Case Network.
(Cost distributions are Normal (Mean, Sigma) in thousand dollars/lane-mile.)

Road  Road  Length   Lanes  ESAL/  Construction   Routine       Surface       Overlay        Rehabilitation
Name  Type  (miles)         Day                   Maintenance   Treatment
A R 1.55 4 224 (141.2, 24.9) (3.7, 0.9) (15.5, 4.3) (46.9, 11.9) (79.9, 7.9)
B I 0.50 4 1185 (341.5, 48.6) (4.7, 1.2) (21.4, 5.8) (70.8, 14.9) (116.1, 10.8)
C I 0.68 4 1645 (228.8, 36.3) (4.8, 1.2) (21, 5.6) (72.1, 15.2) (117.3, 10.5)
D I 0.19 4 1756 (416.9, 60.9) (5.6, 1.4) (27.3, 7.6) (82.7, 17.1) (140.4, 14.3)
E R 0.43 4 864 (278.1, 45.3) (5.4, 1.4) (24.4, 6.6) (80.2, 16.9) (132.5, 13.1)
F R 2.73 4 688 (260.4, 54) (3.1, 2.1) (10.6, 2.7) _ (58.7, 15)
G I 0.62 4 1142 (533.8, 110) (5, 3.3) (18, 4.7) _ (101, 26.8)
H R 1.06 6 1785 (376.1, 67.2) (3.3, 2.3) (11.3, 3) _ (63.6, 16.3)
I R 2.80 4 1785 (289.9, 60.1) (2.9, 1.9) (10.1, 2.6) _ (55.9, 14.2)
J I 1.37 4 1185 (312.2, 44.3) (4.2, 1.1) (17.6, 4.9) (65.1, 14) (102.1, 8.8)
K I 1.68 4 1479 (247.5, 34.8) (3.9, 1) (16.1, 4.3) (60.3, 12.7) (94.7, 8.2)
L I 0.62 6 1756 (33.1, 48) (4.1, 1.1) (17.9, 4.9) (61.9, 12.8) (100.3, 9.1)

Step 1: Simulating agency/asset/user interactions


A simulation model was created to abstract and model agency/asset/user interactions.
The performance of asset networks is a function of the timing and type of
preservation activities, availability of M&R funding, the agency’s decision processes
for prioritization of projects, and the life cycle cost and condition of individual roads.
In this example Present Serviceability Rating (PSR) was used as an indicator of
pavement performance. A simplified prediction model proposed by Lee et al. (1993)
was utilized to model the deterioration behavior of pavement assets. The model
predicts the long-term performance of a pavement given the initial conditions, traffic
load, structure of the pavement, and weather conditions (Eq. 1):

(1)

In Eq. 1, PSR0 denotes the initial value of PSR for a given link right after
construction. This value is 4.5 according to Chootinan et al. (2006) and Lee et al.
(1993). Cumulative Equivalent Single Axle Loads per day (CESAL) and STR
(existing structure of pavement) capture the impact of traffic load and structural
design of the pavement, respectively. An adjustment factor (A.F.) was used to capture
the effect of climate conditions. Finally, a,b,c and d are empirically-based coefficients
whose values depend on the type of pavement (Lee et al. 1993).
The performance of pavement assets is also affected by the M&R activities.
Four types of M&R activities were considered in this case study: routine maintenance,
surface treatment, overlay, and rehabilitation. Each of these activities leads to a
certain level of improvement in performance depending on the age of the pavement
(Chootinan et al., 2006). The timing and type of M&R activities depend upon the
decision-making processes of the administrative agency, which are modeled using
agent-based modeling. The main variables in the agent-based model include the


performance conditions of assets and the level of funding. The decision rules of the
administrative agency follow a "worst-first" strategy in which the roads with the
lowest performance are prioritized for the allocation of M&R funding. A maintenance
and rehabilitation (M&R) activity is implemented if it can restore the pavement to an
excellent condition; otherwise, if adequate funding is not available for the required
M&R, repair activities are deferred to the next period. The details of the agent-based
modeling of the agency decision processes and user behaviors can be found in
Batouli and Mostafavi (2014). The outcomes of this simulation model determine the
performance conditions of the pavement assets, the service life of each asset, and the
type and timing of M&R activities. Figure 3 depicts the simulated performance
condition of the pavement assets in the network. The service lives of pavement assets
are determined based on threshold values of PSR that define the need for
reconstruction. These threshold values were considered to be 2.2 and 2 for urban and
rural roads, respectively (Elkins et al. 2013). Once a road reaches this
threshold PSR value, it is considered to be irremediable by maintenance activities,
and hence, it should be reconstructed.
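A minimal sketch of the worst-first allocation rule described above is shown below. It is hypothetical Python, not the authors' agent-based model; the "excellent condition" threshold, the treatment effects, and the costs are placeholder values.

def allocate_mr_budget(assets, treatments, budget):
    """
    Worst-first allocation: roads with the lowest PSR are considered first, and a
    treatment is applied only if it can restore the pavement to an excellent
    condition and fits the remaining budget; otherwise the repair is deferred.
    assets: list of dicts with 'name', 'psr', 'lane_miles'.
    treatments: maps treatment name to (cost per lane-mile, PSR after treatment).
    """
    EXCELLENT_PSR = 4.0   # placeholder threshold for "excellent condition"
    plan = []
    for asset in sorted(assets, key=lambda a: a["psr"]):               # worst first
        for name, (unit_cost, psr_after) in sorted(treatments.items(),
                                                   key=lambda kv: kv[1][0]):
            cost = unit_cost * asset["lane_miles"]
            if psr_after >= EXCELLENT_PSR and cost <= budget:
                budget -= cost
                asset["psr"] = psr_after
                plan.append((asset["name"], name, round(cost, 1)))
                break                                                  # one action per period
        # if no affordable treatment restores excellent condition, defer to next period
    return plan, budget

# Example with placeholder data (thousand $/lane-mile, resulting PSR):
assets = [{"name": "A", "psr": 2.6, "lane_miles": 6.2},
          {"name": "C", "psr": 3.4, "lane_miles": 2.7}]
treatments = {"surface treatment": (15.5, 3.8), "overlay": (46.9, 4.2),
              "rehabilitation": (79.9, 4.5)}
print(allocate_mr_budget(assets, treatments, budget=400.0))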

Figure 3. Simulated performance condition of pavement assets (PSR vs. year).


Step 2: Calculation of asset-level life cycle costs
Using the outcomes of the previous step, a probabilistic life-cycle cost for each road
in the network was estimated consistent with Federal Highway Administration
(FHWA) guidance considering initial cost variation. Although other sources of
variation exist (e.g., future material prices, quantity of inputs, or maintenance
schedule), they are of secondary importance relative to initial cost variation (Swei et
al., 2013). Initial cost variation was taken into consideration through the same
methodology implemented by Swei et al. (2013); that is, economic theory postulates
the average cost of production decreases as production increases (e.g., economies of
scale). Making use of significant bid data available through Oman Bid Systems, a
univariate regression model for each relevant paving activity was developed where
average unit-cost is a function of bid volume. Given that the regression model will
not capture all of the variation which may exist, the standard error of that regression
is used in order to form a probability distribution for unit-cost. The current model
assumes routine maintenance requires a small amount of patching (0.5%) as well as
joint sealing for concrete pavements, surface maintenance involves fog seal cracking
or diamond grinding, and rehabilitation requires a mill and fill along with significant


patching for asphalt pavements, or diamond grinding, sealing of joints, and
significant patching for concrete pavements. The cost distributions related to
construction and maintenance activities are given in Table 1. The outcome of this step
determines the cash flows related to the life cycle costs of each asset in the network.

Step 3: Annuitizing costs related to every life cycle of each asset


After the service life and life cycle costs of each asset were calculated, the annual
equivalent costs for each pavement asset were computed using annual equivalent
worth analysis. A real discount rate of 4% was used, consistent with FHWA guidance
and current practice for many DOTs (Walls and Smith, 1998). The outcome of this
step is the annual equivalent cost of each pavement asset, expressed in dollars per
lane-mile-year.
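The annual equivalent worth conversion used in this step can be illustrated as follows. The cash flows in the example are placeholders; only the standard capital-recovery formula at the 4% real discount rate is assumed.

def equivalent_annual_cost(cash_flows, rate=0.04):
    """
    cash_flows: list of (year, cost) pairs over one life cycle of a single asset.
    Returns the annuity (capital recovery) with the same present value over the
    life cycle length n: A = PV * r / (1 - (1 + r)**-n).
    """
    n = max(year for year, _ in cash_flows)
    pv = sum(cost / (1.0 + rate) ** year for year, cost in cash_flows)
    return pv * rate / (1.0 - (1.0 + rate) ** -n)

# Placeholder example: construction in year 0, periodic maintenance, 20-year life.
life_cycle = [(0, 141.2), (5, 15.5), (10, 46.9), (15, 15.5), (20, 79.9)]
print(round(equivalent_annual_cost(life_cycle), 1), "thousand $/lane-mile-year")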

Step 4: Aggregation of cost annuities to acquire network-level cost


In this step, the annual equivalent costs of each asset were aggregated to
determine the annual network costs based on several Monte-Carlo simulations. One
example for application of the framework proposed in this paper is to evaluate the
impacts of M&R funding on the network-level costs. Different levels of funding
between $10,000 and $1,200,000 were considered with the result of the analysis
shown in Figure 4. It is clear that an increase in the availability of M&R funding from
zero up to about $110,000 reduces the $/lane-mile-year at the network-level.
Although maintenance funding of less than about $110,000 is not adequate for most
corrective M&R activities, it enables other, less costly preventive maintenance
activities. Thus, it increases the service lives of the pavements, which improves the
$/lane-mile-year of the network. On the other hand, when the M&R budget increases
beyond $110,000, the $/lane-mile-year at the network level increases due to
sub-optimal use of more expensive M&R activities that might not be necessary. This
implies that, for a network with specific characteristics (pavement type, age, length,
etc.), there is a budget level that leads to a minimum $/lane-mile-year across the
analysis horizon; identifying this budget level is a critical step in sustainable
management of a network. For the case study network, this sustainable level of M&R
budget is about $110,000, and budgets either greater or less than this amount would
reduce the sustainability of the road network.
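The Monte Carlo aggregation to a network-level $/lane-mile-year can be sketched as follows (hypothetical Python). The per-asset cost events, life cycles, and the lane-mile weighting are simplifying assumptions standing in for the full simulation, with unit costs drawn from Normal distributions in the spirit of Table 1.

import random

def sample_network_cost(assets, rate=0.04, n_draws=5000):
    """
    assets: list of dicts with 'lane_miles', 'life_years', and 'events', where each
    event is (year, (mean, sigma)) for a Normal unit-cost in thousand $/lane-mile.
    Returns (mean, std) of the network-level annuity in thousand $/lane-mile-year.
    """
    def annuity(pv, r, n):
        return pv * r / (1.0 - (1.0 + r) ** -n)

    totals = []
    for _ in range(n_draws):
        total, lane_miles = 0.0, 0.0
        for a in assets:
            pv = sum(max(random.gauss(mu, sd), 0.0) / (1 + rate) ** yr
                     for yr, (mu, sd) in a["events"])
            total += annuity(pv, rate, a["life_years"]) * a["lane_miles"]
            lane_miles += a["lane_miles"]
        totals.append(total / lane_miles)        # lane-mile-weighted network annuity
    mean = sum(totals) / len(totals)
    std = (sum((x - mean) ** 2 for x in totals) / len(totals)) ** 0.5
    return mean, std

# Placeholder two-asset network in the spirit of Table 1:
network = [
    {"lane_miles": 6.2, "life_years": 22,
     "events": [(0, (141.2, 24.9)), (8, (46.9, 11.9)), (15, (79.9, 7.9))]},
    {"lane_miles": 2.0, "life_years": 18,
     "events": [(0, (341.5, 48.6)), (9, (70.8, 14.9)), (14, (116.1, 10.8))]},
]
print(sample_network_cost(network))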
The level of M&R funding also affects the overall performance of the network
(as measured by the average PSR values of the roads). A greater investment in M&R
activities leads to better performance of the network. However, as shown in Figure
4b, there is not a linear relation between the rate of increase in performance and the
availability of M&R funding. Beyond a certain level of funding, the rate of
improvement in network performance decreases. In the case study network, funding
levels greater than $200,000 do not lead to significant
improvement in the overall performance of the network. Based on the previous
results, the sustainable level of M&R funding leading to the minimum $/lane-mile-
year at the network level is $110,000 in the case study. This M&R funding amount


leads to an average performance of 3.37 at the network level over the 40-year
planning horizon.

Figure 4. Impacts of availability of M&R funding on the network: (a) network-level
cost; (b) network-level performance.

CONCLUSION

In this paper we proposed a simulation framework for assessment of life-cycle costs
of infrastructure networks. Unlike traditional cost analysis models, which are based
on lump-sum static assessment of costs, the proposed framework captures the
inherent uncertainties in timing and amount of future costs based on modeling the
complex dynamic interactions between the condition of infrastructure assets, the
behavior of users, and the decision-making processes of the administrative agency.
The application of the proposed framework was shown using a numerical case study.
In the case study, the proposed framework was used to determine the level of M&R
funding that leads to the lowest $/lane-mile-year values at the network level. The
value of $/lane-mile-year can be a measure of sustainability since a sustainable
practice is one that provides the longest service life for the network at the lowest
costs. Hence, the results of the case study highlighted the capability of the proposed
framework in evaluating different alternatives and strategies for improving the
sustainability of road networks. Infrastructure agencies could adopt the framework
presented in this paper to evaluate the sustainability of different strategies (e.g.,
funding prioritization, material selection, design, and maintenance/rehabilitation
strategies) in management of their infrastructure networks. From a theoretical
perspective, the framework proposed in this study is a preliminary step toward
integrating the traditional infrastructure management principles with the theoretical
underpinnings of complex adaptive systems for identifying sustainable strategies in
infrastructure networks based on capturing the dynamic behaviors and uncertainty at
the interface of agency/asset/user interactions.

REFERENCES
Batouli, M., and Y. Zhu. (2014). Using Accrual Accounting Life Cycle Assessment as
an Indicator of Urban Sustainability. In Computing in Civil and Building


Engineering, pp. 1934-1942.


Batouli, M., and Mostafavi, A. (2014). A Hybrid Simulation Framework for Integrated
Management of Infrastructure Networks. In Proceedings of the Winter Simulation
Conference, forthcoming.
Chootinan, P., A. Chen, M. R. Horrocks, and D. Bolling. (2006). A Multi-Year
Pavement Maintenance Program using a Stochastic Simulation-Based Genetic
Algorithm Approach. Transportation Research Part A: Policy and Practice,
Vol. 40, No. 9, pp. 725-743.
Federal Highway Administration FHWA (1989). Pavement Policy for Highways,
U.S. Department of Transportation.
Golabi, K., R. B. Kulkarni, and G. B. Way (1982). A Statewide Pavement
Management System. Interfaces, Vol. 12, No.6, pp. 5-21.
Hass R. (2008). The ICMPA7 Investment Analysis and Communication Challenge
for Road Assets. In Prepared for the 7th Conference on Managing Pavement
Assets.
Keoleian, G. A., and D. V. Spitzley. (2006) Sustainability Science and Engineering:
Defining Principles. Elsevier, New York.
Lee, Y. H., A. Mohseni, and M. I. Darter. (1993). Simplified Pavement Performance Models.
In Transportation Research Record: Journal of the Transportation Research Board,
No. 1397, Transportation Research Board of the National Academies, Washington,
D.C., pp. 7-14.
Mostafavi, A., Abraham, D., and DeLaurentis, D. (2014). ”Ex-Ante Policy Analysis
in Civil Infrastructure Systems.” J. Comput. Civ. Eng. 28, SPECIAL ISSUE:
2012 International Conference on Computing in Civil Engineering, A4014006.
Newnan, D. G. (2004). Engineering economic analysis (Vol. 2). Oxford University
Press.
Rangaraju, P., S. Amirkhanian, and Z. Guven (2008). Life Cycle Cost Analysis for
Pavement Type Selection, Clemson University.
Santos, J., and A. Ferreira. (2013). Life-Cycle Cost Analysis System for Pavement
Management at Project Level. International Journal of Pavement Engineering,
Vol. 14, No. 1, pp. 71-84.
Swei, O., J. Gregory, and R. Kirchain (2013). Probabilistic Characterization of Uncertain
Inputs in the Life-Cycle Cost Analysis of Pavements. In Transportation
Research Record: Journal of the Transportation Research Board, No. 2366,
Transportation Research Board of the National Academies, Washington, D.C.,
pp. 71-77.
Walls, J, and M. Smith. (1998). Life-Cycle Cost Analysis in Pavement Design – in
Search of Better Investment Decisions. Federal Highway Administration.
Wang, K. and J. Zaniewski (1994). Analysis of Arizona Department of
Transportation New Pavement Network Optimization System. In
Transportation Research Record: Journal of the Transportation Research
Board, No. 1455, Transportation Research Board of the National Academies,
Washington, D.C., pp. 91-100.
Zhang, H., G. A. Keoleian, and M. D. Lepech.(2013). Network-Level Pavement
Asset Management System Integrated with Life-Cycle Analysis and Life-
Cycle Optimization. Journal of Infrastructure Systems, Vol. 19, No. 1, pp. 99-
107.


BIM-assisted Structure-from-Motion for Analyzing and Visualizing
Construction Progress Deviations through Daily Site Images and BIM

Kevin K. Han1 and Mani Golparvar-Fard2


1PhD Candidate, Dept of Civil and Environmental Eng., Univ. of Illinois at Urbana-
Champaign, Urbana, IL 61801; email: [email protected]
2NCSA Faculty Fellow and Assistant Professor of Civil Engineering, and of Computer
Science, Univ. of Illinois at Urbana-Champaign, Urbana, IL 61801; PH (217) 300-5226;
email: [email protected]

ABSTRACT

In an effort to document work-in-progress, many construction companies take
hundreds of images on their project sites on a daily basis. These images together with
4D BIM can serve as a great resource for analyzing progress deviations. To facilitate
image-vs.-BIM comparison, several methods have been introduced that tie in all images
together in 3D using standard Structure-from-Motion (SfM) procedures. The resulting
point clouds are then superimposed with the 4D BIM, resulting in back-projection of
BIM on all images that were successfully registered through the SfM procedure.
However, often site images exhibit wide baselines and thus are not successfully
registered with BIM. To address current limitations, this paper presents a method
together with experimental results that leverages BIM as a priori to initiate the SfM
procedure. It is shown that by interactively guiding BIM into one or a few images that
have significant overlap with the rest, the proposed BIM-assisted SfM procedure results
in more complete point clouds and also generate more accurate BIM overlays on site
images.

INTRODUCTION

With the recent advancements in camera-equipped smartphones and tablets,
hundreds of construction site images are taken and shared among project participants
on a daily basis. Many photography documentation services have also emerged in
recent years to deliver visual records of the as-built construction to project participants
(Bae et al. 2014; Han and Golparvar-Fard 2014). Some of these companies take images
periodically at designated locations that are associated with 2D plan documents,
allowing project teams to access the state of progress/operation at a given time and
location of interest. Their services, however, provide limited views and analytics
capabilities because images are often not referenced against or connected to one
another, and they do not exhibit any correspondence with Building Information Models
(BIM). Users are typically required to navigate through 2D plans and select photos that
may best represent the area of interest. To facilitate the application of these images for


analyzing and communicating progress deviations, it will be useful to have each image
referenced against and compared with BIM and one another.
To aid image-vs.-BIM comparison, several methods have been introduced that
tie in all images together in 3D using standard Structure-from-Motion (SfM)
procedures. The generated point clouds are then superimposed with the 4D BIM,
resulting in back-projection of BIM on all images that were successfully registered
through the SfM procedure. However, often site images exhibit wide baselines and thus
are not successfully registered with BIM. To address current limitations, this paper
builds on (Karsch et al. 2014) and presents a method that leverages BIM as a priori to
initiate the SfM procedure.
By interactively guiding BIM into one or a few images that have significant
overlap with the rest, the proposed BIM-assisted SfM procedure results in more
accurate and complete point clouds and also generates more accurate BIM overlays on
site images. In the following, the state-of-the-art in BIM-vs.-image registration and its
limitations are introduced.

Figure 1. A collection of site images, BIM, and back-projection of BIM into one of
the non-referenced images (color-coded build differences: behind schedule vs.
ahead of schedule).

RELATED WORK

Over the last few years, several techniques have been introduced that register
site images with BIM. A dominant method involves collecting time-lapse images from
fixed camera viewpoints to document the work-in-progress. These images are either
compared with one another (Abeid et al. 2003; Bohn and Teizer 2009) or against a 4D
BIM (Golparvar-Fard and Peña-Mora 2007; Golparvar-Fard et al. 2009a; Ibrahim et al.
2009; Kim and Kano 2008; Rebolj et al. 2008; Zhang et al. 2009) which represents the
expected state of construction progress. To highlight deviations in construction
progress, several visualization methods are also proposed that color code construction
elements based on the metaphor of traffic light colors (Golparvar-Fard et al. 2007).
Figure 1 illustrates an example where 4D BIM is superimposed on a time-lapse photo
for progress monitoring purposes. Based on the metaphor of traffic light colors, the
elements behind or ahead-of-schedule are color-coded with red and green colors
respectively.
Another line of work automatically generates 3D as-built point cloud models of
the ongoing construction using SfM procedures and then compares the documented as-
built model to underlying 4D BIM. Golparvar-Fard (2009b, 2010, 2011) and Brilakis
et al. (2011) conduct research on SfM-based 3D as-built documentation. Golparvar-
Fard et al. (2012) improve the density of these 3D as-built point clouds by adopting a


pipeline of Multi-View Stereo and voxel coloring algorithms, and present a method for
superimposing point cloud models with BIM through a set of corresponding feature
points between the as-built point cloud and BIM. While these methods have produced
promising results, they still exhibit the following limitations:

1) Accuracy and completeness of generating point clouds and back-projection of


BIM remain a function of the quality and overlap among site images. With any
discontinuity in image overlaps, the SfM procedure may fail and result in
several non-overlapping point clouds. These models will not share the same
coordinate system (because each will be up-to-scale), will remain relatively
sparse, and therefore will be more time-consuming to register with BIM.
2) Registration between BIM and a point cloud is done manually as a post
processing step and is often inaccurate. Users can reduce registration error by
performing manual registration multiple times and fine-tuning the similarity
transformation of the point cloud; however, part of this inaccuracy is inherent to
the SfM procedure. Because BIM and point clouds are registered during a post-
processing stage, the accuracy of back-projecting BIM in site images will be a
function of the accuracy of both 3D reconstruction and also BIM-vs.-point
cloud registration.
3) Because all images and BIM need to be processed together, it will be difficult
to scale the method to smartphone images taken on site and use image
interfaces to provide access to BIM. Scaling this solution will require a server-
client architecture, where each image can individually be localized against BIM.

While BIM can provide a strong a priori for initiating the SfM procedure and
allows for scalability of a solution for BIM-vs.-image registration, its application
has remained largely unexplored. In a recent study, Karsch et al. (2014) leverage
BIM as a priori and present a constraint-based procedure to improve image-based
3D reconstruction. Their results show that the accuracy and density of image-based 3D
reconstruction and back-projection of 3D BIM on unordered and un-calibrated site
images can be improved compared to the state-of-the-art. The following provides an
overview of the method and its functionalities.

METHOD OVERVIEW

The proposed method takes advantage of a small amount of user input to
register all images with the underlying BIM. By leveraging a few correspondences
between an image (anchor camera) and the underlying BIM, the registration between
the image and BIM is derived by solving the Perspective-n-Point (PnP) problem.
The method then registers other images automatically using a constraint-based SfM
procedure. New images of the same site – taken at either an earlier or later date – can
also be registered with no additional interaction.
Once BIM is registered with the images, the method creates time-lapses from
the unordered collection of images, reasons about static and dynamic occlusions, and
then provides simple visualization metaphors that enable a user to interact with and
explore the rich temporal data from the collection of images and the underlying BIM.
The following presents an overview of each part of the method.

Registration of BIM into the anchor camera. To begin the registration process,
the user chooses an anchor camera from the collection (an image that has significant
overlap with majority of the images) and then selects 2D locations in the image and
corresponding 3D points on the mesh model (See Figure 2). The developed interface
facilitates this selection by allowing the users to quickly navigate around BIM. Given
at least four corresponding points, the six-parameter extrinsic parameters of the camera
– three rotation (R) and three translation parameters (T) – are derived in a Perspective-
n-Point, or PnP problem setting where the re-projection error is minimized using
Levenberg-Marquardt algorithm. Here, the intrinsic parameters are fixed to have no
radial distortion, and the focal length is obtained from the EXIF data of the anchor
camera.
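As an illustration, the anchor-camera registration could be prototyped with OpenCV as sketched below. This is a minimal sketch under the stated assumptions (EXIF focal length, principal point at the image center, no distortion), not the authors' implementation, and the correspondences in the usage example are synthetic.

import numpy as np
import cv2

def register_anchor_camera(points_3d, points_2d, focal_px, image_size):
    # Solve PnP for the anchor camera from user-picked BIM-vertex / pixel
    # correspondences, then refine with Levenberg-Marquardt.
    # Intrinsics: EXIF focal length, principal point at the image center,
    # no radial distortion (as assumed in the text).
    w, h = image_size
    K = np.array([[focal_px, 0.0, w / 2.0],
                  [0.0, focal_px, h / 2.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(points_3d, points_2d, K, dist,
                                  flags=cv2.SOLVEPNP_EPNP)        # initial estimate
    rvec, tvec = cv2.solvePnPRefineLM(points_3d, points_2d, K, dist, rvec, tvec)
    return K, rvec, tvec            # extrinsics: rotation vector R and translation T

# Synthetic check: project known BIM points with a known pose and recover that pose.
pts3d = np.array([[0, 0, 0], [4, 0, 0], [4, 3, 0], [0, 3, 0],
                  [0, 3, 2.5], [2, 1.5, 3.0]], dtype=np.float64)
true_rvec = np.array([0.1, -0.2, 0.05])
true_tvec = np.array([0.5, -0.3, 8.0])
K0 = np.array([[1500.0, 0.0, 1024.0], [0.0, 1500.0, 768.0], [0.0, 0.0, 1.0]])
pts2d, _ = cv2.projectPoints(pts3d, true_rvec, true_tvec, K0, np.zeros(5))
K, rvec, tvec = register_anchor_camera(pts3d, pts2d.reshape(-1, 2),
                                        focal_px=1500.0, image_size=(2048, 1536))
print(rvec.ravel(), tvec.ravel())   # close to the true pose used above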

Figure 2. Model Registration (anchor camera).

Constrained Bundle Adjustment for registration of all images in the
collection. In typical SfM bundle adjustment formulations, re-projection error is
minimized by simultaneously adjusting intrinsic and extrinsic camera parameters, and
triangulated points. In the new solution, a constrained bundle adjustment is formulated
which leverages one or more calibrated cameras. More specifically, during the bundle
adjustment, the correspondence between the anchor camera and BIM is used to
constrain the 3D points such that any point triangulated using a feature point from the
anchor camera must lie along the ray generated by the anchor camera. Therefore, by
re-parameterizing such points as $X_i(t_i) = \mathbb{P}_a^{center} + t_i\,\mathbb{P}_a^{ray}(u_i)$, where $t_i$ is a scalar,
$\mathbb{P}_a^{center}$ is the camera center, and $\mathbb{P}_a^{ray}(u_i)$ is the ray generated from pixel $u_i$ in the anchor
camera, the formulation becomes:

$$\operatorname*{argmin}_{\mathbb{P}\setminus\mathbb{P}_a,\;t}\;
\sum_{i \in A}\big\|\operatorname{project}\big(\mathbb{P},\,X_i(t_i)\big)-u_i\big\|^2
\;+\;
\sum_{i \notin A}\big\|\operatorname{project}\big(\mathbb{P},\,X_i\big)-u_i\big\|^2$$

where $A$ denotes the set of points triangulated using feature points from the anchor camera.
This formulation provides better estimates since the model is constrained by
accurate camera parameters. Because it has fewer parameters to optimize over, the
efficiency of the optimization is improved and the variance in the estimates is reduced,


respectively. Figure 3 shows examples from the outcome of running the constrained
Bundle Adjustment for registration of all images in the collection using the anchor
cameras shown in Figure 2.
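A small prototype of the constrained bundle adjustment idea is sketched below using a generic least-squares solver (hypothetical, not the authors' code): the anchor camera is held fixed, anchor-visible points are parameterized by a single depth along the anchor rays, and one additional camera pose is optimized together with those depths. Fixing the depth of one user-picked BIM correspondence to resolve scale is an assumption of this toy setup.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, rvec, tvec, X):
    # Pinhole projection of world points X (N,3) into pixel coordinates (N,2).
    Xc = Rotation.from_rotvec(rvec).apply(X) + tvec
    uv = Xc @ K.T
    return uv[:, :2] / uv[:, 2:3]

def anchor_rays(K, rvec, tvec, uv):
    # Camera center and unit ray directions (world frame) through anchor pixels uv.
    Rm = Rotation.from_rotvec(rvec).as_matrix()
    center = -Rm.T @ tvec
    d = (np.linalg.inv(K) @ np.column_stack([uv, np.ones(len(uv))]).T).T @ Rm
    return center, d / np.linalg.norm(d, axis=1, keepdims=True)

def residuals(p, K, a_rvec, a_tvec, center, dirs, uv_anchor, uv_free, t0):
    rvec, tvec = p[:3], p[3:6]                 # pose of the non-anchor camera
    t = np.concatenate([[t0], p[6:]])          # depths along the anchor rays
    X = center + t[:, None] * dirs             # points constrained to the anchor rays
    r_a = project(K, a_rvec, a_tvec, X) - uv_anchor
    r_f = project(K, rvec, tvec, X) - uv_free
    return np.concatenate([r_a.ravel(), r_f.ravel()])

# Synthetic example: anchor camera fixed by the PnP step, one free camera to recover.
K = np.array([[1200.0, 0.0, 640.0], [0.0, 1200.0, 480.0], [0.0, 0.0, 1.0]])
a_rvec, a_tvec = np.zeros(3), np.zeros(3)
true_rvec, true_tvec = np.array([0.0, 0.3, 0.0]), np.array([-2.0, 0.0, 0.5])
X_true = np.random.default_rng(0).uniform([-2, -1, 6], [2, 1, 10], (25, 3))
uv_anchor = project(K, a_rvec, a_tvec, X_true)
uv_free = project(K, true_rvec, true_tvec, X_true)
center, dirs = anchor_rays(K, a_rvec, a_tvec, uv_anchor)
t0 = np.linalg.norm(X_true[0])                 # one known BIM depth fixes the scale

x0 = np.concatenate([np.zeros(6), np.full(24, 8.0)])      # rough initial guess
sol = least_squares(residuals, x0,
                    args=(K, a_rvec, a_tvec, center, dirs, uv_anchor, uv_free, t0))
print(sol.x[:3], sol.x[3:6])                   # should approach the true pose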

Figure 3. The results of superimposing BIM on a collection of site images using
the BIM-assisted SfM procedure.

Creating time-lapses from unordered collections of images. To create time-lapses,
the first step is to identify, for each image, other images that were taken from
roughly the same viewpoint. This can be determined by estimating a single homography
transformation in a RANSAC loop among matched features in every pair of images.
Once similar-viewpoint pairs are identified by setting a threshold on the percentage of
inliers with respect to the single homography transformation, the homography between
each image and the base image is used to transform one image into the other's view. This
results in pixel-aligned temporal information. If no nearby viewpoints are found, the
image cannot be traversed temporally in 2D and thus does not generate time-lapses, yet
the resulting 4D point cloud can still be traversable.
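The viewpoint-pairing and warping step can be sketched with OpenCV as follows. The feature detector, RANSAC threshold, and 60% inlier ratio are placeholder choices, not settings reported in the paper.

import cv2
import numpy as np

def align_if_same_viewpoint(img, base, min_inlier_ratio=0.6):
    """
    Decide whether `img` was taken from roughly the same viewpoint as `base` by
    fitting a single homography (RANSAC) to matched features; if so, warp `img`
    into the base view so the pair is pixel-aligned for time-lapsing.
    """
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img, None)
    k2, d2 = orb.detectAndCompute(base, None)
    if d1 is None or d2 is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    if len(matches) < 4:
        return None
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None or mask.sum() / len(matches) < min_inlier_ratio:
        return None                          # different viewpoint: no 2D time-lapse
    h, w = base.shape[:2]
    return cv2.warpPerspective(img, H, (w, h))   # pixel-aligned with the base image

# Usage (file paths are placeholders):
# base = cv2.imread("site_week_01.jpg"); img = cv2.imread("site_week_05.jpg")
# aligned = align_if_same_viewpoint(img, base)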

Figure 4. Alignment of unordered images for creating time-lapses.


Figure 4 shows an example of how the temporal alignment is achieved using
images that were taken from roughly the same viewpoint. Once the alignment is
achieved, the user can choose an area of interest in the image and traverse backward
and forward in time. Figure 5 shows an example of such interaction using the same
images that were aligned in Figure 4.

Figure 5. Time-lapse experience: current time (left), a section of the envelope at a
different time (center), and both sections at two different times (right).

Reasoning about Occlusions. The method also reasons about occlusions and
re-adjusts the image overlay by comparing BIM and point cloud geometry from the
camera viewpoint (Figure 6a and b). By projecting the reconstructed points against
the BIM, it is predicted whether or not each point lies in front of the model. Because
the back-projected point clouds are typically sparse, the binary occlusion predictions
are flooded with superpixels and finally smoothed using a cross-bilateral filter.
Figure 6c shows the result of superimposing the occluded area over the BIM overlay.
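The core depth test behind this occlusion reasoning can be sketched as follows (hypothetical numpy; the superpixel flooding and cross-bilateral smoothing are not shown): a back-projected point flags a pixel as occluded when it lies closer to the camera than the BIM surface rendered at that pixel.

import numpy as np

def occlusion_mask(point_uv, point_depth, bim_depth, tol=0.05):
    """
    point_uv:    (N,2) integer pixel coordinates of back-projected point-cloud points
    point_depth: (N,)  depth of each point from the camera
    bim_depth:   (H,W) depth map rendered from BIM for the same camera
    Returns a sparse (H,W) boolean mask marking pixels where the as-built point
    lies in front of the BIM surface, i.e., the model is occluded there.
    """
    mask = np.zeros(bim_depth.shape, dtype=bool)
    u, v = point_uv[:, 0], point_uv[:, 1]
    in_front = point_depth < (bim_depth[v, u] - tol)   # tol absorbs noise (placeholder)
    mask[v[in_front], u[in_front]] = True
    return mask   # the full method grows this sparse mask with superpixels and
                  # smooths it with a cross-bilateral filter before compositing

# Tiny example: a 4x4 BIM depth map at 10 m and two points, one in front of it.
bim = np.full((4, 4), 10.0)
uv = np.array([[1, 1], [2, 3]])
depth = np.array([8.0, 10.0])
print(occlusion_mask(uv, depth, bim).astype(int))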

Figure 6. Depth map vs. BIM overlay and reasoning about occlusion.

DISCUSSION ON ACCURACY AND COMPLETENESS


To compare the presented method against the state-of-the-art, six
datasets ranging from 10 to 160 images were put together from two construction sites
with different BIM. As shown in Table 1, the proposed method results in smaller
rotation and translation errors compared to VisualSfM (Wu 2013).

Table 1. The method results in smaller rotation and translation errors.

Construction datasets          The proposed method    VisualSfM (Wu 2013)
Rotation error (degrees)       0.74                   4.23
Translation error (meters)     0.50                   2.18


Figure 7 shows an example of how the constraints provided through the anchor
camera registration help prevent drift (left) (Karsch et al. 2015). They also result in
more accurate and more complete point clouds (right), due to a higher rate of image
registration.

Figure 7. Visual comparisons to VisualSfM of (Wu 2013).

CONCLUSION AND FUTURE WORK

This paper presents a method together with experimental results that leverages
BIM as a priori to initiate the SfM procedure. BIM-assisted SfM begins with registering
a BIM with images using an anchor camera manually registered with a BIM, which
drives the remaining 3D reconstruction and registration processes. This step is carried
out throughout the project duration, generating as-built models at different times. The
outcomes are visualized in a photorealistic, time-lapse-like environment where a
user can navigate in 4D. The user can see part of or the whole building in
different states of progress.
The open research challenges that still need to be addressed are as follows: 1)
choosing an anchor camera with insufficient overlap with the remaining images may
lead to failure; 2) the computational time needs to be analyzed and improved; and 3)
a client-server architecture needs to be studied to make this system scalable to
smartphones for onsite personnel.

ACKNOWLEDGEMENT
This material is in part based upon work supported by the National Science
Foundation under Grant CMMI-1360562. Any opinions, findings, and conclusions or
recommendations expressed in this material are those of the authors and do not
necessarily reflect the views of the National Science Foundation.

REFERENCES

Abeid,   J.,   Allouche,   E.,   Arditi,   D.,   and   Hayman,   M.   (2003).   “PHOTO-NET II: a
computer-based  monitoring  system  applied  to  project  management.”  Automation
in construction, Elsevier, 12(5), 603–616.
Bae, H., Golparvar-Fard,   M.,   and   White,   J.   (2014).   “Image-Based Localization and
Content Authoring in Structure-from-Motion Point Cloud Models for Real-Time
Field  Reporting  Applications.”  J. of Comp. in Civ. Eng. 637–644.


Bohn,   J.   S.,   and   Teizer,   J.   (2009).   “Benefits   and   barriers   of   construction   project  
monitoring using high-resolution   automated   cameras.”   Journal of Construction
Engineering and Management, ASCE, 136(6), 632–640.
Brilakis, I., Fathi, H., and Rashidi, A. (2011).   “Progressive   3D   reconstruction   of  
infrastructure  with  videogrammetry.”  Auto. in Const., Elsevier, 20(7), 884–895.
Golparvar-Fard, M., and Peña-Mora,   F.   (2007).   “Application   of   visualization  
techniques  for  construction  progress  monitoring.”  Comp. in Civ. Engin. (2007),
216–223.
Golparvar-Fard, M., Peña-Mora,   F.,   and   Savarese,   S.   (2009a).   “Monitoring   of  
Construction Performance Using Daily Progress Photograph Logs and 4D As-
Planned  Models.”  Comp. in Civ. Engin., 53–63.
Golparvar-Fard, M., Peña-Mora, F., and   Savarese,   S.   (2009b).   “Interactive   Visual  
Construction Progress Monitoring with D4AR --4D Augmented Reality --Models.”  
Construction Research Congress 2009, 41–50.
Golparvar-Fard, M., Peña-Mora,  F.,  and  Savarese,  S.  (2010).  “D4AR--4 Dimensional
augmented reality-tools for automated remote progress tracking and support of
decision-enabling   tasks   in   the   AEC/FM   industry.”   Proc., the 6th Int. Conf. on
Innovations in AEC.
Golparvar-Fard, M., Peña-Mora,   F.,   and   Savarese,   S.   (2011).   “Integrated   Sequential  
As-Built and As-Planned Representation with Tools in Support of Decision-
Making  Tasks  in  the  AEC/FM  Industry.”  J. of Const. Engin. and Mgmt.
Golparvar-Fard, M., Peña-Mora,   F.,   and   Savarese,   S.   (2012).   “Automated   Progress  
Monitoring Using Unordered Daily Construction Photographs and IFC-Based
Building  Information  Models.”  J. of Comp. in Civ. Eng, 147–165.
Golparvar-Fard, M., Sridharan, A., Lee, S., and Peña-Mora,   F.   (2007).   “Visual  
representation of construction progress monitoring metrics on time-lapse
photographs.”  25th Anniversary of Const. Mgmt and Economics, 2007, 239–256.
Han, K., and Golparvar-Fard,  M.  (2014).  “Automated  Monitoring  of  Operation-level
Construction  Progress  Using  4D  BIM  and  Daily  Site  Photologs.”  CRC 2014.
Ibrahim, Y. M., Lukins, T. C., Zhang, X.,  Trucco,  E.,  and  Kaka,  A.  P.  (2009).  “Towards  
automated progress assessment of workpackage components in construction
projects  using  computer  vision.”  Adv. Engin. Informatics, Elsevier, 23(1), 93–103.
Karsch, K., Golparvar-Fard,   M.,   and   Forsyth,   D.   (2014).   “ConstructAide:   analyzing  
and   visualizing   construction   sites   through   photographs   and   building   models.”  
ACM Transactions on Graphics (TOG), ACM, 33(6), 176.
Kim,  H.,  and  Kano,  N.  (2008).  “Comparison  of construction photograph and VR image
in  construction  progress.”  Automation in Construction, 17(2), 137–143.
Rebolj,   D.,   Babič,   N.   Č.,   Magdič,   A.,   Podbreznik,   P.,   and   Pšunder,   M.   (2008).  
“Automated  construction  activity  monitoring  system.”   Adv. Engin. Informatics,
Elsevier, 22(4), 493–503.
Wu,  C.  (2013).  “Towards  linear-time  incremental  structure  from  motion.”  Proceedings
- 2013 Int Conference on 3D Vision, 3DV 2013, 127–134.
Zhang, X., Bakis, N., Lukins, T. C., Ibrahim, Y. M., Wu, S., Kagioglou, M., Aouad,
G.,  Kaka,  A.  P.,  and  Trucco,  E.  (2009).  “Automating  progress   measurement   of  
construction  projects.”  Automation in Construction, Elsevier, 18(3), 294–301.


Using BIM for Last Planner System: Case Studies in Brazil

M. C. Garrido1; R. Mendes Jr. 2; S. Scheer3; and T. F. Campestrini4


1Graduate Student, Graduate Program of Production Engineering, Federal University
of Paraná, C.P. 19011, CEP 81531-980, Curitiba, PR, Brazil. E-mail:
[email protected]
2Professor, Department of Production Engineering, Federal University of Paraná, C.P.
19011, CEP 81531-980, Curitiba, PR, Brazil. E-mail: [email protected]
3Professor, Department of Civil Construction, Federal University of Paraná, C.P.
19011, CEP 81531-980, Curitiba, PR, Brazil. E-mail: [email protected]
4Graduate Student, Graduate Program of Civil Engineering, Federal University of
Paraná, C.P. 19011, CEP 81531-980, Curitiba, PR, Brazil. E-mail:
[email protected]

Abstract

The combined use of Building Information Modeling (BIM) technology and lean
philosophy provides gains in the application of the lean principles. This paper
presents the results of the application of an integration framework for the use of BIM
with the Last Planner SystemTM. These results were obtained from the real-world use
of BIM technology in two residential projects in the city of Curitiba (Brazil). First, a
diagnostic of the planning and control processes was carried out; after this, the LPS
was implemented and the BIM models were developed; and finally, the integration
framework of BIM with the Last Planner SystemTM was applied. The use of the BIM
model in production management assisted decision-making in short- and medium-
term planning and control. Electronic communication (online or offline) was
implemented for visualization of work package status and 4D visualization of
construction schedules. The implementation of BIM and LPS in these two projects
demonstrates some of the specific interactions between BIM and LPS proposed in
the literature.

INTRODUCTION

Although independent, lean construction and BIM are already recognized by
researchers as approaches that can reinforce each other and increase their potential
application in construction management (Koskela et al., 2010). Conceptual analysis of
lean construction and Building Information Modelling indicates significant synergies
between the two. Interactions between the two range right from the design phase to


handover and facilities management. Although lean construction is being applied
throughout, the application of BIM still remains predominant in the design phase.
Previous case studies have proven that the use of BIM with lean practices during the
construction phase improves the efficiency of planning (Dave, Boddy, & Koskela,
2011). This paper investigates the combined application of lean and BIM on two
residential building projects and discusses the advantages of this implementation.

LAST PLANNER SYSTEMTM

Lean construction influences the process of planning and production control.
Through the mapping and definition of the flows of work, information, and supplies
in the enterprise, production planning is prepared with a lean construction vision of
flows. A lean tool widely used in production control is the Last Planner SystemTM
(Ballard, 2000), which divides the process of planning and control hierarchically into
the master plan (long term), the lookahead plan (medium term), and the Last Planner
or Weekly Work Plan (short term) (Formoso et al., 2001).
According to Ballard (2000), the master plan of the construction contains
activities that SHOULD be done. The Last Planner SystemTM process moves
activities to a state in which they MAY be produced, through the systematic removal
of constraints (specifications of material, labor, and information). The Last Planners
then promise activities that CAN be done, turning them into activities that WILL BE
produced in the Weekly Work Plan. At the end of the week, the Last Planners review
the activities actually performed (DID). For the author, the Last Planner SystemTM is
a philosophy that should be followed by everyone involved in the production process.

BIM

Building Information Modeling (BIM) represents a major transition in design
practice, implying a paradigm shift in relation to CAD systems, which merely
automate aspects of production drawing (Eastman et al., 2011). BIM can be
understood as a process: the collaborative design, construction, and operation of a
building, made possible by information technology. It can also be understood as a
product: the information model of the building, virtually designed with geometric
data and other data relevant to the construction, manufacturing, and operation of that
building. Wang et al. (2014) reinforce the need to define a work breakdown structure
(WBS) to organize the data of the BIM model.

INTERACTION BETWEEN BIM AND LEAN

Sacks et al. (2010) list studies that show a strong synergistic interaction
between BIM and lean construction. The use of BIM brings the need to revise
organizational processes and to define deliverables, which is exactly what lean
construction seeks. The same authors note that BIM brings the opportunity to define
and revise the core processes; therefore, BIM functionalities can be used to achieve
lean principles and even to pull workflows. The same authors proposed a matrix of
56 synergistic interactions between BIM and lean construction.


Sacks et al. (2013) developed a system called KanBIM. This system uses BIM and
visual management of work packages to pull flows along the Last Planner System.
Lean principles are achieved through the pulled flow.
Bhatla and Leite (2012) propose a theoretical framework for the interaction
between BIM and the Last Planner System™, in which BIM features are used to
remove constraints during the Lookahead and to coordinate meetings for MEP clash
detection.

FRAMEWORK FOR BIM AND LAST PLANNER SYSTEMTM

As a working guideline, this study uses the theoretical framework for integrating
BIM with the Last Planner System™ (LPS) proposed by Mendes Junior et al. (2014).
The WBS orients the elaboration of the BIM model, serving as the definition of the
work packages. These packages demand the information necessary for production:
schedule, cost, and quality. The master plan is drawn up through the 4D BIM model,
within the SHOULD phase of the LPS.

Figure 1 – Theoretical integration framework of BIM with the Last Planner System™.

In the Lookahead plan (CAN phase of the LPS), the BIM model suggests demands
for information (RFIs). These RFIs become constraints and are forwarded to the
members of the management team. Data collected on site provide production
feedback, which may or may not suggest a planning update.
In the weekly meeting (WILL phase of the LPS), the management team divides
the constraint-free activities into smaller tasks, sequences and discusses them, and
sets the weekly schedule. All demands for information are systematically registered
in the BIM model under their respective WBS codes. After the production period, the
tasks performed and those not concluded must be managed. Quality management is
applied, and the information on the performed tasks is fed back into the BIM model.

METHODOLOGY

Two case studies were carried out on residential construction sites in Curitiba
(Brazil). Data were collected through participant observation, direct observation,
semi-structured interviews, and documentary analysis. In both projects, the procedure
adopted by the researchers is the one presented in Figure 2.

Figure 2 – Research procedure.

Before the weekly meetings for LPS planning, the researchers spent a period
collecting data on the ongoing activities, aiming to understand and diagnose the
planning and control processes and to establish an action plan. In the first weekly
meetings, the researchers conducted one-week short-term plans with the management
team and the Last Planners (three foremen). After this, they also began the medium-
term plans, looking six weeks ahead (Lookahead). In the last phase, after LPS was in
routine use, the BIM model was introduced into both the short- and medium-term
discussions. All data were kept in a database with online access at the construction
site and were updated after the weekly meetings.

CASE STUDY: FOUR TOWERS RESIDENTIAL PROJECT

The first project consists of four residential towers with nine floors each, a
basement, and a boulevard, all in reinforced concrete structures. The fourth tower
also has a second underground level and commercial units.
At the beginning, the researchers collected the project WBS and the CAD
designs from the organization to develop the BIM model. The master plan was
produced in meetings with engineers, foremen, supervisors, and trainees. The master
plan in this case study was developed without the use of 4D BIM and was displayed
on a whiteboard in the site office.


The work packages were pulled into the Lookahead plan, with a “view window”
of six weeks ahead, and their restrictions were identified and communicated to the
management team. The Lookahead plan was kept in spreadsheets, without the use of
the BIM model. However, 3D visualization was used to level the communication in
the meetings, because it made it easier to understand which work packages were in
the Lookahead.
The researchers followed up on the restrictions in short meetings with the
management team. As work packages reached zero restrictions, their status was
changed in the BIM model (color number 2, see Figure 5). These packages entered
the production stock, to be discussed at the weekly meeting. After the weekly
meeting, work packages that had been completed assumed the status “Check quality”
(color number 4, see Figure 5), so that the trainee could check their quality.
In this project, the restrictions, the sequencing, and the planning updates were
not managed in the BIM model. Because the project was in its final stage, the BIM
model assumed a role only in the WILL and DID phases of the LPS. Thus, the work
packages only took the statuses “Check quality”, “Not finished”, and “Rework”.
There were many problems with completing all the rework. Because of the reduced
time to completion, it was necessary to know exactly which activities remained for
completion and what rework had to be done. A specific production team was
mobilized only to solve these problems.
The BIM model supported managing the delivery of work packages, with respect
to quality and completion, both for packages neglected in previous months and for
those currently in progress. The status of the work packages in the fourth tower is
shown in Figure 3 below (see the color convention in Figure 5). It is possible to see
completion problems distributed throughout the tower (yellow status).

Figure 3 – Status of each work package in the fourth tower.


CASE STUDY: FIFTEEN-FLOOR RESIDENTIAL BUILDING PROJECT

This project consists of a tower with fifteen floors, two basements, and
common areas for recreation, a swimming pool, and a barbecue fireplace, located in
the city of Curitiba (Brazil).
In the same way, the researchers used the WBS of the project to develop the
BIM model. The Last Planner System was already in use in this construction
company. When the researchers began the study, the master plan already existed.
During the development of the 4D BIM, some changes were carried out in the
master plan, mainly in external activities.
The work packages are pulled into the Lookahead and their restrictions are
noted. Through 4D BIM, the sequencing is reviewed and, if necessary, revised. These
work packages assume the status “Restriction” (color 1, see Figures 4 and 5) and the
restrictions are registered in the database. As the restrictions are removed, the
researchers collect the data and update the database until the work package can
assume the status “No restriction” (color 2, see Figures 4 and 5), entering the
production stock. In the weekly meetings, the Last Planners schedule the work
packages, which take the status “Weekly plan”. The information on the duration,
start date, end date, and quality of each work package is updated in the BIM model.
The quality supervisor decides whether the activity lacks completion, whether it
needs rework, or whether it is within the standards accepted by the organization and
can be paid to the labor team, taking the status “Paid” (color 8, see Figure 5). A status
view of the building can be seen in Figure 4.
In this case study, the framework was applied almost entirely.

Figure 4 – Status of each work package by apartment: from “Restriction” to
“Finished”.

RESULTS

These real-world applications demonstrate how to use BIM technology for
electronic communication (both online and offline), allowing the visualization of
work package control status and 4D visualization of construction schedules. The
BIM functionalities help the management team implement some lean principles, such
as reducing production cycle durations, reducing the inventory of work packages, and
pulling production.
A summary of the applied procedure, using the statuses and colors of Figure 5,
is as follows. While the work package is in the Lookahead plan, its restrictions and
demands for information are managed by the management team and its status is
“Restriction” (color 1). When managers remove all the restrictions, the work package
can be scheduled, assuming the status “No restriction” (color 2). If it is scheduled for
the current week, its status changes to “Weekly plan” (color 3). When it is concluded,
its status changes to “Check quality” (color 4), demanding quality verification. The
person responsible for this verification decides on the situation of the work package:
order completion (status “Not finished”, color 5); order rework (status “Rework”,
color 6); or approve and close the package (status “Finished”, color 7). At the time of
labor payments, the work packages with status “Finished” can be paid, and after this,
their status changes to “Paid” (color 8).

Figure 5 – The eight different statuses that a work package can assume.
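
The eight-status lifecycle summarized above can be read as a simple state machine. The sketch below is one possible interpretation of those transitions, written in Python; the status names follow the text, but the allowed-transition set (e.g., returning from “Not finished” or “Rework” to “Check quality”) is an assumption and not the implementation used in the case studies.

from enum import Enum

class Status(Enum):
    RESTRICTION = 1      # constraints still open (Lookahead)
    NO_RESTRICTION = 2   # all restrictions removed, in production stock
    WEEKLY_PLAN = 3      # scheduled for the current week
    CHECK_QUALITY = 4    # concluded, awaiting quality verification
    NOT_FINISHED = 5     # verification ordered completion
    REWORK = 6           # verification ordered rework
    FINISHED = 7         # approved and closed
    PAID = 8             # paid to the labor team

# Allowed transitions, following the summary above (an interpretation,
# not the authors' implementation).
TRANSITIONS = {
    Status.RESTRICTION: {Status.NO_RESTRICTION},
    Status.NO_RESTRICTION: {Status.WEEKLY_PLAN},
    Status.WEEKLY_PLAN: {Status.CHECK_QUALITY},
    Status.CHECK_QUALITY: {Status.NOT_FINISHED, Status.REWORK, Status.FINISHED},
    Status.NOT_FINISHED: {Status.CHECK_QUALITY},
    Status.REWORK: {Status.CHECK_QUALITY},
    Status.FINISHED: {Status.PAID},
    Status.PAID: set(),
}

def advance(current: Status, new: Status) -> Status:
    """Validate a status change before writing it back to the database linked
    to the BIM model."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"Invalid transition: {current.name} -> {new.name}")
    return new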

In both case studies, the research team spent nearly four months on the
diagnostic of the planning and control processes; then four months on the LPS
implementation, during which the BIM model was also developed; and, in the final
phase, four months attending the weekly meetings using the BIM model. Only after
the process diagnostic and the LPS implementation was it possible to use the
integration framework.

CONCLUSION

BIM and lean have been implemented independently on many projects and
have yielded significant benefits to all the stakeholders. Recent research has pointed
out several interactions between BIM and lean that can lead to further benefits, and
some integration frameworks for implementing BIM and lean have been proposed.
The case studies presented in this paper apply the integration framework proposed by
Mendes Junior et al. (2014) (see Figure 1), using a BIM model and the Last Planner
System™ for production management on the construction site. This implementation
of BIM with LPS increases the efficiency of decision-making for the weekly plans.
The use of the BIM model in production management assists decision-making
in short- and medium-term planning and control. The BIM model provides the
correct information at the proper time, so little time is wasted on data collection. In
their site visits, the managers could quickly start the discussions for decision-making,
ensuring that the work packages were produced on time, on cost, and with quality.
The work package management via BIM was performed from the status “Restriction”
to “Paid”, with reliability and precision and without spending much time finding out
the real status of each package. The BIM model was the centralizer of reliable
information. To make this possible, the entire management team needed to be
involved: the foremen (Last Planners), the supervisors responsible for quality and for
measurements for payment, the manager (engineer), and the chief engineer on site.
The results in this paper show that the following BIM functionalities stated by
Sacks et al. (2010) were employed: single information source, automated clash
checking, visualization of process status, and online communication of product and
process information. These functionalities supported the lean principles of reducing
variability (flow variability), reducing production cycle durations, reducing the
inventory of tasks, reducing batch sizes, using a pull system, and using visual
management through technology.

REFERENCES

Ballard, G. (2000). “The Last Planner System of Production Control.” Thesis (Doctor
of Philosophy) – School of Civil Engineering, Faculty of Engineering.
University of Birmingham, Birmingham.
Bhatla, A.; Leite, F. (2012). “Integration Framework of BIM with the Last Planner
System.” In: Annual Conference of the International Group for Lean
Construction, 12, Proceedings.... San Diego, United States.
Dave, B., Boddy, S., & Koskela, L. (2011). “Visilean: designing a production
management system with lean and BIM.” In: Annual Conference of the
International Group for Lean Construction, 11, Proceedings.... Lima, Peru.
Eastman , C.; Teicholz, P.; Sacks, R.; Liston, K. (2011) “BIM Handbook: A guide to
Building Information Modeling for Owners, Managers, Designers, Engineers
and Contractors.” 2nd ed. John Wiley & Sons, Inc.
Formoso, C. T. (Org.). (2001). “Planejamento e controle da produção em empresas de
construção.” Núcleo Orientado para a Inovação da Edificação. Universidade
Federal do Rio Grande do Sul. Porto Alegre, Brazil.
Mendes Junior, R.; Scheer, S.; Garrido, M. C.; Campestrini, T. F.; (2014).
“Integração da modelagem da informação da construção (BIM) com o
planejamento e controle da produção.” In: Encontro Nacional de Tecnologia
do Ambiente Construído, 15, Proceedings... Maceió, Brazil.
Sacks, R.; Koskela, L.; Dave, B. A.; Owen, R. (2010). “The interaction of lean and
building information modeling in construction.” Journal of Construction
Engineering and Management, ASCE, p. 1307-1315.
Sacks, R.; Barak, R.; Belaciano, B.; Gurevich, U.; Pikas, E. (2013). “KanBIM
workflow management system: prototype implementation and field testing.”
Lean Construction Journal, p. 19-35.
Smith, D. K.; Tardif, M. (2009). “Building Information Modeling: a strategic
implementation guide for architects, engineers, constructors and real estate
asset managers.” 1st ed. John Wiley & Sons, Inc.
Wang, W.; Weng, S.; Wang, S.; Chen, C. (2014). “Integrating building information
models with construction process simulations for project scheduling support.”
Automation in Construction, Elsevier B.V., v. 37, n. 5, p. 68-80.


Collecting Fire Evacuation Performance Data Using BIM-Based Immersive
Serious Games for Performance-Based Fire Safety Design

Jun Zhang1 and Raja R. A. Issa2

1Ph.D. Student, Rinker School of Construction Management, University of Florida,
Gainesville, FL 32611. E-mail: [email protected]
2Holland Professor, Rinker School of Construction Management, University of
Florida, Gainesville, FL 32611. E-mail: [email protected]

Abstract

Performance-based fire safety design in buildings has been widely accepted


throughout the world. However, performance features for safety design in buildings
have traditionally been determined using historical statistical data, or simply data
from previous experience, and do not always accurately correspond to the unique
characteristics of the building. Evacuation performance is one of the main factors to
be determined and is the most challenging parameter due to the complex human
behavior exhibited during the evacuation process. As the complexity and innovation
in modern building design increases, the historical methods to determine the
evacuation performance features in safety design are becoming insufficient.
Moreover, in reality, it is impossible to conduct an evacuation experiment to
determine the evacuation performance before construction, and is also difficult to
collect the accurate features in a simulated building condition during the prescribed
emergency situations. A new method, a building information modeling (BIM)-based
Immersive Serious Gaming environment is proposed, which provides an immersive
emergency evacuation scenario to solicit the behavior decisions made by the players.
Evacuation performance data is contemporaneously collected from the players
playing the game. The interaction between occupants (players) and the building
during evacuation is studied to determine evacuation performance and provide behavioral
data for the evacuation performance evaluation of the building design. This research
will extend the study and adaptability of serious computer games in performance-
based safety design.

INTRODUCTION

Background. Fire safety design in buildings in many countries around the world is
shifting from prescriptive-based codes to performance-based codes. A performance-
based code allows the use of any design which satisfies compliance with the safety
goals of the code. Those goals are explicitly spelled out in the code, as are means that
can be used to demonstrate compliance. The performance-based design approach
improves design flexibility by establishing clear code goals and leaving the means of
achieving these goals to the designer. As a result, the codes are more functional, less
complex and easier to apply. Another advantage of performance-based design is that
it permits the incorporation and adoption of the latest building and fire
research, data and models. These models are used as the tools for measuring the


performance of design alternatives against the established safety levels. The optimum
design would be achieved to meet the code safety objectives.
For performance based designs, the engineer is also tasked with deciding if
the building is designed with enough protection to allow the occupants to evacuate
before incapacitation occurs. In a constantly evolving building environment, fire
safety engineering depends greatly on a better understanding of the fire phenomena
and of the behavior and response of building occupants to the fire. As the
complexity and innovation in modern building design increases, the historical
methods to determine the evacuation performance features in safety design are
becoming insufficient. There are normally two potential ways of trying to solve this
problem: one is to conduct fire drills or fire emergency evacuation experiments to get
more accurate information; the other is to perform computer evacuation modeling
to evaluate the evacuation performance. The fire drills or evacuation experiments tend
to be carried out very rarely due to the cost involved in planning them and the loss of
work hours in conducting them. Moreover, an evacuation drill concentrates on a
specific scenario and does not provide a comprehensive evacuation performance with
all the possible routes from different starting positions and different possible fire
locations. While evacuation modeling is increasingly becoming a part of
performance-based analyses to assess the level of life safety provided in buildings, it
should be applied conservatively because this type of validation may not actually
simulate occupant behavior in a real situation. Some of the behavioral models
perform a qualitative analysis of the behaviors of the population. However, this is
problematic since occupant behaviors are difficult to capture in fire drills.
The problem in performance-based design for fire safety is the human behavior
factors, since large amounts of real human evacuation data are needed for developing
the evacuation simulation model. Therefore, the proposed solution is a building
information modeling (BIM)-based Immersive Serious Gaming (BIM-ISG)
environment, which provides entertaining immersive emergency gaming
environments to collect and contemporaneously store the behavior decisions made by
the players while they are playing the game.

BACKGROUND

Evacuation Models. There are many reasons for performing evacuation simulations
for a building. Depending upon when the fire protection engineer is brought into the
project, evacuation models can be used during different stages of the design phase of
the building. Evacuation models are key in allowing the engineers and designers to
answer “what if” questions about the building under design. If the model is used early
enough in the design phase, it can aid in identifying possible solutions to heavy
congestion points inside the building, therefore improving safety performance. It is
most likely, however, that an engineer is brought into a project when the design is
near completion and a problem has been identified. If the project has reached the
detailed design phase, adding new stairs, exits, or extending means of egress may be
an impossibility. In this case, the models can be used to make small, but important
changes to the building, and assess the results of such changes.


For performance based designs, the engineer is also tasked with deciding if
the building is designed with enough protection to allow the occupants to escape
before incapacitation occurs. The engineer can use evacuation models to simulate
several different egress scenarios in order to evaluate the evacuation results from a
certain building. Input variables for egress scenarios include building characteristics,
such as number of floors and floor layouts, and occupant characteristics, such as
number of occupants, location of the occupants, speed, and body size. Bounding
evacuation results is important because many different fire scenarios can cause
different results, and human behavior in different fire situations is difficult to
predict. Through bounding, the designer attempts to anticipate different types of
emergencies and check if the building and occupants will reach targeted safety level
in a reasonable amount of time. The egress results are then compared with fire
modeling results for the building in order to establish whether or not the occupants
have a sufficient amount of time to escape before they are faced with hazardous
conditions, such as toxic products from smoke. In particular, mapping the human factors
onto computer models is a challenge, because each person's singular behavior is
based on individual decisions and parameters and is not deterministic like the spread
of fire and smoke, which can be modeled and simulated based on natural principles.
So, according to Santos and Aguirre (2004), for an evacuation simulation three
analytical dimensions need to be considered: the built environment (physical
location); the management of this environment (signage, escape routes); and the
social psychological and social organizational characteristics of the occupants.
Tavares and Galea (2009) noted that an evacuation simulation model must consider
four interactions: occupants–structure; occupants–occupants; occupants–fire (in case
of fire events) and fire–structure.

Agent-Based Modeling. Based on physics modeling principles, the evacuation
process is abstracted as a process in which agent objects are driven by various factors
as time flows. The characteristics and behaviors of individuals are simulated by a
mathematical model and a set of rules. Van den Berg et al. (2008) proposed a new
concept “Reciprocal Velocity Obstacle” to simulate real time multi-agent navigation.
Their study assumed that agents navigated without explicit communication with each
other and simulation of hundreds of agents in densely occupied environments showed
that real-time and scalable performance is achievable with static and moving
obstacles. Pan (2006) developed MASSEgress (Multi-Agent Simulation System for
Egress analysis), which incorporates diverse human behaviors, such as competitive
behavior, queuing behavior, herding behavior, and leader-following behavior, into
evacuation simulation. In Pan's simulation, human attributes such as whole-body
radius, average walking velocity, and maximum running velocity were
derived from the work of Eubanks et al. (1998) and Thompson et al. (2003). In order
to find out human factors, researchers used different approaches including interviews,
questionnaires, experimentation and the analysis of past emergency situations records
from media or public record. The emphasis of these data-collection efforts was to find
out mobility parameters such as travel speed, required space for people of different
ages and genders (Rüppel and Schatz 2011). Virtual agents in these studies act
according to their pre-programmed behavior and decision-making rules. These pre-


defined rules in the Agent-based Model (ABM) considered individual agent behavior,
interactions among agents and group behaviors (Pan et al. 2007). Rüppel and Schatz
(2011) provided a comprehensive literature review in the ABM evacuation
simulation.
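
As a rough illustration of the agent-based approach discussed above, the sketch below moves simple agents toward the nearest exit at an individual walking speed. The agent attributes, the time step, and the obstacle-free movement rule are illustrative assumptions only, not the behavior models of the works cited here.

import math
from dataclasses import dataclass

@dataclass
class Agent:
    x: float
    y: float
    speed: float  # walking speed in m/s (illustrative value)

def step(agents, exits, dt=0.5):
    """Advance every agent toward its nearest exit for one time step."""
    for a in agents:
        ex, ey = min(exits, key=lambda e: math.hypot(e[0] - a.x, e[1] - a.y))
        dist = math.hypot(ex - a.x, ey - a.y)
        if dist < 1e-6:
            continue  # agent has reached the exit
        move = min(a.speed * dt, dist)
        a.x += (ex - a.x) / dist * move
        a.y += (ey - a.y) / dist * move

# Example: two agents heading to a single doorway at (10, 0).
agents = [Agent(0.0, 0.0, 1.2), Agent(2.0, 3.0, 0.9)]
for _ in range(40):
    step(agents, exits=[(10.0, 0.0)])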

Serious Game. Serious Games refer to video games which are focused on supporting
activities such as education, training, health, advertising, or social change. To train
for improving personal fire safety skills, a few studies have tried to use serious game
technology to develop an immersive emergency environment, where the goal of the
game is to survive the fire; the player's survival is strictly dependent on choosing the
right actions in evacuating the building and taking as little time as possible to complete
the evacuation (Ribeiro and Almeida 2013). To succeed and progress in the game,
users would need to improve their decision making in fire situations by learning to
avoid common occupants’ errors. Rüppel and Schatz (2011) introduced the concept
for a BIM-based Serious Human Rescue game. The work combined BIM and serious
gaming technology into the building safety application. They made it possible to
extend BIM applications to integrate human factors. Rüppel and Schatz believed that
their work could bridge technology between the real and the virtual world in the game
(Tizani and Mawdesley 2011).

Evacuation Database. Shi and Xie (2008) attempted to develop an extensive


database for evacuation models, considering factors such as pre-movement time,
walking speed, occupant characteristics, and actions and exit choice decisions, which
can be used as input parameters for evacuation models in performance-based design
or in validating the accuracy of evacuation models. Liu et al. (2014) proposed a
framework to build a human library through a BIM based gaming environment to
collect and store accurate human behavior data contemporaneously with the players
playing the game. The repository of human library was based on the framework of
Agent Based Modeling (ABM) where human behavioral data is encapsulated as
methods of agents. Their intent was to provide validated human egress behavior data
to support evacuation simulation and accurate emergency evacuation analysis.

PROPOSED APPROACH

The proposed approach consists of integrating BIM into serious game design, adopting
cloud computing technology, and prototyping a game application.

BIM Module Input. Games are designed with the BIM module of the designed building
as an input. The BIM module serves the purpose of providing geometric and non-
geometric information. The architectural software package uses BIM technology to
enable information exchange with other applications or plug-ins. Such information
may include the size, type, material, location, and fire rating of floors, doors, walls,
and ceilings, for example. BIM provides the physical geometry information about the
building for occupants’ escape path options, and the building structure material
properties to allow for an estimate about the fire propagation rate. The prototype


described here uses Autodesk® Revit® Architecture as the tool for constructing the
BIM module, although other similar BIM software packages could be used as well.

Cloud-based Game. Cloud computing technology is adopted for the game design. It
provides the entire game experience to the users remotely from a data center. The
player is no longer dependent on a specific type or quality of gaming hardware, but
can use common devices. The end device only needs a broadband internet connection
and the ability to display High Definition (HD) video.

System Architecture Deployment. A user first logs into the system via a web
browser portal server, which is built with the BIM of the designed building. As shown
in Figure 1, the users' behavior and attributes are collected by the web server from the
terminal devices. The attributes collected from the players include their age, gender,
height, and weight; this information is taken from the data that players enter to create
the character that represents them, and otherwise these attributes are set to statistical
averages by default. While the user is playing the game, their reactions to the
emergency situation are recorded by the human behavior data collector and sent to
the cloud.

Figure 1. Game Architecture Deployment

Data definition and Collection. In the BIM-ISG environment, game scenarios are
pre-set by an administrator before game playing starts. The building designer or
safety engineer can be the administrator, who is in charge of the BIM model and sets
the fire load and fire location (or sets them to be randomly distributed) if immersive
fire scenarios are needed. Human factors are pre-set by default using existing data
from other human libraries or statistical data. Game players are encouraged to set
their own human factors (e.g., travel speed, body size, gender, and age), if available,
before playing the game.
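
One possible shape for the records gathered by the behavior data collector is sketched below; the field names and the JSON payload are hypothetical and only illustrate the kind of player attributes and timestamped actions the section describes.

from dataclasses import dataclass, field, asdict
import json, time

@dataclass
class EvacuationEvent:
    player_id: str
    age: int                 # from the player's character, or a default
    gender: str
    height_cm: float
    weight_kg: float
    action: str              # e.g., "open_door", "choose_stair", "turn_back"
    location: tuple          # (x, y, floor) inside the BIM-derived scene
    timestamp: float = field(default_factory=time.time)

def to_payload(event: EvacuationEvent) -> str:
    """Serialize an event before sending it to the cloud-side collector."""
    return json.dumps(asdict(event))

# Example record for one decision made during a game session.
print(to_payload(EvacuationEvent("p-001", 27, "F", 165.0, 58.0,
                                 "choose_stair", (12.4, 3.1, 2))))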

Performance-based Design for Building Evacuation. If sufficient data is collected,
the evacuation performance can be evaluated and trial designs for improving the
building safety design can be implemented (e.g., changing the width of the stairs).
The evacuation time is taken as the performance indicator. In general, life safety from
fire is achieved if the required evacuation time is shorter than the available safe
evacuation time. Since a proper safety design should not rely only on the evacuation
of the whole building, the safety design can also be evaluated based on each floor's
evacuation time (see Figure 2). An evacuation time that is shorter than the overall
evacuation time but longer than the targeted evacuation time for the floor can be used
as a design reference. A satisfactory design is achieved when each floor's evacuation
time is shorter than its targeted evacuation time.

Figure 2. Performance-based Design for Each Building Floor's Evacuation Time
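
The per-floor acceptance rule described above (and in Figure 2) amounts to comparing each floor's required evacuation time against its target; a minimal sketch of that comparison is shown below, with hypothetical time values in minutes.

def design_is_satisfactory(required, targeted):
    """Return True when every floor's required evacuation time is shorter
    than its targeted evacuation time."""
    return all(required[floor] < targeted[floor] for floor in required)

# Hypothetical evacuation times (minutes) aggregated from game sessions.
required_time = {1: 2.5, 2: 3.8, 3: 5.1}
targeted_time = {1: 3.0, 2: 4.0, 3: 5.0}

if not design_is_satisfactory(required_time, targeted_time):
    flagged = [f for f in required_time if required_time[f] >= targeted_time[f]]
    print("Trial design needed (e.g., wider stairs) on floors:", flagged)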

DISCUSSION

Research Contributions. To collect evacuation performance data and to conduct an


evacuation performance-based design, consideration of human behavior factors in
evacuation simulation, as well as the building design, is a big challenge for safety
engineers. The proposed use of immersive serious games for the collection of
behavioral data, together with the accessibility of the game through cloud computing
technology, makes the game available to a larger base of players, especially future
building occupants. Compared to other data that is
used for performance-based design, this method will provide more accurate and
comprehensive simulation information after validation. This developed game will be
used for tracking occupants’ evacuation behaviors and provide a foundation for future
emergency evacuation simulation and management. The proposed immersive gaming
technology makes it possible and affordable to collect critical evacuation
performance data with a properly designed immersive serious game.


Limitation and Future Studies. This paper shows that it is feasible to implement an
online immersive gaming environment for the collection of building emergency
evacuation performance data. The framework does not evaluate the game quality and
performance of the cloud-based online gaming environment or the server
requirements for massive access to the gaming server; during the implementation
phase, these issues should be considered in more detail. In addition, issues related to
the player's experience in an immersive gaming environment, such as lack of comfort
and ways to help end users adjust to uncomfortable game situations, have not yet
been explored. Even though immersive gaming technology has been shown to
provide a good feeling of presence, the players' feelings about this proposed game
have not been validated yet; feedback from pilot-study players about the gaming
environment will be necessary. Finally, to make the game fun to play, an effective
reward system should be developed as an incentive for players.

CONCLUSION

This paper discussed a new approach to collect human behavior and performance data
in emergency evacuations from players in a serious game. A framework was
proposed to use current technology, integrating cloud gaming, immersive gaming,
and BIM, to solve the problem of capturing real human behavior in emergency
situations. BIM is used in the game design phases to create the building environment
and evacuation routes information. Cloud computing is proposed to solve the problem
of accessibility by players and make it possible to have vast numbers of connected
game devices for human behavior collection. An immersive game is proposed to help
solicit the gamers’ real behavior in the gaming environment. The human egress data
collected from the game is expected to greatly benefit future emergency evacuation
simulation studies and facilitate performance-based fire safety design for building
evacuation.

REFERENCES

Burstedde, C., Klauck, K., Schadschneider, A., and Zittartz, J. (2001). "Simulation of
pedestrian dynamics using a two-dimensional cellular automaton." Physica A:
Statistical Mechanics and its Applications, 295(3), 507-525.
Du, J., and El-Gafy, M. (2012). "Virtual Organizational Imitation for Construction
Enterprises: Agent-Based Simulation Framework for Exploring Human and
Organizational Implications in Construction Management." Journal of
Computing in Civil Engineering, 26(3), 282-297.
Eubanks, J. J., Hill, F., and Casteel, A. (1998). "Pedestrian Accident Reconstruction
and Litigation. Lawyers & Judges Publishing Company." Inc.
Gwynne, S. (2011). "Employing Human Egress Data." Pedestrian and Evacuation
Dynamics, Springer, 47-57.
Liu, R., Du, J., & Issa, R. R. (2014). "Human Library for Emergency Evaluation in
BIM-based Serious Game Environment." In Proceedings ICCBE/ASCE/CIBW078
2014 International Conference on Computing in Civil and Building Engineering.


Liu, R., Du, J., & Issa, R. R. (2014). "Cloud-based deep immersive game for human
egress data collection: a framework." Journal of Information Technology in
Construction, 336-349.
Pan, X. (2006). "Computational modeling of human and social behaviors for
emergency egress analysis." Stanford University.
Pan, X., Han, C. S., Dauber, K., and Law, K. H. (2007). "A multi-agent based
framework for the simulation of human and social behaviors during
emergency evacuations." Ai & Society, 22(2), 113-132.
Ribeiro, J., Almeida, J. E., Rossetti, R. J., Coelho, A., & Coelho, A. L. (2013).
Towards a serious games evacuation simulator. arXiv preprint
arXiv:1303.3827
Rueppel, U., and Stuebbe, K. M. (2008). "BIM-based indoor-emergency-navigation-
system for complex buildings." Tsinghua Science & Technology, 13, 362-367.
Rüppel, U., and Schatz, K. (2011). "Designing a BIM-based serious game for fire
safety evacuation simulations." Advanced Engineering Informatics, 25(4),
600-611.
Santos G, Aguirre B.E. (2004). "A critical review of emergency evacuation
simulation models." In: Proceedings of the workshop on building occupant
movement during fire emergencies,Gaithersburg, Maryland. p. 27–52.
Shi, L., Xie, Q., Cheng, X., Chen, L., Zhou, Y., & Zhang, R. (2009). "Developing a
database for emergency evacuation model." Building and Environment, 44(8),
1724-1729
Smith, S. P., and Trenholme, D. (2009). "Rapid prototyping a virtual fire drill
environment using computer game technology." Fire Safety Journal, 44(4),
559-569.
Tavares, R. M. and Galea, E. R. (2009). "Evacuation modelling analysis within the
operational research context: A combined approach for improving enclosure
designs." Building and Environment, 44(5), 1005-1016.
Tizani, W., and Mawdesley, M. J. (2011). "Advances and challenges in computing in
civil and building engineering." Advanced Engineering Informatics, 25(4),
569-572.
Van den Berg, J., Lin, M., and Manocha, D.(2008) "Reciprocal velocity obstacles for
real-time multi-agent navigation." Proc., Robotics and Automation, 2008.
ICRA 2008. IEEE International Conference on, IEEE, 1928-1935.
Varas, A., Cornejo, M. D., Mainemer, D., Toledo, B., Rogan, J., Muñoz, V., and
Valdivia, J. A. (2007). "Cellular automaton model for evacuation process with
obstacles." Physica A: Statistical Mechanics and its Applications, 382(2),
631-642.
Wang, Y., Zhang, L., Ma, J., Liu, L., You, D., and Zhang, L. (2011). "Combining
building and behavior models for evacuation planning." Computer Graphics
and Applications, IEEE, 31(3), 42-55.
Zhang, L., Wang, Y., Shi, H., and Zhang, L. (2012). "Modeling and analyzing 3D
complex building interiors for effective evacuation simulations." Fire Safety
Journal, 53, 1-12.


Automatic Construction Schedule Generation Method through BIM Model
Creation

Jaehyun Park1 and Hubo Cai2

1Ph.D. Student, Lyles School of Civil Engineering, Purdue University, 550 Stadium
Mall Dr., West Lafayette, IN 47907. E-mail: [email protected]
2Assistant Professor, Division of Construction Engineering & Management, Lyles
School of Civil Engineering, Purdue University, 550 Stadium Mall Dr., West
Lafayette, IN 47907. E-mail: [email protected]

Abstract

Building information modeling (BIM) has recently experienced rapid


technological advancement and gained prevalence in the architectural, engineering,
and construction (AEC) industry. BIM models contain valuable information on
buildings to support four-dimensional (4D) modeling that involves the integration of
three-dimensional (3D) building components and time as specified in the one-
dimensional (1D) construction schedule. While 4D modeling has been proven to be
useful in effective construction management, a critical challenge in current 4D
modeling practice is the discrepancy between the element breakdown structure (EBS)
of the BIM model and the work breakdown structure (WBS) of scheduling. This
discrepancy leads to considerable time and effort being spent on matching and linking
BIM model components and schedule activities. This paper presents a framework for
automating the generation of construction schedules during the BIM model creation
process. A matching and pairing mechanism between EBS and WBS was created to
automate the matching and linking between BIM model elements and construction
schedule activities. A prototype system was developed and tested to validate this
framework. The proposed framework is expected to automatically generate detailed
construction schedules during the development process of BIM models.

INTRODUCTION

Construction scheduling is a critical task in construction project management


(Moon et al. 2013). However, most construction schedules are still being generated
manually, which requires a huge amount of time (Kim et al. 2013). Given the rapid
technological advancement and prevalence of building information modeling (BIM),
the architectural, engineering, and construction (AEC) industries have adopted BIM
technologies to improve the construction planning and scheduling processes. BIM
models contain valuable information on buildings to assist the generation of
construction schedules and can be utilized in 4D (three-dimensional [3D] building
components plus time as specified in the schedule) modeling. 4D modeling has been


proven to be useful in effective construction management (Koo and Fischer 2000;


Hartmann et al. 2008; Wang et al. 2014). However, current 4D modeling is limited in
supporting construction scheduling because of the difficulties in linking BIM models
and construction schedules (Kim et al. 2013). The main challenge in current 4D
modeling practices is the discrepancy between the element breakdown structure
(EBS) of BIM and the work breakdown structure (WBS) of traditional construction
schedules. This conflict leads to a huge amount of time and effort being spent on
manually linking BIM components to related scheduling activities. Besides the
required time and effort, this linking process is presently error-prone.
The main objective of the study presented in this paper is to develop a
framework for automating the generation of construction schedules through the
creation of BIM models to improve the effectiveness of BIM in construction
scheduling and 4D modeling. The remainder of this paper is organized as follows.
First, a literature review is presented of relevant studies on automated construction
scheduling based on computer-aided design (CAD) models or BIM models.
Subsequently, the fundamental principles and the workflow of the proposed
framework are explained in detail. A prototype system has been developed and tested
to validate this framework. Finally, conclusions and discussions for further research
are provided.

LITERATURE REVIEW

A number of studies have been conducted to automate the construction


scheduling process. Before the presence of BIM in the AEC industry, Cherneff et al.
(1991) proposed a method to integrate a computer-aided design (CAD) system with
construction schedules using knowledge-based programming and database techniques.
This method demonstrated the potential to generate a construction schedule
automatically in the AEC industry by means of a computer. Fischer and Aalami
(1996) designed a method to generate a construction schedule using knowledge of
construction methods. In their study, they described the object models of the
construction methods, which connect the construction schedule and design. As BIM
is being widely adopted in the industry, recent efforts have been made to generate
construction schedules by using the stored information in BIM models. For instance,
de Vries and Harink (2007) and Mikulakova et al. (2010) adopted knowledge-based
approaches for the automatic generation of construction schedules based on historical
project data integrated into BIM models. Kim et al. (2013) also proposed a model for
the BIM-based automated generation of construction schedules. Their study included
an automated framework for generating schedules using data and information stored
in BIM models. Specifically, a database of tasks and relationships associated with
BIM model components was built to extract appropriate information for construction
schedules. While these research efforts have been at least partially successful in
automating the generation of construction schedules from the integration of
construction knowledge and BIM models, the discrepancy between the EBS in BIM
and the WBS of traditional construction schedules remains a significant challenge.
Developing mechanisms to pair and match EBS and WBS is required to fully
automate the generation of solid construction schedules from BIM models.


PROPOSED FRAMEWORK

A framework has been created in this study to pair and match EBS and WBS
to facilitate the automation of construction scheduling. Figure 1 illustrates the
framework and its workflow. The framework consists of four phases: A) Extract the
object information from the BIM models, B) define the WBS lists and relations of
each element type, C) pair the defined WBS lists and objects of the BIM model, and
D) generate the schedule.
This framework has three prerequisites. First, the designed BIM model should
follow a common model hierarchy, such as the story-based model hierarchy of
Industry Foundation Classes (IFC) and Graphisoft ArchiCAD, which is critical to the
whole process of this method. For example, if the IFC model is used for this process,
the user should design the BIM model based on an IFC model hierarchy. Thus, the
wall elements in the IFC model should be drawn up for each story and use an IfcWall
entity. Second, the materials should be assigned to each object, which is critical. The
names of the assigned materials are later used to identify each object in pairing them
with the WBS lists. Third, the WBS directory and sequence rules should be defined
by the user. Many construction companies and government agencies have already
established and use their own manuals for the scheduling and management of
construction projects. For instance, the New Jersey Department of Transportation
(NJDOT) provides a construction scheduling manual for designers and contractors
(NJDOT 2013).

Figure 1. Methodology Workflow.

A) Extract the Object Information from the BIM Models. For the
automatic schedule generation method, the most important task is to extract relevant
information from objects in the BIM model. The BIM model is built up from basic
elements, such as walls, columns, beams, and slabs, using current commercial BIM
authoring software (e.g., Autodesk Revit or Graphisoft ArchiCAD), in accordance
with the aforementioned prerequisites. Therefore, if the user uses
Autodesk Revit to design a BIM model, object information from the BIM model can
be extracted for this method, as shown in Figure 2.
Figure 2 presents an example of data extraction from a column object in a
BIM model that is a 40-cubic-foot concrete column and located in the Level 1 area.
Similarly, data from other objects can also be extracted and grouped through a


repetition of this step, and the extracted data in this step will be used in section C
(pair the defined WBS lists and objects of the BIM model).

Figure 2. Example of Data Extraction from BIM.
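
The prototype extracts object data through the Autodesk Revit API; the sketch below only illustrates, in a tool-agnostic way, the kind of record Figure 2 shows for a Level 1 concrete column and how such records might be grouped for later pairing. The field names, the element id, and the export step are assumptions, not the Revit API itself.

# Assumed structure for one extracted BIM object, mirroring Figure 2:
# a Level 1 concrete column with a 40-cubic-foot volume.
extracted_object = {
    "element_id": 310361,          # hypothetical element id
    "element_type": "Column",
    "built_in_category": "Columns",
    "material": "Concrete, Cast-in-Place gray",
    "placement_level": "Level 1",
    "volume_cuft": 40.0,
}

def group_by_level_and_type(objects):
    """Group extracted objects so they can be paired with WBS lists later."""
    groups = {}
    for obj in objects:
        key = (obj["placement_level"], obj["element_type"])
        groups.setdefault(key, []).append(obj)
    return groups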

B) Define the WBS Lists and Relations of each Element Type. This task
defines WBS lists and the relations between each element type as basic rules for the
tasks in sections C and D. Once all the basic rules are defined, the user is able to use
the defined rules consistently for the generation of construction project schedules
based on BIM. For example, if the user continues this task for regarding reinforced
concrete work and uses Autodesk Revit to draw the BIM model, the final output of
this step can be described, as shown in Table 1.

Table 1. Example of Defined WBS Lists and Relations of each Element Type.
Element Type | Built-In Category | Material | Level | Placement Level | Related WBS Lists | Relations
Wall | Walls | Concrete | n | n | 01 Formwork; 02 Reinforcement; 03 Pouring Concrete; 04 Formwork Removal | -; 01 FS; 02 FS; 03 FS
Column | Columns | Concrete | n | n | 01 Formwork; 02 Reinforcement; 03 Pouring Concrete; 04 Formwork Removal | -; 01 FS; 02 FS; 03 FS
Beam | Structural Framing | Concrete | n | n+1 | 01 Formwork; 02 Reinforcement; 03 Pouring Concrete; 04 Formwork Removal | -; 01 FS; 02 FS; 03 FS
Floor | Floors | Concrete | n | n+1 | 01 Formwork; 02 Reinforcement; 03 Pouring Concrete; 04 Formwork Removal | -; 01 FS; 02 FS; 03 FS

Reinforced concrete work usually undergoes four steps: 1) Formwork, 2)


reinforcement, 3) pouring concrete, and 4) formwork removal. Reinforced concrete
structures are generally associated with the wall, column, beam, and floor element
types of BIM models, which are assigned a concrete material type. Based on the
previously mentioned rules, the relations between element type and WBS lists in
terms of reinforced concrete work have been defined. The discrepancies between
element type and built-in category are a characteristic of the BIM authoring software


used. In addition, if the user uses Autodesk Revit for their BIM tasks, the beam and
floor elements should be placed in the n+1th floor level if the user draws an nth floor
structure. On the other hand, if the user uses Graphisoft ArchiCAD as BIM authoring
software or other software packages, these rules can vary in accordance with the
element hierarchy of the BIM software used. Therefore, there are many possible ways
to define relations regarding the same tasks depending on the BIM authoring software
used and the BIM model hierarchy.
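
The element-type rules of Table 1 can be captured in a small lookup structure; the sketch below encodes the reinforced concrete rules under the assumptions stated in the table (the same four activities for every element type, and a placement-level offset of +1 for beams and floors in a Revit-style hierarchy). The names are illustrative, not part of the prototype.

# WBS lists and finish-to-start (FS) relations for reinforced concrete work,
# keyed by element type, following Table 1.
RC_ACTIVITIES = ["Formwork", "Reinforcement", "Pouring Concrete", "Formwork Removal"]

WBS_RULES = {
    # element_type: (built_in_category, material, placement_level_offset)
    "Wall":   ("Walls", "Concrete", 0),
    "Column": ("Columns", "Concrete", 0),
    "Beam":   ("Structural Framing", "Concrete", 1),
    "Floor":  ("Floors", "Concrete", 1),
}

def wbs_for(element_type, structural_level):
    """Return the WBS activities and the expected placement level for an element."""
    category, material, offset = WBS_RULES[element_type]
    return RC_ACTIVITIES, structural_level + offset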

C) Pair the Defined WBS Lists and Objects of the BIM Model. The aim of
this step is to pair each BIM model object and WBS list to generate a schedule with
scheduling software. This process has several steps, as shown in Figure 3. Most of the
construction tasks are repeated on each floor. Therefore, the total number of levels in
a BIM model is detected as a first process. This is one more than the total stories in
the actual building. For instance, if the user draws a two-story building, the ceiling of
the second story should be placed in the third story using a floor object. Thus, the
detected total number of levels is reduced by one. In the next step, the WBS lists
defined in section B are duplicated with the name of each level and the paired BIM
model object groups. For example, the formwork activity in Level 1 can be paired
with the walls and columns assigned concrete material and placed in Level 1 and the
beams and floors assigned concrete material and placed in Level 2. If these pairing
tasks are completed, all the processes are ready to export output to the scheduling
software. Assuming that the user has designed a two-story reinforced concrete
structure BIM model using Autodesk Revit, the final output of this step can be
described as shown in Table 2. Following this procedure, although the BIM model
contains three levels, the total number of levels in the BIM model has been
categorized into two levels: Level 1 and Level 2. In addition, the reinforced concrete
work activities have been duplicated, and relevant BIM model object groups have
been paired.

Figure 3. Procedure during Pairing WBS Lists and BIM Model Objects.
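
A minimal pairing pass over the extracted objects could look like the sketch below, assuming the dictionary-style records and reinforced concrete rules from the earlier sketches; the real prototype performs this step through the Revit API rather than plain Python dictionaries.

def pair_wbs_with_objects(objects, wbs_activities, n_structural_levels):
    """Duplicate the WBS activities per structural level and attach the
    BIM objects that belong to each activity (cf. Table 2)."""
    paired = []
    for level in range(1, n_structural_levels + 1):
        group = [obj for obj in objects
                 if (obj["element_type"] in ("Wall", "Column")
                     and obj["placement_level"] == f"Level {level}")
                 or (obj["element_type"] in ("Beam", "Floor")
                     and obj["placement_level"] == f"Level {level + 1}")]
        for activity in wbs_activities:
            paired.append({"task": f"Level {level} {activity}", "objects": group})
    return paired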

Table 2. Example of Pairing Defined WBS and BIM Model Object Group
ID | Level | WBS Lists | Relations | BIM Object Group | Placement Level
01 | Level 1 | Level 1 Formwork | - | Wall, Column / Beam, Floor | Level 1 / Level 2
02 | Level 1 | Level 1 Reinforcement | 01 FS | Wall, Column / Beam, Floor | Level 1 / Level 2
03 | Level 1 | Level 1 Pouring Concrete | 02 FS | Wall, Column / Beam, Floor | Level 1 / Level 2
04 | Level 1 | Level 1 Formwork Removal | 03 FS | Wall, Column / Beam, Floor | Level 1 / Level 2
05 | Level 2 | Level 2 Formwork | 04 FS | Wall, Column / Beam, Floor | Level 2 / Level 3
06 | Level 2 | Level 2 Reinforcement | 05 FS | Wall, Column / Beam, Floor | Level 2 / Level 3
07 | Level 2 | Level 2 Pouring Concrete | 06 FS | Wall, Column / Beam, Floor | Level 2 / Level 3
08 | Level 2 | Level 2 Formwork Removal | 07 FS | Wall, Column / Beam, Floor | Level 2 / Level 3

D) Generate the Schedule. After all the tasks are completed, the output can
be produced. The generated preliminary schedule is exported to the format of the
commercial scheduling software (e.g., Microsoft Project). The activities are grouped
and defined by story, and each activity list includes the story level, a default 1-day
duration (assuming a 5-day and 40-hour workweek), the start date, the end date, and
predecessors. By exporting the output to commercial scheduling software, the user
can utilize other functions in the scheduling software and develop the project
schedule easily.
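
The export step assigns each generated activity a default one-day duration and chains the activities with finish-to-start relations; a simplified sketch of that date arithmetic (ignoring the scheduling software's own calendar handling) is shown below. The example inputs reproduce the Level 1 dates of Table 3.

from datetime import date, timedelta

def next_workday(d):
    """Skip Saturdays and Sundays, assuming a 5-day workweek."""
    d += timedelta(days=1)
    while d.weekday() >= 5:
        d += timedelta(days=1)
    return d

def schedule_tasks(task_names, project_start):
    """Chain tasks finish-to-start with a default 1-day duration."""
    rows, start = [], project_start
    for i, name in enumerate(task_names, start=1):
        rows.append({"id": i, "task": name, "duration": "1 day",
                     "start": start, "finish": start,
                     "predecessor": i - 1 if i > 1 else None})
        start = next_workday(start)
    return rows

# Example matching the case study's Level 1 activities (cf. Table 3).
for row in schedule_tasks(["Level 1 Formwork", "Level 1 Reinforcement",
                           "Level 1 Pouring Concrete", "Level 1 Formwork Removal"],
                          date(2014, 12, 2)):
    print(row)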

Figure 4. BIM in Case Study.

Table 3. Results of Generating Schedule


ID  Task Name  Duration  Start  Finish  Predecessors
1 Reinforced Concrete Work 8 days Tue 12/2/14 Thu 12/11/14
2 Level 1 Reinforced Concrete Work 4 days Tue 12/2/14 Fri 12/5/14
3 Level 1 Formwork 1 day Tue 12/2/14 Tue 12/2/14
4 Level 1 Reinforcement 1 day Wed 12/3/14 Wed 12/3/14 3
5 Level 1 Pouring Concrete 1 day Thu 12/4/14 Thu 12/4/14 4
6 Level 1 Formwork Removal 1 day Fri 12/5/14 Fri 12/5/14 5
7 Level 2 Reinforced Concrete Work 4 days Mon 12/8/14 Thu 12/11/14 2
8 Level 2 Formwork 1 day Mon 12/8/14 Mon 12/8/14 6
9 Level 2 Reinforcement 1 day Tue 12/9/14 Tue 12/9/14 8
10 Level 2 Pouring Concrete 1 day Wed 12/10/14 Wed 12/10/14 9
11 Level 2 Formwork Removal 1 day Thu 12/11/14 Thu 12/11/14 10

DEMONSTRATION & VALIDATION

In order to test the proposed framework, a simple BIM model has been created
using Autodesk Revit, and a prototype system has been developed using Autodesk


Revit API. The BIM model in the case study consists of two levels, and each level
includes six columns, six beams, two walls, and one floor, as shown in Figure 4. The
materials of all the objects have been assigned a “concrete, cast-in-place gray” type.
Schedule generation was carried out for the reinforced concrete work, as in the
previous examples. Tables 3 and 4 present the results of the generated schedule and
the paired WBS lists and BIM model objects. Eleven tasks (three summary tasks and
eight subtasks) have been generated, and the default duration of each generated
activity is 1 day. Thirty objects have been paired with the defined WBS lists.
Therefore, it can be concluded that the proposed method in this research has been
validated.

Table 4. Results of Pairing WBS and BIM Objects
(Material of all objects: Concrete, Cast-in-Place gray)

Paired WBS: Level 1 Formwork / Level 1 Reinforcement / Level 1 Pouring Concrete /
Level 1 Formwork Removal
Object ID | Object Type | Placement Level
316090 | Wall | Level 1
315999 | Wall | Level 1
310361 | Column | Level 1
315685 | Column | Level 1
310260 | Column | Level 1
310019 | Column | Level 1
315580 | Column | Level 1
310141 | Column | Level 1
316959 | Beam | Level 2
316684 | Beam | Level 2
316883 | Beam | Level 2
317042 | Beam | Level 2
317017 | Beam | Level 2
316983 | Beam | Level 2
323321 | Floor | Level 2

Paired WBS: Level 2 Formwork / Level 2 Reinforcement / Level 2 Pouring Concrete /
Level 2 Formwork Removal
Object ID | Object Type | Placement Level
316648 | Wall | Level 2
316441 | Wall | Level 2
311059 | Column | Level 2
315244 | Column | Level 2
310946 | Column | Level 2
310857 | Column | Level 2
315070 | Column | Level 2
310695 | Column | Level 2
323148 | Beam | Level 3
323125 | Beam | Level 3
323103 | Beam | Level 3
323078 | Beam | Level 3
323061 | Beam | Level 3
323023 | Beam | Level 3
323348 | Floor | Level 3


CONCLUSION AND DISCUSSION

Currently, BIM is attracting considerable attention in the AEC industry as a means of assisting construction project management. This paper presents a framework for automating the generation of construction schedules through the creation of BIM models and, consequently, the automatic generation of 4D models. It overcomes the main challenge caused by the discrepancy between the EBS of BIM models and the WBS of traditional construction scheduling through matching and pairing mechanisms. The framework consists of four phases: extracting object information from the BIM model, defining the WBS lists and the relations between each element type, pairing the defined WBS lists with BIM model objects, and generating the schedule. The proposed method has been validated by a case study using a developed prototype system. It was found that the proposed methodology is able to generate user-defined traditional construction schedules. Although the current methodology is at an early stage, it is expected to automate the generation of construction schedules and 4D BIM through the creation of BIM models. Ongoing research is being conducted to address limitations, such as estimating activity durations automatically.

REFERENCES

Cherneff, J., Logcher, R., and Sriram, D. (1991). “Integrating CAD with
Construction-Schedule Generation.” J. Comput. Civ. Eng., 5(1), 66-84.
De Vries, B., and Harink, J.M.J. (2007). “Generation of a construction planning from
a 3D CAD model.” Autom. Constr., 16(1), 13-18.
Fischer, M. and Aalami, F. (1996). “Scheduling with Computer-Interpretable
Construction Method Models.” J. Constr. Eng. Manage., 122(4), 337-347.
Hartmann, T., Gao, J., and Fischer, M. (2008). “Areas of Application for 3D and 4D
Models on Construction Projects.” J. Constr. Eng. Manage., 134(10), 776-785.
Kim, H., Anderson, K., Lee, S., and Hildreth, J. (2013). “Generating construction
schedules through automatic data extraction using open BIM (building
information modeling) technology.” Autom. Constr., 35, 285-295.
Koo, B., and Fischer, M. (2000). “Feasibility Study of 4D CAD in Commercial
Construction.” J. Constr. Eng. Manage., 126(4), 251-260.
Mikulakova, E., König, M., Tauscher, E., Beucke, K. (2010). “Knowledge-based
schedule generation and evaluation.” Adv. Eng. Inf., 24(4), 389-403.
Moon, H., Kim, H., Kamat, V., and Kang, L. (2013). “BIM-Based Construction
Scheduling Method using Optimization Theory for Reducing Activity
Overlaps.” J. Comput. Civ. Eng., 10.1061/(ASCE)CP.1943-5487.0000342 ,
04014048.
New Jersey Department of Transportation (NJDOT). (2013). “BDC13T-02 -
Construction Scheduling Manual, 2013.” NJDOT, <http://www.state.nj.us/
transportation/eng/documents/BDC/pdf/CSM20130430.pdf> (Nov. 26, 2014).
Wang, W., Weng, S., Wang, S., and Chen, C. (2014). “Integrating building
information models with construction process simulations for project
scheduling support.” Autom. Constr., 37, 68-70.


Data-Driven Analysis Framework for Activity Cycle Diagram-Based


Simulation Modeling of Construction Operations

Sanghyung Ahn1, Phillip S. Dunston2, Amr Kandil3 and Julio C. Martinez4*


1 Postdoctoral Research Fellow, School of Civil Engineering, The University of Queensland, Brisbane St Lucia, QLD 4072, Australia; PH +61-7-3365-3568; FAX +61-7-3365-4599; email: [email protected]
2 Professor, School of Civil Engineering, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47907-2051; PH (765) 494-0640; FAX (765) 494-0644; email: [email protected]
3 Associate Professor, School of Civil Engineering, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47907-2051; PH (765) 494-2246; FAX (765) 494-0644; email: [email protected]
4* (Posthumously, deceased on June 4th, 2013) Professor, School of Civil Engineering, Purdue University

ABSTRACT

This paper proposes a framework for an autonomous, data-driven discrete-event simulation (DES) modeling process that transforms operation data into Activity Cycle Diagram (ACD)-based DES models without an a priori model network. The proposed framework uses the idea of process mining to discover a real process by extracting knowledge from activity logs, which are the lists of sequences of activities per cycle in terms of resources. The discovered process is then modeled as an ACD which can replicate the underlying activity logs. The activity log plays a key role in this procedure, as it determines the set of activities that we can identify and thus governs the analysis scope of the generated simulation model. It is therefore important to design and collect activity log data in a way that captures all the activities for an intended abstraction level. In order to automate the model generation procedure considering this requirement, we employ the notion of an activity ontology, which specifies various activities and their relations for a given construction operation at different abstraction levels in a predefined manner. Using the ontology-based framework, we can systemize the procedure of simulation modeling, which entails selecting the abstraction level of the simulation model, identifying the list of activities that need to be inferred from data to create activity logs, and determining the type and resolution of data to be collected. The proposed framework is explained and demonstrated using an earthmoving operation example.

INTRODUCTION

Discrete-event simulation (DES) that uses forms of activity cycle diagrams (ACDs) and the activity scanning (AS) modeling paradigm has been recognized as a useful technique for the quantitative analysis of operations and processes of a constructed facility (Martinez and Ioannou 1999; Martinez 2010). Extensive research efforts have been made over four decades to improve the steps needed for proper DES modeling in the Architecture, Engineering, Construction and Facility Management (AEC/FM) industry. In an effort to facilitate simulation study and application, the Visualization, Information Modeling, and Simulation (VIMS) committee—one of the technical committees under ASCE’s Technical Council on Computing and Information Technology—has identified and discussed three challenging areas in computer simulation (Lee et al. 2013), which include: 1) realistic simulation modeling; 2) applicability of simulation models to the industry; and 3) academic and educational obstacles. The focus of this paper is on addressing the first two challenges by proposing a framework that formalizes the DES modeling process to support rapid generation of data-driven DES models based on ACDs.
Thanks to the latest sensor technology advancements, several researchers have recently introduced the concept of near real-time simulation, which adapts real-time data collected using tracking technologies such as global positioning systems (GPS), radio frequency identification (RFID), and ultra wide-band (UWB) to update model parameters (mainly activity durations) if and when necessary (Song and Eldin 2012; Akhavian and Behzadan 2013; Vahdatikhaki and Hammad 2014). However, these previous studies on near real-time simulation require an a priori model, which is not discovered from the collected data but defined by modelers. Simulation model discovery based upon real operation data is essential to achieving realistic simulation modeling and to improving the applicability of simulation models to the industry, thereby gaining more credibility for simulation studies.
Therefore, the objective of this study is to formalize a framework (Figure 1) for an autonomous, data-driven DES model generation process that transforms sensor-based real-world data into DES models from scratch. Such a framework requires designing a classification schema (i.e., a taxonomy) of the level of model detail that categorizes individual activities with respect to the type of resources and the scope of the intended analysis, and modeling it in such a way that a computer system can leverage the real operation data to infer the status of activities correctly. An activity ontology hierarchy is designed to specify various activities and their relations at different abstraction levels, so as to formally define activity types and hierarchies for a given construction operation. Using the ontology-based framework, we can systemize the procedure of simulation modeling, which entails selecting the abstraction level of the simulation model, identifying the list of activities that need to be inferred from data to create activity logs, and determining the type and resolution of data to be collected. The proposed framework is demonstrated using an earthmoving operation example.

METHODOLOGY

Activity log. The proposed framework uses the idea of process mining to
discover real processes (i.e., not assumed processes) by extracting knowledge from
activity logs (Table 1), which are the lists of sequences of activities per cycle in terms
of resources. The discovered processes are then represented as ACDs at intended
abstraction levels. Since ACDs are a natural means for representing three-phase AS
simulation models (Martinez and Ioannou 1999), we can use the ACD as a blueprint for construction simulation systems such as CYCLONE (Halpin and Riggs 1992) and STROBOSCOPE (Martinez 1996).
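
Because Table 1 appears only as a figure, the storage format of the activity logs is not shown in the text; the short Python sketch below assumes one plausible layout, keyed by resource and cycle number, which is sufficient input for the discovery procedure described in the remainder of this section.

# Assumed in-memory layout for activity logs: each (resource, cycle #) key holds
# the ordered list of activities observed in that cycle.
activity_logs = {
    ("Excavator", 1): ["Load"],
    ("Truck", 1): ["Load", "Haul", "Dump", "Return"],
    ("Truck", 2): ["Load", "Haul", "Dump", "Return"],
}

def traces_for(resource):
    # Collect the traces of one resource, ordered by cycle number; these traces
    # are the raw material from which an ACD is discovered.
    return [acts for (res, _), acts in sorted(activity_logs.items()) if res == resource]

print(traces_for("Truck"))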

Figure 1. Autonomous data-driven DES model generation framework

An ACD consists of alternating circles and rectangles connected with links. The rectangles are called activities and represent tasks performed by one or more resources. The circles are called queues and represent inactive resources in specific states. However, the traditional description of an ACD does not provide a formal rationale for a sequential model development process that computer systems can use to connect the elements and create a model. Hence, an in-depth definition of ACD elements is essential to represent the role of the activity log in a computer simulation system.

Table 1. Excavator and dump truck operation activity logs

Definition 1. (ACD) Let $N = (R, Q_R, A_{R_A}, T_R, I, O, F_R, L_{R_A,\#})$ be an ACD where: $R\;(= R_A \cup R_P)$ is a finite set of resources involved in the process, such as materials (i.e., passive resources, $R_P$) and labor and equipment (i.e., active resources, $R_A$); $R_A\;(= R_{A,CL} \cup R_{A,NL})$ is a finite set of active resources which stay in a closed loop $R_{A,CL}$ and in a non-closed loop $R_{A,NL}$; $R_P\;(= R_{P,CL} \cup R_{P,NL})$ is a finite set of passive resources which stay in a closed loop $R_{P,CL}$ and in a non-closed loop $R_{P,NL}$; $Q_R$ is a finite set of queues in which resource $R$ is involved; $A_{R_A}$ is a finite set of activities in which resource $R$ is involved, such that $Q_R \cap A_{R_A} = \emptyset$; $T_R \subseteq (Q_R \times A_{R_A})$ is a finite set of directed arcs (i.e., draw links) between preceding queues and activities; $I$ is a finite set of activities which require different types of inactive resources to start; $O$ is a finite set of activities which release different types of resources as outcomes of activities; $F_R \subseteq (A_{R_A} \times Q_R)$ is a set of directed arcs (i.e., release links) between activities and following queues; and $L_{R_A,\#}$ is a simple activity log, which is a multi-set of traces over some set of activities at cycle $\#$ of resource $R_A$.

The ACD shown in Figure 2 can be formalized as follows:


$R_{A,CL} = \{\text{Excavator}, \text{Truck}\}$, $R_{A,NL} = \emptyset$
$R_{P,NL} = \{\text{Soil}\}$, $R_{P,CL} = \emptyset$
$Q_{\text{Excavator}} = \{\text{ExcIdle}\}$
$Q_{\text{Truck}} = \{\text{TrkWtLd}, \text{RdyToHaul}, \text{RdyToDump}, \text{RdyToReturn}\}$
$Q_{\text{Soil}} = \{\text{SoilInStkPile}, \text{DumpedSoil}\}$
$A_{\text{Excavator}} = \{\text{Load}\}$
$A_{\text{Truck}} = \{\text{Load}, \text{Haul}, \text{Dump}, \text{Return}\}$
$T_{\text{Excavator}} = \{(\text{ExcIdle}, \text{Load})\}$
$T_{\text{Truck}} = \{(\text{TrkWtLd}, \text{Load}), (\text{RdyToHaul}, \text{Haul}), (\text{RdyToDump}, \text{Dump}), (\text{RdyToReturn}, \text{Return})\}$
$T_{\text{Soil}} = \{(\text{SoilInStkPile}, \text{Load})\}$
$I = \{\text{Load}\}$
$O = \{\text{Dump}\}$
$F_{\text{Excavator}} = \{(\text{Load}, \text{ExcIdle})\}$
$F_{\text{Truck}} = \{(\text{Load}, \text{RdyToHaul}), (\text{Haul}, \text{RdyToDump}), (\text{Dump}, \text{RdyToReturn}), (\text{Return}, \text{TrkWtLd})\}$
$F_{\text{Soil}} = \{(\text{Dump}, \text{DumpedSoil})\}$
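
The same formalization can be restated directly as plain data structures. The Python literal below is only a transcription of the sets above for the coarse-grained excavator and truck operation; the dictionary layout itself is an arbitrary choice.

# Transcription of the Figure 2 formalization: queues Q, activities A,
# draw links T, release links F, and the I and O activity sets of Definition 1.
acd = {
    "Q": {"Excavator": ["ExcIdle"],
          "Truck": ["TrkWtLd", "RdyToHaul", "RdyToDump", "RdyToReturn"],
          "Soil": ["SoilInStkPile", "DumpedSoil"]},
    "A": {"Excavator": ["Load"],
          "Truck": ["Load", "Haul", "Dump", "Return"]},
    "T": {"Excavator": [("ExcIdle", "Load")],                       # draw links
          "Truck": [("TrkWtLd", "Load"), ("RdyToHaul", "Haul"),
                    ("RdyToDump", "Dump"), ("RdyToReturn", "Return")],
          "Soil": [("SoilInStkPile", "Load")]},
    "F": {"Excavator": [("Load", "ExcIdle")],                       # release links
          "Truck": [("Load", "RdyToHaul"), ("Haul", "RdyToDump"),
                    ("Dump", "RdyToReturn"), ("Return", "TrkWtLd")],
          "Soil": [("Dump", "DumpedSoil")]},
    "I": ["Load"],   # activities that draw different resource types to start
    "O": ["Dump"],   # activities that release different resource types
}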

Activity logs listed in Table 1 are defined by $L_R$ and illustrated in Figure 2 as ACD elements with queues, directed arcs, and activities. In turn, an ACD is composed of these activity logs of $(T_R \cup F_R)$. For instance, the components of the Load and Haul activity logs of Truck #1 in Table 1 are defined by $\{(\text{TrkWtLd}, \text{Load}), (\text{RdyToHaul}, \text{Haul})\} \subseteq T_{\text{Truck}}$. After discovering the causal dependency between Activities Haul and Load by using probabilistic inference algorithms (Agamennoni et al. 2010) or non-probabilistic techniques such as rule-based inference systems (Song and Eldin 2012; Vahdatikhaki and Hammad 2014) and K-means clustering (Akhavian and Behzadan 2013), a network link can be generated as $\{(\text{Load}, \text{RdyToHaul})\} \subseteq F_{\text{Truck}}$, thereafter producing an activity log $L_{\text{Truck},\#1} = [\text{Load}, \text{Haul}]$. In this manner, we can transform all traces acquired from real-world operation data into an ACD model which can replicate the underlying activity logs.

Figure 2. Conventional ACD for excavator and dump truck operation as coarse-grained model
Activity abstraction level. In a DES simulation study, the level of activity detail, which is the set of Activities $A_R$, determines the scope of the model. In other words, the depth of analysis that the model can support depends on the activity abstraction level. For example, the ACD for the excavator and dump truck operation shown in Figure 3 illustrates fine-grained activities compared to the ACD shown in Figure 2. To create the coarse-grained model, activity logs such as $L_{\text{Truck},\#1} = [\text{Load}, \text{Haul}, \text{Dump}, \text{Return}]$ and $L_{\text{Excavator},\#1} = [\text{Load}]$ are sufficient. On the other hand, if we want to analyze the excavator operation in more detail, as intended in the fine-grained model (Figure 3), the level of activity detail needs to be described further with subclass activities of Activity Load of the excavator, $L_{\text{Excavator},\#10} = [\text{DumpSoilToTruck}, \text{SwingEmpty}, \text{Excavate}, \text{SwingLoaded}]$. Activity Load of the truck can be divided into two Activities, EnterLdArea and DumpSoilToTruck, such that an activity log of the truck can be composed of five different activities, $L_{\text{Truck},\#47} = [\text{DumpSoilToTruck}, \text{Haul}, \text{Dump}, \text{Return}, \text{EnterLdArea}]$. This level of Activity Load can map a truck Activity EnterLdArea which starts at the waiting area and ends at the actual position where the truck is to be loaded. When we substitute an activity with a set of subclass activities, the parent activity should be the union of the set of its subclass activities. More importantly, children activities must be mutually exclusive at the same level of detail.

Activity-specific ontology hierarchy. In general, the types of work assigned to given construction equipment and labor are usually deterministic. When an earthmoving operation, for example, is to use an excavator (i.e., loader) and dump trucks (i.e., haulers), the material to be moved (i.e., the passive resource) is naturally determined as soil. It then remains to determine the abstraction level of the DES model that we want to build using the collected data. Such needs require designing a classification schema (i.e., taxonomy) of activities that categorizes each activity abstraction level with respect to its flexibility. This need is met by defining an activity-specific ontology. An ontology is an explicit specification of terms in a domain and the relations among them (Gruber 1993). The purpose of developing an ontology concurs with the requirements for a formal representation of the list of activities which need to be discovered as activity logs from field data.

Figure 3. Conventional ACD for excavator and dump truck operation as low (fine)-grained model
Figure 4 shows an example of an excavator and dump truck operation ontology hierarchy. To build an activity-specific ontology, we start from the highest activity abstraction level by identifying common activities between different types of active resources. Activity Load, which occurs through the interaction between the excavator and the dump trucks (i.e., active resources) drawing the resource soil (i.e., passive resource), is the only common activity at the highest activity abstraction level, Level #1 in Figure 4. An activity class has the following attributes: name, conditions needed to start, outcome of the activity, and corresponding questions for state identification. For example, Activity Load may have attributes as shown in Table 2. Since both the condition needed to start and the outcome of Activity Load include the same idle status of the excavator at the source, the excavator can have one activity at the highest abstraction level. That is, all the subclass Activities $A_{\text{Excavator}} = \{\text{DumpSoilToTruck}, \text{SwingEmpty}, \text{Excavate}, \text{SwingLoaded}\}$ at Level #2 of Activity Load in Figure 4 are carried out while the truck is under loading, i.e., when Activity Load occurs. The outcome of Activity Load of the truck indicates that Activity Haul becomes the following activity. At the highest abstraction level, we only need to determine an activity which differentiates each cycle from the time series data. Thus, after identifying Activity Load, which dissects each cycle, Activities Haul, Dump, and Return can exist as an aggregate Activity HaulDumpReturn at Level #1 in Figure 4. Once a set of activities at the highest abstraction level is established, another set of activities at the next abstraction level can be developed based upon the previous abstraction level, as illustrated in Figure 4. There is a special set of activities which can be added at any abstraction level. Such Activities EngineOff, Repair, and Stop can be included at any level of the excavator and dump truck earthmoving operation when the data can capture the corresponding states.
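
As a sketch, part of the ontology in Figure 4 can be encoded as a lookup from a (resource, parent activity) pair at one abstraction level to its subclass activities at the next level. Since Figure 4 itself is not reproduced here, the dictionary layout below is an assumption; only activity names given in the text are used.

# Assumed encoding of part of the activity-specific ontology (Figure 4): each
# (resource, parent activity) pair maps to the mutually exclusive subclass
# activities that replace it at the next, finer abstraction level.
ontology = {
    ("Excavator", "Load"): ["DumpSoilToTruck", "SwingEmpty", "Excavate", "SwingLoaded"],
    ("Truck", "Load"): ["EnterLdArea", "DumpSoilToTruck"],
    ("Truck", "HaulDumpReturn"): ["Haul", "Dump", "Return"],
}

def activities_at_next_level(resource, coarse_activities):
    # List the activities that must be inferred from data at the finer level;
    # a parent activity is taken to be the union of its subclass activities.
    fine = []
    for activity in coarse_activities:
        fine.extend(ontology.get((resource, activity), [activity]))
    return fine

print(activities_at_next_level("Truck", ["Load", "HaulDumpReturn"]))
# ['EnterLdArea', 'DumpSoilToTruck', 'Haul', 'Dump', 'Return']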

Identify data needs. The goal of activity recognition from data is to answer the state identification questions from the activity class attributes (Table 2). The answers can be delivered by determining what to measure and how to collect data. The proposed framework uses active resource (e.g., equipment) class attributes and methods (Table 3) to systemize the choice of data collection methods. For example, a simulation model at the highest abstraction level in Figure 4 only requires identifying the start and end states of Activity Load. In the process of finding an answer to the state identification question for each status from Table 2, “how do we know when the truck is empty or fully loaded?”, we can decide on the data collection system we need. Data is a finite set of records of the static or dynamic attributes of the resources listed in Table 3. The type of sensor can be selected through the equipment class methods. In order to check whether the weight of the truck has changed with a method such as isWeightChanged, load cell data with timestamps can fully answer the given question.
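
Since Tables 2 and 3 are shown only as figures, the attribute and record formats below (other than isWeightChanged, which is named in the text) are illustrative assumptions. The sketch simply shows how timestamped load cell records and an equipment-class method can answer the state identification question for Activity Load.

# Hypothetical equipment class: only isWeightChanged is named in the text;
# the (timestamp, payload) record format for the load cell data is assumed.
class Truck(object):
    def __init__(self, load_cell_records):
        self.load_cell_records = load_cell_records  # list of (time s, payload kg)

    def isWeightChanged(self, t1, t2, tol=500.0):
        # True if the measured payload changed between times t1 and t2.
        weights = [w for ts, w in self.load_cell_records if t1 <= ts <= t2]
        return bool(weights) and (max(weights) - min(weights)) > tol

# State identification for Activity Load at the highest abstraction level:
# loading is in progress while the payload keeps changing.
truck = Truck([(0, 0), (60, 8000), (120, 16000), (300, 16000)])
print(truck.isWeightChanged(0, 120))    # True  -> the truck is being loaded
print(truck.isWeightChanged(120, 300))  # False -> the truck is fully loaded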

Figure 4. Activity-specific ontology hierarchy


Table 2. Activity class attributes

Table 3. Equipment class attributes and methods


CONCLUSIONS AND FUTURE WORK

This paper has presented a framework for data-driven analysis by automating the ACD-based DES modeling process of construction operations without an a priori model. The framework employs the notion of an activity ontology to provide a formal specification of activities and their relations for a given construction operation at different abstraction levels in a predefined manner. The activity ontology allows us to determine the abstraction level of the simulation model as well as the list of activities that need to be identified from data to create activity logs. Additionally, the framework includes activity and resource class attributes and methods, which can rationalize the procedure of selecting what to measure and how to collect data. Although the proposed framework can incorporate various activity recognition algorithms, the authors will discuss activity log-based process mining techniques in future studies.

REFERENCES

Agamennoni, G., Nieto, J. I., and Nebot, E. M. (2010). “Vehicle activity


segmentation from position data.” 2010 13th International IEEE Conference
on Intelligent Transportation Systems (ITSC), 330–336.
Akhavian, R., and Behzadan, A. (2013). “Knowledge-Based Simulation Modeling of
Construction Fleet Operations Using Multimodal-Process Data Mining.”
Journal of Construction Engineering and Management, 139(11), 04013021.
Gruber, T. R. (1993). “A translation approach to portable ontology specifications.”
Knowledge Acquisition, 5(2), 199–220.
Halpin, D. W., and Riggs, S. (1992). Planning and analysis of construction
operations. John Wiley & Sons.
Lee, S., Behzadan, A., Kandil, A., and Mohamed, Y. “Grand Challenges in
Simulation for the Architecture, Engineering, Construction, and Facility
Management Industries.” Computing in Civil Engineering (2013), American
Society of Civil Engineers, 773–785.
Martinez, J. C. (1996). “STROBOSCOPE: State and resource based simulation of
construction processes.” Doctoral Dissertation. Department of Civil and
Environmental Engineering, University of Michigan, Ann Arbor, MI.
Martinez, J. C., and Ioannou, P. G. (1999). “General-purpose systems for effective
construction simulation.” Journal of construction engineering and
management, 125(4), 265–276.
Martinez, J. (2010). “Methodology for Conducting Discrete-Event Simulation Studies
in Construction Engineering and Management.” Journal of Construction
Engineering and Management, 136(1), 3–16.
Song, L., and Eldin, N. N. (2012). “Adaptive real-time tracking and simulation of
heavy construction operations for look-ahead scheduling.” Automation in
Construction, 27, 32–39.
Vahdatikhaki, F., and Hammad, A. (2014). “Framework for near real-time simulation
of earthmoving projects using location tracking technologies.” Automation in
Construction, 42, 50–67.


Process Mining Technique for Automated Simulation Model Generation


Using Activity Log Data

Sanghyung Ahn1, Phillip S. Dunston2, Amr Kandil3 and Julio C. Martinez4*


1 Postdoctoral Research Fellow, School of Civil Engineering, The University of Queensland, Brisbane St Lucia, QLD 4072, Australia; PH +61-7-3365-3568; FAX +61-7-3365-4599; email: [email protected]
2 Professor, School of Civil Engineering, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47907-2051; PH (765) 494-0640; FAX (765) 494-0644; email: [email protected]
3 Associate Professor, School of Civil Engineering, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47907-2051; PH (765) 494-2246; FAX (765) 494-0644; email: [email protected]
4* (Posthumously, deceased on June 4th, 2013) Professor, School of Civil Engineering, Purdue University

ABSTRACT

Generating the structure of a simulation network is one of the most important steps in the process of modeling construction operations using discrete-event simulation (DES). It is, however, a complicated and time-consuming task that requires extensive expert knowledge and data pre-processing, which are needed to establish plausible assumptions to build the network. Often such assumptions fail to capture reality, producing a large discrepancy between the generated simulation model and the underlying actual operation, due to an absence of data or incomplete knowledge of the modeler or subject matter expert (SME). As an alternative solution, this paper proposes an approach to learning the simulation model structure from data. We introduce techniques for discovering workflow models from activity log data, which are time-ordered records of all the activities performed by various types of machines during a given construction operation. The latest advancements in data collection and processing techniques, such as activity recognition algorithms, have made it possible to harvest activity logs based on sensor-based time series data collected from construction equipment. Since activities are fully ordered and recorded sequentially, these activity logs can be used to construct a process specification which adequately models an activity cycle diagram (ACD). We introduce a refined α-algorithm to extract a process model from such log data and represent it in terms of an ACD-based DES model. This paper demonstrates the proposed method in the context of earthmoving operations and shows that it can successfully mine the workflow process of the earthmoving operations represented by an ACD.

INTRODUCTION

Construction processes are very complex and include highly interdependent components subject to complex activity startup conditions. A computer simulation language based on ACDs can express the logic of complex simulation models very effectively (Martinez and Ioannou 1999). However, even though computer simulation has been broadly researched and practiced over the past four decades, simulation models are still often expensive and time-consuming to develop. Some challenges to the successful completion of a simulation modeling study (Law and Kelton 1991) include: (1) an inappropriate level of model detail; (2) failure to have people with knowledge of simulation methodology and statistics on the modeling team; and (3) failure to collect good system data. Such challenges have been a bottleneck in achieving realistic simulation models and improving the applicability of simulation models to the industry (Kandil et al. 2013).
With the recent advent of cheap, reliable remote sensing technologies, researchers in the construction engineering domain have explored automated data collection platforms, including global positioning systems (GPS), radio frequency identification (RFID), and ultra wide-band (UWB), and processing techniques to infer activities by using rule-based inference systems and K-means clustering (Song and Eldin 2012; Akhavian and Behzadan 2013; Vahdatikhaki and Hammad 2014). These recent studies aim at Near Real-Time Simulation (NRTS) to generate realistic simulation models that are responsive to changes in the real construction operation system during the execution phase. However, the current NRTS-based studies require the initial structure of a simulation model, which already defines and fixes the level of model abstraction. Thus, adaptive model updating and refinement can only be made on model parameters such as activity durations, not on the model structures themselves.
To cope with these limitations, this paper proposes an approach to learning the simulation model structure from data by using process mining techniques, as illustrated in Figure 1. We introduce process mining techniques for discovering workflow models at various abstraction levels from activity log data, which are time-ordered records of all the activities performed by different types of machines (i.e., resources) during a given construction operation. Thus, the activity logs can be used to construct a process specification which adequately models a corresponding activity cycle diagram (ACD). In this study, we employ a refined α-algorithm to extract a process model from such log data and represent it as an ACD-based DES model. This paper demonstrates the proposed method in the context of earthmoving operations and shows that it can successfully mine the workflow process of the earthmoving operations represented by an ACD.

Figure 1. Process mining: ACD-based DES model discovery from activity log


METHODOLOGY

Process mining. Process mining is a relatively young research discipline that sits between machine learning and data mining on the one hand and process modeling
and analysis on the other hand. The idea of process mining is to discover, monitor and
improve real processes (i.e., not assumed processes) by extracting knowledge from
event logs readily available in today’s systems (van der Aalst 2011). It has received
considerable attention in recent years due to its potential for significantly increasing
productivity and saving cost, especially in Business Process Management (BPM) and
Process-Aware Information Systems (PAISs) (van der Aalst et al. 2004).
BPM heavily relies on process models. Among notations used in operational
business processes modeling, Petri nets have been used for DES. A class of activity
scanning-based DES modeling systems are founded on Petri net concepts. Moreover,
Petri nets are practically identical to ACDs in functionality and appearance when used
to perform DES (Martinez and Ioannou 1999). Therefore, in this study, we take
advantage of process mining techniques to derive process models in ACDs.

Activity log. Let us assume that it is possible to sequentially record events such that each event refers to an activity (i.e., a well-defined step in the process) and
is related to a particular case (i.e., a process instance). Hence, activity log in this
paper refers to event log. Here activity names are used to identify events (i.e.,
activities). An activity log consists of cases, where cases consist of activities. The
activities for a case are represented in the form of a trace, i.e., a sequence of unique
activities.

Activity Cycle Diagram. The goal of this study is to systemize the ACD-based DES model generation process from data without a predefined model network. This requires defining the ACD network structure in such a way that a computer system can use it to formalize the specifications of model network generation.

Definition 1. (ACD) Let $N = (Q_R, A_R, T_R, F_R)$ be an ACD where $Q_R$ is a finite set of queues, each of which holds a particular resource-type member $R$; $A_R$ is a finite set of activities in which resource $R$ is involved, such that $Q_R \cap A_R = \emptyset$; $T_R \subseteq (Q_R \times A_R)$ is a finite set of directed arcs (i.e., draw links) between preceding queues and activities; and $F_R \subseteq (A_R \times Q_R)$ is a set of directed arcs (i.e., release links) between activities and following queues. In this study, we consider all activity cycles to be in the form of closed loops for their logical consistency.

Algorithm. To achieve the goal of mining a process model as an ACD from a given simple activity log, we introduce a refined α-algorithm. The presented algorithm is modified from the naïve α-algorithm (de Medeiros et al. 2003). This section presents the main concepts required to understand the refined α-algorithm. We assume an ideal condition where no noise is present, for notational simplicity. Under these ideal circumstances we investigate whether the refined α-algorithm is able to discover the corresponding ACD. The refined α-algorithm is based on four ordering relations (Definition 2) which can be derived from the log: $>_L$, $\rightarrow_L$, $\#_L$, and $\|_L$.

Definition 2. (Log-based ordering relations) Let $L$ be an activity log over $A$, i.e., $L \in \mathcal{B}(A^{*})$. Let $a, b \in A$.
(1) $a >_L b$ if and only if there is a trace $\langle t_1, \ldots, t_i, t_{i+1}, \ldots, t_n \rangle$ such that $t_i = a$ and $t_{i+1} = b$, where
 a. $i \in \{1, \ldots, n-2\}$ if $n > 2$ and $\exists i\,[t_i = t_{i+2}]$
 b. $i \in \{1, \ldots, n-1\}$ otherwise
(2) $a \rightarrow_L b$ if and only if $a >_L b$, where
 a. $b \not>_L a$ if $\exists i\,[t_i = t_{i+2} = a$ and $t_{i+1} = b]$ for $i \in \{1, \ldots, n-2\}$ and $n > 2$
 b. $b \not>_L a$ otherwise
(3) $a\ \#_L\ b$ if and only if $a \not>_L b$ and $b \not>_L a$
(4) $a \parallel_L b$ if and only if $a >_L b$ and $b >_L a$ but $a \not\rightarrow_L b$

The log-based ordering relations can be used to discover patterns in the corresponding ACD, as illustrated in Figure 2. If $a$ and $b$ are in sequence, the log will show $a \rightarrow_L b$ (e.g., Load $\rightarrow_L$ Haul in Figure 2(a)). If $a \rightarrow_L c$, $b \rightarrow_L c$, and $a\ \#_L\ b$, then this suggests that after the occurrence of either $a$ or $b$, $c$ should occur. Figure 2(b) shows this so-called XOR-join pattern (i.e., ReturnRoute1 $\rightarrow_L$ Load, ReturnRoute2 $\rightarrow_L$ Load, and ReturnRoute1 $\#_L$ ReturnRoute2). If $a \rightarrow_{L_1} c$ and $b \rightarrow_{L_2} c$ in two different activity logs (i.e., $L_1$ and $L_2$ from two different resource-type members, e.g., $L_{\text{Truck}}$ and $L_{\text{Excavator}}$), then it appears that $c$ needs to synchronize $a$ and $b$ (AND-join pattern). Figure 2(c) describes this pattern, i.e., EnterLdArea $\rightarrow_{L_{\text{Truck}}}$ DumpSoilToTruck and SwingLoaded $\rightarrow_{L_{\text{Excavator}}}$ DumpSoilToTruck. The logical counterpart of the AND-join pattern is the AND-split pattern shown in Figure 2(d). If $a \rightarrow_{L_1} b$ and $a \rightarrow_{L_2} c$ in two different activity logs (i.e., $L_1$ and $L_2$ from two different resource-type members, e.g., $L_{\text{Truck}}$ and $L_{\text{Excavator}}$), then the logs suggest that after the occurrence of $a$, both $b$ and $c$ should occur.
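
A compact Python sketch of how such a footprint could be computed from traces is given below. It derives only the standard directly-follows, causal, choice, and parallel relations; the refined treatment of length-two loops in Definition 2 is deliberately omitted, and the two example traces are hypothetical fragments matching the XOR-join pattern of Figure 2(b).

from itertools import product

def footprint(traces):
    # Standard footprint: >_L, ->_L, #_L and ||_L derived from a list of traces.
    # The refined handling of length-two loops (Definition 2) is not reproduced.
    acts = {a for t in traces for a in t}
    follows = {(t[i], t[i + 1]) for t in traces for i in range(len(t) - 1)}  # >_L
    rel = {}
    for a, b in product(sorted(acts), sorted(acts)):
        ab, ba = (a, b) in follows, (b, a) in follows
        rel[(a, b)] = "->" if ab and not ba else ("<-" if ba and not ab
                      else ("||" if ab and ba else "#"))
    return rel

traces = [["ReturnRoute1", "Load"], ["ReturnRoute2", "Load"]]  # hypothetical fragments
rel = footprint(traces)
print(rel[("ReturnRoute1", "Load")])          # '->' : causal dependency
print(rel[("ReturnRoute1", "ReturnRoute2")])  # '#'  : the routes never follow one another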

Definition 3. (α-algorithm) Let $L$ be an activity log over $A_{L_R} \subseteq A$. $\alpha(L)$ is defined as follows. $R$ refers to a particular resource type (e.g., Excavator and Truck).
(1) $A_{L_R} = \{a_R \in A_R \mid \exists \sigma \in L \ldotp a_R \in \sigma\}$
(2) $A_{I_R} = \{a_R \in A_R \mid \exists \sigma \in L \ldotp a_R = \mathrm{first}(\sigma)\}$
(3) $A_{O_R} = \{a_R \in A_R \mid \exists \sigma \in L \ldotp a_R = \mathrm{last}(\sigma)\}$
(4) $X_{L_R} = \{(A, B) \mid A \subseteq A_{L_R} \wedge A \neq \emptyset \wedge B \subseteq A_{L_R} \wedge B \neq \emptyset \wedge \forall a \in A\ \forall b \in B\ \ a \rightarrow_L b \wedge \forall a_1, a_2 \in A\ \ a_1\,\#_L\,a_2 \wedge \forall b_1, b_2 \in B\ \ b_1\,\#_L\,b_2\}$
(5) $Y_{L_R} = \{(A, B) \in X_{L_R} \mid \forall (A', B') \in X_{L_R}\ \ A \subseteq A' \wedge B \subseteq B' \Rightarrow (A, B) = (A', B')\}$
(6) $Q_{L_R} = \{q_{(A,B)} \mid (A, B) \in Y_{L_R}\} \cup \{iq_{L_R}, oq_{L_R}\}$
(7) $T_{L_R} = \{(q_{(A,B)}, b) \mid (A, B) \in Y_{L_R} \wedge b \in B\} \cup \{(iq_{L_R}, a) \mid a \in A_{I_R}\}$
(8) $F_{L_R} = \{(a, q_{(A,B)}) \mid (A, B) \in Y_{L_R} \wedge a \in A\} \cup \{(a, oq_{L_R}) \mid a \in A_{O_R}\}$
(9) $\alpha(L) = (Q_R, A_R, T_R, F_R)$

Figure 2. Typical process patterns in ACD and the footprints they leave in the
activity log

In Step 1, it is checked which activities do appear in the log ($A_{L_R}$). These will correspond to the activities of the generated ACD. $A_{I_R}$ is the set of start activities, i.e., all activities that appear first in some trace (Step 2). $A_{O_R}$ is the set of end activities, i.e., all activities that appear last in some trace (Step 3). Steps 4 and 5 form the core of the α-algorithm. The challenge is to determine the queues of the ACD and their connections. We aim at constructing queues named $q_{(A,B)}$ such that $A$ is the set of input activities and $B$ is the set of output activities of $q_{(A,B)}$. All elements of $A$ should have causal dependencies with all elements of $B$, i.e., for all $(a, b) \in A \times B$: $a \rightarrow_L b$. Moreover, the elements of $A$ should never follow one another, i.e., for all $a_1, a_2 \in A$: $a_1\,\#_L\,a_2$. A similar requirement holds for $B$ too. $X_{L_R}$ in Step 4 is the set of all such pairs that meet the requirements just mentioned. If one would insert a queue for any element in $X_{L_R}$, there would be too many queues. Therefore, only the “maximal pairs” $(A, B)$ should be included in $Y_{L_R}$ (i.e., in Step 5, all nonmaximal pairs are removed). Note that for any pair $(A, B) \in X_{L_R}$ and nonempty set $A' \subseteq A$, it is implied that $(A', B) \in X_{L_R}$.


Every element $(A, B) \in Y_{L_R}$ corresponds to a queue $q_{(A,B)}$ connecting activities $A$ to $B$. In addition, $Q_{L_R}$ in Step 6 also contains an initial source queue $iq_{L_R}$ and a unique end queue $oq_{L_R}$ (i.e., the resource holder at the end of each cycle). In Steps 7 and 8, the arcs of the ACD are generated. All start activities in $A_{I_R}$ have $iq_{L_R}$ as an input queue and all end activities in $A_{O_R}$ have $oq_{L_R}$ as an output queue. All queues $q_{(A,B)}$ have $A$ as input nodes and $B$ as output nodes. The result is an ACD $\alpha(L) = (Q_R, A_R, T_R, F_R)$ that describes the behavior seen in event log $L$.
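
The steps above can be prototyped in a few dozen lines. The sketch below implements the naive Steps 1 to 8 for a single resource type using the standard ordering relations (so the length-two loops of the truck log would not be recovered without the refinement); the queue naming and the output format are arbitrary choices.

from itertools import chain, combinations, product

def alpha_acd(traces):
    # Naive alpha-algorithm sketch (Steps 1-8) for one resource type; returns
    # queue names, draw links (T) and release links (F) of the discovered ACD.
    acts = {a for t in traces for a in t}                                  # Step 1: A_L
    start = {t[0] for t in traces}                                         # Step 2: A_I
    end = {t[-1] for t in traces}                                          # Step 3: A_O
    follows = {(t[i], t[i + 1]) for t in traces for i in range(len(t) - 1)}
    causal = {(a, b) for a, b in follows if (b, a) not in follows}         # ->_L
    choice = {(a, b) for a, b in product(acts, acts)
              if (a, b) not in follows and (b, a) not in follows}          # #_L
    def subsets(s):
        s = sorted(s)
        return chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1))
    X = [(set(A), set(B)) for A in subsets(acts) for B in subsets(acts)
         if all((a, b) in causal for a in A for b in B)
         and all((x, y) in choice for x in A for y in A if x != y)
         and all((x, y) in choice for x in B for y in B if x != y)]        # Step 4: X_L
    Y = [(A, B) for A, B in X
         if not any(A <= A2 and B <= B2 and (A, B) != (A2, B2) for A2, B2 in X)]  # Step 5: Y_L
    names = ["q(%s,%s)" % (sorted(A), sorted(B)) for A, B in Y]            # Step 6: queues
    T = [(names[i], b) for i, (A, B) in enumerate(Y) for b in B] + \
        [("iq", a) for a in start]                                         # Step 7: draw links
    F = [(a, names[i]) for i, (A, B) in enumerate(Y) for a in A] + \
        [(a, "oq") for a in end]                                           # Step 8: release links
    return names + ["iq", "oq"], T, F

# Applied to the plain hauling cycle of the truck log, the causal chain
# Load -> Haul -> Dump -> Return is recovered as a sequence of queues.
print(alpha_acd([["Load", "Haul", "Dump", "Return"]]))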

An example of ACD discovery. Let us now consider the activity logs $L_1 = L_{\text{Excavator}}$ and $L_2 = L_{\text{Truck}}$ of an excavator and truck earthmoving operation as an example. This example shows that the refined α-algorithm is indeed able to discover an ACD based on activity logs. Table 1 shows the footprints of $L_1$ and $L_2$.

$L_1 = [\langle \text{Load} \rangle^{1000}]$
$L_2 = [\langle \text{Load}, \text{Haul}, \text{Dump}, \text{Return} \rangle^{945}, \langle \text{Load}, \text{Haul}, \text{Dump}, \text{Return}, \text{Stop}, \text{Return} \rangle^{50}, \langle \text{Load}, \text{Haul}, \text{Dump}, \text{Return}, \text{Repair}, \text{Return} \rangle^{5}]$

$A_{L_1} = A_{I_1} = A_{O_1} = \{\text{Load}\}$, $X_{L_1} = Y_{L_1} = \emptyset$, $Q_{L_1} = \{iq_{L_1}, oq_{L_1}\}$
$T_{L_1} = \{(iq_{L_1}, \text{Load})\}$, $F_{L_1} = \{(\text{Load}, oq_{L_1})\}$
$\alpha(L_1) = (Q_{L_1}, A_{L_1}, T_{L_1}, F_{L_1})$

Table 1. Footprints of $L_1$ and $L_2$

$A_{L_2} = \{\text{Load}, \text{Haul}, \text{Dump}, \text{Return}, \text{Stop}, \text{Repair}\}$
$A_{I_2} = \{\text{Load}\}$, $A_{O_2} = \{\text{Return}\}$
$X_{L_2} = \{(\{\text{Load}\}, \{\text{Haul}\}), (\{\text{Haul}\}, \{\text{Dump}\}), (\{\text{Dump}\}, \{\text{Return}\}), (\{\text{Return}\}, \{\text{Stop}\}), (\{\text{Return}\}, \{\text{Repair}\}), (\{\text{Stop}\}, \{\text{Return}\}), (\{\text{Repair}\}, \{\text{Return}\})\}$


$Y_{L_2} = \{(\{\text{Load}\}, \{\text{Haul}\}), (\{\text{Haul}\}, \{\text{Dump}\}), (\{\text{Dump}, \text{Stop}, \text{Repair}\}, \{\text{Return}\}), (\{\text{Return}\}, \{\text{Stop}\}), (\{\text{Return}\}, \{\text{Repair}\})\}$
$Q_{L_2} = \{q_{(\{\text{Load}\}, \{\text{Haul}\})}, q_{(\{\text{Haul}\}, \{\text{Dump}\})}, q_{(\{\text{Dump}, \text{Stop}, \text{Repair}\}, \{\text{Return}\})}, q_{(\{\text{Return}\}, \{\text{Stop}\})}, q_{(\{\text{Return}\}, \{\text{Repair}\})}, iq_{L_2}, oq_{L_2}\}$
$T_{L_2} = \{(q_{(\{\text{Load}\}, \{\text{Haul}\})}, \text{Haul}), (q_{(\{\text{Haul}\}, \{\text{Dump}\})}, \text{Dump}), (q_{(\{\text{Return}\}, \{\text{Stop}\})}, \text{Stop}), (q_{(\{\text{Return}\}, \{\text{Repair}\})}, \text{Repair}), (q_{(\{\text{Dump}, \text{Stop}, \text{Repair}\}, \{\text{Return}\})}, \text{Return}), (iq_{L_2}, \text{Load})\}$
$F_{L_2} = \{(\text{Load}, q_{(\{\text{Load}\}, \{\text{Haul}\})}), (\text{Haul}, q_{(\{\text{Haul}\}, \{\text{Dump}\})}), (\text{Return}, q_{(\{\text{Return}\}, \{\text{Stop}\})}), (\text{Return}, q_{(\{\text{Return}\}, \{\text{Repair}\})}), (\text{Dump}, q_{(\{\text{Dump}, \text{Stop}, \text{Repair}\}, \{\text{Return}\})}), (\text{Stop}, q_{(\{\text{Dump}, \text{Stop}, \text{Repair}\}, \{\text{Return}\})}), (\text{Repair}, q_{(\{\text{Dump}, \text{Stop}, \text{Repair}\}, \{\text{Return}\})}), (\text{Return}, oq_{L_2})\}$
$\alpha(L_2) = (Q_{L_2}, A_{L_2}, T_{L_2}, F_{L_2})$

Figure 3 shows $N_1 = \alpha(L_1)$ and $N_2 = \alpha(L_2)$, i.e., the models just computed. The structure of $N_1$ is a one-length loop, while $N_2$ contains two-length loops. The discovered ACDs in Figure 3 prove that the refined α-algorithm presented in this paper can successfully mine the underlying process such that the ACDs can indeed replay the traces. The derived models can then be programmed in any ACD-based DES computer system such as CYCLONE (Halpin and Riggs 1992) and STROBOSCOPE (Martinez 1996).

Figure 3. ACDs $N_1$ and $N_2$ derived from $L_1 = L_{\text{Excavator}}$ and $L_2 = L_{\text{Truck}}$


CONCLUSIONS AND FUTURE WORK

This paper has introduced the refined α-algorithm together with a definition of the ACD structure. This algorithm is an extended α-algorithm, modified for ACD-based DES model generation. However, the refined α-algorithm has problems with noise, infrequent/incomplete behavior, and complex routing constructs. Nevertheless, it provides a good introduction to automated data-driven DES model generation. The α-algorithm is simple, and many of its ideas have been embedded in more complex and robust techniques. We will use the algorithm as a baseline for discussing the challenges related to process discovery for automated data-driven simulation model generation and for introducing more practical algorithms in the future.

REFERENCES

Akhavian, R., and Behzadan, A. (2013). “Knowledge-Based Simulation Modeling of


Construction Fleet Operations Using Multimodal-Process Data Mining.”
Journal of Construction Engineering and Management, 139(11), 04013021.
Halpin, D. W., and Riggs, S. (1992). Planning and analysis of construction operations.
John Wiley & Sons.
Lee, S., Behzadan, A., Kandil, A., and Mohamed, Y. “Grand Challenges in
Simulation for the Architecture, Engineering, Construction, and Facility
Management Industries.” Computing in Civil Engineering (2013), American
Society of Civil Engineers, 773–785.
Law, A. M., and Kelton, W. D. (1991). Simulation modeling and analysis. McGraw-Hill, New York.
Martinez, J. C. (1996). “STROBOSCOPE: State and resource based simulation of
construction processes.” Doctoral Dissertation. Department of Civil and
Environmental Engineering, University of Michigan, Ann Arbor, MI.
Martinez, J. C., and Ioannou, P. G. (1999). “General-purpose systems for effective
construction simulation.” Journal of construction engineering and
management, 125(4), 265–276.
Medeiros, A. K. A. de, Aalst, W. M. P. van der, and Weijters, A. J. M. M. (2003).
“Workflow Mining: Current Status and Future Directions.” On The Move to
Meaningful Internet Systems 2003: CoopIS, DOA, and ODBASE, Lecture
Notes in Computer Science, R. Meersman, Z. Tari, and D. C. Schmidt, eds.,
Springer Berlin Heidelberg, 389–406.
Song, L., and Eldin, N. N. (2012). “Adaptive real-time tracking and simulation of
heavy construction operations for look-ahead scheduling.” Automation in
Construction, 27, 32–39.
Vahdatikhaki, F., and Hammad, A. (2014). “Framework for near real-time simulation
of earthmoving projects using location tracking technologies.” Automation in
Construction, 42, 50–67.
van der Aalst, W. M. P. (2011). Process Mining. Springer.
van der Aalst, W., Weijters, T., and Maruster, L. (2004). “Workflow mining:
discovering process models from event logs.” IEEE Transactions on
Knowledge and Data Engineering, 16(9), 1128–1142.


Towards the Integration of Inspection Data with Bridge Information Models to


Support Visual Condition Assessment
V. Kasireddy1,* and B. Akinci2
1 Department of Civil and Environmental Engineering, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213. E-mail: [email protected]
2 Department of Civil and Environmental Engineering, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213. E-mail: [email protected]

Abstract

Thorough condition assessment of America’s aging bridges is necessary as they are either nearing the end of their service lives or have already passed it. It is a challenge to abstract relevant information from heterogeneous bridge data sources, such as previous inspection reports and as-built drawings, to develop a thorough understanding of the current and historical condition of a bridge. This can limit the decision-making capabilities and efficiency of an inspector during condition assessment. Current
standards, such as Industry Foundation Classes (IFC) and more recently, IFC-Bridge,
allow representation of spatial and physical features of a bridge. However, they do
not contain explicit representation concepts that can integrate inspection-specific
details (e.g. element condition ratings, repair history etc.) with spatial and physical
information from a bridge information model, which would help in condition
assessment. In this paper, we identify requirements of such an integrated model, and
discuss possible approaches to develop such a model by leveraging and extending
existing IFC-Bridge standards.

Keywords: Bridge inspection; Condition assessment; Industry foundation classes;


IFC-Bridge; Integrated information models; Semantic-rich information models

INTRODUCTION

Condition assessment is done several times over the life-cycle of a bridge to support various tasks, such as inspection planning, on-site data collection and
maintenance planning. To support condition assessment, detailed data about condition
of bridge elements are collected on site during bridge inspection. In addition to field
data, bridge inspectors also need to review documentation such as previous inspection
reports, load rating analysis reports, design drawings, as-built information, inspection
manuals, and other specifications about a bridge (FHWA, 2012 and Abudayyeh & Al-
Battaineh, 2003). Furthermore, bridge inspectors are often challenged by the need to
get a thorough understanding of the current and historical condition of a bridge in a
short period of time and this challenge is further exacerbated by the need to extract

©  ASCE 1
Computing  in  Civil  Engineering  2015 645

relevant data from multiple dispersed sources. These all contribute to occasional
inadequate inspection of a bridge (FHWA, 2001).
To abstract correct and relevant information, on the other hand, is not a straightforward task. It requires formal semantic-rich representation of condition
maintained over multiple inspections also offers an opportunity to reason temporally
with condition data. This in turn supports a critical aspect of bridge condition
assessment, i.e. to understand the evolution of defects over time (Jahanshahi & Masri,
2013). Maintaining, of course, requires storing condition information from successive
inspections in the same model and creating relationships between them.
In this paper, we identify ways to leverage IFC based model to integrate
condition information and specifically analyze the extent to which IFC-Bridge can
represent inspection-related information and how it can be extended to support
condition assessment in bridge inspections. The authors also describe a case study
done on a steel beam concrete bridge in support of the discussions in this paper.

BACKGROUND STUDY

IFC, developed by buildingSMART (buildingSMART, 2008), is a widely used standard for the representation and interoperability of building information. Understandably, as the focus of this schema has been buildings, it has several shortcomings in terms of representing information applicable to the bridge domain. For this purpose, version 2x4 of IFC has been extended to formally capture spatial and physical descriptions of bridges (Lee & Kim, 2011), and the extension is now known as IFC-Bridge. The current draft version of IFC-Bridge was developed by G. Arthaud and E. Lebegue from France (Arthaud & Lebegue, 2007). It has not been officially declared a standard for bridges. Nevertheless, several researchers from all over the world are collaborating and continuously developing it (Infra Room, 2013) to make it a comprehensive schema for bridge information representation and exchange.
As part of our study, along with IFC and IFC-Bridge, we also looked at other schemas such as TransXML (Harrison, 2005), CityGML (OGC, 2008), ISO TC211 19133, etc. Among all of them, IFC and IFC-Bridge are the only schemas that currently support 3D parametric modeling and allow customization/extension of existing representation and relationship concepts, and IFC is already widely adopted by software vendors in comparison to the other schemas.
However, recent versions of IFC-Bridge focus mainly on the semantics concerning geometry and grouping information, and its potential to support condition assessment in bridges is still untapped. Therefore, in our case study, we used IFC, and IFC-Bridge specifically, to further investigate its extensibility to support condition assessment in bridges.

CASE STUDY

The case study that we focused on in this research is a steel beam bridge with a suspended deck passing over other roadways in Boston. One of the goals of this case study was to collect necessary data to investigate the extent to which current
standards support condition assessment of bridges. This bridge was known to have
many locations that suffer from deterioration and it also contains instances of
different defect types, such as section loss, concrete spalling, exposed reinforcement,
etc. Therefore, this bridge is an appropriate candidate for our preliminary study.
As part of the case study exercise, we gathered information sources pertaining to the case-study bridge, such as bridge as-built drawings, bridge inspection reports, point clouds collected using a ground laser scanner, bridge condition photos, and hand measurements of various bridge elements and the defects present on them. Using the as-built drawings, we built a preliminary 3D model of the bridge using Autodesk Revit. On the basis of observations made using the collected point cloud information and bridge condition photos, we realized that the as-built drawings had missing details and that some member additions were not updated in the drawings. Thus, they do not reflect the actual current conditions of the bridge. Therefore, we used the point cloud data to perform deviation analysis on the 3D model and updated it. Then, we corroborated
the changes with available photos and hand measurements. In the inspection reports,
spanning several pages, we found information about bridge location and meta
information, site inspection details, and then, condition ratings, subjective condition
descriptions, defect sketches and photos of various elements in the bridge. Thereafter,
we embedded this information into the 3D model to create an integrated information
model that contains condition information and semantic relationships between
different spatial and physical elements of the bridge.
Typically, an inspector has to look for information similar to what was embedded in the integrated information model in several 2D artifacts and mentally integrate details, such as measured extents of the defects, previous condition ratings, and previous repair actions, to make an assessment of existing conditions. Overall, condition assessment for a typical bridge involves comparing complete condition information from preceding inspections to be able to assess changes over that duration. Also, completeness in the condition information means integrating it with bridge spatial and meta information, on-site inspection, repair action, spatial component, and physical element information (see grey shaded box in Figure 1).

INTEGRATION APPROACH

To perform condition assessment of a bridge, we need to reason about bridge spatial and meta information such as bridge owner, posted status etc., on-site
inspection information, bridge components information, bridge physical elements and
element type information, physical element condition and its geometric information,
and repair actions on defects. Therefore, we need to formally represent this
information.
Based on our review of inspection reports and inspection manuals, and
interactions with bridge inspectors, we came up with a data model (in EXPRESS-G
language) that can potentially support condition assessment tasks (Figure 1). In this
section, we will explore if classes in this data model can be represented using IFC-
Bridge schema and for cases where that is not feasible, we will detail how existing
concepts can be extended or modified to make the representation possible.


Figure 1. Data model to support condition assessment.


Bridge spatial and meta information
The bridge itself can be modeled as IfcBridge-object, which is derived from
IfcObject through IfcBridgeStructureElement and IfcCivilStructureElement. Bridge
information, such as city, year built, owner, structure number and bridge rating,
posted status etc, can be modeled using IfcProperty-objects, and related to IfcBridge
object using IfcRelDefinesByProperties-relationship (See figure 2).

Figure 2. Representation of bridge spatial and meta information

On-site inspection information
On-site inspection can be represented in IFC-Bridge schema by creating an
IfcTask-object. As it is derived from IfcObject, all its related information, such as
inspection date, total inspection hours, current deck rating and current superstructure rating, can be related via IfcRelDefinesByProperties by modeling each of them as IfcProperty-objects. Details about inspection crew and equipment can be modeled as
IfcResource and assigned to IfcTask-object using a relationship of type
IfcRelAssignsToProcess. Inspection task itself can be assigned to IfcBridge-object
using IfcRelAssignsToProcess (See figure 3).
Bridge spatial components information
Bridge spatial components, such as the deck, superstructure, substructure, channels, and traffic safety components, can be represented as IfcBridgePart. This class has an attribute called StructureElementType of type IfcBridgeStructureElementType. However, the current version of the IFC-Bridge schema has a limited number of enumeration options that do not represent all the required bridge components, so it needs to be extended to include spatial components such as superstructure, substructure, channelway, etc. Also, since bridge spatial component objects resemble the role of the part in a whole-part relationship with the IfcBridge-object, they can be related using a subtype of the IfcRelDecomposes relationship, i.e., IfcRelNests or IfcRelAggregates. However, the IfcRelNests relationship requires that both the whole and the part be of the same object type. So, in this scenario, the IfcRelAggregates relationship may be more appropriate to relate an IfcBridgePart-object with an IfcBridge-object (see Figure 4).
Bridge physical elements and type information
Bridge physical elements can be represented as IfcBridgeElement or one of its subtypes: IfcBridgePrismaticElement and IfcBridgeSegment. The class specialization is based on the uniformity in geometric properties of a bridge element. Each of the sub-types has a type attribute, which can be used to model bridge element type information. As IfcBridgeElement objects derive from IfcObject, they can be aggregated to an IfcBridgePart-object using the IfcRelAggregates relationship (see Figure 5).
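
The whole-part chain just described (the bridge aggregating spatial components, which in turn aggregate physical elements, with each link played by IfcRelAggregates) can be mocked up with plain classes as below. This is only an illustration of the intended aggregation structure, not IFC or EXPRESS code, and all names and values are hypothetical.

# Plain-Python mock-up of the aggregation chain; the classes stand in for
# IfcBridge, IfcBridgePart and IfcBridgeElement, and the nested lists play the
# role of the IfcRelAggregates relationships.
class BridgeElement(object):                 # stands in for IfcBridgeElement
    def __init__(self, guid, element_type):
        self.guid = guid
        self.element_type = element_type     # e.g. "Girder", "DeckSlab"

class BridgePart(object):                    # stands in for IfcBridgePart
    def __init__(self, name, elements=None):
        self.name = name                     # e.g. "Superstructure" (extended enumeration)
        self.elements = elements or []       # aggregated physical elements

class Bridge(object):                        # stands in for IfcBridge
    def __init__(self, name, parts=None):
        self.name = name
        self.parts = parts or []             # aggregated spatial components

bridge = Bridge("Case-study bridge", [
    BridgePart("Deck", [BridgeElement("elem-001", "DeckSlab")]),
    BridgePart("Superstructure", [BridgeElement("elem-002", "Girder")]),
])
print([part.name for part in bridge.parts])  # ['Deck', 'Superstructure']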
Bridge physical element condition, defects, and geometric representation of defects information
Currently, IFC-Bridge V2 does not contain specifications that would allow explicit representation of bridge physical element condition and defects. In the IFC 2X4 version, for IfcElement-objects and their subtypes, there is a property set available by the name Pset_Condition, with properties such as AssessmentDescription, AssessmentDate and AssessmentCondition. AssessmentDescription and AssessmentCondition are of the types IfcText and IfcLabel, respectively. This level of detail is much below what is required for condition assessment. Since defects also have geometric information that needs to be represented, it is possible to model bridge physical element condition and defects as a derived class of IfcObject rather than of IfcProperty. IfcElementPart could be used for this purpose since it derives from IfcElement, which is a subtype of IfcObject. IfcElementPart could then be assigned to an IfcBridgeElement-object using IfcRelAssignsToProduct. As is the case with the on-site inspection process, repair actions can also be modeled as IfcTask and related to the defect (i.e., IfcElementPart) using IfcRelAssignsToProcess.
Additionally, the defect conditions have geometric representations (Engin, 2014). As defects are modeled as a subtype of IfcObject, IfcRepresentation can be used to represent defect geometry. Furthermore, based on the prior work in this area
(AASHTO, 2010; FHWA, 2001; Jahanshahi & Masri, 2013), and interactions with bridge inspectors in Boston and Pittsburgh, an important aspect of condition assessment is to understand how a defect has evolved over time. Hence, there is a need to formally specify the temporal context of defects to be able to capture the evolution of defects over successive inspections. The IfcRepresentationContext class has a provision to allow an IfcObject to have multiple representations in different contexts through the attribute ContextType (Akinci & Boukamp, 2003). In the ContextType attribute, a unique inspection-specific reference such as an inspection date could be used by casting it to an IfcLabel. The inspection date is unique, as it is unlikely that the same type of inspection will be performed on the same bridge on the same day. Therefore, by using ContextType, we can distinguish geometric representations of various defects in a temporal sense, and this can be used for the purpose of understanding defect evolution during condition assessment. Other geometric properties that are static with respect to different inspections, such as the location origin of the defect or the presence of the defect on a fracture-critical member, can be related to the IfcObject-object using the IfcRelDefinesByProperties relationship (see Figure 6).
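
As a sketch of the representation-context idea, the snippet below tags each geometric representation of a defect with a label holding the inspection date and returns the defect's evolution in date order. The class names, the dates, and the geometry placeholder are all illustrative and are not IFC entity definitions.

# Illustrative (non-IFC) classes: a defect holds one geometric representation
# per inspection, distinguished by a ContextType-like label carrying the
# inspection date; sorting by that label traces the defect's evolution.
class DefectRepresentation(object):
    def __init__(self, context_type, geometry):
        self.context_type = context_type     # e.g. "2014-11-26" cast to a label
        self.geometry = geometry             # placeholder for the geometric items

class Defect(object):                        # modeled via IfcElementPart as a proxy
    def __init__(self, defect_type, representations):
        self.defect_type = defect_type       # e.g. "Spalling", "Section loss"
        self.representations = representations

    def evolution(self):
        # Geometry keyed by inspection date, oldest inspection first.
        ordered = sorted(self.representations, key=lambda r: r.context_type)
        return [(r.context_type, r.geometry) for r in ordered]

spall = Defect("Spalling", [
    DefectRepresentation("2014-11-26", {"area_m2": 0.15}),   # hypothetical values
    DefectRepresentation("2012-06-14", {"area_m2": 0.08}),
])
print(spall.evolution())   # growth of the spall across the two inspections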

Figure 3. Representation of on-site inspection information

CONCLUSION

In the case study used in this work, we created an integrated information model of a bridge using the IFC-Bridge schema to support condition assessment. While investigating the schema in our case study, we found that the current version of the IFC-Bridge schema does not support the representation of condition information, or its integration with other relevant information from a bridge information model, with explicit representation classes and relationships between them.
Therefore, we propose a new approach to support condition assessment in bridges. We utilized the IFC-Bridge specification without significantly adding new classes to the existing data model; in other words, by making maximum use of the existing schema classes. Particularly, our three-pronged approach is a combination of (i) using geometry as a means for defect representation with IfcRepresentation, (ii) using extended relationship concepts from the IFC and IFC-Bridge schemas, such as IfcRelAssignsToProduct, IfcRelAssignsToProcess, IfcRelDefinesByProperties, and IfcRelDecomposes, to link spatial and physical bridge element information with condition information, and (iii) using the concept of representation context to contextualize defect information from multiple inspections. One limitation of this approach is the use of classes from the existing schema as proxy classes to represent some classes required for condition assessment. For instance, IfcElementPart is used as a proxy class to represent the BridgePhysicalCondition class in the absence of an explicit IFC-Bridge representation of condition and its semantic relationships. This limitation can be overcome by formally representing such classes in the schema.


As the inspection forms used to record and report condition information of a bridge, and the recording guidelines, vary across different state departments of transportation (DOTs) in the United States, we plan to further test the extensibility of the concepts identified in this study by using condition information from different types of bridges from different state DOTs in the near future.

Figure 4. Representation of bridge spatial component information

Figure 5. Representation of bridge physical element information

Figure 6. Representation of bridge physical element condition, defects, and geometric representation of defects information

ACKNOWLEDGEMENTS
The project is funded by a grant from the National Science Foundation (NSF),
#1328930. NSF's support is gratefully acknowledged. Any opinions, findings, conclusions or recommendations presented in this paper are those of the authors and do not necessarily reflect the views of the NSF. The authors also acknowledge the
support, during case study data collection efforts, from collaborators in our project,
Prof. J Hajjar, Dr. B Guldur and Y Yan from Northeastern University, Boston, and
also from MassDOT.



Methodology for Crew-Job Allocation Optimization in Project and Workface Scheduling

Ming-Fung Francis Siu1; Ming Lu2; and Simaan AbouRizk3


1Construction Engineering and Management, University of Alberta, Edmonton, Alberta, Canada. E-mail: [email protected]
2Construction Engineering and Management, University of Alberta, Edmonton, Alberta, Canada. E-mail: [email protected]
3Construction Engineering and Management, University of Alberta, Edmonton, Alberta, Canada. E-mail: [email protected]

Abstract

Existing resource scheduling methodologies are insufficient for controlling workflows for individual craft persons in project and workface planning. In practice,
the workflow of an individual resource is assigned by a project manager in
consideration of the resource supply and resource demand for particular time periods
of the project duration. This research study proposes a crew-job allocation
methodology to facilitate scheduling and resource management at both project and
workface levels. We propose the use of a mathematical model to formulate and solve the identified problem, factoring in resource-time tradeoff options on individual activities. Then a crew-job interaction table is instrumental in visualizing the
optimum resource-loaded schedule, which features the shortest project duration and
the leanest resource supply under time-dependent resource constraints. Case studies
are given to illustrate application of the proposed methodology. This technique
potentially provides analytical decision support for not only making cost-effective
resource-loaded schedules, but also facilitating the controllability of schedule
execution.

INTRODUCTION

Workface planning is defined as “the process of organizing and delivering all the elements necessary, before work is started, to enable craft persons to perform
quality work in a safe, effective and efficient manner” (CII 2013; COAA 2014).
Previous research has shown workface planning to be a critical ingredient in improving productivity in construction field activities (Fayek and Peng 2013). Based
on close collaboration with leading industry partners (contractor companies) in
Alberta, Canada, we propose a crew-job scheduling methodology to facilitate
scheduling and resource management at both project and workface levels.
Figure 1 shows the project-level resource usage histogram based on a typical
project schedule. The total project duration and periodic resource supply are
determined. The resource supply of a particular time period is estimated and matched
with the resource demand. The resource demand is aggregated from the sequenced
activities for field operations at one particular time point. Figure 2 shows the
sequenced activities and the corresponding resource usage histogram at the workface level. Resource-time tradeoff options are generally considered at this level. With
available resource supply, the execution mode of individual activities is selected
among available options in order to shorten activity and project durations. Each
option is associated with one unique combination of activity duration and resource
demand as per the construction method. As such, the schedule for field operation is
formulated in consideration of time-dependent resource availability constraints and
resource-time tradeoff options. Ideally, the simultaneous minimization of three criteria (namely, project completion time, activity completion times, and resource supply) is desirable for scheduling and resource management at both project and workface levels.


Figure 1. Project level planning.



Figure 2. Workface level planning.

The software system Primavera P6 serves as the primary scheduling tool in current practice. Primavera P6 adopts the critical path method (CPM) to formulate
schedules. To facilitate both project and workface level planning, three levels of
detailed schedules are formulated (Siu, Lu and AbouRizk 2014). At the project level,
a summary schedule shows major milestones and work packages. At the workface
level, detailed working schedules are formulated to provide the fine granularity of
weekly and daily activities, and the periodic crew resources that are available are
defined. Although Primavera P6 allows for varied resource availability limits for
different time periods (Harris 2013), we found it does not provide functionalities to
address the critical decision processes for project and workface planning, as
elaborated below:
(i) The determination of resource supply quantities for particular time periods is
largely dependent on the experience of project managers and schedulers, without any analytical decision support. As such, the matching of resource supply and demand for particular time periods is manually carried out through
a trial-and-error process.
(ii) The selection of activity execution mode, i.e., resource-time tradeoff, is
largely dependent on the experience of site superintendents and schedulers, in
consideration of site spatial and safety requirements.
(iii) The activities/work-packages are assigned randomly to individual resources
by superintendents/foremen, based on the availability of resources.

LITERATURE REVIEW

It is generally stipulated in the contractual documents that the schedule should be developed based on the critical path method (CPM) in order to guide activity and project execution. However, CPM becomes convoluted if the schedule is resource-constrained (Fondahl 1991). Previous research has improved the reliability of resource-constrained scheduling by means of mathematical modeling, evolutionary algorithms, and simulation.
Mathematical modeling solves project scheduling problems by formulating
analytical equations. Pritsker et al. (1969) proposed mathematical models which
define the objective function to minimize project completion time, with activity
start/completion times being the decision variables and the resource availability limit
generally set as a constant. Talbot (1982) further proposed mathematical models in an
attempt to tackle resource-time tradeoff problems. Menesi and Hegazy (2014) applied
constraint programming to simultaneously solve resource allocation, resource
leveling, and resource-time tradeoff problems. Nevertheless, combinatorial explosion is likely to be encountered in a project of practical size and complexity. Therefore, researchers currently focus on reducing the number of variables by
tightening the bounds with respect to the decision variables and proposing heuristic
rules for branch-and-bound algorithms to improve the computing efficiency of
enumerating solutions (Zhu et al. 2006).
Evolutionary algorithms introduce randomness into the search for optimum solutions. The optimization is analogous to a trial-and-error process and entails a large number of iterations in order to evaluate a large quantity of feasible solutions. Each potential solution encodes a set of priorities and execution modes corresponding to each activity.
optimum solution. The proposed approaches include genetic algorithm, ant-colony
optimization and leapfrog optimization (Ashuri and Tavakolan 2013). In spite of the
effectiveness and efficiency of evolutionary algorithms in solving practical resource
scheduling problems, they remain relatively weak in theoretical foundations.
Simulation is a technique used to account for operational details and model
individual resource workflows. In analyzing the impact of resource availability,
researchers have applied simulation modeling and conducted sensitivity analysis
based on pre-specified resource supply limits (Chen et al. 2012). However, the
formulation of an optimum project schedule with the shortest project duration in
consideration of the resource-time tradeoff option is not possible. Furthermore, each

3
©  ASCE
Computing  in  Civil  Engineering  2015 655

simulation model is case-dependent, tedious to build and update, and not oriented
towards reaching theoretical optimums.
In short, the aforementioned previous research largely assumed that the
resource availability remains unchanged throughout the project period. However, it is
not economical or feasible to allocate the maximum quantity of available resources to
execute the project. Developing an analytical technique to formulate and visualize the
optimum resource-loaded schedule, which features the best selection of activity
resource-time tradeoff options, the shortest project duration and the leanest resource
supply under time-dependent resource constraints, is desirable. This motivates us to
propose a crew-job allocation methodology consisting of mathematical formulations
and a crew-job interaction table to represent the scheduling results. The resulting
optimum solution provides decision support in terms of (i) fixing the resource supply to meet resource demand over particular project time periods; (ii) selecting activity execution modes; and (iii) planning individual resource workflows.

MATHEMATICAL MODELING

A linear integer programming mathematical model is proposed by extending the classic resource scheduling model formulated in operations research and computer
science by Pritsker et al. (1969) and Talbot (1982). We declare the decision variables
as the activity/work-package completion times, project completion time and resource
supply quantities for particular time periods. Note that the activity/work-package
completion time variables are declared with respect to the particular resource-time
tradeoff option. Equation (1) is the objective function intended to minimize
activity/work-package completion time, project completion time and resource supply.
Equation (2) shows the technological constraints that maintain the precedence
relationships between activities/work-packages. Equation (3) constrains one
completion time of each activity/work-package/project. Equation (4) shows the time-
dependent resource constraints that maintain the resource supply and demand
relationship over the project time periods, while Equation (5) sets the boundaries of
resource supply with respect to particular resources.

\[ \text{minimize } f = \sum_{n}\sum_{m}\sum_{t=0}^{T} t\,x_{n,m}^{t} \;+\; \sum_{t=0}^{T} t\,x_{e}^{t} \;+\; \sum_{r}\sum_{tp} R_{r}^{tp} \qquad (1) \]

\[ \sum_{m}\sum_{t=0}^{T} t\,x_{i,m}^{t} \;\le\; \sum_{m}\sum_{t=0}^{T} \left(t - d_{s,m}\right) x_{j,m}^{t}, \qquad \{i,j\} \in PS \qquad (2) \]

\[ \sum_{m}\sum_{t=0}^{T} x_{n,m}^{t} \;=\; \sum_{t=0}^{T} x_{e}^{t} \;=\; 1 \qquad (3) \]

\[ \sum_{n}\sum_{m}\sum_{\tau=t}^{t+d_{a,m}} r_{r,n,m}\,x_{n,m}^{\tau} \;\le\; R_{r}^{tp}, \qquad t \in T \qquad (4) \]

\[ lb_{r}^{tp} \;\le\; R_{r}^{tp} \;\le\; ub_{r}^{tp} \qquad (5) \]

where t = time point; tp = time period; T = time horizon; n = activity number; m = mode number; f = objective function; PS = precedence set; i = predecessor; j = successor; d_a = duration of activity; d_s = duration of successor; r = resource type; r_{r,n,m} = resource demand; R_r^{tp} = resource supply; lb_r^{tp} = lower bound; ub_r^{tp} = upper bound; x_{n,m}^t = activity completion time variable; x_e^t = project completion time variable.
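The authors solved this model with MATLAB. Purely as an illustration of how such a time-indexed 0-1 formulation can be assembled, the sketch below encodes a two-activity toy instance (hypothetical data, a single resource type, and a single time period for the supply variable) with the open-source PuLP library rather than the solver used in the paper:

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, LpInteger

T = 10                                   # planning horizon (time points 0..T)
acts = {"A": {1: (2, 1), 2: (1, 2)},     # activity -> {mode: (duration, resource demand)}
        "B": {1: (3, 2)}}
prec = [("A", "B")]                      # activity A precedes activity B
lb, ub = 0, 4                            # bounds on the resource supply R

prob = LpProblem("crew_job_allocation", LpMinimize)

# x[(n, m, t)] = 1 if activity n, executed in mode m, is completed at time t
x = {(n, m, t): LpVariable(f"x_{n}_{m}_{t}", cat=LpBinary)
     for n, modes in acts.items()
     for m, (d, _) in modes.items()
     for t in range(d, T + 1)}
R = LpVariable("R_supply", lowBound=lb, upBound=ub, cat=LpInteger)

def keys_of(n):
    return [(m, t) for (a, m, t) in x if a == n]

# Objective: sum of completion times plus resource supply (cf. Equation 1)
prob += lpSum(t * x[n, m, t] for (n, m, t) in x) + R

# Each activity completes exactly once, in exactly one mode (cf. Equation 3)
for n in acts:
    prob += lpSum(x[n, m, t] for m, t in keys_of(n)) == 1

# Precedence: completion of i precedes the start of j (cf. Equation 2)
for i, j in prec:
    prob += (lpSum(t * x[i, m, t] for m, t in keys_of(i))
             <= lpSum((t - acts[j][m][0]) * x[j, m, t] for m, t in keys_of(j)))

# Time-dependent resource constraint: demand of activities in progress <= supply (cf. Equation 4)
for t in range(T + 1):
    prob += lpSum(acts[n][m][1] * x[n, m, tau]
                  for (n, m, tau) in x
                  if t <= tau <= t + acts[n][m][0] - 1) <= R

prob.solve()
for (n, m, t), var in x.items():
    if var.value() == 1:
        print(f"Activity {n} completes at t={t} in mode {m}; resource supply R={R.value()}")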

Example case study

A small case adapted from a classic textbook example (Ahuja et al. 1994) is
given to illustrate the application of the proposed mathematical model. The example
project consists of nine activities and involves two types of resources (Resource A
and Resource B). At the project level, the project can be completed within 30 time
units. The resource supply is considered with respect to three time periods. The
resource supply is expressed as a range of lower and upper bounds (Table 1). At the
workface level, the technological constraints are observed according to the activity
precedence relationships, which are indicated by the arrows depicted in the
activity/work-package network (Figure 3). Resource-time tradeoff options are
available for executing particular activities. The resource demand as per each
resource-time tradeoff option is summarized in Table 2. For example, Activity A can
be executed in Mode 1 or Mode 2. It can be completed in 2 time units by assigning 1
unit of Resource A, or completed in 1 time unit by allocating 2 units of Resource A;
Activity C can be executed by use of either Resource A or Resource B, where
Resource B acts as a substitute resource for Resource A.

Figure 3. Activity network of example case study.

The mathematical model is readily solvable by computer software systems developed for linear integer programming. Solvers such as MATLAB (MathWorks 2014) generally implement branch-and-bound algorithms to enumerate the theoretical optimum solution. MATLAB Version 2014 is chosen in the current research as it has been found effective in determining optimum solutions to complex problems. The resulting values of the decision variables \(\{x_{s}^{0}, x_{A,1}^{2}, x_{B,2}^{1}, x_{C,1}^{8}, x_{D,1}^{7}, x_{E,1}^{20}, x_{F,2}^{3}, x_{G,1}^{16}, x_{H,1}^{10}, x_{I,1}^{19}, x_{e}^{20}\}\) are determined as 1. The values of the decision variables \(\{R_{A}^{0-6}, R_{B}^{0-6}, R_{A}^{6-10}, R_{B}^{6-10}, R_{A}^{10-30}, R_{B}^{10-30}\}\) are determined as \(\{3, 2, 2, 2, 2, 3\}\). The values of all other decision variables are 0.


As such, the shortest project duration is 20 time units. The optimum supply of
resources is determined as: (i) 3 units of Resource A and 2 units of Resource B
should be allocated from Time 0 to Time 6; (ii) 2 units of Resource A and 2 units of
Resource B should be allocated from Time 6 to Time 10; (iii) 2 units of Resource A
and 3 units of Resource B should be allocated from Time 10 to Time 20. Table 2
(bolded values) shows the selected resource-time tradeoff options for site operations.
Figure 4 depicts the formulated optimum schedule with the corresponding resource
usage histogram. The results assist the site superintendents/schedulers in determining
the resource supply over the project period. As such, under-supply or over-supply of
the resources can be avoided.

Table 1. Periodic resource supply of example case study.


Time Period (tp) Time Point (t) Resource A Resource B
Time Period 1 Time 0 – Time 6 3–4 2–3
Time Period 2 Time 6 – Time 10 2–5 2–5
Time Period 3 Time 10 – Time 30 0–4 0–3

Table 2. Activity-resource requirements of example case study.


Activity (Mode)   Duration   Resource A   Resource B
A (1)             2          1            0
A (2)             1          2            0
B (1)             3          1            0
B (2)             1          2            1
C (1)             5          2            0
C (2)             5          0            3
D (1)             4          0            2
E (1)             4          2            2
F (1)             3          2            1
F (2)             2          2            2
G (1)             6          1            3
G (2)             3          3            4
H (1)             2          2            1
H (2)             1          3            2
I (1)             3          0            1
I (2)             2          1            1
I (3)             1          3            0

CREW-JOB INTERACTION TABLE

To plan the workflow of individual resources (i.e., craft persons), a crew-job interaction table is used to simulate detailed resource workflows, based on the
optimum solutions and three heuristic rules. They are (i) the work content based
prioritization rule, (ii) the earliest-ready, first-serving rule and (iii) the early-initial,
first-ending rule. Note that the first and second rules were originally proposed by Lu
and Li (2003) to formulate a non-optimum resource-constrained schedule using the
simulation (heuristics) approach.
• The work content based prioritization rule is applied to prioritize activities.
The work content of an activity is calculated by multiplying its duration and
resource quantities as required. The larger the work content, the higher the
activity priority in obtaining resources to execute the work.
• The earliest-ready, first-serving rule is applied to prioritize individual
resources to be assigned to the prioritized activity. The work is allocated to
the resource that has the earliest ready-to-serve time. This is intended to
distribute the work uniformly to all available resources.
• The early-initial, first-ending rule is applied to prioritize resources. Higher priority is given to resources with an earlier initial engagement time so that their service can be ended first. (A minimal sketch of how these rules can be applied is given below.)
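The following minimal Python sketch (hypothetical activity data; precedence relationships and the optimized mode selection are omitted for brevity) illustrates how the work-content prioritization rule and the earliest-ready, first-serving rule can be combined to build the individual workflow rows of such a table:

# Hypothetical prioritized activities: (name, duration, number of crew members required)
activities = [("A", 2, 1), ("B", 3, 2), ("C", 5, 2), ("D", 4, 1)]

# Rule (i): larger work content (duration x resource quantity) gets higher priority.
activities.sort(key=lambda a: a[1] * a[2], reverse=True)

crew_ready = {"worker-1": 0, "worker-2": 0, "worker-3": 0}   # ready-to-serve times
assignments = []                                             # crew-job interaction records

for name, duration, crew_size in activities:
    # Rule (ii): serve the prioritized activity with the earliest-ready workers.
    chosen = sorted(crew_ready, key=crew_ready.get)[:crew_size]
    start = max(crew_ready[w] for w in chosen)               # all chosen workers must be free
    finish = start + duration
    for w in chosen:
        assignments.append((w, name, start, finish))
        crew_ready[w] = finish

for worker, job, start, finish in sorted(assignments):
    print(f"{worker}: {job} from t={start} to t={finish}")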
Based on the analytical solutions and proposed heuristic rules, the crew-job
interaction table is formulated. Figure 5 depicts the table scheme for the example case study. The horizontal axis denotes project progress time, and the vertical axis denotes
each individual resource of the crew. This crew-job interaction table assists the
superintendents/foremen to present the detailed individual resource workflows.

Figure 4. Optimum schedule of example case study.

Figure 5. Crew-job interaction table.

Practical case study

The proposed crew-job allocation methodology was also applied to the industrial turnaround project reported in Siu, Lu and AbouRizk (2014). This project consists of 107 activities. There are 9 resource-time tradeoff options available for executing 4 activities, and 19 types of resources are involved. The project duration is approximately 200 hours. The resource supply quantities are re-assessed after 100 hours. The mathematical model consists of 22,952 decision variables and 3,928 technological and time-dependent resource constraints. The results were successfully generated using MATLAB Version 2014 within a reasonable amount of time. This demonstrates the applicability of the proposed methodology in practical settings.

CONCLUSION

A methodology for crew-job allocation optimization to facilitate scheduling and resource management at both project and workface levels is proposed. An analytical technique, along with a crew-job interaction table, is developed to present optimum scheduling results factoring in resource-time tradeoff options on individual activities. The optimum schedule features the shortest project duration and the leanest resource supply under time-dependent resource constraints. This innovative methodology provides decision support in terms of fixing the resource supply to meet resource demand over particular project time periods, selecting resource-time tradeoff options, and planning individual resource workflows.

REFERENCES

Ahuja, H. N., Dozzi, S. P. and AbouRizk, S. (1994). Project management: techniques in planning and controlling construction projects. 2nd Edition, John Wiley & Sons, Inc., New York.
Ashuri, B. and Tavakolan, M. (2013). “Shuffled frog-leaping model for solving time-
cost-resource optimization problems in construction project planning.”
Journal of computing in civil engineering. 0(0), 04014026.
Chen, S., Griffis, F., Chen, P., and Chang, L. (2012). “Simulation and analytical
techniques for construction resource planning and scheduling.” Automation in
Construction. 21, 99–113.
Construction Industry Institute (CII) (2013). Advanced work packaging: design
through workface execution. Implementation resource 272-2, Version 3.0.
Construction Industry Institute.
Construction Owners Association of Alberta (COAA). (2014). COAA workface
planning rules. Edmonton, Alberta, Canada.
Fayek, A. R. and Peng, J. (2013). “Adaptation of workface planning for construction
contexts.” Canadian journal of civil engineering. 40(10), 980–987
Fondahl, J. W. (1991). ‘‘The development of the construction engineer: past progress
and future problems.’’ Journal of construction engineering and management.
117(3), 380–392.
Harris, P. E. (2013). Project planning and control using oracle primavera p6: version
8.2 eppm web. Eastwood Harris Pty Ltd.
Lu, M. and Li, H. (2003). “Resource-activity critical-path method for construction
planning.” Journal of construction engineering and management. 129(4),
412–420.
MathWorks (2014). Optimization toolbox™ user’s guide. The MathWorks, Inc.
Menesi, W. and Hegazy, T. (2014). “Multimode resource-constrained scheduling and
leveling for practical-size projects.” Journal of management in engineering.
04014092.
Pritsker, A. B., Waiters, L. J. and Wolfe, P. M. (1969). “Multiproject scheduling with
limited resources: a zero-one programming approach.” Management science.
16(1), 93–108.
Siu, M., Lu, M. and Abourizk, S. (2014). “Strategies for optimizing labor resource
planning on plant shutdown and turnaround.” Construction research congress.
1676–1685.
Talbot, F. B. (1982). “Resource-constrained project scheduling with time-resource
tradeoffs: the non-preemptive case.” Management science. (28), 1197–1210.
Zhu, G., Bard, J. F. and Yu, G. (2006). “A branch-and-cut procedure for the
multimode resource-constrained project-scheduling problem.” Journal on
computing. 18(3), 377–390.


Error Assessment of Machine Vision Techniques for Object Detection and Evaluation

S. German Paal1 and I. F. C. Smith, F.ASCE2


1Applied Computing and Mechanics Laboratory, School of Architecture, Civil and Environmental Engineering, Swiss Federal Institute of Technology (EPFL), GC G1 537, Station 18, CH-1015 Lausanne. E-mail: [email protected]
2Applied Computing and Mechanics Laboratory, School of Architecture, Civil and Environmental Engineering, Swiss Federal Institute of Technology (EPFL), GC G1 537, Station 18, CH-1015 Lausanne. E-mail: [email protected]

Abstract

As image-based assessment practices become increasingly prevalent in the field of civil engineering, so do interpretation errors. In machine vision, these errors are
typically represented by quantitative metrics stating the presence of error. In order to
provide a comprehensive assessment of the state of a structure and structural elements,
errors should be interpreted and rationally integrated into decision making. In this
paper, a performance-based approach to include the source and impact of the errors in
machine vision object detection techniques is presented. Structural element evaluation
is used as an example. The approach involves an adaptation of the error-domain
model falsification method developed previously. More specifically, an automated
method in reinforced concrete column damage state estimation based only on visually
observed damage characteristics is used as a study case. Predicted damage state(s)
(eleven in total, DS0-DS10) are compared with the measured damage state of the
column surface image that has been retrieved by way of the automated machine-
vision method. Ultimately, a set of damage states is provided as equally possible
solutions for the input image of the column surface. This analysis helps focus efforts
to reduce errors on image characteristics that are the greatest source of ambiguity. In
this study, lower errors associated with determining damage states 0 to 5 would be the
most helpful.

INTRODUCTION

In order to ensure the safety of buildings and critical infrastructure (routinely and after natural disasters), it is always necessary to visually inspect the physical
nature of structures in critical zones. Typically, such assessments are performed
manually by certified inspectors; however, these assessment practices (FEMA 2006;
ATC 1989, 1995) have been deemed time-consuming, unsafe, subjective and
expensive (Bartel 2001; German et al. 2012). In order to address these limitations,
recent progress in the field of computer vision has been applied to the damage detection of reinforced concrete (RC) structures. Many machine vision methods for crack detection (Abdel-Qader et al. 2006; Liu et al. 2002) and crack property retrieval (Yamaguchi and Hashimoto 2010; Zhu et al. 2011) have been created. In addition, spalling has been the focus of researchers in this area, since cracking and spalling are two of the strongest indicators of damage to RC structural elements. Machine vision methods for the detection and property retrieval of spalling have also been created (German et al. 2012; Paal et al. 2014). Although these methods provide accurate means of detecting and retrieving significant damage properties, there has been little systematic evaluation of the impact of image-interpretation error.
This paper adapts a recently developed measurement-data-interpretation
methodology to image recognition. Through a discrete-population approach called
error-domain model falsification, the errors associated with detecting damage features
during structural element evaluation are quantitatively linked to interpretation
ambiguity.
The next section summarizes the error-domain model-falsification
methodology and describes how it can be adapted to image recognition. This is
followed by a section presenting results of the error-domain model falsification
method in image recognition. A damaged RC column image is presented as a case to
illustrate the methodology.

METHODOLOGY

In this section, the general principle of error-domain model falsification is presented, as well as how it is used to assess the impact of errors associated with the
damage-state model. An automated model has been created which estimates the
damage state of RC columns that have been damaged in earthquakes based on
retrieved visual damage properties (Paal et al. 2014). In this model, the visual damage
on the RC column surface is detected by way of machine-vision methods. The extent
of the damage (i.e., extent of spalling, type and extent of cracking) is also retrieved
with these methods. This information is then correlated with damage states to
determine the existing damage state and residual drift capacity of the RC column, see
Table 1. The methodology described in the remainder of this section is used to
evaluate the error associated with this assessment model.
In error-domain model falsification (Goulet and Smith 2011, 2013) (Equation 1), the behavior model g(·) has a set of np unknown physical parameters, θ = [θ1, θ2, …, θnp]T, as well as the uncertainty associated with these parameters. The error-domain model falsification technique provides a comparison of predictions with measurements as shown in Equation 1, where gi(θ) represents the prediction value and yi represents the measured value. In this technique, model errors (ε*model,i) and measurement errors (ε*meas,i) are considered.
The combination of these uncertainties is used to establish threshold bounds [Tlow,i, Thigh,i] that determine whether or not a model prediction is falsified. This combination is equivalent to the difference between predictions and measurements (the residual) according to Equation 1. Threshold bounds are fixed for each measurement location and type (type is an important aspect in this study). They are fixed according to either a target reliability of identification, typically 95%, or through the combination of intervals when more precise information related to errors is not available.

Table 1. Damage state model.


Damage State   Damage Description
D0             No damage
D1             Flexural cracks at top and bottom 1/3rd of column and span width of column; longitudinal cracks at top and bottom 1/3rd of column
D2             Shear cracks occurring at the top and bottom 1/3rd of column
S2             Shear cracks occurring in the middle 1/3rd of column
S3.0           Widening and localization of shear cracks [Wc/b 1/60]
S3.1           Longitudinal cracking on side faces
S3.2           Concrete spalling exposing longitudinal steel
S3.3           Longitudinal bar buckling/crushing of core concrete [LT/b 1]
F4             Concrete spalling top and bottom 1/4th or 1/5th of respective faces
F5             Concrete spalling exposing longitudinal steel
F6             Longitudinal bar buckling/crushing of core concrete [LT/b 1]

An initial model set (IMS) is generated to cover the space of possible predictions. A model instance is falsified from the IMS if, for any
measurement, the observed residual is outside the interval defined by these threshold
bounds. All model instances found to provide predictions that are within these bounds
for all measurements are called candidate models and they comprise the candidate
model set (CMS).

\[ T_{low,i} \;\le\; g_i(\theta) - y_i \;\le\; T_{high,i} \qquad \text{(Equation 1)} \]

In the work presented in this paper, the model of interest is the damage state model (Table 1), while each measurement (predicted value) is an image of an RC column surface. From this image, the damage parameters that are necessary to determine the damage state are extracted. Table 2 shows these damage parameters and their associated measurement and modelling errors. Each measurement error is directly related to the accuracy of the respective machine vision method for retrieving the damage parameter. The modelling error accounts for the
conservative nature of the damage state model. These are user-introduced systematic errors that are intended to be typical of how experts have dealt with modelling errors in the past. For example, when confronted with an inequality such as those contained in Table 1, experts round off calculations by up to 10% in order to provide a safe classification. This is equivalent to a systematic error, which is represented in the last column of Table 2.

Table 2. Measurement and modelling error for each damage parameter.


Damage Parameter      Measurement      Modelling
Column Face           [0.15, -0.15]    [0, 0.1]
Flexural Cracks       [0.18, -0.18]    [0, 0.1]
Longitudinal Cracks   [0.12, -0.12]    [0, 0.1]
Shear Cracks          [0.06, -0.06]    [0, 0.1]
Spalling Extent       [0.28, -0.28]    [0.1, -0.1]
Spalling Location     [0.12, -0.12]    [0, 0.1]

The IMS is established by varying each of the damage parameters across their
potential values and combining the six values. A total of 4400 combinations of the
damage parameters are possible. In addition, these 4400 combinations represent the
images (predicted values) in the model falsification methodology.
Using the errors presented in Table 2, thresholds are defined for each
individual damage parameter since each parameter is a measurement type. The model
instances which are within the bounds defined by the errors are temporarily
considered to be candidate models; all others are falsified (Figure 1). Finally, each
model instance which is a candidate model on all six plots (for the six damage
parameters) is then taken to be a final candidate model, and the damage state index is
determined for each of these model instances (Figure 2).
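As an illustration of the falsification logic described above (the parameter names, grids, measured values, and threshold values below are hypothetical placeholders, not the values of Table 2), a model instance is kept as a candidate only if its residual stays inside the threshold interval for every damage parameter:

from itertools import product

# Threshold bounds [T_low, T_high] on the residual for each damage parameter,
# obtained by combining measurement and modelling errors.
thresholds = {
    "flexural_cracks": (-0.28, 0.28),
    "shear_cracks": (-0.16, 0.16),
    "spalling_extent": (-0.38, 0.18),
}

# Discrete grid of possible model-instance predictions per parameter (the IMS).
prediction_grid = {name: [0.0, 0.5, 1.0] for name in thresholds}

# Values retrieved from an input image by the machine-vision methods.
measured = {"flexural_cracks": 1.0, "shear_cracks": 0.5, "spalling_extent": 0.5}

def candidate_models():
    """Return the candidate model set (CMS): instances not falsified by any parameter."""
    names = sorted(prediction_grid)
    cms = []
    for combo in product(*(prediction_grid[n] for n in names)):
        instance = dict(zip(names, combo))
        if all(thresholds[n][0] <= instance[n] - measured[n] <= thresholds[n][1]
               for n in names):
            cms.append(instance)
    return cms

for model in candidate_models():
    print(model)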

RESULTS

The uncertainty in the damage state model was evaluated by carrying out the error-domain model falsification approach described in the previous section for each of the 4400 images (predicted values). Consider the example outlined in Figures 1
and 2. The RC column surface image has the following predicted measured values:
the column face pictured is the flexural face, both longitudinal and flexural cracks
exist, shear cracks exist in the lower 1/3rd of the column but do not exceed the limit
defined in Table 1, and spalling is present in the lower portion of the column with
longitudinal reinforcement exposed but not buckled.
Based on the thresholds (in red), which are calculated from the measurement and modelling errors (Table 2), several other values are equally possible (candidate models, marked in blue in Figure 1). Candidates which appear in all six plots are candidate models. Those that appear in five or fewer are falsified. The damage state for each of the candidate model sets is then determined using each model in the CMS.
Figure 2 displays all candidate models for this specific input image. The automated
machine-vision based method would have estimated the column’s damage state as
DS6. However, the uncertainty associated with each of the parameters shows that the
damage state could actually be DS6, DS7, DS8, DS9 or DS10.

(a) Column face (b) Longitudinal cracks

(c) Flexural cracks (d) Shear cracks

(e) Spalling location (f) Spalling extent

Figure 1. Model falsification for individual damage parameters of a sample column image.


Figure 2. Combined damage parameter model falsification, resulting in ambiguous damage-state indices for a sample column image.

In addition, a sensitivity analysis was carried out to establish which damage state was most sensitive overall to the measurement and modelling errors. For each of the 4400 input images, the precision associated with each damage state was calculated independently. Damage States 0-5 (D0-D2, S2-S3.1) are found to be the most sensitive to the inherent uncertainties, with an overall average identification reliability of 0.33.

CONCLUSION

The results show that the existing RC column automated damage-index-estimation model is ambiguous when considering the errors associated with currently
available image-recognition technology. Rather than identifying one damage state,
several candidate damage states are found in many situations.
Since Damage States 0-5 are the most sensitive to measurement error,
revisions to this part of the classification schema as well as efforts to reduce
interpretation errors that are associated with these states would most successfully
reduce classification ambiguities.

REFERENCES


Abdel-Qader, I., Pashaie-Rad, S., Abudayyeh, O. and Yehia, S. (2006). “PCA-based algorithm for unsupervised bridge crack detection,” Advances in Engrg Software, 37(12): 771-778.
Applied Technology Council (ATC). (1989). “Procedures for Postearthquake Safety Evaluations of Buildings, Report ATC-20,” Applied Technology Council, Redwood City, CA.
Applied Technology Council (ATC). (1995). “Addendum to the ATC-20
Postearthquake Building Safety Evaluation Procedures,” Applied Technology
Council, Redwood City, CA.
Bartel, J. (2001). “A picture of bridge health,” Nondestructive Testing Information
Analysis Center Newsletter, 27(1): 1-4.
Federal Emergency Management Agency (FEMA). (2006). “National Urban Search
and Rescue Response System – Structure Specialist Position Description,”
Available online:
http://www.disasterengineer.org/library/Struct%20Spec%20PD%20July%202006.pdf
German, S., Brilakis, I. and DesRoches, R. (2012). “Rapid entropy-based detection
and properties measurement of concrete spalling with machine vision for post-
earthquake safety assessments,” Advanced Engineering Informatics, 26(4):
846-858.
Goulet, J.-A. and Smith, I.F.C. (2011). “Prevention of over-instrumentation during
the design of a monitoring system for static load tests,” In 5th International
Conference on Structural Health Monitoring on Intelligent Infrastructure
(SHMII-5). Cancun, Mexico.
Goulet, J.-A. and Smith, I.F.C. (2013). “Predicting the usefulness of monitoring for
identifying the behavior of structures,” J. of Struct. Engrg., 139, SPECIAL
ISSUE: Real-World Applications for Structural Identification and Health
Monitoring Methodologies, 1716-1727.
Liu, Z., Shahrel, A., Ohashi, T., and Toshiaki, E. (2002). “Tunnel crack detection and
classification system based on image processing,” Machine Vision App. In
Ind. Insp., 4664: 145-152.
Paal, S.G., Jeon, J., Brilakis, I. and DesRoches, R. (2014). “Automated damage index
estimation of reinforced concrete columns for post-earthquake evaluations,” J.
of Struct. Engrg., 10.1061/(ASCE)ST.1943-541X.0001200, 04014228.
Paal, S.G., Brilakis, I. and DesRoches, R. (2014). “Automated spatial properties
measurement of concrete spalling through reinforcement detection,” J. of
Struct. Health Monit., (in review).
Yamaguchi, T. and Hashimoto, S. (2009). “Fast crack detection method for large-
size concrete surface images using percolation-based image processing,”
Machine Vision and App., 11(5): 797-809.
Zhu, Z., German, S. and Brilakis, I. (2011). “Visual retrieval of concrete crack
properties for automated post-earthquake structural safety evaluations,” J. of
Aut. In Constr., 20(7): 874-883.


An Integrated BIM-GIS Framework for Utility Information Management and Analyses

Jack C. P. Cheng 1 and Yichuan Deng 1


1Department of Civil and Environmental Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China; PH (+852) 2358-8186; Email: [email protected], [email protected]

ABSTRACT

Currently, utility information such as alignment and owner information is mostly managed in 2D CAD files or 2D geographical information system (GIS)
models, which have limited capacity for analyses and are sometimes not updated.
This paper presents an integrated 3D framework based on building information
modeling (BIM) and GIS technologies for managing and analyzing utility
information. In the framework, underground utility networks are modeled and
represented in GIS software to facilitate visualization. Semantic information such as
owner information and inspection records are also included in the 3D representations
of utilities in GIS software. The GIS models are then integrated in a BIM
environment based on ArcGIS using our developed BIM-GIS integration framework.
The BIM-GIS framework supports various analyses such as clash detection. Clashes
between existing utility networks and the designed pipelines can be identified and
resolved virtually ahead of time. Changes of the utility models can also be updated
conveniently in the integrated BIM-GIS framework. An example scenario that is
used to test and evaluate the developed framework will be presented in this paper.

KEYWORDS

Utility management; Geographic Information System; Building Information Modeling; Construction planning.

RESEARCH BACKGROUND

Underground utilities are critical infrastructure for urban environments. Different types of utility lines, such as electricity pipelines, drainage lines, water supply lines, and gas supply lines, make the underground space of the urban environment quite congested. For example, it was estimated by the Hong Kong government that there were on average over 47 km of underground utility lines under each kilometer of public road in Hong Kong (Candice 2011). However, these facilities are often over-aged and poorly documented. Moreover, the records for underground utilities are usually kept in 2D CAD drawings, which have limited capability to show the real locations of utilities in the subsurface. The poor management of underground utility information causes serious consequences. For example, during excavation activities on construction sites, a lack of knowledge about utilities below the site may cause destruction of utility lines, which may lead to disturbance to people's lives. It is reported that utility lines in the United States are hit every minute during construction activities (Su et al. 2013).
Several underground utility management systems have been proposed to achieve better management of utility information. For example, Du et al. (2006) proposed a system that integrates AEC software and a database management system to generate 3D utility lines from 2D drawings. Mendez et al. (2008) used a similar approach to build 3D models from 2D drawings with the help of height information. Cypas et al. (2006) proposed a system to store utility information in three dimensions. However, these studies only focused on the generation of 3D geometry from 2D drawings. None of them considered the updating of the geometry and semantic information of underground utilities, making the proposed systems inflexible against changes. In addition, those systems did not support design and constructability checking for users from the architecture, engineering and construction (AEC) industry.
In this paper, a framework that integrates Building Information Modeling
(BIM) with Geographic Information System (GIS) is proposed to help government
authorities better manage utility information in 3D format. BIM provides flexible model creation, model updating, and semantic information updating, while GIS provides the platform for visualization, model regeneration from CAD files, and clash
detection. A data schema for storing underground utility information (UUI) was
proposed and implemented in the BIM-GIS integration in order to achieve seamless
data translation. The proposed framework supports model regeneration from 2D files,
model updating using BIM, and automatic design and constructability checking using
clash detection in GIS.

METHODOLOGY

Overview of Framework. The proposed framework mainly consists of three components: (1) a GIS environment to store, visualize, and analyze utility data, (2) a
BIM environment for model updating, and (3) an integration engine that links the
two environments, as illustrated in Figure 1. The framework leverages some existing
functions of current BIM and GIS platforms, and integrates with the additional
functions that we developed in this research. A data schema for UUI was designed
and implemented in the framework to define a common model for data storage and
conversion. The framework aims to provide three main functions for underground
utility management: (1) a model regeneration module that could automatically
generate 3D models from 2D as-built drawings combined with surveying data on
heights of utilities, (2) a model updating system that allows users to update the as-
built 3D model directly in BIM, and (3) a clash detection tool that allows designers
to upload the designed lines and identify potential collisions with existing lines, and
allows workers to check whether the construction activities would induce a hit to the
existing pipelines. The data conversion component in BIM-GIS integration engine
guarantees that the editing functions in BIM could be used to update the geometry
and semantic information in UUI.


Figure 1. Overview of the proposed BIM-GIS underground utility management framework

BIM and GIS data formats. BIM aims to store and manage the data about a building or facility throughout its lifecycle. Most BIM software, such as Autodesk Revit, has powerful editing features that allow users to easily modify both geometry and semantic information in models. Information from BIM also covers a wider
range than that in a GIS model. In the proposed BIM-GIS underground utility
management framework, Industry Foundation Classes (IFC) is used as the data
format to carry data from BIM and used in BIM-GIS integration.
GIS is a system to store, manipulate and analyze data in a geographic context.
Nowadays, 3D GIS models are becoming widely used in many applications. The
commonly used GIS platforms, such as ArcGIS, provide functions such as 3D model
visualization and multi-patch intersection analysis that could be further developed to
a clash detection module. CityGML is used in our framework to carry data from GIS
and used in the integration process.
UUI Data Schema. The schema design of the UUI system considers the
information available from both the BIM and GIS environments. The information
represented in the underground utility management system is divided into two
categories: geometry information and semantic information. The geometry
information contains both 2D lines from CAD raw data and 3D multi-patches
generated by the system. The 3D geometry is mapped to 2D drawings in order to
ensure an accurate record. The semantic information is derived from UUI
documentations, possibly generated from BIM or on-site inspections and surveys.
The detailed UUI schema used in the proposed framework is shown in Figure 2.
The UUI schema also provides a guideline for generating models and inspection records. For example, the Inspection Records class contains Heights, which records the height of the utility at certain locations, and Line Geometry Type, which specifies whether the line being inspected is a straight line or a curved one. When surveyors carry out on-site surveying using tools such as ground penetrating radar (GPR), they have to record both the inspection location and the height information together. Once the data are compatible with the UUI schema, they can be imported into the framework and used for further analysis.
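As a purely illustrative sketch (the class and field names below are simplified Python analogues, not the implemented schema), part of the UUI schema could be mirrored as follows to show how inspection records attach to a utility record before import:

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class InspectionRecord:
    survey_date: str
    surveyor: str
    line_geometry_type: str              # "Line" or "Curve"
    heights: List[Tuple[float, float]]   # (location along line, depth) pairs
    width: float

@dataclass
class UndergroundUtility:
    source_id: str
    owner: str
    utility_type: str                    # e.g. "water", "gas", "drainage"
    inspection_date: str
    inspections: List[InspectionRecord] = field(default_factory=list)

# A record compatible with the schema can then be imported into the GIS
# environment and used for model regeneration and clash detection.
pipe = UndergroundUtility("W-001", "Water Supply", "water", "2014-11-20")
pipe.inspections.append(
    InspectionRecord("2014-11-20", "Surveyor A", "Line",
                     heights=[(0.0, -1.2), (25.0, -1.4)], width=0.3))
print(pipe.source_id, len(pipe.inspections))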
(Figure content: UML class diagram of the UUI schema, with classes Underground Utility Information, UUI Documentation, Inspection Records, UUI Geometry, UUI CAD Raw Data, UUI 3D Geometry, and UUI 3D Multipatch, together with their semantic and geometry attributes.)

Figure 2. Information contained in the schema of UUI

Model Regeneration. Due to the congested nature of underground utility lines, it is infeasible to store the geometry information of utilities in 2D CAD drawings. For example, two lines with the same longitude and latitude may be buried at different depths. Therefore, the first step is to reconstruct 3D models of utility lines from as-built 2D CAD drawings. As a preliminary study, only cylindrical utility lines are currently considered in the model reconstruction process. The model regeneration process uses both 2D CAD raw data, to obtain geometry and location information, and inspection records, which store the heights of utility lines at certain locations.
Du et al. (2006) proposed data structures and methods for regenerating 3D utility models from 2D drawings. In our system, however, we developed additional functions such as a proximity check and the reconstruction of curved lines. The model regeneration module was developed based on ArcObjects in ArcGIS and Visual Basic. Three main functions are critical for the model regeneration process, as summarized in Figure 3. First, the proximity check guarantees that an inspection record is consistent with the CAD drawings; it determines whether the location of the inspected point lies on the utility line within a user-defined offset (usually within 0.5 meters). Second, the curved line reconstruction function ensures that curves in the CAD drawing can be approximated with a series of connected line segments of a user-defined maximum length (usually within 5 m). The line segments go through the proximity check and are then reconstructed with height information into 3D models. Finally, the 3D model, with the input line and the radius of the pipeline, is created using the 3D Buffering toolbox in ArcObjects. After the model regeneration process, the 2D lines are transformed into 3D models with semantic information, which can be used for further analysis.
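The following minimal Python sketch (a planar approximation with hypothetical data, not the ArcObjects implementation) illustrates the kind of proximity check described above, deciding whether a surveyed height record can be attached to a CAD polyline:

import math

def point_to_segment_distance(p, a, b):
    """Planar distance from point p to segment a-b; all arguments are (x, y) tuples."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def proximity_check(point, polyline, offset=0.5):
    """True if the surveyed point lies within `offset` metres of the polyline."""
    return any(point_to_segment_distance(point, a, b) <= offset
               for a, b in zip(polyline, polyline[1:]))

# Hypothetical CAD polyline and one surveyed point with its recorded depth.
cad_line = [(0.0, 0.0), (10.0, 0.0), (20.0, 5.0)]
survey_point, depth = (10.2, 0.3), -1.5
print(proximity_check(survey_point, cad_line))   # True -> attach the depth to this line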
Clash Detection. When designing new utility lines, designers are asked to check whether the new lines conflict with the existing ones. However, this design checking on 2D CAD drawings is not reliable. Many cities have set up call centers to provide "call before you dig" services in order to avoid hits on existing utility networks during the construction excavation process. However, finding such information on 2D CAD drawings can be time-consuming and tedious. With the help of 3D utility models, the design and constructability checking process can be performed in an automatic and much faster way.

Figure 3. The process of regenerating 3D models from 2D drawings and surveying records

ArcGIS provides a function to detect intersections between 3D models. This function is customized with information from the UUI to build a design and constructability checking system that can automatically determine whether new utility lines or upcoming construction activities would hit the existing lines. The developed system considers not only physical clashes with the utility lines, but also clashes with the safety buffer zone around the lines, which are called "soft clashes". The buffer zone has the same shape as the original lines but with a larger radius, as shown in Figure 4. If clashes are detected, the users are notified of the colliding parts and the owner information of the corresponding utility lines so that further actions can be taken.
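Purely as an illustration of the hard/soft clash distinction (the geometry routine below is a coarse sampling approximation with hypothetical pipe data, not the ArcGIS multi-patch intersection function used in the framework):

import math

def segment_distance_3d(p1, p2, q1, q2, steps=20):
    """Approximate minimum distance between 3D segments p1-p2 and q1-q2 by sampling."""
    def lerp(a, b, t):
        return tuple(a[i] + t * (b[i] - a[i]) for i in range(3))
    samples = [i / steps for i in range(steps + 1)]
    return min(math.dist(lerp(p1, p2, s), lerp(q1, q2, t))
               for s in samples for t in samples)

def classify_clash(existing, new, buffer_width=0.5):
    """existing and new are ((x, y, z) start, (x, y, z) end, radius) tuples."""
    (p1, p2, r1), (q1, q2, r2) = existing, new
    d = segment_distance_3d(p1, p2, q1, q2)
    if d <= r1 + r2:
        return "hard clash"          # pipes physically intersect
    if d <= r1 + r2 + buffer_width:
        return "soft clash"          # designed line enters the safety buffer zone
    return "no clash"

existing_pipe = ((0, 0, -2.0), (30, 0, -2.0), 0.3)   # hypothetical existing water main
new_pipe = ((15, -5, -1.6), (15, 5, -1.6), 0.2)      # hypothetical designed line
print(classify_clash(existing_pipe, new_pipe))       # -> "hard clash" for these data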

A hard clash where the new line (vertical) hits an existing one. A soft clash where the new line (top) hits the buffer zone (in white), still unsafe.
Figure 4. An illustration of hard clash and soft clash

Model Updating. BIM provides a rich information pool for the lifecycle of buildings and facilities, such as construction documentation, as-built 3D models, and inspection records. Moreover, while the model editing features in GIS software are weak, BIM software is often dedicated to sophisticated design processes and has powerful editing functions. This gap in information richness and editing functions between BIM software and GIS software motivated a model updating module for utilities using BIM-GIS integration. This module aims to enrich and modify the 3D utility models in the GIS environment using BIM software and to update the models in the background. When the model updating process is initiated, the 3D utility models are exported as CityGML files along with a spreadsheet that maps some semantic information to objects. The CityGML file goes through the BIM-GIS integration engine and is converted to IFC files that can be directly imported into most BIM software. Users can then make modifications to the IFC models. The modified BIM file can be sent back through the BIM-GIS integration engine and converted to CityGML, which can be viewed in the ArcGIS environment.
Two processes are critical in the model updating module. First, data conversion between GIS models (CityGML) and BIM files (IFC) is performed seamlessly using the BIM-GIS integration engine. The engine was developed based on the techniques from our previous BIM-GIS integration frameworks (Cheng et al. 2013; Cheng et al. 2015), and it achieves a bi-directional mapping between CityGML and IFC files with no information loss. Second, the semantic information from BIM is mapped to the UUI schema, as shown in Figure 5, so that it can be stored in the GIS environment for 3D underground utility management.
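As a hypothetical sketch of what such an attribute-level mapping could look like (the individual pairings below are illustrative assumptions, not the engine's actual mapping tables):

# Hypothetical (IFC entity, attribute) -> (UUI class, field) mapping table.
IFC_TO_UUI = {
    ("IfcOwnerHistory", "CreationDate"): ("UUI Geometry", "UUI Creation Date"),
    ("IfcOwnerHistory", "LastModifyingUser"): ("UUI Geometry", "UUI Created by"),
    ("IfcElement", "Tag"): ("Underground Utility Information", "SourceID"),
    ("IfcFlowSegment", "OverallWidth"): ("Inspection Records", "Width"),
    ("IfcProductRepresentation", "Description"): ("UUI Documentation", "Documentation Type"),
}

def map_ifc_attributes(ifc_entity, attributes):
    """Translate a dict of IFC attribute values into (UUI class, field, value) triples."""
    triples = []
    for name, value in attributes.items():
        target = IFC_TO_UUI.get((ifc_entity, name))
        if target is not None:
            triples.append((target[0], target[1], value))
    return triples

print(map_ifc_attributes("IfcFlowSegment", {"OverallWidth": 0.3}))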
(Figure content: IFC entities IfcOwnerHistory, IfcElement, IfcDistributionElement, IfcFlowSegment, IfcProductRepresentation, and IfcShapeModel, and their attributes, mapped to the UUI schema classes Underground Utility Information, UUI Documentation, UUI Geometry, and UUI 3D Geometry.)
Figure 5. Mapping between IFC and UUI

USE CASE SCENARIO

The framework was implemented using ArcGIS as the GIS environment and
Autodesk Revit as the BIM environment. The functions in the framework were
developed using Visual Basic in ArcGIS with ArcObjects and C# in Revit with Revit
API. The BIM-GIS integration engine was developed using Java and packaged as a
stand-alone converter.
3D Model Regeneration. Figure 6 demonstrates the model regeneration process using a CAD drawing and one set of on-site surveying records. Note that the survey records must comply with the UUI schema so that the correct data can be imported into the GIS environment. The CAD drawings and the inspection records were verified by the regeneration module using the proximity check, and the 3D models were then generated, along with some semantic information from the inspection records. In the generation process, curved lines were also generated using the curve reconstruction function.
Design Checking and Constructability Checking. The design checking module was also tested in ArcGIS using newly designed utility lines against the models generated from the regeneration process. In the test, the 3D models of the new lines were regenerated and tested against the existing lines with their buffer zones. The results show that there are 10 collisions between the planned new lines and the existing lines (see Figure 7). The locations of the collisions can also be retrieved. The design checking in 3D is automatic and more accurate than manual checking using 2D drawings.

Figure 6. Regeneration of 3D utility model

CONCLUSIONS

In this paper, an integrated BIM-GIS framework for improving the information management and analyses of underground utilities by government authorities has been proposed and implemented. The framework is in 3D and therefore provides more capabilities than current 2D-based utility management systems. A data schema was proposed for underground utility information management. Functions such as model regeneration, design checking, and constructability analysis have been developed. Model updating using the developed BIM-GIS integration engine is also supported. The proposed framework was successfully implemented and tested using ArcGIS and Revit BIM software in the preliminary study. The framework is still under development and does not yet support the modeling of utility connections or rectangular pipelines. In the next phase of this research, the authors aim to cooperate with local utility management authorities to implement and verify the framework. The computer-generated regeneration models and clash analysis reports will be compared to manual records and onsite measurements. The

©  ASCE
Computing  in  Civil  Engineering  2015 674

UUI data schema will be tested to see its coverage and effectiveness in
communicating utility information.

Figure 7. Design checking for new lines



The Benefits of BIM Integration with Facilities Management: A Preliminary Case Study

S. Terreno; C. J. Anumba*; E. Gannon#; and C. Dubler#


*Department of Architectural Engineering; #Office of Physical Plant. Pennsylvania State University, University Park, PA 16802.

Abstract
There is an increasing recognition that, to maximize the benefits of Building Information
Modeling (BIM), it should be deployed in such a way that it remains useful beyond the design
and construction phase of projects. This means that it should be utilized in the facilities
management phase of a constructed facility. This paper focuses on the identification of benefits
gained from the effective integration of BIM in Facilities Management. Drawing on literature
from previous research, it argues for the adoption of strategic approaches towards holistic
consideration of benefits. The paper also presents the findings of a preliminary case study based
on exploratory interviews with key personnel on a major university building project, and review
of project documents. Qualitative descriptions of benefits were obtained from these interviews
and are summarized in the paper. The paper concludes with a discussion of the benefits of
BIM/FM integration, with a view to a comprehensive determination of project gains within the
lifecycle of a project. The paper offers some conclusions that are aimed at the holistic
consideration of benefits through the project phases preceding handover and including those in
the lifecycle stage for a more holistic assessment of the effectiveness of BIM implementation
in FM.

Keywords: BIM; Facilities management; Benefits; Benchmark

INTRODUCTION

The integration of BIM and Facilities Management (FM) is relatively new and offers huge
potential for cost savings and improved processes. The application of BIM in many facets of
building Operations and Maintenance (O&M) can result in more sustainable, efficient and well-
managed buildings (Becerik-Gerber et al., 2012). Areas identified for the application of BIM in
FM include improved energy and sustainability management; enhanced, real-time emergency
and space management; and visualization. The cost savings and efficiency improvements
associated with BIM/FM integration are based on increased accuracy of the FM data and model
and automated generation of FM data, resulting in procedural expediency for daily operational
requirements, change-management proficiencies, and long-range planning (Sabol, 2013;
Becerik-Gerber et al., 2012; Arayici et al., 2012). BIM/FM integration is nevertheless plagued by
numerous challenges. Talebi (2014) succinctly grouped these into procedural, social, technical
and associated cost facets. Table 1 below outlines the main challenges as summarized by Talebi
(2014).


Table 1. BIM Adaptation and Implementation Challenges (Adapted from Talebi, 2014).

CONTEXT | CHANGE | PROBLEMS
Procedural | New business processes | Immaturity of users; lack of clear guidelines
Procedural | New roles and responsibilities | Ambiguity for integration into current processes
Procedural | Contract structure | Dearth of newly structured contractual agreements for BIM-based projects
Social | Socio-technical balance | Unaligned coordination between the social nature of BIM projects and technological functionality
Technical | BIM-FM software integration | Interoperability constraints; lack of technical alignment between BIM software tools and FM processes; lack of integration between project phases
Associated costs | New software and hardware | Increased associated costs such as training and outsourcing

The most crucial challenge is the issue of interoperability (Sabol, 2013), which originates either
in the initial handover from construction to operations or in the migration from traditional FM
data management software to BIM. It is also often taken for granted that equipment and assets
are properly recorded during construction. This may prove critical if the FM team is not brought
in early enough in the project conception stages to define their requirements and intended end
uses, and thus to collaborate on a strategy for achieving them. For existing buildings, mapping
out facility components may well be a herculean task, especially in large establishments with a
collection of facilities, and may be further complicated by outdated or unavailable as-built
records and facility data. BIM proponents are therefore hard-pressed to prove the validity of
their assertions on the advantages of BIM adoption in FM (Stowe et al., 2014). The financial
investment in BIM/FM integration may be substantial. Inherent in this is the substantive need for
procedural overhaul and its resulting impact on organizational productivity and efficiency during
the transition process. Thus, there is a need for a clear justification of returns.

APPROACHES TO DETERMINING THE BENEFITS OF BIM

Becerik-Gerber & Rice (2010) highlighted the difficulties in determining the tangible benefits of
BIM in practice. Although this is crucial to financial decision-making and a valuable tool for
marketing, few investigations have actually captured the tangible benefits of BIM; those that
have attempted to do so have focused only on a narrow scope.

Case study approaches to investigating the benefits of BIM capture the most detail, but are
limited by sample size and a lack of quantification and objectivity (Taylor et al., 2010).
Country-specific studies and industry-wide surveys have also been undertaken, with many
experienced organizations using BIM developing their own methods for identifying returns.
While this is encouraging, the overall approach to determining the returns on BIM is plagued by
a lack of quantification and objectivity. Stowe et al. (2014) argue that more effort should be
given to developing a structured approach to measurement for the eventual standardization of
key performance metrics for future benchmarking. Becerik-Gerber & Rice (2010) shared this
opinion, further reasoning that BIM benefits should be investigated both with a narrower focus
on individual disciplines and with a broader, global focus, applied consistently so that
comparisons can be made and maturity tracked over time.

Sulankivi (2004) investigated the benefits of BIM in multi-partner projects applying the concept
of Concurrent Engineering (CE). Her study comprised a holistic dimension which ran parallel
with those of Becerik-Gerber & Rice (2010) and Lakka et al. (2001), where benefits were
classified in one of three ways. They were quantifiable either in monetary terms or ‘other ways’
or they were more qualitative. Becerik-Gerber & Rice (2010) described them as tangible
(monetary), semi-tangible and intangible benefits, ‘intangible’ implying a more qualitative
identification of rewards. Sulankivi (2004) used this approach in three case studies where the
categorizations were linked in a matrix resulting in a step-by-step approach to detailing. The
qualitative benefits were each elaborated upon to produce quantitative descriptions, which were
in turn given monetary value following detailed analysis. The most frequently recurring benefits
were also extracted and recorded, which is useful for future benchmarking assessment as suggested
by Stowe et al. (2014). The impact and importance of each category were investigated, in addition
to the customization of each benefit according to user group (designers, engineers, etc.).

Numerous studies and surveys have focused on various aspects of BIM benefits following
investment. A summary of indices may thus be obtained by an extraction of a hybrid of
quantitative considerations. A decrease in RFIs and Change Orders leading to budget and
schedule conformance (Giel & Issa, 2013) constitute the most common indicators, paired with a
variety of other semi-tangible considerations such as productivity, contingency provisions and
response latency (Gilligan & Kunz, 2007). Young et al. (2009) included improved employee
productivity and communication, advantages inherent in prefabrication (Khanzode et al., 2008),
positive marketing, improved information distribution resulting in fewer delays and disputes with
less design and field errors (Sulankivi, 2004), and higher levels of managerial performance
(Vaughan et al., 2013). Suermann & Issa (2009) presented six KPIs comprising safety, cost, cost
per unit and overall cost, units per man-hour, on-time completion, and quality control/rework to
participants for ranking. Dehlin & Olofsson (2008) weighed their groupings of capital,
operational and indirect costs against quantified project and procedural benefits for an
establishment of financial returns.

The above considerations are primarily geared towards BIM investment in the project phases
preceding operation and maintenance, with no focus on the lifecycle phase. Becerik-Gerber &
Kensek (2009) highlighted the dire need for the development of core data sets geared towards
facilities management with a holistic embodiment of common elements. Teicholz (2013) adapted
ROI variables in a progressive manner from qualitative descriptions to financially quantitative terms
in a sample calculation of ROI for BIM-integrated FM. However, there is a need for a more detailed
investigation of the benefits to be gained at the FM stage of a building’s lifecycle. As a first step
towards this, a preliminary case study of a sports facility at a major university has been
undertaken to identify some of the benefits in BIM/FM integration.
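Purely to illustrate the kind of qualitative-to-monetary progression Teicholz (2013) describes, the sketch below walks one hypothetical benefit through that chain; every number in it is invented for the example and carries no empirical weight.

```python
# All figures below are hypothetical, for illustration only.
work_orders_per_year = 4000
share_needing_lookup = 0.30          # assumed fraction where staff previously searched paper records
minutes_saved_per_lookup = 20        # assumed time saved per work order with the BIM/FM model
labor_rate_per_hour = 45.0           # assumed fully burdened hourly rate, USD

annual_saving = (work_orders_per_year * share_needing_lookup
                 * (minutes_saved_per_lookup / 60) * labor_rate_per_hour)
integration_cost = 60000.0           # assumed one-time cost of the BIM/FM integration

simple_roi = annual_saving / integration_cost
print(f"Annual saving: ${annual_saving:,.0f}; first-year ROI: {simple_roi:.0%}")
# -> Annual saving: $18,000; first-year ROI: 30%
```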


PRELIMINARY CASE STUDY

The case-study approach, while limited in size and broad objectivity, was adopted in this
research with a narrow focus on disciplines towards the extraction of qualitative descriptions of
benefits. This follows the step-by-step approach as utilized by Sulankivi (2004) and Teicholz
(2013) to capture intrinsic considerations for further categorization into semi-tangible benefits,
and eventual tangible determination. Whilst larger-scaled studies would capture a wider range of
benefits, the broad sampling of considerations would be subject to generalizations, with unique
project details not comprehensively captured.

A preliminary case study was undertaken to establish the critical success factors for BIM
integration in FM, as part of a larger study on BIM-integrated FM. The case study project was
provided by Penn State University’s Office of Physical Plant (OPP). The project was OPP’s first
BIM-FM integrated project which was based on Integrated Project Delivery (IPD). The facility
comprises a 6,014-seat, 200,000-square foot multi-purpose arena which replaced the 1,350-seat
Ice Pavilion on the Penn State campus, with some of the best facilities provided for Division I
hockey. It was novel in its approach, being the first IPD project to be carried out in the
institution’s history. All processes were collaborative from conception through to handover and
involved OPP staff from the early stages, working with the entire project team.

Four of the OPP participants were selected for exploratory interviews, based on the phases and
extents of their involvement within the project timeline (Table 1). Three sets of interviews were
conducted with key personnel of the Office of Physical Plant (OPP) who were most involved in
the project. A number of project documents (such as the BIM Plan, BIM Implementation
Exchanges, BIM Contract Addendum and Technical Reports) were also reviewed. The purpose
of the interview sessions was to extract qualitative descriptions of benefits reaped from the
integration of BIM in FM in the context of the IPD project as observed by each party. Figure 1
below summarizes the main steps of the research.

[Figure 1 depicts the research process as a sequence of steps: literature review; selection of interview participants; review of project documents; interview sessions; and analysis of results.]

Figure 1. Research Process.

FINDINGS AND DISCUSSION

The descriptions outlined in Table 2 represent a cross-section of BIM benefits with a particular
focus on FM integration. These characterize a first step towards the extraction of critical success
factors, which could be further expounded upon to deduce quantitative descriptions that can be
assessed for monetary value. The categorization of benefits according to project phase and type
of involvement presents an opportunity to comprehensively capture benefits.


Table 2. Extraction of Qualitative Descriptions of Benefits

BIM FOCAL AREA | PROJECT MANAGER (Concept to Handover Phase) | BIM FACILITATORS (Concept to Lifecycle Phase) | FACILITY MANAGER (Design Review to Lifecycle Phase)

PEOPLE
Project Manager: More collaborative approach to project and problem-solving
BIM Facilitators: More collaborative approach to project and problem-solving
Facility Manager: More collaborative approach to project and problem-solving; improved employee productivity; greater efficiency on the job

PROCESS
Project Manager: More detailed strategic planning with holistic considerations; more detailed and extensive design reviews towards more seamless lifecycle integration
BIM Facilitators: Clearer FM requirement definition for design & construction; incorporation of requirements into contract documents; more detailed and extensive design reviews towards more seamless lifecycle integration; richer data asset at handover
Facility Manager: Shorter response time in operations; reduced requirement for data re-entry; easier data retrieval; more proactive maintenance; better emergency preparedness

TECHNOLOGY
BIM Facilitators: Improved interoperability due to better planning; automatically updated model; seamlessly integrated software
Facility Manager: More accurate information from a data-rich asset; automatically updated model; improved interoperability requiring no change from current data system

The benefits listed above show increasing advantage with the progression of project phases,
indicating that benefits become more pronounced within the lifecycle of a facility. Being a sports
facility that prioritizes optimized operations, safety and emergency preparedness, it is natural
that a Facility Performance focus (Chotipanich & Lertariyanun, 2011) is the main strategy for its
management. This is evident in the Facility Manager's responses, which detail response time,
data accessibility and retrieval, and proactive maintenance, all geared towards optimal
performance and quick responses. This strategy differs widely from a 'Value for Money' facility
operating with economies of scale and a lean operation focus (Chotipanich & Lertariyanun,
2011).

A goal-driven approach can be adopted, whilst focusing on the various disciplines involved in
the project in a bid to extract a broader range of considerations. The facility manager had more
benefits listed, thus attesting to the positive impact that the integration of BIM can make on FM.
In exploring further, one could speculate on whether open-ended statements of advantage such as
“more proactive maintenance” could imply longer equipment life and better maintainability.

Having “more accurate data” from the early project phases yielded long-term benefits in the
operational phase, expanding into “shorter response time, improved efficiency and productivity”
to name a few. This attests to the fact that applying BIM with an FM focus from the earliest
project stages would yield increasing benefits in the long term. The application of a
structured approach can thus yield an array of critical success factors which may or may not have
been considered in top-down planning but which open a multitude of possibilities for exploration
and determination of future key performance indicators. In addition, the Project Manager’s
description of “detailed and extensive design reviews” is evidence of benefits gained from
amendments that would otherwise have led to expensive change orders following later FM inspections.
As an example, an observation was made by FM personnel following a 3D visualization session
in the CAVE (Cave Automatic Virtual Environment) laboratory which prompted preconstruction
changes that might have been costly following construction.

SUMMARY AND CONCLUSIONS

Literature from previous research was analyzed, uncovering the challenges of BIM/FM
integration, and exposing the lack of a detailed body of knowledge on which to base a
framework for the determination of benefits in BIM/FM integrated projects. It was noted that the
bulk of existing research into this area focused more on the conception through to construction
phase, with less attention paid to the broader lifecycle phase. The process for the determination
of benefits in the integration of BIM and FM thus lacks quantification and objectivity. The case
study approach comprised exploratory interviews with key FM personnel who participated in the
case study project, followed by a review of project documents. The interviewees highlighted a list
of benefits reaped from the integration of BIM with FM from the early stages of the project lifecycle.
Qualitative descriptions of benefits were obtained from three viewpoints comprising the
Facilities Project Manager, BIM Group and FM personnel. Probing benefits by discipline
allowed for a more holistic extraction of opinions to capture every facet of the integration from
project inception through to present day FM. The facility manager noted that there was a definite
improvement to FM processes with the new implementation.
Based on the preliminary case study presented, this paper has identified the benefits of the
integration of BIM in FM. These include:
1. Noted improvements to collaboration
2. More detailed strategic planning with holistic considerations
3. More detailed and extensive design reviews towards more seamless lifecycle integration


4. Clearer FM requirement definition for design & construction
5. Incorporation of requirements into contract documents
6. More accurate information from a data-rich asset
7. Automatically updated model
8. Improved interoperability requiring no change from current data system
9. Increased employee productivity and efficiency
10. Easier data retrieval
11. Shorter response time in operations
12. More proactive maintenance, and an
13. Increased level of emergency preparedness.

The above-listed benefits are evidence of the effectiveness of the application of BIM from
project conception to facilities management, with the potential for tangible returns on
investment. The paper argued for detailed analysis of the advantages gained from BIM/FM
integration, which are crucial for owner buy-in to the substantial investment of time and effort
needed. Strategic approaches towards holistic consideration of benefits were explored with due
consideration of their varied purposes, by research focus (top-down, goal-driven or process-driven)
and by approach (case studies and surveys); the case study approach was adopted for this
research with the aim of obtaining qualitative descriptions of factors necessary for success. The
extraction of benefits from the integration of BIM in FM from project conception to the lifecycle
phase has yielded a rich description of factors deemed critical for the success of projects. It has
been found that much is to be gained from the adoption of a lifecycle focus from project
conception, with the benefits registering a continual increase in the long term. The facilities
management phase yielded a longer list of benefits with potential for increase over time. It has
also been found to be necessary to include considerations of benefits to be gained from different
project phases in order to embrace a more holistic consideration of benefits.
Future research should be geared towards the identification of critical success factors derived
from a study of benefits, towards the development of a structured model for the investigation of
Return on Investment (ROI) in the lifecycle phases of projects. It is expected that this will yield a
more standardized and holistic determination of key performance metrics for determining the
gains from the integration of BIM in FM.

REFERENCES

Arayici, Y., Onyenobi, T. C., & Egbu, C. O. (2012). Building information modelling (BIM) for
facilities management (FM): The MediaCity case study approach. International
Journal of 3D Information Modelling, 1(1), 55-73.

Becerik-Gerber, B. and Kensek, K. (2009). ”Building Information Modeling in Architecture,
        Engineering, and Construction: Emerging Research Directions and Trends.” J. Prof.
        Issues Eng. Educ. Pract., 136(3), 139–147.


Becerik-Gerber, B., Ku, K., & Jazizadeh, F. (2012). BIM-enabled virtual and collaborative
construction engineering and management. Journal of Professional Issues in Engineering
Education & Practice, 138(3), 234-245.

Becerik-Gerber B, Rice S (2010) The perceived value of building information modeling in the
U.S. building industry, Journal of Information Technology in Construction (ITCon), Vol.
15, pg. 185-201, http://www.itcon.org/2010/15

Chotipanich, S. & Lertariyanun, V (2011) "A study of facility management strategy: the case of
commercial banks in Thailand", Journal of Facilities Management, Vol. 9 Iss: 4, pp.282 -
299

Dehlin S, Olofsson T (2008) An evaluation model for ICT investments in construction projects,
ITCon Vol. 13, Special Issue Case studies of BIM use , pg. 343-361,
http://www.itcon.org/2008/23

Giel, B. and Issa, R. (2013). ”Return on Investment Analysis of Using Building Information
Modeling in Construction.” J. Comput. Civ. Eng., 27(5), 511–521.

Gilligan B. and Kunz J. (2007). VDC use in 2007: Significant value, dramatic growth, and
apparent business opportunity, CIFE Homepage, Stanford University Center for
Integrated Facility Engineering.

Khanzode A., Fischer M., and Reed D. (2008). Benefits and lessons learned of implementing
building virtual design and construction (VDC) technologies for coordination of
mechanical, electrical and plumbing (MEP) systems of a large healthcare projects.
Journal of Information Technology in Construction (ITCon), Vol. 13, 324-342

Lakka, Antti, Sulankivi, Kristiina, and Luedke, Mary (2001). Measuring the benefits of CE-
environment solution in the multi-partner projects. In Proceedings, 2nd Worldwide ECCE
Symposium. June 6-8 2001, Espoo Finland.

Sabol, L. (2013). BIM technology for FM. BIM for Facility Managers, 1st Edition, New Jersey:
John Wiley & Sons, 17-45.

Stowe, K., Zhang, S., Teizer, J., and Jaselskis, E. (2014). "Capturing the Return on Investment of
All-In Building Information Modeling: Structured Approach." Pract. Period. Struct. Des.
Constr. , 10.1061/(ASCE)SC.1943-5576.0000221 , 04014027.

Suermann P, Issa R (2009) Evaluating industry perceptions of building information modelling
        (BIM) impact on construction, ITCon Vol. 14, pg. 574-594, http://www.itcon.org/2009/37

Sulankivi K. (2004). Benefits of centralized digital information management in multi-partner
        projects, Journal of Information Technology in Construction (ITCon), Vol. 9, pg. 35-63,
        http://www.itcon.org/2004/3


Talebi, S. (2014). Exploring advantages and challenges of adaptation and implementation of
        BIM in project life cycle. In 2nd BIM International Conference on Challenges to
        Overcome. BIMForum Portugal.

Taylor, J. E., Dossick, C. S., and Garvin, M. (2010). “Meeting the Burden of Proof with Case-
Study Research”, Journal of Construction Engineering and Management, Vol. 137, No. 4,
303-311, ASCE.

Teicholz, P. (2013). BIM for Facility Managers. New Jersey, U.S.: John Wiley & Sons.

Vaughan, J., Leming, M., Liu, M., and Jaselskis, E. (2013). ”Cost-Benefit Analysis of
Construction Information Management System Implementation: Case Study.” J. Constr.
Eng. Manage., 139(4), 445–455.

Won, J., & Lee, G. (2010). Identifying the consideration factors for successful BIM projects. In
Proceedings of the International Conference on Computing in Civil and Building
Engineering, Nottingham (Vol. 30, pp. 143-148).

Young Jr, N., Jones, S., Bernstein, H., & Gudgel, J. (2009). The business value of BIM. New
York.


A Computational Procedure for Generating Specimens of BIM and Point Cloud Data for Building Change Detection

Ling Ma1, Rafael Sacks2, Reem Zeibak-Shini3 and Sagi Filin4


1Postdoctoral Researcher, National Building Research Institute, Technion, Haifa 32000, Israel, +972-4-8292245, [email protected]
2Assoc. Prof., Head, Dept. of Structural Engineering and Construction Management, Technion, Haifa 32000, Israel, +972-4-8293190, [email protected]
3PhD Candidate, National Building Research Institute, Technion, Haifa 32000, Israel, +972-4-8293120, [email protected]
4Assoc. Prof., Head, Dept. of Transport Engineering and Geo-Informatics, Technion, Haifa 32000, Israel, +972-4-8295855, [email protected]

ABSTRACT
The potential for automated construction quality inspection, construction
progress tracking and post-earthquake damage assessment drives research in
interpretation of remote sensing data and compilation of semantic models of
buildings in different states. However, research efforts are often hampered by a lack
of full-scale datasets. This is particularly the case for earthquake damage assessment
research, where acquisition of scans is restricted by scarcity of access to
post-earthquake sites. To solve this problem, we have developed a procedure for
compiling digital specimens in both pre- and post-event states and for generating
synthetic data equivalent to that which would result from laser scanning in the field. The
procedure is validated by comparing the physical and synthetic scans of a damaged
beam. Interpretation of the beam damage from the synthetic data demonstrates the
feasibility of using this procedure to replace physical specimens with digital models
for experimentation and for other civil engineering applications.
Keywords: Computational procedure; Laser scanning; BIM; Damage assessment;
Change detection.
INTRODUCTION
If ‘as-designed', 'as-built' and 'as-is' BIM (Building Information Modeling) models
can be compiled to represent the design, construction and maintenance phases of a
building project, then comparison of the models can serve different use-cases, such
as:
- construction progress tracking and quality checking can be abstracted as
change detection between 'as-designed' and 'as-built' states of the building,
and


- disaster damage assessment and structural health monitoring can be abstracted as
change detection between 'as-built' and 'as-is' states of the building.
Remote sensing technologies, including terrestrial laser scanning (TLS),
photo/videogrammetry, etc., are potentially efficient and effective surveying tools
that can facilitate detection of changes among the different states of a building.
Bosché et al. (2014) explored the opportunity for frequent, detailed and semantically
rich assessment of as-built status of construction projects by joining point cloud data
(PCD) and 3D/4D BIM models that describe mechanical, electrical and plumbing
(MEP) works. Akinci et al. (2006) proposed a formalism for using advanced sensor
systems and integrated project models for active construction quality control.
Kashani et al. (2014) used PCD acquired by TLS before and after a tornado to
evaluate how much damage was caused to buildings. German et al. (2013) developed
an approach based on real-time analysis of video frames to identify the cracks in
concrete columns and other structural elements. Torok et al. (2014) used images
obtained from an unmanned robotic platform to similarly identify cracks and main
structural elements. All of these share the common thread of comparison of buildings
at different stages of their life-cycle.
Research and development (R&D) of such change detection systems commonly
defines their function as consisting of three major steps:
1) Choosing building specimens, scanning them and preparing point cloud data.
2) Semantic interpretation of the resulting PCD, with or without information from
a BIM model of an earlier state.
3) Identification and interpretation of the changes between the current and
previous states of the building. This stage may be performed with PCD or with
BIM information, and sometimes both.
However, R&D of change detection systems has often been restricted by the lack
of available remote sensing data and building specimens. This is particularly so for
the case of earthquake damage assessment research, where acquisition of remote
sensing data and models is restricted by the scarcity of access to the post-earthquake
sites. Specimens of earthquake damaged buildings or components can be reproduced
by large-scale shaking tables (Kasai et al. 2010; van de Lindt et al. 2010; Panagiotou,
Restrepo, and Conte 2010), e.g., LHPOST, the large high performance outdoor shake
table at the Univ. of California, San Diego (Conte et al. 2004), and E-Defense, the
world's largest shaking table to date at the Hyogo Earthquake Engineering Research
Center (Tagawa and Kajiwara 2007). However, such large scale experiments have
prohibitive costs and produce a single specimen with each test. Thus, for the purposes
of R&D of change detection systems, a practical approach is needed for preparing
multiple highly detailed building specimens at low cost.


In addition, there are no open repositories of PCD of damaged buildings with
sufficient resolution (i.e., the resolution of TLS). Sample data of earthquake sites are
available at the lower resolution of airborne and space-based imagery for site-scale
monitoring.
To overcome these problems, we have developed a computational procedure for
preparing datasets that include BIM models of different states of a building and
synthetic TLS PCD sets that can be used for the development and rigorous testing of change
detection systems. This paper presents a validation of the computational procedure
and demonstrates the feasibility of its use as a substitute for physical experiments.
The following section describes the procedure. In section 3, the procedure is
validated by comparing the field scan of a damaged reinforced concrete beam with
the synthetic scan of the BIM model of the same damaged beam. In section 4,
feasibility is further demonstrated by comparing the semantic interpretation results
obtained from interpreting the synthetic PCD set with the results from the real PCD
set of the same damaged beam.
PROCEDURE FOR PREPARING SYNTHETIC SAMPLE DATA

BIM model as digital specimen


The computational procedure begins with compiling BIM models of a building in its
different states. BIM is a process of producing 3D building models that represent a
building and its components together with the components’ functional characteristics.
It is broadly used to provide semantically rich representation of design intent
(Eastman et al. 2011). It is applicable over the whole life-cycle of a building. A
recent extension proposed for the IFC standard open BIM data schema
(BuildingSmart 2013) lays the groundwork for modeling of the damaged state of
building components (Ma, Sacks, and Zeibak-Shini 2014). The user begins by
modeling the ‘as-built’ state of a building from engineering drawings. Next, the user
models the ‘as-damaged’ state of the building by examining photos of the building
taken after the earthquake event. The ‘as-damaged’ state is compiled in the BIM
software by performing CSG clipping and displacement operations on the ‘as-built’
BIM model of the building, thus ensuring that the component dimensions are correct.
However, the objects that result from CSG clipping are represented as separate
building components, rather than as the parts of one damaged component. As a result,
the model must be semantically enriched using custom-built software that parses the
IFC file of the model and encodes the identity of each original component to the
corresponding objects that represent its damaged state.
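The enrichment step is described only at a high level; assuming the clipped fragments and their source components are identified by GUIDs in the parsed IFC file, one simple way to record the link is sketched below (the dictionary structure and the 'OriginalComponent' property name are illustrative, not the proposed IFC extension itself).

```python
def enrich_damaged_parts(original_guid, damaged_part_guids, model_index):
    """Record that the listed objects are fragments of one damaged component.

    model_index maps GUID -> attribute dict for each object in the parsed IFC file;
    'OriginalComponent' is an illustrative property name, not an IFC attribute.
    """
    for guid in damaged_part_guids:
        model_index[guid]["OriginalComponent"] = original_guid
    return model_index

# Hypothetical parsed model: a beam clipped into three fragments
index = {"beam-01": {"type": "IfcBeam"},
         "frag-a": {"type": "IfcBuildingElementProxy"},
         "frag-b": {"type": "IfcBuildingElementProxy"},
         "frag-c": {"type": "IfcBuildingElementProxy"}}
enrich_damaged_parts("beam-01", ["frag-a", "frag-b", "frag-c"], index)
print(index["frag-b"]["OriginalComponent"])   # -> beam-01
```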
The damaged reinforced concrete beam shown in Figure 1 was used for testing and
validation of the procedure. The beam was damaged in unrelated research at the
National Building Research Institute (NBRI), Technion, Haifa (Schwarz, Hanaor,
and Yankelevsky 2008). Figure 1 also shows an 'as-damaged' BIM model that was
prepared to model the beam.

Figure 1. (a) Photo of a damaged beam prepared at NBRI; (b) 'as-damaged' BIM model of the beam.

Laser scanning emulator


TLS is a 3D surveying technology that has been in practical application since the
mid-to-late 1990s. The attractiveness of such a system lies in its ability to directly
provide dense and accurate 3D point clouds without physical access to the studied
area. Its ability to rapidly capture wide three-dimensional regions with high accuracy,
without being influenced by scene complexity, makes it appropriate for reliable
detection of spatial changes.
In order to generate PCD samples, we implemented a laser scanning emulator (Ma et
al. 2014). From a BIM model, the emulator can generate synthetic PCD in a manner
similar to the way that TLS works in the field. The emulator transmits virtual laser
beams from a fixed position/viewpoint to 'hit' the 3D model in a given direction. The
scanning direction is defined by two parameters, i.e., pan and tilt angle, in a manner
similar to that in which the laser beam deflection unit of a physical scanner moves.
The emulator traverses all the pan and tilt angles to collect the closest intersection of
the virtual laser beam to the 3D model, resulting in synthetic PCD. The emulator
software can also introduce range inaccuracy and position inaccuracy to the synthetic
data to simulate the measurement noise present in field scans.
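A greatly simplified version of such an emulator is sketched below to make the idea concrete; it is not the authors' implementation, and the triangle-mesh input, angular limits, step size and noise level are all assumptions chosen for the example.

```python
import math, random

def _cross(u, v):
    return (u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0])

def _dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def ray_triangle(origin, direction, tri):
    """Moller-Trumbore ray/triangle intersection; returns the range t or None."""
    a, b, c = tri
    e1 = tuple(b[i] - a[i] for i in range(3))
    e2 = tuple(c[i] - a[i] for i in range(3))
    p = _cross(direction, e2)
    det = _dot(e1, p)
    if abs(det) < 1e-12:
        return None
    inv = 1.0 / det
    s = tuple(origin[i] - a[i] for i in range(3))
    u = _dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = _cross(s, e1)
    v = _dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = _dot(e2, q) * inv
    return t if t > 1e-9 else None

def emulate_scan(triangles, origin=(0.0, 0.0, 0.0), step_deg=1.0, range_noise=0.002):
    """Sweep pan/tilt angles like a TLS deflection unit, keep the closest hit per
    beam, and add Gaussian range noise to mimic measurement error (units: metres)."""
    points = []
    for i in range(int(360 / step_deg)):                 # pan sweep, 0..360 deg
        for j in range(int(60 / step_deg) + 1):          # tilt sweep, -30..+30 deg
            pan = math.radians(i * step_deg)
            tilt = math.radians(-30.0 + j * step_deg)
            d = (math.cos(tilt) * math.cos(pan), math.cos(tilt) * math.sin(pan), math.sin(tilt))
            hits = [h for h in (ray_triangle(origin, d, tri) for tri in triangles) if h]
            if hits:
                r = min(hits) + random.gauss(0.0, range_noise)
                points.append(tuple(origin[k] + r * d[k] for k in range(3)))
    return points

# Hypothetical target: one 2 m wide triangular surface facing the scanner at x = 5 m
tri = ((5.0, -1.0, -1.0), (5.0, 1.0, -1.0), (5.0, 0.0, 1.0))
print(len(emulate_scan([tri], step_deg=2.0)))   # number of synthetic points returned
```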
The damaged beam (Figure 1a) was scanned using a Leica ScanStation C10 (LEICA
2014) at NBRI. A synthetic scan of the 'as-damaged' BIM model was performed
from the same viewpoint as the field scan using the custom-built emulator (Figure
1b). The PCD sets from the field and the synthetic scans are compared in Figure 2.

Figure 2. (a) Field scan and (b) synthetic scan of the damaged beam.


VALIDATION OF THE SYNTHETIC SAMPLE


The first validation of the 'as-damaged' BIM model and the synthetic PCD resulting
from the emulator is performed by comparing the point clouds directly. This tests
both the modeling accuracy and the scanning emulation in a single step, thus testing
the computational procedure as a whole. A minor drawback is that it is not possible
to automatically distinguish the source of any error.
The PCD is usually represented using Euclidean coordinates of 3D points (i.e., X, Y,
Z), in which the points are irregularly spaced. In a spherical coordinate system, where
the points are represented by pan angle, tilt angle and range distance, the data is more
conveniently organized because the angular spacing of the scanning process is fixed.
The data can be interpreted as a 2.5D range image whose intensity values
represent the range distances. Thus, due to the regularity of the representation, when
the range images of two PCDs are aligned, the differences between the two datasets
can be derived directly from the differences of the intensity values, or range distances
(Zeibak and Filin 2007).
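Assuming both scans have already been resampled onto the same pan/tilt grid as 2-D arrays of range values (with NaN marking background), the comparison could be sketched as follows; the 0.5 mm threshold mirrors the criterion used in the histogram analysis below, while the array values are invented.

```python
import numpy as np

def range_difference(range_a, range_b):
    """Absolute range differences between two aligned range images.

    range_a, range_b: 2-D arrays indexed by (tilt, pan); np.nan marks background
    (no return) in either scan. Background cells are excluded from the statistics.
    """
    diff = np.abs(range_a - range_b)
    valid = ~np.isnan(diff)
    matched = np.count_nonzero(diff[valid] < 0.0005)   # 0.5 mm threshold, in metres
    return diff, matched / valid.sum()

# Hypothetical 2x3 range images (metres); one background cell, one mismatched cell
a = np.array([[5.000, 5.001, np.nan], [5.010, 5.0002, 5.000]])
b = np.array([[5.0002, 5.0008, np.nan], [5.002, 5.0001, 5.0001]])
diff, frac = range_difference(a, b)
print(frac)   # -> 0.8 (4 of the 5 valid cells differ by less than 0.5 mm)
```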

Figure 3. Range difference image of the synthetic and the physical scan

Figure 4. Histogram of range difference between the synthetic and real scan
(background points in either scan are excluded)

The difference between the synthetic scan and the physical scan data is depicted in
the range difference image shown in Figure 3. The blue pixels reflect background in
both PCD sets. The range differences between the two datasets are represented in
grey scale. Perfectly matched areas are represented in black pixels, while areas with
extreme differences are shown in white pixels. The extreme differences (white pixels)
on the right side of Figure 3 result from modeling inaccuracy: in the 'as-damaged'
model, the beam was broken into only three parts, whereas in reality the right part is
progressively bent and could be divided into more pieces. The more detailed the
model is, the more similar the model and the physical specimen are. Nevertheless, the
resulting PCD are well matched in the other parts, which demonstrates the validity of
the computational procedure. From the histogram shown in Figure 4, it is estimated
that 95% of the points are well matched (differences are less than 0.5 mm, with
background points not taken into account).
Additional tests have been performed with models of two full-scale buildings,
yielding very satisfactory results (Ma et al. 2014).

VALIDATION OF SUITABILITY FOR R&D


The motivation for developing the computational procedure was to provide
specimens for 'Scan-to-BIM' and change detection research. In order to validate the
suitability of the resulting datasets to substitute for physical experiments, we compile
the 'as-damaged' 3D model from both the physical and synthetic scans using a
heuristic algorithm, and then compare the similarity of the two resulting models.
The algorithm starts by extracting planar segments from the PCD, then assembles the
planar segments to 3D solids using information from the ‘as-made’ BIM model
(section shape data). Segments that share edges are identified as connected segments.
Then, the min-max bounding box is identified based on pairs of connected and
perpendicular faces. The 'as-damaged' model (in red) compiled from the synthetic
scan is compared with the 'as-built' model (in black) of the beam in Figure 5.

Figure 5. Comparison of 'as-damaged' and 'as-built' models.

By analyzing the model changes, the beam deflection can be derived as the projected
distance from the lowest point in the 'as-damaged' model to the bottom surface of the
'as-built' model. In this case the deflection was 182 mm. A dimensional comparison
of the system-generated and user-prepared 'as-damaged' models is shown in Table 1,
which indicates that using the heuristic algorithm for reconstruction of the
'as-damaged' beam model is very promising but still needs improvement.
Table 1. Comparison of the system-generated and user-prepared 'as-damaged' models (all dimensions in mm).

Segment ID | System generated model (Length / Width / Height) | User prepared model (Length / Width / Height)
1 | 1,378 / 234 / 292 | 1,099 / 240 / 300
2 | 280 / 234 / 292 | 163 / 240 / 300
3 | 2,276 / 234 / 292 | 2,638 / 240 / 300
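As a small illustration of the deflection measure described above, the sketch below takes the lowest point of a hypothetical reconstructed 'as-damaged' model and measures its vertical offset from the 'as-built' soffit elevation; the coordinates are invented so that the result reproduces the reported 182 mm.

```python
def beam_deflection(damaged_points, as_built_soffit_z=0.0):
    """Projected distance from the lowest point of the 'as-damaged' model
    to the bottom surface (soffit) elevation of the 'as-built' beam (same units)."""
    lowest_z = min(z for _, _, z in damaged_points)
    return as_built_soffit_z - lowest_z

# Hypothetical vertices (mm) of a reconstructed damaged beam whose soffit was at z = 0
damaged = [(0, 0, 0), (1099, 0, -60), (1262, 0, -182), (3900, 0, 0)]
print(beam_deflection(damaged))   # -> 182
```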
CONCLUSION
R&D of change detection systems for buildings is often hampered by a lack of
testing specimens and data. A computational procedure using BIM and a laser
scanning emulator was developed for preparing synthetic BIM and PCD specimens
for the R&D of systems for rapid change or damage detection among the changing
states of a building. The procedure was validated by comparing the physical and
synthetic BIM and PCD datasets. Tests for the case of earthquake damage
assessment demonstrate the feasibility and the advantages of using this procedure to
substitute computer modeling for costly physical experiments. An additional
advantage of the computational procedure to prepare specimens for scan to BIM
research is that the benchmark, i.e., the user-prepared BIM model of the digital
specimen, is available for testing the system generated result, which greatly
facilitates system validation.
ACKNOWLEDGMENT
This research was funded in part by the Insurance Companies Association of Israel.
Dr. Ma is supported in part at the Technion by a fellowship from the Israel Council
for Higher Education. The assistance of undergraduate research assistant Ashrant
Aryal is greatly appreciated.
References
Akinci, B., F. Boukamp, C. Gordon, D. Huber, C. Lyons, and K. Park. 2006. “A
Formalism for Utilization of Sensor Systems and Integrated Project Models
for Active Construction Quality Control.” Automation in Construction 15 (2):
124–38.
Bosché, F., A. Guillemet, Y. Turkan, C. Haas, and R. Haas. 2014. “Tracking the
Built Status of MEP Works: Assessing the Value of a Scan-vs-BIM System.”
Journal of Computing in Civil Engineering 28 (4): 05014004.
doi:10.1061/(ASCE)CP.1943-5487.0000343.
BuildingSmart. 2013. “Industry Foundation Classes Release 4 (IFC4).”
http://www.buildingsmart-tech.org/ifc/IFC4/final/html/index.htm.
Conte, Joel P, J Enrique Luco, JI Restrepo, Frieder Seible, and Lelli Van Den Einde.
2004. “UCSD-NEES Large High Performance Outdoor Shake Table.” In .
Eastman, C.M., P. Teicholz, R. Sacks, and K. Liston. 2011. BIM Handbook: A Guide
to Building Information Modeling for Owners, Managers, Architects,
Engineers, Contractors, and Fabricators. Hoboken, NJ: John Wiley and
Sons.


German, S., J. Jeon, Z. Zhu, C. Bearman, I. Brilakis, R. DesRoches, and L. Lowes.
        2013. “Machine Vision-Enhanced Postearthquake Inspection.” Journal of
        Computing in Civil Engineering 27 (6): 622–34.
        doi:10.1061/(ASCE)CP.1943-5487.0000333.
Kasai, Kazuhiko, Hiroshi Ito, Yoji Ooki, Tsuyoshi Hikino, Koichi Kajiwara, Shojiro
Motoyui, Hitoshi Ozaki, and Masato Ishii. 2010. “Full-Scale Shake Table
Tests of 5-Story Steel Building with Various Dampers.” In .
Kashani, A., P. Crawford, S. Biswas, A. Graettinger, and D. Grau. 2014. “Automated
Tornado Damage Assessment and Wind Speed Estimation Based on
Terrestrial Laser Scanning.” Journal of Computing in Civil Engineering, May,
04014051. doi:10.1061/(ASCE)CP.1943-5487.0000389.
LEICA. 2014. “Leica ScanStation C10 - The All-in-One Laser Scanner for Any
Application - Leica Geosystems - HDS.” Accessed October 6.
http://hds.leica-geosystems.com/en/Leica-ScanStation-C10_79411.htm.
Ma, L, R Sacks, and R Zeibak-Shini. 2014. “Information Modeling of
Earthquake-Damaged Reinforced Concrete Structures.” Advanced
Engineering Informatics.
Ma, L, R Sacks, R Zeibak-Shini, A Aryal, and S Filin. 2014. “Preparation of
Synthetic ‘As-Damaged’ Models for Post-Earthquake BIM Reconstruction
Research.” Journal of Computing in Civil Engineering.
Panagiotou, Marios, José I Restrepo, and Joel P Conte. 2010. “Shake-Table Test of a
Full-Scale 7-Story Building Slice. Phase I: Rectangular Wall.” Journal of
Structural Engineering 137 (6): 691–704.
Schwarz, S., A. Hanaor, and D. Yankelevsky. 2008. Horizontal Load Resistance RC
Frames with Masonary Infill Panels. National Building Research Institute,
Technion.
Tagawa, Y, and K Kajiwara. 2007. “Controller Development for the E-Defense
Shaking Table.” Proceedings of the Institution of Mechanical Engineers, Part
I: Journal of Systems and Control Engineering 221 (2): 171–81.
Torok, M., M. Golparvar-Fard, and K. Kochersberger. 2014. “Image-Based
Automated 3D Crack Detection for Post-Disaster Building Assessment.”
Journal of Computing in Civil Engineering 28 (5): A4014004.
doi:10.1061/(ASCE)CP.1943-5487.0000334.
Van de Lindt, John W, Shiling Pei, Steven E Pryor, H Shimizu, and H Isoda. 2010.
“Experimental Seismic Response of a Full-Scale Six-Story Light-Frame
Wood Building.” Journal of Structural Engineering 136 (10): 1262–72.
Zeibak, R., and S. Filin. 2007. “Change Detection via Terrestrial Laser Scanning.”
International Archives of Photogrammetry and Remote Sensing 36 (3/W52):
430–35.


Understanding the Science of Virtual Design and Construction: What It Takes to Go beyond Building Information Modeling

David K. H. Chua, M.ASCE1 and Justin K. W. Yeoh2


1Department of Civil and Environmental Engineering, National University of Singapore, No. 1 Engineering Drive 2, Block E1A #07-03, Singapore 117576. E-mail: [email protected]
2Department of Civil and Environmental Engineering, National University of Singapore, No. 1 Engineering Drive 2, Block E1A #07-03, Singapore 117576. E-mail: [email protected]

Abstract

Building Information Modeling (BIM) is gaining wide-spread acceptance in
the Architectural, Engineering and Construction (AEC) industry. The implementation
of BIM has focused largely on Architectural, Structural, as well as specialist
Mechanical and Electrical disciplines, but it remains perceived mainly as a design
tool or at best a visualization tool in 3D or 4D. Instead, we have to go beyond BIM
and begin to embrace the notion of virtual design and construction (VDC). It is a
concept or approach to build, visualize, analyze, and evaluate project performance
virtually and early before a large expenditure of time and resources is made. Although
BIM could form the backbone of this approach, its success requires a relook of the
processes. Intelligent Project Realization (IPR) is introduced which characterizes the
‘science’ behind this approach, with an analysis methodology supported by Multiple
Domain Matrices (MDM). There is also a lean perspective that should govern this
transformation in project delivery.

INTRODUCTION

Building Information Modeling (BIM) has been gaining popularity in the
Architectural, Engineering and Construction (AEC) industry, with its promise of
increased productivity, enhanced collaboration and tighter knowledge integration.
The implementation of BIM has focused largely on Architectural, Structural, as well
as specialist Mechanical and Electrical disciplines, but it remains perceived mainly as
a design tool or at best a visualization tool in 3D or 4D.
It is thought that the much-needed transformation of the AEC industry may be
achieved through the use of technology. However, despite the large investment in
equipment, CAD and web-based collaboration technologies over the past two decades,
the industry has not seen the anticipated gains in productivity (Howard and Björk
2008). One root cause is the information and knowledge fragmentation within a
project, both vertically over project phases and horizontally over the value chain.
Consequently, decisions made are often less than optimal.
Another is the inherent waste occurring in the construction delivery processes.
Meanwhile, insights gained from the transformation that has taken place in the
manufacturing, logistics, retailing and banking industries suggest that change arises
through the adoption and continuous development of new technology (information
technology) and processes. Of the latter, for example, lean principles have resulted in
staggering improvement in productivity for some companies in the automobile sector
(Holweg 2007).
Thus, while BIM has gained increasing acceptance in the AEC industry, its
adoption must be accompanied by a transformation also in the processes surrounding
this technology. BIM can provide the technology to facilitate the sharing of
information, and provide the collaboration across organisation and phases. But still, in
many places it is perceived mainly as a design tool or at best a visualisation tool in
3D or 4D. In manufacturing though, the digital environment of product design has
moved beyond design to enable analysis, manufacturability, modularisation and
production and has also encompassed its supply chain. To realise the full potential of
the digital environment of an AEC project in the same way, the industry must
embrace the notion of Virtual Design and Construction (VDC) (Kam and Fischer
2004). In simple terms, it is a concept or approach to build, visualise, analyse, and
evaluate project performance virtually and early before a large expenditure of time
and resources is made.
The objective of this paper is to begin with the essence of VDC, and from
there set out the potential of VDC in delivering project value. The science of VDC is
described in the form of Intelligent Project Realization (IPR), whereby the principles
of achieving VDC are formalized. At the same time, it will briefly mention concepts
of lean thinking through Multiple Domain Matrices (MDM) insofar as they are
relevant to achieving project transformation.

INTELLIGENT PROJECT REALIZATION (IPR): THE ESSENCE OF VDC

In essence, VDC facilitates the realization of project objectives through the
management of integrated multi-disciplinary performance models. By implication,
virtual implies that the process of design and construction is accomplished digitally
on computers before the project is physically built with large expenditures of time,
resources and cost. The multi-disciplinary facet ensures the involvement of the
various stakeholders of the project and that their requirements are promptly valued
and managed. The integrated environment and models enable the necessary analysis
to be implemented to ensure project value is achieved or even optimized. The models
comprise three key elements to provide the multi-disciplinary integration, namely,
Product, Organization and Process or POP. The product perspective describes the
components within the project facilities while organization comprises the
corresponding project team responsible for the related design and construction
processes of the components. BIM furnishes the modelling tools to capture these
three aspects of VDC.
In addition to the POP elements, we propose a framework termed Intelligent
Project Realization (IPR) which describes the virtual design and construction process


systematically, with the purpose of reproducible success in implementation. In other
words, a ‘science’ to explain the essence of VDC as applied to an AEC project. To
describe this essence (Figure 1), two additional elements are added to POP:
Performance Models (P) and Inter-Dependencies & Constraints (i), or iPPOP.

Figure 1. Integrated Project Realization Framework


Performance Models represent the goals and outcomes of the different
stakeholders. As BIM models are integrated and multi-disciplinary, they provide the
medium for allowing collaboration of all project participants or stakeholders
including the Architect, Engineer, Contractor, Owner and subcontractors on a shared
data platform. They are also performance models in the sense that they exhibit some
level of capabilities to analyze, evaluate and predict project performance in relation to
specified project objectives. Otherwise, they may be used with other specialized
analytical tools for this purpose.
Inter-Dependencies & Constraints depict the parameter inter-dependencies
that exist between stakeholders, processes and products, as well as the constraints
brought about by the project. Examples of these include supply chain issues, key
resource constraints and regulatory document submittals/approvals. These
dependencies and constraints are important information requirements which need to
be captured, as their absence or mismanagement may prevent work from being
carried out.


Three “Pillars” are also proposed in Figure 1: Information Management,
Process Reengineering, and Intelligence & Automation. These pillars represent the
actions required in a project to ensure successful transformation via the IPR
framework. As an analogy, these pillars are the “signposts” along the journey
that point toward successful VDC implementation.

Information Management. There are three perspectives of the workings of
IPR that are key to harnessing its full potential to transform the AEC industry. The
first is the management of information within the project lifecycle. There should be at
least two governing principles. The first is to provide a model fit for the intended
downstream use. This should ensure the necessary levels of detail that are needed to
be furnished to different team members at different stages of the project in order that
they can correctly perform the functions required, and make the appropriate decisions
in a timely manner. In other words, it facilitates project lifecycle management, in the
same way that the product lifecycle is efficiently managed in manufacturing. The
enabling technology here is the adoption of appropriate modelling guidelines and
standards for interoperability, e.g. IFC, gbXML, CityGML or BCF. The second
principle is to get the model right the first time. This requires that the model conforms
to project requirements and its intention is maintained throughout, and that mistakes
are arrested and eliminated from propagating down the design process. These project
requirements are represented as the performance models, and should be defined at the
onset of the project.

Process Reengineering. The second perspective follows from the first, the
process viewpoint. This is where the process for IPR has to take advantage of the
potential that BIM technology provides, moving away from the mere use of BIM for
modelling and visualizing aspects of the project that warrants attention from various
stakeholders. The approach is to move towards integration where a single project
model is built by different disciplines and the relevant information extracted and
analyzed to evaluate the outcomes against project objectives or goals.
Consequently, the design of the process flow in IPR is governed by the
availability of information, and the use of the information. It is no longer necessary to
do things the traditional way. The workflow has to take advantage of the availability
of information. For example, with a large developer of public housing, a typical
design workflow takes over six weeks to develop the mass model and then manually
evaluate the model against a set of design criteria. Any change at this stage will mean
another equally long rework cycle. The approach adopted in our research was to
develop an intelligent design analyzer which facilitates rapid massing design with
information-embedded objects so that model performance can be incrementally
checked as design progresses. The objective is to significantly shorten the cycle and
eliminate rework. With availability of the right information, it is possible to evaluate
the model further upstream and stop deficient design from cascading.
Another critical element is the set of metrics governing the performance
of the models. These measurement metrics provide the measures to compare alternative
designs. Examples of such metrics that have been devised include the buildability score,
constructability score, GreenMarkTM, etc. The metrics determine the information that
will be required, and this in turn governs the workflow needed to support model
performance evaluation. An overriding principle in the design of the workflow,
including checks, quality and decision nodes, is the implementation of a lean approach
which serves to reduce waste.

Intelligence and Automation. The third perspective is the use of intelligence in IPR. This is the basis for automation. In the first place, such systems comprise a repository of design and construction knowledge. A series of automated tasks are then performed on the model by reference to this knowledge. For example, design detailing can be automated using knowledge from design codes, which can then drive automated prefabrication processes. Likewise, a design can be checked for manufacturability before it is sent to a Computer Numerical Control (CNC) cutter for automated fabrication. Such implementations enable breakthroughs in process transformation by allowing consistent, high-quality, rapid processes that yield significant gains in project performance.
Much work has been done in attempts to incorporate intelligence into design
and construction processes by a number of researchers. Some examples include work
by Song and Chua (2006), Chua et al. (2013), and Aram et al. (2013).

Lean Principles. The fundamental philosophy that underpins the IPR framework is that of the Lean Principle. This means that waste should be identified and eliminated in a timely fashion. Two sources of waste which have been encountered in our experience are: Waste from Rework, leading to longer cycle times than necessary; and Waste from Over-processing, resulting from unnecessary compliance checks. The pillars of IPR embody this fundamental thinking, and the following MDM methodology provides the mechanism by which to achieve these lean principles.

ANALYZING IPR USING MULTIPLE DOMAIN MATRIX (MDM)

The underlying complexity of VDC can be handled using a systematic approach to organize, integrate and analyze the project information. To this end, the Multiple Domain Matrix (MDM) method is used to integrate the iPPOP perspectives of IPR (Figure 2). MDM is an extension of the Design Structure Matrix (DSM) and Domain Mapping Matrix (DMM) methodologies (Danilovic and Browning 2007), and is particularly well suited to complex systems such as AEC projects. Its advantages include being a succinct representation and analysis methodology which provides an intuitive understanding to users (Browning 2001).

Use of DSM and MDM in BIM. DSM and MDM have been used to analyze various aspects of BIM in recent literature. Gu and London (2010) used DSM to analyze the readiness of the Australian AEC industry for BIM adoption, and identified several obstacles relating to tool adoption, functional needs, and strategic issues during the adoption process.


(Figure 2 is a matrix diagram: the diagonal blocks are DSMs for the Performance Models, Product, Organization, and Process domains; the off-diagonal blocks are DMMs linking pairs of domains; and the final column holds the inter-dependencies/constraints.)

Figure 2. iPPOP Dimensions of IPR Framework as MDM

Hickethier et al. (2011) and Jacob and Varghese (2012) both employed MDM and DSM to study the BIM development process improvement problem. Within their papers, they also demonstrated some aspects of the Product-Process-People dimensions, which correspond to the POP elements of VDC. Hickethier et al. (2011) focused on identifying discrepancies in the BIM model through a visual analysis of the “Should” and “As is” perspectives of communication flows between the modelers. Jacob and Varghese (2012) characterized the interactions between the process and product domains during the BIM execution phase, using IFC data as the interface between the domains.

Formalizing the Pillars of IPR from MDM Analysis Techniques. The BIM
development process within VDC exposes many dependencies between the
components of the BIM model, which were hidden in the traditional CAD process.
The consequence is that the VDC process creates a complex system which is often
tightly coupled, leading to multiple potential iterations and feedback loops. The three
pillars of IPR provide the mechanism for systematically decoupling the system.
The analysis techniques associated with MDM include: 1. Clustering
(Identifying groups of elements, particularly within DMMs), 2. Partitioning
(Sequencing of elements according to their logical ordering, particularly within
DSMs), 3. Tearing (Identifying interactions for temporary removal, after which the
DSM or DMM is re-clustered or re-sequenced). The following table demonstrates
how some of the aforementioned techniques are carried out to support the pillars of
IPR.


Table 1. Pillars of IPR and corresponding MDM technique

- Information Management, Fit-for-Purpose Modeling: Verified by the Performance-Process DMM.
- Information Management, Model Right the First Time: Apply Clustering to the Process-Dependency DMM to identify and analyze decision requirements, allowing us to achieve just-in-time decision making, where the information required for a decision is delivered just as it is needed.
- Process Re-engineering, Lean Workflows: Apply Partitioning to the Process DSM to leverage concurrent processes, consequently achieving leaner workflows.
- Process Re-engineering, Designing the “Big Room”: Apply Clustering to the Organization-Dependency and Organization-Process DMMs to identify optimal organizational configurations to implement “Big Room” design strategies.
- Intelligence and Automation, Modeling early information for decision making: Another mechanism provided by IPR is the representation of early information. MDM clustering and sequencing analysis is used to identify which information, constraints or parameters may be moved upstream to achieve a leaner VDC process.
- Intelligence and Automation, Identifying impact of automation: Use Automation as a means of removing the dependency between components completely (also known as “tearing”). This gives us the means to identify and analyze the potential impact of automation on VDC.
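
As an illustrative sketch only (not taken from the paper), the Partitioning technique listed in the table can be demonstrated on a small, hypothetical process DSM; real project DSMs are larger and may contain cycles that require clustering or tearing before they can be sequenced.

from graphlib import TopologicalSorter

# Process DSM expressed as "task: tasks whose output it depends on" (hypothetical).
process_dsm = {
    "massing model": set(),
    "structural layout": {"massing model"},
    "energy analysis": {"massing model"},
    "buildability check": {"structural layout", "energy analysis"},
}

# Partitioning (sequencing): order tasks so information is available when needed.
lean_sequence = list(TopologicalSorter(process_dsm).static_order())
print(lean_sequence)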

CONCLUSION AND FUTURE RECOMMENDATIONS

This paper has set out to explain reasons why the promise of BIM has not
fully materialised for the AEC industry. Consequently, we underscore the importance
of VDC, and reiterate that the industry should take steps towards embracing this
concept.
While VDC provides the methodology needed by the industry, it does not
provide adequate drivers to guide the adoption. To this end, we introduced two other
dimensions of Performance Models and Inter-dependencies to POP, to provide a
holistic perspective of the VDC process, called IPR. Moreover, the IPR framework
provides the industry with the science required to analyze and replicate successful
implementation of VDC, through the pillars of: Information Management, Process
Reengineering and Intelligence & Automation.


This paper also proposes the use of MDM as a modeling representation, as well as an analytical methodology for enabling the aforementioned three pillars. The structure of MDM lends itself nicely to spanning the five domains of IPR, represented by iPPOP. The various clustering, sequencing and tearing analyses provide useful insights into improving the VDC process, thereby achieving the lean principles which underpin IPR.
Ongoing research includes validating the proposed analytical methodology with a case study. Presently, a case study of an integrated Big Room is being analyzed using the IPR framework to determine an optimal collaborative strategy.

REFERENCES

Aram, S., Eastman, C., and Sacks, R. (2013). "Requirements for BIM platforms in the
concrete reinforcement supply chain." Automation in Construction, 35(0), 1-
17.
Browning, T. R. (2001). "Applying the design structure matrix to system
decomposition and integration problems: a review and new directions."
Engineering Management, IEEE Transactions on, 48(3), 292-306.
Chua, D. K. H., Nguyen, T. Q., and Yeoh, K. W. (2013). "Automated construction
sequencing and scheduling from functional requirements." Automation in
Construction, 35(0), 79-88.
Danilovic, M., and Browning, T. R. (2007). "Managing complex product
development projects with design structure matrices and domain mapping
matrices." International Journal of Project Management, 25(3), 300-314.
Gu, N., and London, K. (2010). "Understanding and facilitating BIM adoption in the
AEC industry." Automation in Construction, 19(8), 988-999.
Hickethier, G., Tommelein, I. D., Hofmann, M., Lostuvali, B., and Gehbauer, F.
(2011). "MDM as a Tool for Process Improvement in Building Modeling."
DSM 2011: Proceedings of the 13th International DSM Conference, S. D.
Eppinger, M. Maurer, K. Eben, and U. Lindemann, eds., Cambridge,
Massachusetts, USA, 349-362.
Holweg, M. (2007). "The genealogy of lean production." Journal of Operations
Management, 25(2), 420-437.
Howard, R., and Björk, B.-C. (2008). "Building information modelling – Experts’
views on standardisation and industry deployment." Advanced Engineering
Informatics, 22(2), 271-280.
Jacob, J., and Varghese, K. (2012). "A Model for Product-Process Integration in the
Building Industry Using Industry Foundation Classes and Design Structure
Matrix." Construction Research Congress 2012, 582-590.
Kam, C., and Fischer, M. (2004). "Capitalizing on early project decision-making
opportunities to improve facility design, construction, and life-cycle
performance—POP, PM4D, and decision dashboard approaches." Automation
in Construction, 13(1), 53-65.
Song, Y., and Chua, D. K. H. (2006). "Modeling of Functional Construction
Requirements for Constructability Analysis." Journal of Construction
Engineering and Management, 132(12), 1314-1326.


Crane Load Positioning and Sway Monitoring Using an Inertial Measurement Unit

Yihai Fang1 and Yong K. Cho2


1Ph.D. Student, School of Civil and Environmental Engineering, Georgia Institute of Technology, 790 Atlanta Dr. N.W., Atlanta, GA 30332-0355. E-mail: [email protected]
2Associate Professor, School of Civil and Environmental Engineering, Georgia Institute of Technology, 790 Atlanta Dr. N.W., Atlanta, GA 30332-0355. E-mail: [email protected]

Abstract
Crane operation is one of the most essential activities on construction sites. However, operating a
crane is a sophisticated job which requires the operator to have extensive skills and experience,
and most importantly a comprehensive understanding of crane motions. Besides typical crane
motions such as boom slew, hoist, and extension, monitoring and controlling the position of the
load is extremely important to avoid struck-by accidents caused by the crane load, especially when
the load swings as a result of wind and inertia. Although typical motions can be captured by
some existing techniques, a reliable approach to position the load and monitor the load sway
remains missing. This study proposes an orientation-based approach for tracking crane load
position and monitoring load sway in daily lifting activities. This approach adopts an off-the-
shelf inertial measurement unit (IMU) module for measuring load orientation, and an efficient
algorithm for converting orientation measurements to load positions. The proposed approach was
tested in two load sway scenarios in a controlled lab environment. Test results indicate that the
proposed approach correctly converted orientation measurements to accurate load positions and
reconstructed the load sway trajectory in both linear and circular sway motions. By enabling continuous monitoring of crane load motion, this approach augments the crane motion information obtained by typical crane motion capturing systems. The findings of this research will advance safety practices in crane lifting activities.

Keywords: Crane operation; Load sway; Inertial measurement unit


INTRODUCTION
Cranes are among the most extensively used pieces of heavy equipment in the construction industry. More than 125,000 cranes are in operation every day on construction sites, responsible for lifting and transporting materials, equipment, and personnel. However, due to their huge size and mass, any mistake in crane operation could potentially result in catastrophic consequences, such as injuries and fatalities. A report (McCann and Gittleman 2009) from the Center for Construction Research and Training (CPWR) reveals that crane-related incidents in the US construction industry between 1992 and 2006 led to 632 deaths in 610 cases, an average of 43 deaths per year. This report further states that, in crane-related accidents, the second leading cause of death after electrocutions (25%) was being struck by crane loads (21%). Many struck-by accidents are due to the crane operator’s failure to monitor and control the crane load. Crane maneuvers, such as high-speed moves and sudden stops, generate undesirable load sway as a result of inertia. In addition, severe wind also contributes to load sway during lifting jobs. Crane operators must be highly skilled and experienced to accurately assess the distance from the load to nearby obstructions and to successfully maintain adequate clearance from them. However, sitting in the cabin, operators have a limited field of view of the load and surrounding objects. In blind lift scenarios, operators in most cases have no direct line of sight to the load at all (Neitzel et al. 2001). Therefore, even skilled and experienced operators cannot completely eliminate load sway when they have little information about the load position. The most obvious consequence of load sway is collision with surrounding objects, which could lead to accidents involving equipment damage, injury, and even fatality. Besides relying on human skill and experience, load sway-related accidents can be reduced through the assistance of technology, such as sensor systems that can monitor crane load position, and detect and control load sway. The first step in this direction is to automatically monitor and detect load sway in real time. To this end, this study proposes an orientation-based approach for continuously monitoring crane load sway and tracking load position. Load orientation is measured using an inertial measurement unit (IMU) sensor, and the estimated position and trajectory of the load are calculated by converting the Euler angle measurements to Cartesian coordinates. The following sections introduce the work related to this approach, the method used for converting the Euler angle measurements to Cartesian coordinates, and the preliminary results in a controlled lab environment.

RELATED WORK
Much research has focused on acquiring information about crane motions, and different approaches and technologies have been investigated to achieve this goal. Zhang et al. (2011) employed Ultra-wide band (UWB) technology to estimate the crane pose in near real-time. In this system, UWB readers are deployed around the lifting scene and UWB tags are mounted on different spots of the crane boom and the lifted object. However, the system fails to reliably track the load position because the tags on the crane load cannot be detected continuously due to serious signal loss, a common limitation of UWB technology. Lee et al. (2009) developed a laser-technology-based robotic crane system to improve crane lifting productivity. The laser sensor, installed at the tip of the luffing boom, measures the vertical distance of a lifted object using a laser beam reflected from the reflection board installed on the hook block. Nevertheless, it was pointed out that the elevation measurement might not be accurate, or even detectable, due to load sway. As another attempt in this direction, Lee et al. (2012) introduced an anti-collision system for tower cranes which employs a video camera and other sensors to monitor crane motions and check for potential collisions with surrounding objects. For load positioning in particular, they added an encoder sensor, although less accurate, to compensate for the unreliable measurement performance of the laser sensor. However, both systems suffer from low reliability, and they could only measure the vertical distance from the boom to the load, failing to measure the three-dimensional position of the load. Using cameras is another popular solution for increasing crane operators’ visibility and situational awareness. Shapira et al. (2008) developed a vision system for tower cranes to increase the operators’ visibility of the load during loading and unloading. This system consists of a high-resolution video camera, a wireless communication module, and a central computer and monitor installed in the crane cabin. The camera is mounted on the trolley and directed downward, facing the load or the loading area, with the hook constantly located at the center of the image. It is claimed that this system is able to increase productivity in crane lifting tasks and subsequently save costs. However, the capability of enhancing safety was not validated in the evaluation of the system. Although the camera provides an alternative perspective for load monitoring, the accuracy of the position estimation is still subject to the operator’s perception of depth. Furthermore, crane camera systems are sensitive to lighting conditions and obstructions. In the commercial market, some companies provide integrated solutions that adopt different kinds of sensors to keep track of the crane motion and load status. For instance, they use angular and linear sensors for inclination and length measurement, and weight and pressure sensors for overload detection. Often the system contains a display placed in the crane cabin to show the operator critical measurements such as load moment and boom angle/length and, if necessary, alert them to dangerous situations such as overload. However, such systems do not track the load position and provide no assistance or warning if the load swings or comes into proximity with surrounding objects. This load motion information is particularly important in blind lifting jobs where the operator does not have a direct line of sight to the load.

METHODOLOGY
An Inertial Measurement Unit (IMU) is an electronic device that measures velocity, orientation, and gravitational forces using a combination of accelerometers and gyroscopes, and sometimes also magnetometers. IMU sensors were originally developed for the maneuvering of aircraft and spacecraft, such as unmanned aerial vehicles (UAVs) and satellites, to report inertial measurements to the pilot. Recently, IMU sensors have been widely used as orientation sensors in the field of human motion for sports technique training and animation applications. A typical IMU sensor contains angular and linear accelerometers to keep track of changes in position, and gyroscopes for maintaining an absolute angular reference. Generally, an IMU sensor has at least one accelerometer and gyroscope for each of the three axes: pitch (nose up and down), yaw (nose left and right), and roll (clockwise or counter-clockwise). When rigidly mounted to an object, the IMU sensor measures the linear and angular acceleration and automatically calculates the orientation of the attached object. In the particular case of load sway, it is assumed that the cable length is known and the cable is rigid. Therefore, the load sway motion can be simplified to a typical 3-dimensional (3D) pendulum motion (Figure 1a). Given the measured angular orientation on each axis (Figure 1b) and the cable length, the estimated position of the load relative to the fixed point can be calculated by converting the Euler angle measurements to Cartesian coordinates in the local coordinate system (Figure 1c).


Figure 1: Transforming the angular measurements to absolute positions

This process of converting the Euler angle measurements to Cartesian coordinates is demonstrated in the following steps. Firstly, a single load orientation measurement can be decomposed into three elemental rotations: yaw, pitch, and roll. The yaw, pitch, and roll rotations can comprehensively represent a 3D rigid body in any orientation. These three elemental rotations and the individual rotation matrices are defined as follows: a yaw is a counterclockwise rotation about the z-axis; a pitch is a counterclockwise rotation about the y-axis; a roll is a counterclockwise rotation about the x-axis. The rotation matrices for the three elemental rotations are given below.
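
The equations themselves did not survive extraction; the following are the standard elemental rotation matrices for counterclockwise rotations by a yaw angle ψ about z, a pitch angle θ about y, and a roll angle φ about x (the angle symbols are introduced here for notation and are not taken from the original text):

R_z(\psi) = \begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad
R_y(\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}, \quad
R_x(\phi) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi & \cos\phi \end{bmatrix}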

Therefore, a single rotation matrix can be formed by multiplying the yaw, pitch, and roll
rotation matrices as below.
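
The combined matrix is likewise missing from the extracted text; assuming the yaw-pitch-roll multiplication order named in the sentence above, it would read:

R = R_z(\psi) \, R_y(\theta) \, R_x(\phi)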

Since the load sway motion is a simple 3-dimensional pendulum motion, the load trajectory actually lies on the internal surface of a sphere with a radius equal to the cable length. Hence, the unit vector on the local z-axis always points to the center of the sphere. Therefore, converting the Euler angle measurements to Cartesian coordinates is simplified to converting this unit vector on the local z-axis to a vector in the global coordinate system according to the single rotation matrix containing the three elemental rotations. Thus, the load position can be estimated by multiplying the rotation matrix with the unit vector (0, 0, 1) and the cable length L.
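
Written out under the same assumed convention, with L denoting the cable length, the estimated load position described in this paragraph is:

\mathbf{p} = L \, R \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}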


It should be noted that the initial orientation measurement is not necessarily (0, 0, 0), since the surface where the sensor is placed might not be completely level. Therefore, a deviation from the origin can be expected in the estimated load trajectory calculated based on this method. To minimize this error, this method takes the initial load orientation into consideration. The orientation data measured when the load is static is averaged and introduced to the calculation as the offset for correcting the deviation.
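
For illustration only, the following is a minimal Python sketch of the conversion and offset correction described above; the original implementation was done in Microsoft Excel, so the function names, the assumed radian units, and the number of static samples used for the offset are all hypothetical.

import numpy as np

def rotation_matrix(yaw, pitch, roll):
    # Combined rotation R = Rz(yaw) @ Ry(pitch) @ Rx(roll); angles in radians.
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx

def load_positions(euler_angles, cable_length, n_static=50):
    # euler_angles: sequence of (yaw, pitch, roll) IMU readings in radians.
    # The mean of the first n_static samples (taken while the load is at rest)
    # is subtracted as the offset that corrects for a not-perfectly-level surface.
    angles = np.asarray(euler_angles, dtype=float)
    offset = angles[:n_static].mean(axis=0)
    unit_z = np.array([0.0, 0.0, 1.0])
    positions = [cable_length * rotation_matrix(*row) @ unit_z
                 for row in angles - offset]
    return np.array(positions)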

PILOT EXPERIMENT AND RESULTS
The goal of this experiment is to test if the proposed approach is capable of monitoring crane load sway and tracking load position in a controlled lab environment. The first task is to simulate the condition of crane load sway. As shown in Figure 2, a tripod was used to establish a stationary point above the load. Fixed on the stationary point, a nylon cord suspended a cylindrical load, and the connection was secured evenly around the perimeter of the load (Figure 2). In this experiment, an IMU sensor, the MPU-6000 from InvenSense Inc., was employed to measure load angular data. This sensor contains a 3-axis accelerometer and a 3-axis gyroscope, and the sensor is encapsulated in an Arduino-compatible system, the APM 2.6 Set. The onboard microprocessor is capable of automatically converting angular velocity measurements from the gyroscope to orientation values. The IMU sensor was rigidly attached to the center of the upper load surface, and it was connected to a computer via a USB cable for data transmission and power supply.

Figure 2: Load sway simulation in a lab environment and IMU sensor on the load

With the purpose of testing the performance of the coordinate conversion method, two typical sway scenarios were simulated in the lab environment. The first sway scenario simulates the most typical crane load sway motion, a linear pendulum motion. In the experiment, the load was released 0.4 meters off its resting equilibrium position. Consequently, the load swung back and forth in one direction. The sway continued for roughly 30 seconds and was terminated manually. During the sway, 313 load orientation measurements were taken by the IMU at a rate of 10 Hz. After the experiment the data was downloaded to a laptop for post-processing. The conversion method introduced in the methodology section was implemented in Microsoft Excel. Figure 3 shows the results of load positions and the estimated sway trajectory. The results are presented in the XY plane, XZ plane, and YZ plane, respectively. The red dots indicate load positions converted from orientation measurements, and the blue lines show the estimated trajectory obtained by linking the load positions in sequence. In the XY plane view, the load sway trajectory is approximately a straight line and it is visually symmetric about the origin point. The XZ and YZ plane views show a regular arc trajectory that fits very well with the actual linear pendulum sway trajectory.

Figure 3: Converted load positions and estimated sway trajectory in linear sway scenario

The second sway scenario simulates a more complex circular pendulum motion. In reality, this type of sway motion usually happens during lifting jobs involving extreme maneuvers and/or under strong wind. In the second sway scenario, the load was released 0.3 meters off its resting equilibrium position with a random lateral force. As a result, the load started a circular sway motion. The sway continued for roughly 80 seconds and was terminated manually. In total, 815 load orientation measurements were taken by the IMU at a rate of 10 Hz. The results are shown in the XY, XZ, and YZ planes in Figure 4. It was observed that the load swung in an irregular oval-shaped trajectory.

Figure 4: Converted load positions and estimated sway trajectory in circular sway scenario

Solely visualizing the final trajectory results in two-dimensional planes is not adequate to validate the accuracy of the conversion method for circular sway motion. Therefore, the results were visualized in a three-dimensional virtual environment enabled by a game engine, Unity 3D. The converted load positions were created in Unity 3D and the sway trajectory was estimated by interpolating adjacent positions. The load sway motion was reconstructed by animating the virtual load movement along the estimated trajectory. A visual comparison between the video data of the actual sway motion and the estimated sway motion is shown in Figure 5. Red dots are the converted load positions and blue lines are the estimated sway trajectory. By visualizing the tracking data, the load sway motion can be accurately reconstructed in a virtual environment. This reconstructed load sway motion can be used for further analysis and potentially for real-time monitoring and collision warning.

Figure 5: Comparison between video data and estimated position in circular sway scenario
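
As a hedged sketch only (the authors' Unity 3D implementation is not shown in the paper), the trajectory interpolation mentioned above could be as simple as inserting evenly spaced points between successive converted positions:

import numpy as np

def interpolate_trajectory(positions, samples_per_segment=5):
    # Linearly interpolate between consecutive XYZ load positions, e.g. to
    # animate a virtual load at a higher frame rate than the 10 Hz IMU data.
    positions = np.asarray(positions, dtype=float)
    dense = []
    for p0, p1 in zip(positions[:-1], positions[1:]):
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            dense.append((1.0 - t) * p0 + t * p1)
    dense.append(positions[-1])
    return np.array(dense)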

DISCUSSION
Compared to existing techniques for crane motion tracking, the proposed approach has several advantages. First and foremost, this approach accurately monitors and visualizes the three-dimensional position of the load. Secondly, it continuously and reliably collects load position data despite the influence of weather and visibility. Lastly, this approach is comparatively easy to implement in daily crane activity with little maintenance effort. As the first phase of this research, this paper reports the preliminary test results and indicates the feasibility and great potential of the proposed approach. It should be noted that a controlled lab environment might not truly reflect real crane lifting characteristics. Several issues in field operation might undermine the performance of the proposed approach. It is assumed in this approach that the crane cable is rigid and inelastic, but in reality the cable might bend when the load is very light or when an extreme maneuver is applied. Additionally, different rigging methods (e.g., two-legged and four-legged bridle) and load shapes (e.g., flat plate, long rebar) might result in slightly different sway motions, which could affect the accuracy of the results.


CONCLUSION AND FUTURE WORK


The lack of knowledge of the crane load position and sway induces potential risks in crane lifting activities that could lead to collisions and other consequences. This problem is amplified in lifting jobs in confined spaces or with visibility issues, such as blind lifts. This research introduces an IMU-sensor-based approach for tracking load position and monitoring load sway. The approach adopts an efficient method that converts the orientation measurements taken by the IMU sensor to load positions. Furthermore, it reconstructs the load sway motion in a virtual environment by interpolating load positions and orientations. A controlled lab test validated the proposed approach in two typical crane load sway motions, through two-dimensional plane-view comparisons and a three-dimensional comparison of the position results against video clips as ground truth. The results indicate that the proposed approach is able to accurately estimate crane load position and continuously monitor load sway. By enabling continuous monitoring of crane load motion, this approach augments the crane motion information obtained by typical crane motion capturing systems. The next step of this research is to enable wireless data collection and real-time data processing capabilities. The upgraded system will be tested in a field environment with real cranes and loads. The elasticity of the crane cable and different types of rigging will be considered in the tests to improve the accuracy and reliability of the IMU-based approach.

REFERENCES
Hwang, S. (2012). Ultra-wide band technology experiments for real-time prevention of tower
crane collisions. Automation in Construction, 22, 545-553.

Lee, G., Kim, H. H., Lee, C. J., Ham, S. I., Yun, S. H., Cho, H., & Kim, K. (2009). A laser-
technology-based lifting-path tracking system for a robotic tower crane. Automation in
Construction, 18(7), 865-874.

Lee, G., Cho, J., Ham, S., Lee, T., Lee, G., Yun, S. H., & Yang, H. J. (2012). A BIM-and sensor-
based tower crane navigation system for blind lifts. Automation in construction, 26, 1-10.

McCann, M., Gittleman, J. (2009). “Crane-related deaths in construction and recommendations


for their prevention.” The Center for Construction Research and Training.
<http://www.cpwr.com/research/crane-related-deaths-construction-and-
recommendations-their-prevention> (Dec. 9, 2013)

Neitzel, R. L., Seixas, N. S., & Ren, K. K. (2001). A review of crane safety in the construction
industry. Applied Occupational and Environmental Hygiene, 16(12), 1106-1117.

Shapira, A., Rosenfeld, Y., & Mizrahi, I. (2008). Vision system for tower cranes. Journal of
Construction Engineering and Management, 134(5), 320-332.

Zhang, C., Hammad, A., & Rodriguez, S. (2011). Crane pose estimation using UWB real-time
location system. Journal of Computing in Civil Engineering, 26(5), 625-637.


Sustainable Construction Enhanced through Building Information Modeling

L. Alvarez-Anton1; J. Díaz1; and D. Castro-Fresno2


1Department of Architecture and Civil Engineering, THM (University of Applied Sciences, Giessen), Wiesenstrasse 14, 35390 Giessen, Germany. E-mail: [email protected]; [email protected]
2GITECO Research Group, University of Cantabria, E.T.S. Ingenieros de Caminos, Av. de los Castros s/n, 39005, Santander (Spain). E-mail: [email protected]

Abstract

There is little doubt that the implementation of sustainability in the construction industry is currently an obligation. Despite all the initiatives that are
being promoted internationally, there is still much to be done. This development will
require new procedures and a change of thinking in the industry. A global approach
which considers construction during its whole life cycle from a triple perspective is
essential. Building Information Modeling (BIM) may be seen as a suitable
methodology for the industry’s transformation towards sustainability. In order to
achieve a wider perspective during the decision-making process, environmental
information can be combined with BIM. The integration of environmental criteria in
smart objects contained in different BIM libraries is surely a promising initial step.
This paper will present an evaluation of such an approach and highlight its respective
strengths and weaknesses.

SUSTAINABLE CONSTRUCTION

Since the first definition of sustainability appeared, different attempts at its implementation have been developed all over the world. As is the case in other sectors, the construction industry, which is currently one of the biggest economic engines, with great influence on citizens’ quality of life and tremendous impacts on resource consumption worldwide, is also trying to achieve this aim (Berry & McCarthy, 2011). Nevertheless, adopting sustainability in the construction industry will require enormous effort and should be undertaken by all the stakeholders involved.
Assessing a construction’s sustainable performance requires a systematic and
objective method which covers the whole life cycle of the facility and at the same
time analyzes environmental, social and economic factors (Burdová & Vilceková,
2012). Furthermore, it should be taken into account that defining overall performance
is a complicated process due to the involvement of different stakeholders in different


processes and because of the particular features of the construction industry itself
(Haapio & Viitaniemi, 2008). The fragmented nature of construction processes often
hinders the implementation of sustainable principles and highlights the importance of
an interdisciplinary working platform (Dowsett & Harty, 2014).
There is a pressing need for design tools which can integrate all the
information to be evaluated in order to achieve a sustainable performance. These
tools should render it possible to compare various alternatives when making decisions.
Thus the amount of information required is in itself one of the difficulties that
decision-making methodologies have to deal with.
All in all, new methodologies based on integrative work and synergy
generation are required to achieve sustainable construction. Moreover, management
systems have to be improved and decision-making has to be based on long-term
thinking. Due to its special features, Building Information Modeling (BIM) is able to
fulfill such requirements and can be highlighted as one of the most suitable
methodologies for steering the construction industry towards a more sustainable
performance.

BUILDING INFORMATION MODELING – POTENTIAL FOR SUSTAINABILITY

Building Information Modeling is in a position to transform the construction industry due to the fact that it not only constitutes a new way of working, but also
requires a change in thinking on the part of the stakeholders. Moreover, it provides
extensive support for integrated design which, in turn, can be considered a key factor
in sustainability (McGraw Hill Construction, 2010).
BIM improves information management, thus enhancing communication
among stakeholders. Moreover, all project information, including as-built
documentation, is embedded in the model, which allows it to be used in later project
phases (Shennan, 2014). Improved information flow leads to better project
performance, while transparency and collaborative work between stakeholders is
strengthened (HM Government, 2008). Nevertheless, despite all the advantages of
BIM, its potential for sustainability could be enhanced even further by the integration
of environmental information.

ENVIRONMENTAL CRITERIA CONSIDERATION - LIFE CYCLE ASSESSMENT

The construction industry is an industry with high environmental impact throughout the life cycle of its products (American Institute of Architects, 2010). This
raises the question of implementing methods to evaluate environmental performance
and compare different approaches.
Currently, the most used method in supporting decision-making processes is
Life Cycle Assessment (LCA) (American Institute of Architects, 2010). Ortiz et al.
(2009) pointed out that LCA implementation is essential in improving construction
processes, and therefore sustainability. Its implementation should not be based solely
on providing commercially viable, eco-friendly products for clients, but also on the


idea of improving performance in a sustainable industry. Despite its importance, LCA implementation in the construction industry has not been as smooth as in other more industrialized industries due to various inherent features (Khasreen et al., 2009).
The application of LCA in construction projects may be addressed from two
different perspectives: on the one hand as an assessment of individual construction
materials or components, and on the other hand as a global process considering
construction as a whole (Ortiz et al., 2009). Environmental assessment of the whole
construction (e.g. a building) will trigger an arduous process due to the complicated
nature of the construction industry. Each project is unique and has a different life
span with a complex variety of products involved. Therefore, as a first step towards
integrating environmental information into BIM methodology, this study only focuses
on a materials-based approach.

LINK BETWEEN ENVIRONMENTAL INFORMATION AND BIM

The task of extracting building information from the BIM model and
integrating it into the LCA tool in order to perform environmental assessment is a
time-consuming and complicated process. Therefore a more efficient alternative has
to be developed. It should be mentioned at this point that several attempts have
already been made to integrate BIM and LCA tools, some of which consider the
whole construction in the assessment.
One example of such an approach is LCADesign, which is based on extracting
data (building quantity data) from the BIM model by means of an IFC file format and
using them for an environmental evaluation. LCADesign combines extracted data
from the BIM model with life cycle inventory data, thus obtaining different
environmental indicators and creating the opportunity of comparing various design
alternatives (Tucker, et al., 2003). When using this approach, environmental
evaluation is performed with software which is external to the BIM platform and
therefore automatic feedback into the BIM platform is no longer possible.
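
For illustration only (this is not LCADesign's actual implementation), the underlying calculation can be sketched as combining quantities extracted from the BIM model with life cycle inventory factors; the material names and factor values below are hypothetical.

# Hypothetical quantity take-off (e.g., extracted from an IFC model) and LCI factors.
quantities = {"concrete C30/37": 120.0, "reinforcing steel": 9.5}        # m3, t
gwp_factors = {"concrete C30/37": 250.0, "reinforcing steel": 1900.0}    # kg CO2-eq per unit

gwp_total = sum(q * gwp_factors[material] for material, q in quantities.items())
print(f"Global warming potential: {gwp_total:.0f} kg CO2-eq")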
In November 2013 Tally was released. This new application is compatible
with the BIM software Revit. It enables environmental evaluation of the different
construction materials and estimation of the various environmental impacts of the
whole building. In addition, it compares different design alternatives. The procedure
is based on establishing relationships between different BIM elements included in the
model and materials included in the LCA database of Tally. In this way, it becomes
possible to quantify the environmental impacts arising in the different categories.
Thus the process of developing an assessment of the whole construction has been
simplified. However, the relation between model materials and Tally database has to
be set by the user. Moreover, it has to be indicated that it is only available for Revit
users (KT Innovations, 2014).
The attempts highlighted above aim at assessing the whole construction. In
this study, the starting point is a perspective based on the building’s materials and
components. It will seek a way of including environmental information in BIM
platforms in order to serve as a predesign assessment tool. Taking into account what
has been presented so far, the aim is to profit from the information derived from
environmental assessments from the early design phases on and use it as support in


decision-making. For this purpose, a link between information contained in the LCA
databases and BIM methodology is required (Álvarez & Díaz, 2014).

Figure 1. Sought link between LCA databases and BIM methodology and tools.

In order to achieve the above-mentioned link, it is necessary to use information from the LCA databases together with the BIM objects (Díaz et al., 2014).
Today there is a wide variety of LCA databases on the market, but most of them have
information relating to different fields. There are only very few databases (e.g.
Ökobau.dat) that are specifically oriented towards construction products (in this case
products used in German-speaking countries). Moreover, the elements included in
these databases have to be adapted according to certain rules, so that they match the
different elements used in the construction industry.
On the other hand, the BIM smart object has physical characteristics comprising highly accurate geometric data and contains a set of functions that enable interaction with other objects or users. This interaction is precisely what makes the difference between smart objects and normal objects. The combination and interrelation of these smart objects result in the development of a virtual model (de Vries et al., 2012). Such smart objects are available in various BIM libraries and are currently in widespread use among the professionals of the construction industry (Heidari et al., 2014).

So as to be useful and efficient, complex information has to be presented in an easy and understandable way. This gives designers direct access at all times and
allows them to consider environmental criteria as well as technical, economic and
social criteria when selecting materials.
Thus we have the following procedure: the environmental information of a
specific element is calculated using one of the LCA tools currently available on the
market; the environmental indicators are calculated and the information is included in
the features of the smart object contained in the BIM library. As shown in Figure 2,
environmental criteria are then included in the object’s properties together with the
rest of the information.
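
As a hedged illustration of this procedure (the exact data schema depends on the BIM platform and object library in question), environmental indicators computed once with an external LCA tool could be stored alongside the other properties of a library object, for example:

# Hypothetical smart-object record; property names and values are illustrative only.
smart_object = {
    "name": "External wall, brick cavity",
    "geometry": {"thickness_m": 0.365},
    "technical": {"u_value_W_m2K": 0.28},
    "environmental": {
        # indicators precomputed with an external LCA tool (e.g., from Ökobau.dat data)
        "gwp_kgCO2e_per_m2": 85.0,
        "primary_energy_MJ_per_m2": 950.0,
    },
}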


Figure 2. Environmental information included in a smart object.

ASSESSMENT OF THE SUGGESTED APPROACH

The point of departure is the search for a link between environmental information and the BIM platform based on a material-oriented approach. Access to the environmental information should be easy, fast and extensive. These are requisites that can be fulfilled by the use of BIM object libraries.
In order to assess the suggested approach and identify its advantages and
future research needs, a SWOT analysis is presented. It will offer an evaluation from
an internal point of view (strengths and weaknesses) and from an external perspective
(opportunities and threats).

Internal Assessment
Strengths:
- It enables automatic access to the environmental information of the
element when selecting it. Users do not have to refer to other platforms
to check this information; they can access it directly from the BIM tool.
- It helps to increase social awareness of environmental aspects.
- Once the initial effort of calculating different environmental indicators
of elements in the library has been made, there is no need to repeat the
task for every single project.
Weaknesses:
- The presented information may be too complex to be interpreted by
designers, taking into account the limited time available for deciding and the lack of experience that designers have with relation to environmental assessment.
- It is only an evaluation of a particular element; it does not allow
assessment of the whole construction.
- It does not cover the whole life cycle of the element; there are various
uncertainties that cannot be known (e.g. future use of the element as
part of a specific infrastructure).
- The distance from manufacturer to construction site should be
calculated for each particular project, but that would not be possible
when using this approach, because the environmental information is
calculated in general without being adapted to a specific project.
- A previous assessment has to be done using an external LCA tool in
order to include the information in the object.

External assessment
Opportunities:
- Manufacturers could offer a complete evaluation and all the required
information about their products.
- Governments all over the world are starting to ask for environmental
information.
- It is integrated in the BIM platform.
- BIM library use is widespread among engineers and designers.
Threats:
- There is a lack of training regarding environmental issues among
stakeholders in the construction industry.
- Effort, time and investment are required to ensure that environmental
criteria are taken into account when making decisions.
- There are difficulties attached to evaluating environmental criteria
together with other criteria in order to reach a solution based on a
balance between the various factors.

The presented solution has positive aspects and strengths, as shown in the
assessment, but there are also other aspects which reduce the effectiveness of this
option; the main ones are summarized in the following paragraphs.
On the one hand, a detailed environmental assessment of the different elements included in BIM libraries should be developed using an external LCA tool, and then this information has to be inserted into the object’s information. Such a process requires time and effort as well as a change in the manufacturer’s way of
thinking. Nevertheless, once the object includes the environmental information, it can
be used whenever it is needed without repeating the time-consuming task of
performing the environmental assessment.
On the other hand, due to the fact that each construction project is unique and
has its own location, it is quite difficult to include the distance from manufacturer to
construction site as a general factor in the environmental assessment. Therefore, some
assumptions have to be made, which means that the accuracy of the assessment is
reduced. Moreover, understanding the environmental results is one of the main


drawbacks of this solution. In general, social awareness about environmental costs needs to be increased.
This study is intended to support the idea of considering environmental
criteria during the early design phase. Such an approach constitutes a simple and
easy-to-use assessment tool for decision making. It has a positive influence on
engineers and designers in that they become aware of the environmental performance
of their designs, a concept that might otherwise be ignored. This approach not only
helps designers and engineers to develop a deeper understanding of the respective
project, but it also helps manufacturers to be aware of the environmental performance
of their products.

CONCLUSIONS

The environmental information of construction elements should be included among the rest of the features and properties of the different objects in BIM libraries.
Manufacturers generally give information about the main features, composition and
advantages of their products, and in the same way they should also show the results of
the environmental assessment of these elements. That will make manufacturers more
competitive from a sustainability point of view; they should realize the importance of
environmental aspects and try to improve their products and processes accordingly.
Basically, designers cannot calculate environmental information, but it is
extremely useful for them to have this information to hand when selecting materials.
In this way they will be more conscious of the environmental performance of their
designs and could improve the overall performance of the project.
Integrating environmental information in smart objects is interesting, since it
helps to evaluate environmental criteria along with other criteria at the design stage.
This, in turn, helps to achieve a more sustainable construction. There are some drawbacks that still have to be addressed. Nevertheless, such an approach might trigger a
change in the way of thinking and working in the construction industry.


REFERENCES

Álvarez Antón, L. & Díaz, J. (2014). “Integration of Life Cycle Assessment in a BIM
Environment”, Procedia Engineering, 85, pp. 26 – 32.
American Institute of Architects (2010). “AIA Guide to Building Life Cycle
Assessment in Practice”. American Institute of Architects. Washington DC.
Berry, C. & McCarthy, S. (2011) Guide to sustainable procurement in construction,
Ciria, London.
Burdová, E. K. & Vilceková, S. (2012). “Building Environmental Assessment of
Construction and Building Materials”. Journal of Frontiers in Construction
Engineering, 1(1), pp. 1-7.
de Vries, B., Allameh, E. & Heidari Jozam, M. (2012). “Smart-BIM (Building
Information Modeling.” Proceedings of the 29th International Symposium on
Automation and Robotics in Construction, ISARC. Eindhoven, Netherlands,
26th June 2012.
Díaz, J., Álvarez Antón, L., Anis, K. & Drogemuller, R. (2014). “Synergies for
sustainable construction based on BIM and LCA.” Tagungsband BIM
Kongress 2014. pp. 108 – 125, GRIN Verlag GmbH.
Dowsett, R. M. & Harty, C. F. (2014) “Evaluating the benefits of BIM for sustainable
design – A review”. Proceeding 29th Annual Association of Researcher in
Construction Management Conference, ARCOM 2013, pp. 13 – 23.
Haapio, A. & Viitaniemi, P. (2008). “A critical review of building environmental
assessment tools.” Environmental Impact Assessment Review, 28(7), pp. 469-
482.
Heidari, M., Allameh, E., de Vries, B., Timmermans, H., Jessurn, J. & Mozaffar, F.
(2014). “Smart-BIM virtual prototype implementation”. Automation in
Construction, 39, pp. 134 – 144.
HM Government (2008). Strategy for sustainable construction. Department for
Business, Enterprise & Regulatory Reform, London.
Khasreen, M. M., Banfill, P. F., & Menzies, G. F. (2009). “Life-Cycle Assessment
and the Environmental Impact of Buildings: A Review.” Sustainability, 1(3),
pp. 674 - 701.
KT Innovations (2014). “Tally. Overview”. http://www.choosetally.com/overview/>
(Dec. 2, 2014).
McGraw Hill Construction (2010). SmartMarket Report. Green BIM. McGraw-Hill
Construction. Bedford.
Ortiz, O., Catells, F. & Sonnemann, G. (2009). “Sustainability in the construction
industry: A review of recent developments based on LCA.” Construction and
Building Materials, 23, pp. 28 - 29.
Shennan, R. (2014). “BIM advances sustainability”. Mott MacDonald,
https://www.mottmac.com/views/bim-advances-sustainability>(Dec. 4, 2014).
Tucker, S. N., Ambrose, M.D. & Johnston, D.R. (2003). “Integrating eco-efficiency
assessment of commercial buildings into the design process: LCADesign”.
International Conference on Smart and Sustainable Built Environment,
November 2003, Brisbane

